Sample records for simple scaling factor

  1. Combining global and local approximations

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    1991-01-01

    A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
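    The GLA idea sketched in the abstract can be illustrated with toy models. Below is a hypothetical sketch (not Haftka's actual formulation): the refined-to-crude model ratio defines a scale factor beta(x), which is linearized about a reference point x0 instead of being held constant; all function names here are illustrative stand-ins.

```python
import math

# Hypothetical sketch of the global-local approximation (GLA) idea: the
# ratio beta(x) = f_refined(x) / f_crude(x) is approximated linearly
# about a reference point x0, rather than treated as a constant.

def gla_approximation(f_crude, f_refined, df_crude, df_refined, x0):
    """Return a function approximating f_refined near x0 via a linear scale factor."""
    beta0 = f_refined(x0) / f_crude(x0)
    # derivative of the ratio beta = f_refined / f_crude at x0 (quotient rule)
    beta1 = (df_refined(x0) * f_crude(x0)
             - f_refined(x0) * df_crude(x0)) / f_crude(x0) ** 2
    return lambda x: (beta0 + beta1 * (x - x0)) * f_crude(x)

# toy stand-ins: "crude" model 1 + x, "refined" model exp(x), matched at x0 = 1
approx = gla_approximation(lambda x: 1.0 + x, math.exp,
                           lambda x: 1.0, math.exp, 1.0)
```

    Near the matching point the corrected crude model tracks the refined model far better than the crude model alone, which is the extension of range the abstract describes.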

  2. A Factor Analysis of the Counselor Evaluation Rating Scale

    ERIC Educational Resources Information Center

    Loesch, Larry C.; Rucker, Barbara B.

    1977-01-01

    This study was conducted on the Counselor Evaluation Rating Scale (CERS). Ratings on 404 students from approximately 35 different supervisors were factor-analyzed using an oblique solution with rotation to simple loadings. It was concluded that the CERS has generally achieved the purposes intended by its authors. (Author)

  3. The Factor Structure and Screening Utility of the Social Interaction Anxiety Scale

    ERIC Educational Resources Information Center

    Rodebaugh, Thomas L.; Woods, Carol M.; Heimberg, Richard G.; Liebowitz, Michael R.; Schneier, Franklin R.

    2006-01-01

    The widely used Social Interaction Anxiety Scale (SIAS; R. P. Mattick & J. C. Clarke, 1998) possesses favorable psychometric properties, but questions remain concerning its factor structure and item properties. Analyses included 445 people with social anxiety disorder and 1,689 undergraduates. Simple unifactorial models fit poorly, and models that…

  4. A Confirmatory Factor Analysis of the Professional Opinion Scale

    ERIC Educational Resources Information Center

    Greeno, Elizabeth J.; Hughes, Anne K.; Hayward, R. Anna; Parker, Karen L.

    2007-01-01

    The Professional Opinion Scale (POS) was developed to measure social work values orientation. Objective: A confirmatory factor analysis was performed on the POS. Method: This cross-sectional study used a mailed survey design with a national random (simple) sample of members of the National Association of Social Workers. Results: The study…

  5. Longitudinal Tests of Competing Factor Structures for the Rosenberg Self-Esteem Scale: Traits, Ephemeral Artifacts, and Stable Response Styles

    ERIC Educational Resources Information Center

    Marsh, Herbert W.; Scalas, L. Francesca; Nagengast, Benjamin

    2010-01-01

    Self-esteem, typically measured by the Rosenberg Self-Esteem Scale (RSE), is one of the most widely studied constructs in psychology. Nevertheless, there is broad agreement that a simple unidimensional factor model, consistent with the original design and typical application in applied research, does not provide an adequate explanation of RSE…

  6. Prognostic accuracy of five simple scales in childhood bacterial meningitis.

    PubMed

    Pelkonen, Tuula; Roine, Irmeli; Monteiro, Lurdes; Cruzeiro, Manuel Leite; Pitkäranta, Anne; Kataja, Matti; Peltola, Heikki

    2012-08-01

    In childhood acute bacterial meningitis, the level of consciousness, measured with the Glasgow coma scale (GCS) or the Blantyre coma scale (BCS), is the most important predictor of outcome. The Herson-Todd scale (HTS) was developed for Haemophilus influenzae meningitis. Our objective was to identify prognostic factors, to form a simple scale, and to compare the predictive accuracy of these scales. Seven hundred and twenty-three children with bacterial meningitis in Luanda were scored by GCS, BCS, and HTS. The simple Luanda scale (SLS), based on our entire database, comprised domestic electricity, days of illness, convulsions, consciousness, and dyspnoea at presentation. The Bayesian Luanda scale (BLS) added blood glucose concentration. The accuracy of the 5 scales was determined for 491 children without an underlying condition, against the outcomes of death, severe neurological sequelae or death, or a poor outcome (severe neurological sequelae, death, or deafness), at hospital discharge. The highest accuracy was achieved with the BLS, whose area under the curve (AUC) for death was 0.83, for severe neurological sequelae or death was 0.84, and for poor outcome was 0.82. Overall, the AUCs for SLS were ≥0.79, for GCS were ≥0.76, for BCS were ≥0.74, and for HTS were ≥0.68. Adding laboratory parameters to a simple scoring system, such as the SLS, improves prognostic accuracy only slightly in bacterial meningitis.

  7. Function Invariant and Parameter Scale-Free Transformation Methods

    ERIC Educational Resources Information Center

    Bentler, P. M.; Wingard, Joseph A.

    1977-01-01

    A scale-invariant simple structure function of previously studied function components for principal component analysis and factor analysis is defined. First and second partial derivatives are obtained, and Newton-Raphson iterations are utilized. The resulting solutions are locally optimal and subjectively pleasing. (Author/JKS)

  8. MIS Score: Prediction Model for Minimally Invasive Surgery.

    PubMed

    Hu, Yuanyuan; Cao, Jingwei; Hou, Xianzeng; Liu, Guangcun

    2017-03-01

    Reports suggest that patients with spontaneous intracerebral hemorrhage (ICH) can benefit from minimally invasive surgery, but the inclusion criterion for operation is controversial. This article analyzes factors affecting the 30-day prognoses of patients who have received minimally invasive surgery and proposes a simple grading scale that represents clinical operation effectiveness. The records of 101 patients with spontaneous ICH presenting to Qianfoshan Hospital were reviewed. Factors affecting their 30-day prognosis were identified by logistic regression. A clinical grading scale, the MIS score, was developed by weighting the independent predictors based on these factors. Univariate analysis revealed that the factors that affect 30-day prognosis include Glasgow coma scale score (P < 0.01), age ≥80 years (P < 0.05), blood glucose (P < 0.01), ICH volume (P < 0.01), operation time (P < 0.05), and presence of intraventricular hemorrhage (P < 0.001). Logistic regression revealed that the factors that affect 30-day prognosis include Glasgow coma scale score (P < 0.05), age (P < 0.05), ICH volume (P < 0.01), and presence of intraventricular hemorrhage (P < 0.05). The MIS score was developed accordingly; 39 patients with 0-1 MIS scores had favorable prognoses, whereas only 9 patients with 2-5 MIS scores had poor prognoses. The MIS score is a simple grading scale that can be used to select patients who are suited for minimal invasive drainage surgery. When MIS score is 0-1, minimal invasive surgery is strongly recommended for patients with spontaneous cerebral hemorrhage. The scale merits further prospective studies to fully determine its efficacy. Copyright © 2016 Elsevier Inc. All rights reserved.
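    Additive grading scales of this kind are straightforward to express in code. The sketch below is purely illustrative: the cut-offs and one-point weights are hypothetical, not the published MIS weighting, which the abstract does not give.

```python
# Illustrative sketch of a simple additive grading scale assembled from
# dichotomized predictors, in the spirit of the MIS score.  Cut-offs and
# weights below are hypothetical, not the published scale.

def mis_style_score(gcs, age, ich_volume_ml, has_ivh):
    """Sum one point per unfavorable predictor (hypothetical weighting)."""
    score = 0
    if gcs <= 8:             # depressed consciousness
        score += 1
    if age >= 80:            # advanced age
        score += 1
    if ich_volume_ml >= 30:  # large hematoma volume
        score += 1
    if has_ivh:              # intraventricular extension
        score += 1
    return score
```

    A low total would then flag patients likely to do well, mirroring how the abstract's 0-1 stratum carried a favorable prognosis.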

  9. Validation of the Simple Shoulder Test in a Portuguese-Brazilian population. Is the latent variable structure and validation of the Simple Shoulder Test Stable across cultures?

    PubMed

    Neto, Jose Osni Bruggemann; Gesser, Rafael Lehmkuhl; Steglich, Valdir; Bonilauri Ferreira, Ana Paula; Gandhi, Mihir; Vissoci, João Ricardo Nickenig; Pietrobon, Ricardo

    2013-01-01

    The validation of widely used scales facilitates comparison across international patient samples. The objective of this study was to translate, culturally adapt, and validate the Simple Shoulder Test into Brazilian Portuguese, and to test the stability of its factor structure across cultures. The Simple Shoulder Test was translated from English into Brazilian Portuguese, translated back into English, and evaluated for accuracy by an expert committee. It was then administered to 100 patients with shoulder conditions. Psychometric properties were analyzed, including factor analysis, internal reliability, test-retest reliability at seven days, and construct validity in relation to the Short Form 36 health survey (SF-36). Factor analysis demonstrated a three-factor solution. Cronbach's alpha was 0.82. Test-retest reliability, as measured by the intra-class correlation coefficient (ICC), was 0.84. Associations were observed in the hypothesized direction with all subscales of the SF-36 questionnaire. The Simple Shoulder Test translation and cultural adaptation to Brazilian Portuguese demonstrated adequate factor structure, internal reliability, and validity, ultimately allowing its use in comparisons with international patient samples.

  10. Validation of the Simple Shoulder Test in a Portuguese-Brazilian Population. Is the Latent Variable Structure and Validation of the Simple Shoulder Test Stable across Cultures?

    PubMed Central

    Neto, Jose Osni Bruggemann; Gesser, Rafael Lehmkuhl; Steglich, Valdir; Bonilauri Ferreira, Ana Paula; Gandhi, Mihir; Vissoci, João Ricardo Nickenig; Pietrobon, Ricardo

    2013-01-01

    Background The validation of widely used scales facilitates comparison across international patient samples. Objective The objective of this study was to translate, culturally adapt, and validate the Simple Shoulder Test into Brazilian Portuguese, and to test the stability of its factor structure across cultures. Methods The Simple Shoulder Test was translated from English into Brazilian Portuguese, translated back into English, and evaluated for accuracy by an expert committee. It was then administered to 100 patients with shoulder conditions. Psychometric properties were analyzed, including factor analysis, internal reliability, test-retest reliability at seven days, and construct validity in relation to the Short Form 36 health survey (SF-36). Results Factor analysis demonstrated a three-factor solution. Cronbach’s alpha was 0.82. Test-retest reliability, as measured by the intra-class correlation coefficient (ICC), was 0.84. Associations were observed in the hypothesized direction with all subscales of the SF-36 questionnaire. Conclusion The Simple Shoulder Test translation and cultural adaptation to Brazilian Portuguese demonstrated adequate factor structure, internal reliability, and validity, ultimately allowing its use in comparisons with international patient samples. PMID:23675436

  11. Fuzzy logic-based flight control system design

    NASA Astrophysics Data System (ADS)

    Nho, Kyungmoon

    The application of fuzzy logic to aircraft motion control is studied in this dissertation. The self-tuning fuzzy techniques are developed by changing input scaling factors to obtain a robust fuzzy controller over a wide range of operating conditions and nonlinearities for a nonlinear aircraft model. It is demonstrated that the properly adjusted input scaling factors can meet the required performance and robustness in a fuzzy controller. For a simple demonstration of the easy design and control capability of a fuzzy controller, a proportional-derivative (PD) fuzzy control system is compared to the conventional controller for a simple dynamical system. This thesis also describes the design principles and stability analysis of fuzzy control systems by considering the key features of a fuzzy control system including the fuzzification, rule-base and defuzzification. The wing-rock motion of slender delta wings, a linear aircraft model and the six degree of freedom nonlinear aircraft dynamics are considered to illustrate several self-tuning methods employing change in input scaling factors. Finally, this dissertation is concluded with numerical simulation of glide-slope capture in windshear demonstrating the robustness of the fuzzy logic based flight control system.

  12. Collision geometry scaling of Au+Au pseudorapidity density from √(s_NN) = 19.6 to 200 GeV

    NASA Astrophysics Data System (ADS)

    Back, B. B.; Baker, M. D.; Ballintijn, M.; Barton, D. S.; Betts, R. R.; Bickley, A. A.; Bindel, R.; Budzanowski, A.; Busza, W.; Carroll, A.; Decowski, M. P.; García, E.; George, N.; Gulbrandsen, K.; Gushue, S.; Halliwell, C.; Hamblen, J.; Heintzelman, G. A.; Henderson, C.; Hofman, D. J.; Hollis, R. S.; Hołyński, R.; Holzman, B.; Iordanova, A.; Johnson, E.; Kane, J. L.; Katzy, J.; Khan, N.; Kucewicz, W.; Kulinich, P.; Kuo, C. M.; Lin, W. T.; Manly, S.; McLeod, D.; Mignerey, A. C.; Nouicer, R.; Olszewski, A.; Pak, R.; Park, I. C.; Pernegger, H.; Reed, C.; Remsberg, L. P.; Reuter, M.; Roland, C.; Roland, G.; Rosenberg, L.; Sagerer, J.; Sarin, P.; Sawicki, P.; Skulski, W.; Steinberg, P.; Stephans, G. S.; Sukhanov, A.; Tonjes, M. B.; Tang, J.-L.; Trzupek, A.; Vale, C.; van Nieuwenhuizen, G. J.; Verdier, R.; Wolfs, F. L.; Wosiek, B.; Woźniak, K.; Wuosmaa, A. H.; Wysłouch, B.

    2004-08-01

    The centrality dependence of the midrapidity charged particle multiplicity in Au+Au heavy-ion collisions at √(s_NN) = 19.6 and 200 GeV is presented. Within a simple model, the fraction of hard (scaling with number of binary collisions) to soft (scaling with number of participant pairs) interactions is consistent with a value of x = 0.13 ± 0.01 (stat) ± 0.05 (syst) at both energies. The experimental results at both energies, scaled by inelastic p(p̄)+p collision data, agree within systematic errors. The ratio of the data was found not to depend on centrality over the studied range and yields a simple linear scale factor of R_200/19.6 = 2.03 ± 0.02 (stat) ± 0.05 (syst).
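    The two-component model the abstract refers to mixes a term scaling with participant pairs (soft) and one scaling with binary collisions (hard). A minimal sketch, with `n_pp` the charged-particle yield measured in p+p collisions:

```python
# Two-component (soft/hard) model for the charged multiplicity:
# dN/deta = n_pp * [ (1 - x) * N_part / 2  +  x * N_coll ],
# where x is the hard fraction (~0.13 in the abstract).

def charged_multiplicity(n_pp, n_part, n_coll, x=0.13):
    """dN/deta from the two-component model; n_pp is the p+p yield."""
    return n_pp * ((1.0 - x) * n_part / 2.0 + x * n_coll)
```

    Setting x = 0 recovers pure participant scaling and x = 1 pure binary-collision scaling, the two limits between which the fitted x = 0.13 interpolates.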

  13. Humans' perceptions of animal mentality: ascriptions of thinking.

    PubMed

    Rasmussen, J L; Rajecki, D W; Craft, H D

    1993-09-01

    On rating scales, 294 students indicated whether it was reasonable to say that a dog, cat, bird, fish, and school-age child had the capacity for 12 commonplace human mental operations or experiences. Factor analysis of responses identified 2 levels of attributions, simple thinking and complex thinking. The child and all animals were credited with simple thinking, but respondents were much more likely to ascribe complex thinking to the child. (A pilot study with 8 animal-behavior professionals generally replicated these results.) Certain mental categories (e.g., emotion) were judged by students to be simple for all target types; others (e.g., conservation) were judged to be universally complex. Further factoring revealed articulate ascriptions for key mental categories. Play and imagine was seen as simple in the animals but complex for the child, but enumeration and sorting and dream were seen as simple in the child but complex for the animals.

  14. Simple Assessment Techniques for Soil and Water. Environmental Factors in Small Scale Development Projects. Workshops.

    ERIC Educational Resources Information Center

    Coordination in Development, New York, NY.

    This booklet was produced in response to the growing need for reliable environmental assessment techniques that can be applied to small-scale development projects. The suggested techniques emphasize low-technology environmental analysis. Although these techniques may lack precision, they can be extremely valuable in helping to assure the success…

  15. Polarizable molecular interactions in condensed phase and their equivalent nonpolarizable models.

    PubMed

    Leontyev, Igor V; Stuchebrukhov, Alexei A

    2014-07-07

    Earlier, using a phenomenological approach, we showed that in some cases polarizable models of condensed-phase systems can be reduced to nonpolarizable equivalent models with scaled charges. Examples of such systems include ionic liquids, TIPnP-type models of water, protein force fields, and others, where interactions and dynamics of inherently polarizable species can be accurately described by nonpolarizable models. To describe electrostatic interactions, the effective charges of simple ionic liquids are obtained by scaling the actual charges of the ions by a factor of 1/√(ε_el), which is due to the electronic polarization screening effect; the scaling factor of neutral species is more complicated. Here, using several theoretical models, we examine how exactly the scaling factors appear in theory, and how, and under what conditions, polarizable Hamiltonians are reduced to nonpolarizable ones. These models allow one to trace the origin of the scaling factors, determine their values, and obtain important insights on the nature of polarizable interactions in condensed matter systems.
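    The charge-scaling prescription stated in the abstract is a one-line formula. A minimal sketch, where ε_el is the electronic (high-frequency) dielectric constant of the medium:

```python
import math

# Effective nonpolarizable charge from the electronic-screening
# prescription in the abstract: q_eff = q / sqrt(eps_el).

def scaled_charge(bare_charge, eps_el):
    """Scale a bare ionic charge by 1/sqrt(eps_el)."""
    return bare_charge / math.sqrt(eps_el)
```

    For ε_el around 2, a typical value for organic liquids, a unit charge scales to roughly 0.7 e, which is why scaled-charge ionic-liquid models carry fractional charges.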

  16. Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity.

    PubMed

    Li, Harbin; McNulty, Steven G

    2007-10-01

    Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL estimates to the national scale could be developed. Specifically, we wanted to quantify CAL uncertainty under natural variability in 17 model parameters, and determine their relative contributions in predicting CAL. Results indicated that uncertainty in CAL came primarily from components of base cation weathering (BC(w); 49%) and acid neutralizing capacity (46%), whereas the most critical parameters were BC(w) base rate (62%), soil depth (20%), and soil temperature (11%). Thus, improvements in estimates of these factors are crucial to reducing uncertainty and successfully scaling up SMBE for national assessments of CAL.
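    The kind of uncertainty propagation described above can be sketched generically with Monte Carlo sampling. The mass-balance expression below is a deliberately simplified placeholder, not the authors' SMBE, and the parameter distributions are invented for illustration:

```python
import random
import statistics

# Generic Monte Carlo uncertainty propagation: sample each parameter
# from its (assumed) natural variability and inspect the spread of the
# predicted critical load.  The formula is a toy placeholder.

def toy_critical_load(bc_weathering, deposition, uptake):
    return bc_weathering + deposition - uptake  # placeholder mass balance

random.seed(0)
samples = [
    toy_critical_load(
        random.gauss(2.0, 0.5),   # base cation weathering (toy units)
        random.gauss(1.0, 0.1),   # deposition
        random.gauss(0.5, 0.1),   # uptake
    )
    for _ in range(10_000)
]
mean_cal = statistics.mean(samples)
sd_cal = statistics.stdev(samples)
```

    Comparing each parameter's contribution to `sd_cal` (e.g. by freezing the others) is the spirit of the paper's finding that weathering rate, soil depth, and soil temperature dominate the uncertainty.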

  17. Relationship of a lichen species diversity indicator to environmental factors across the coterminous United States

    Treesearch

    Susan Will-Wolf; Mark J. Ambrose; Randall S. Morin

    2011-01-01

    We have investigated relationships between one simple indicator of lichen species diversity and environmental variables in forests across the coterminous United States. We want to know whether this indicator can help quantify the influence that factors such as climate and air quality have on lichen biodiversity at large scales and whether it will be useful in...

  18. All Together Now: Measuring Staff Cohesion in Special Education Classrooms

    PubMed Central

    Kratz, Hilary E.; Locke, Jill; Piotrowski, Zinnia; Ouellette, Rachel R.; Xie, Ming; Stahmer, Aubyn C.; Mandell, David S.

    2015-01-01

    This study sought to validate a new measure, the Classroom Cohesion Survey (CCS), designed to examine the relationship between teachers and classroom assistants in autism support classrooms. Teachers, classroom assistants, and external observers showed good inter-rater agreement on the CCS and good internal consistency for all scales. Simple factor structures were found for both teacher- and classroom assistant–rated scales, with one-factor solutions for both scales. Paired t tests revealed that, on average, classroom assistants rated classroom cohesion higher than teachers did. The CCS may be an effective tool for measuring cohesion between classroom staff and may have an important impact on various clinical and implementation outcomes in school settings. PMID:26213443

  19. The use of modified scaling factors in the design of high-power, non-linear, transmitting rod-core antennas

    NASA Astrophysics Data System (ADS)

    Jordan, Jared Williams; Dvorak, Steven L.; Sternberg, Ben K.

    2010-10-01

    In this paper, we develop a technique for designing high-power, non-linear, transmitting rod-core antennas by using simple modified scale factors rather than running labor-intensive numerical models. By using modified scale factors, a designer can predict changes in magnetic moment, inductance, core series loss resistance, etc. We define modified scale factors as the case when all physical dimensions of the rod antenna are scaled by p, except for the cross-sectional area of the individual wires or strips that are used to construct the core. This allows one to make measurements on a scaled-down version of the rod antenna using the same core material that will be used in the final antenna design. The modified scale factors were derived from prolate spheroidal analytical expressions for a finite-length rod antenna and were verified with experimental results. The modified scaling factors can only be used if the magnetic flux densities within the two scaled cores are the same. With the magnetic flux density constant, the two scaled cores will operate with the same complex permeability, thus changing the non-linear problem to a quasi-linear problem. We also demonstrate that by holding the number of turns times the drive current constant, while changing the number of turns, the inductance and core series loss resistance change by the number of turns squared. Experimental measurements were made on rod cores made from varying diameters of black oxide, low carbon steel wires and different widths of Metglas foil. Furthermore, we demonstrate that the modified scale factors work even in the presence of eddy currents within the core material.
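    One of the concrete relations stated above translates directly into code: with the ampere-turns N·I held constant (so the core flux density, and hence the complex permeability, is unchanged), inductance and core series loss resistance both scale as N². A minimal sketch:

```python
# Turns-squared rescaling from the abstract: changing the number of
# turns while holding N*I fixed scales both the inductance and the
# core series loss resistance by (n_new / n_old) ** 2.

def rescale_winding(inductance, r_core, n_old, n_new):
    """Return (L', R_core') after changing turns with N*I held fixed."""
    ratio = (n_new / n_old) ** 2
    return inductance * ratio, r_core * ratio
```

    Doubling the turns, for example, quadruples both quantities, which is what lets measurements on a scaled-down core predict the full-size antenna.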

  20. On the consideration of scaling properties of extreme rainfall in Madrid (Spain) for developing a generalized intensity-duration-frequency equation and assessing probable maximum precipitation estimates

    NASA Astrophysics Data System (ADS)

    Casas-Castillo, M. Carmen; Rodríguez-Solà, Raúl; Navarro, Xavier; Russo, Beniamino; Lastra, Antonio; González, Paula; Redaño, Angel

    2018-01-01

    The fractal behavior of extreme rainfall intensities registered between 1940 and 2012 by the Retiro Observatory of Madrid (Spain) has been examined, and a simple scaling regime ranging from 25 min to 3 days of duration has been identified. Thus, an intensity-duration-frequency (IDF) master equation of the location has been constructed in terms of the simple scaling formulation. The scaling behavior of probable maximum precipitation (PMP) for durations between 5 min and 24 h has also been verified. For the statistical estimation of the PMP, an envelope curve of the frequency factor (k_m) based on a total of 10,194 station-years of annual maximum rainfall from 258 stations in Spain has been developed. This curve could be useful to estimate suitable values of PMP at any point of the Iberian Peninsula from basic statistical parameters (mean and standard deviation) of its rainfall series.
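    Under simple scaling, intensity quantiles at one duration map to any other duration through a single power law, which is what makes a one-equation IDF "master" formulation possible. A sketch of that mapping, with an illustrative exponent (not the Madrid estimate):

```python
# Simple-scaling mapping between durations: if rainfall intensity obeys
# simple scaling, then i(d) = i(d_ref) * (d / d_ref) ** beta, with a
# single negative exponent beta fitted from data (value below is
# illustrative only).

def scaled_intensity(i_ref, d_ref, d, beta=-0.6):
    """Rainfall intensity at duration d, scaled from a reference duration."""
    return i_ref * (d / d_ref) ** beta
```

    Longer durations thus give lower intensities, and one fitted curve at the reference duration generates the whole IDF family.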

  1. Fabrication of micron scale metallic structures on photo paper substrates by low temperature photolithography for device applications

    NASA Astrophysics Data System (ADS)

    Cooke, M. D.; Wood, D.

    2015-11-01

    Using commercial standard paper as a substrate reduces costs by approximately a factor of 100 relative to other, more expensive substrate materials (Shenton et al 2015 EMRS Spring Meeting; Zheng et al 2013 Nat. Sci. Rep. 3 1786). Discussed here is a novel process that allows photolithography and etching of simple metal films deposited on paper substrates, yet requires no additional facilities. This reduces feature sizes to the micron scale, compared with the tens of microns achieved by devices made using more conventional printing solutions. The technique has great potential for making cheap disposable devices with additional functionality, which could include flexibility and foldability, simple disposability, porosity and low weight requirements. The potential for commercial applications and scale up is also discussed.

  2. [Study on the related factors of suicidal ideation in college undergraduates].

    PubMed

    Gao, Hong-sheng; Qu, Cheng-yi; Miao, Mao-hua

    2003-09-01

    To evaluate psychosocial factors and patterns of suicidal ideation among undergraduates in Shanxi province, four thousand eight hundred and eighty-two undergraduates were investigated using multistage stratified random cluster sampling. Factors associated with suicidal ideation were analyzed with logistic regression and path analysis using scores on the Beck Scale for Suicide Ideation (BSSI), Suicide Attitude Questionnaire (QSA), Adolescent Self-Rate Life Events Check List (ASLEC), DSQ, Social Support Rating Scale, SCL-90, Simple Coping Modes Questionnaire and EPQ. A tendency toward psychological disorder was the major factor. Negative life events did not directly affect suicidal ideation, but personality affected it directly, and indirectly through coping and defensive responses. Personality played a stable, fundamental role, while life events were minor but "triggering" agents. A disposition toward mental disturbance appeared to be the principal factor related to suicidal ideation. These three factors acted in combination to produce suicidal ideation.

  3. Effect of Varying the 1-4 Intramolecular Scaling Factor in Atomistic Simulations of Long-Chain N-alkanes with the OPLS-AA Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Almeida, Valmor F; Ye, Xianggui; Cui, Shengting

    2013-01-01

    A comprehensive molecular dynamics simulation study of n-alkanes using the Optimized Potentials for Liquid Simulations-All Atoms (OPLS-AA) force field at ambient conditions has been performed. Our results indicate that while simulations with the OPLS-AA force field accurately predict the liquid-state mass density for n-alkanes with carbon number equal to or less than 10, for n-alkanes with carbon number equal to or exceeding 12, the OPLS-AA force field with the standard scaling factor for the 1-4 intramolecular van der Waals and electrostatic interactions gives rise to a quasi-crystalline structure. We found that accurate predictions of the liquid-state properties are obtained by successively reducing the aforementioned scaling factor for each increase of the carbon number beyond n-dodecane. To better understand the effects of reducing the scaling factor, we analyzed the variation of the torsion potential profile with the scaling factor, and the corresponding impact on the gauche-trans conformer distribution, heat of vaporization, melting point, and self-diffusion coefficient for n-dodecane. This relatively simple procedure thus allows for more accurate predictions of the thermo-physical properties of longer n-alkanes.

  4. An energetic scale for equilibrium H/D fractionation factors illuminates hydrogen bond free energies in proteins

    PubMed Central

    Cao, Zheng; Bowie, James U

    2014-01-01

    Equilibrium H/D fractionation factors have been extensively employed to qualitatively assess hydrogen bond strengths in protein structure, enzyme active sites, and DNA. It remains unclear how fractionation factors correlate with hydrogen bond free energies, however. Here we develop an empirical relationship between fractionation factors and free energy, allowing for the simple and quantitative measurement of hydrogen bond free energies. Applying our empirical relationship to prior fractionation factor studies in proteins, we find: [1] Within the folded state, backbone hydrogen bonds are only marginally stronger on average in α-helices compared to β-sheets by ∼0.2 kcal/mol. [2] Charge-stabilized hydrogen bonds are stronger than neutral hydrogen bonds by ∼2 kcal/mol on average, and can be as strong as –7 kcal/mol. [3] Changes in a few hydrogen bonds during an enzyme catalytic cycle can stabilize an intermediate state by –4.2 kcal/mol. [4] Backbone hydrogen bonds can make a large overall contribution to the energetics of conformational changes, possibly playing an important role in directing conformational changes. [5] Backbone hydrogen bonding becomes more uniform overall upon ligand binding, which may facilitate participation of the entire protein structure in events at the active site. Our energetic scale provides a simple method for further exploration of hydrogen bond free energies. PMID:24501090

  5. Sequence analysis reveals genomic factors affecting EST-SSR primer performance and polymorphism

    USDA-ARS?s Scientific Manuscript database

    Search for simple sequence repeat (SSR) motifs and design of flanking primers in expressed sequence tag (EST) sequences can be easily done at a large scale using bioinformatics programs. However, failed amplification and/or detection, along with lack of polymorphism, is often seen among randomly sel...

  6. A simple phenomenological model for grain clustering in turbulence

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2016-01-01

    We propose a simple model for density fluctuations of aerodynamic grains, embedded in a turbulent, gravitating gas disc. The model combines a calculation for the behaviour of a group of grains encountering a single turbulent eddy with a hierarchical approximation of the eddy statistics. This makes analytic predictions for a range of quantities including: distributions of grain densities, power spectra and correlation functions of fluctuations, and maximum grain densities reached. We predict how these scale as a function of grain drag time t_s, spatial scale, grain-to-gas mass ratio ρ̃, strength of turbulence α, and detailed disc properties. We test these against numerical simulations with various turbulence-driving mechanisms. The simulations agree well with the predictions, spanning t_s Ω ~ 10⁻⁴-10, ρ̃ ~ 0-3, α ~ 10⁻¹⁰-10⁻². Results from 'turbulent concentration' simulations and laboratory experiments are also predicted as a special case. Vortices on a wide range of scales disperse and concentrate grains hierarchically. For small grains this is most efficient in eddies with turnover time comparable to the stopping time, but fluctuations are also damped by local gas-grain drift. For large grains, shear and gravity lead to a much broader range of eddy scales driving fluctuations, with most power on the largest scales. The grain density distribution has a log-Poisson shape, with fluctuations for large grains up to factors ≳1000. We provide simple analytic expressions for the predictions, and discuss implications for planetesimal formation, grain growth, and the structure of turbulence.

  7. PHOBOS Overview

    NASA Astrophysics Data System (ADS)

    Hofman, David J.; Phobos Collaboration; Back, B. B.; Baker, M. D.; Ballintijn, M.; Barton, D. S.; Betts, R. R.; Bickley, A. A.; Bindel, R.; Budzanowski, A.; Busza, W.; Carroll, A.; Chai, Z.; Decowski, M. P.; García, E.; Gburek, T.; George, N.; Gulbrandsen, K.; Gushue, S.; Halliwell, C.; Hamblen, J.; Hauer, M.; Heintzelman, G. A.; Henderson, C.; Hollis, R. S.; Hołyński, R.; Holzman, B.; Iordanova, A.; Johnson, E.; Kane, J. L.; Khan, N.; Kulinich, P.; Kuo, C. M.; Lin, W. T.; Manly, S.; Mignerey, A. C.; Nouicer, R.; Olszewski, A.; Pak, R.; Park, I. C.; Reed, C.; Roland, C.; Roland, G.; Sagerer, J.; Seals, H.; Sedykh, I.; Smith, C. E.; Stankiewicz, M. A.; Steinberg, P.; Stephans, G. S. F.; Sukhanov, A.; Tonjes, M. B.; Trzupek, A.; Vale, C.; van Nieuwenhuizen, G. J.; Vaurynovich, S. S.; Verdier, R.; Veres, G. I.; Wenger, E.; Wolfs, F. L. H.; Wosiek, B.; Woźniak, K.; Wysłouch, B.

    2006-11-01

    A brief overview of the current results and conclusions from the PHOBOS experiment at the Relativistic Heavy Ion Collider (RHIC) is given. No evidence is found for non-monotonic behavior of observables measured by PHOBOS in the RHIC energy region. Convincing evidence is found that we have created a state of matter with high energy-density, that is nearly net-baryon free and is strongly interacting. The data are found to exhibit "simple" scaling behaviors, which include extended longitudinal scaling and scaling with the number of participating nucleons. The Au+Au collision charged particle data also exhibit a remarkable factorization of collision energy and geometry.

  8. Physics and biochemical engineering: 3

    NASA Astrophysics Data System (ADS)

    Fairbrother, Robert; Riddle, Wendy; Fairbrother, Neil

    2006-09-01

    Once an antibiotic has been produced on a large scale, as described in our preceding articles, it has to be extracted and purified. Filtration and centrifugation are the two main ways of doing this, and the design of industrial processing systems is governed by simple physics involving factors such as pressure, viscosity and rotational motion.

  9. Rhodium-catalyzed kinetic resolution of tertiary homoallyl alcohols via stereoselective carbon-carbon bond cleavage.

    PubMed

    Shintani, Ryo; Takatsu, Keishi; Hayashi, Tamio

    2008-03-20

    A nonenzymatic kinetic resolution of tertiary homoallyl alcohols has been developed through a rhodium-catalyzed retro-allylation reaction under simple conditions. Selectivity factors of up to 12 have been achieved by employing (R)-H8-binap as the ligand, and the reaction can be conducted on a preparative scale.

  10. Beyond Simple Participation: Providing a Reliable Informal Assessment Tool of Student Engagement for Teachers

    ERIC Educational Resources Information Center

    Wagetti, Rebecca J.; Johnston, Patricia; Jones, Leslie B.

    2017-01-01

    Educators have realized the importance of engaging students in learning. Teachers often see participatory behaviors like "hand raising" as evidence of students being engaged in an activity. These indications of engagement do not capture motivational factors behind true engagement. A research team developed a five item scale to easily…

  11. The simple procedure for the fluxgate magnetometers calibration

    NASA Astrophysics Data System (ADS)

    Marusenkov, Andriy

    2014-05-01

    Fluxgate magnetometers are widely used in geophysical investigations, including geomagnetic field monitoring at the global network of geomagnetic observatories and electromagnetic sounding of the Earth's crust conductivity. For these tasks the magnetometers have to be calibrated to an appropriate level of accuracy. As a particular case, this work considers how to satisfy the recent requirements on the scaling and orientation errors of 1-second INTERMAGNET magnetometers. The goal of the present study was to choose a simple and reliable calibration method for estimating the scale factors and angular errors of three-axis magnetometers in the field. There is a large number of scalar calibration methods that use free rotation of the sensor in the calibration field, followed by complicated data-processing procedures for numerically solving high-order equation sets. The chosen approach also exploits the Earth's magnetic field as a calibrating signal but, in contrast to other methods, the sensor is oriented in particular positions with respect to the total field vector instead of being freely rotated. This permits very simple and straightforward linear computation formulas and, as a result, more reliable estimates of the calibrated parameters. The scale factors are estimated by sequentially aligning each component of the sensor in two positions: parallel and anti-parallel to the Earth's magnetic field vector. The non-orthogonality angle between each pair of components is estimated after sequentially aligning the components at +/- 45 and +/- 135 degrees of arc with respect to the total field vector. Owing to this four-position approach, the estimates of the non-orthogonality angles are invariant to the zero offsets and to non-linearity of the components' transfer functions.
Experimental verification of the proposed method with a coil calibration system shows that the achieved accuracy (<0.04% for scale factors and 0.03 degrees of arc for angle errors) is sufficient for many applications, in particular for satisfying the INTERMAGNET requirements for 1-second instruments.
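
    The two-position scale-factor estimate described above reduces to elementary linear algebra. A minimal sketch, assuming an ideal linear axis response r = k*B + b (the function name and the numbers in the usage note are illustrative, not from the paper):

```python
def scale_factor_and_offset(r_parallel, r_antiparallel, total_field):
    """Estimate one axis's scale factor k and zero offset b from two readings,
    taken with the axis parallel and anti-parallel to the total field.
    Model: r = k*B + b, where B flips sign between the two positions."""
    k = (r_parallel - r_antiparallel) / (2.0 * total_field)
    b = (r_parallel + r_antiparallel) / 2.0
    return k, b
```

    For example, with a 50000 nT field, readings of 51030 and -50970 yield k = 1.02 and b = 30 nT. Because the offset cancels in the difference, the scale-factor estimate is itself invariant to the zero offset, analogous to the invariance property the abstract claims for the four-position angle estimates.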

  12. Measuring Filial Piety in the 21st Century: Development, Factor Structure, and Reliability of the 10-Item Contemporary Filial Piety Scale.

    PubMed

    Lum, Terry Y S; Yan, Elsie C W; Ho, Andy H Y; Shum, Michelle H Y; Wong, Gloria H Y; Lau, Mandy M Y; Wang, Junfang

    2016-11-01

    The experience and practice of filial piety have evolved in modern Chinese societies, and existing measures fail to capture these important changes. Based on a conceptual analysis of the current literature, 42 items were initially compiled to form a Contemporary Filial Piety Scale (CFPS), and 1,080 individuals from a representative sample in Hong Kong were surveyed. Principal component analysis generated a 16-item three-factor model: Pragmatic Obligations (Factor 1; 10 items), Compassionate Reverence (Factor 2; 4 items), and Family Continuity (Factor 3; 2 items). Confirmatory factor analysis revealed strong factor loadings for Factors 1 and 2, while removing Factor 3 and conceptually duplicated items increased the total variance explained from 58.02% to 60.09% and internal consistency from .84 to .88. A final 10-item two-factor model was adopted with a goodness of fit of 0.95. The CFPS-10 is a data-driven, simple, and efficient instrument with strong psychometric properties for assessing contemporary filial piety. © The Author(s) 2015.

  13. Frequency and zero-point vibrational energy scale factors for double-hybrid density functionals (and other selected methods): can anharmonic force fields be avoided?

    PubMed

    Kesharwani, Manoj K; Brauer, Brina; Martin, Jan M L

    2015-03-05

    We have obtained uniform frequency scaling factors λ(harm) (for harmonic frequencies), λ(fund) (for fundamentals), and λ(ZPVE) (for zero-point vibrational energies (ZPVEs)) for the Weigend-Ahlrichs and other selected basis sets for MP2, SCS-MP2, and a variety of DFT functionals including double hybrids. For selected levels of theory, we have also obtained scaling factors for true anharmonic fundamentals and ZPVEs obtained from quartic force fields. For harmonic frequencies, the double hybrids B2PLYP, B2GP-PLYP, and DSD-PBEP86 clearly yield the best performance at RMSD = 10-12 cm(-1) for def2-TZVP and larger basis sets, compared to 5 cm(-1) at the CCSD(T) basis set limit. For ZPVEs, again, the double hybrids are the best performers, reaching root-mean-square deviations (RMSDs) as low as 0.05 kcal/mol, but even mainstream functionals like B3LYP can get down to 0.10 kcal/mol. Explicitly anharmonic ZPVEs are only marginally more accurate. For fundamentals, however, simple uniform scaling is clearly inadequate.
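
    A uniform scaling factor of this kind is typically fitted by least squares: the λ minimizing Σ(λ·ω_i − ν_i)² over a reference set is λ = Σ ν_i ω_i / Σ ω_i². A minimal sketch (the data in the usage note are illustrative, not the paper's benchmark set):

```python
def uniform_scale_factor(harmonic_freqs, reference_freqs):
    """Least-squares uniform scaling factor lambda minimizing
    sum((lambda * omega_i - nu_i)**2) over paired frequency lists."""
    num = sum(w * v for w, v in zip(harmonic_freqs, reference_freqs))
    den = sum(w * w for w in harmonic_freqs)
    return num / den
```

    For example, harmonic frequencies [1000, 2000] cm^-1 paired with reference values [950, 1900] cm^-1 give λ = 0.95, which would then be applied uniformly to all computed frequencies at that level of theory.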

  14. Detour factors in water and plastic phantoms and their use for range and depth scaling in electron-beam dosimetry.

    PubMed

    Fernández-Varea, J M; Andreo, P; Tabata, T

    1996-07-01

    Average penetration depths and detour factors of 1-50 MeV electrons in water and plastic materials have been computed by means of analytical calculation, within the continuous-slowing-down approximation and including multiple scattering, and using the Monte Carlo codes ITS and PENELOPE. Results are compared to detour factors from alternative definitions previously proposed in the literature. Different procedures used in low-energy electron-beam dosimetry to convert ranges and depths measured in plastic phantoms into water-equivalent ranges and depths are analysed. A new simple and accurate scaling method, based on Monte Carlo-derived ratios of average electron penetration depths and thus incorporating the effect of multiple scattering, is presented. Data are given for most plastics used in electron-beam dosimetry together with a fit which extends the method to any other low-Z plastic material. A study of scaled depth-dose curves and mean energies as a function of depth for some plastics of common usage shows that the method improves the consistency and results of other scaling procedures in dosimetry with electron beams at therapeutic energies.
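
    A scaling method based on ratios of average electron penetration depths amounts to one multiplicative conversion per material. A minimal sketch (the function name and the numbers in the usage note are illustrative; real ratios would come from the paper's Monte Carlo data):

```python
def water_equivalent_depth(depth_in_plastic, mean_depth_water, mean_depth_plastic):
    """Convert a depth measured in a plastic phantom to a water-equivalent depth
    using the ratio of average electron penetration depths in the two media."""
    return depth_in_plastic * mean_depth_water / mean_depth_plastic
```

    For instance, with hypothetical average penetration depths of 4.8 cm in water and 5.0 cm in the plastic, a measured depth of 2.0 cm scales to 1.92 cm of water.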

  15. Preliminary psychometric testing of the Fox Simple Quality-of-Life Scale.

    PubMed

    Fox, Sherry

    2004-06-01

    Although quality of life is extensively defined as subjective and multidimensional with both affective and cognitive components, few instruments capture important dimensions of the construct, and few are both conceptually congruent and user friendly for the clinical setting. The aim of this study was to develop and test a measure that would be easy to use clinically and capture both cognitive and affective components of quality of life. Initial item sources for the Fox Simple Quality-of-Life Scale (FSQOLS) were literature-based. Thirty items were compiled for content validity assessment by a panel of expert healthcare clinicians from various disciplines, predominantly nursing. Five items were removed as a result of the review because they reflected negatively worded or redundant items. The 25-item scale was mailed to 177 people with lung, colon, and ovarian cancer in various stages. Cancer types were selected theoretically, based on similarity in prognosis, degree of symptom burden, and possible meaning and experience. Of the 145 participants, all provided complete data on the FSQOLS. Psychometric evaluation of the FSQOLS included item-total correlations, principal components analysis with varimax rotation revealing two factors explaining 50% variance, reliability estimation using alpha estimates, and item-factor correlations. The FSQOLS exhibited significant convergent validity with four popular quality-of-life instruments: the Ferrans and Powers Quality of Life Index, the Functional Assessment of Cancer Therapy Scale, the Short-Form-36 Health Survey, and the General Well-Being Scale. Content validity of the scale was explored and supported using qualitative interviews of 14 participants with lung, colon and ovarian cancer, who were a subgroup of the sample for the initial instrument testing.

  16. Effects of land use on lake nutrients: The importance of scale, hydrologic connectivity, and region

    USGS Publications Warehouse

    Soranno, Patricia A.; Cheruvelil, Kendra Spence; Wagner, Tyler; Webster, Katherine E.; Bremigan, Mary Tate

    2015-01-01

    Catchment land uses, particularly agriculture and urban uses, have long been recognized as major drivers of nutrient concentrations in surface waters. However, few simple models have been developed that relate the amount of catchment land use to downstream freshwater nutrients. Nor are existing models applicable to large numbers of freshwaters across broad spatial extents such as regions or continents. This research aims to increase model performance by exploring three factors that affect the relationship between land use and downstream nutrients in freshwater: the spatial extent for measuring land use, hydrologic connectivity, and the regional differences in both the amount of nutrients and effects of land use on them. We quantified the effects of these three factors that relate land use to lake total phosphorus (TP) and total nitrogen (TN) in 346 north temperate lakes in 7 regions in Michigan, USA. We used a linear mixed modeling framework to examine the importance of spatial extent, lake hydrologic class, and region on models with individual lake nutrients as the response variable, and individual land use types as the predictor variables. Our modeling approach was chosen to avoid problems of multi-collinearity among predictor variables and a lack of independence of lakes within regions, both of which are common problems in broad-scale analyses of freshwaters. We found that all three factors influence land use-lake nutrient relationships. The strongest evidence was for the effect of lake hydrologic connectivity, followed by region, and finally, the spatial extent of land use measurements. Incorporating these three factors into relatively simple models of land use effects on lake nutrients should help to improve predictions and understanding of land use-lake nutrient interactions at broad scales.

  17. Effects of Land Use on Lake Nutrients: The Importance of Scale, Hydrologic Connectivity, and Region

    PubMed Central

    Soranno, Patricia A.; Cheruvelil, Kendra Spence; Wagner, Tyler; Webster, Katherine E.; Bremigan, Mary Tate

    2015-01-01

    Catchment land uses, particularly agriculture and urban uses, have long been recognized as major drivers of nutrient concentrations in surface waters. However, few simple models have been developed that relate the amount of catchment land use to downstream freshwater nutrients. Nor are existing models applicable to large numbers of freshwaters across broad spatial extents such as regions or continents. This research aims to increase model performance by exploring three factors that affect the relationship between land use and downstream nutrients in freshwater: the spatial extent for measuring land use, hydrologic connectivity, and the regional differences in both the amount of nutrients and effects of land use on them. We quantified the effects of these three factors that relate land use to lake total phosphorus (TP) and total nitrogen (TN) in 346 north temperate lakes in 7 regions in Michigan, USA. We used a linear mixed modeling framework to examine the importance of spatial extent, lake hydrologic class, and region on models with individual lake nutrients as the response variable, and individual land use types as the predictor variables. Our modeling approach was chosen to avoid problems of multi-collinearity among predictor variables and a lack of independence of lakes within regions, both of which are common problems in broad-scale analyses of freshwaters. We found that all three factors influence land use-lake nutrient relationships. The strongest evidence was for the effect of lake hydrologic connectivity, followed by region, and finally, the spatial extent of land use measurements. Incorporating these three factors into relatively simple models of land use effects on lake nutrients should help to improve predictions and understanding of land use-lake nutrient interactions at broad scales. PMID:26267813

  18. A simple, analytical, axisymmetric microburst model for downdraft estimation

    NASA Technical Reports Server (NTRS)

    Vicroy, Dan D.

    1991-01-01

    A simple analytical microburst model was developed for use in estimating vertical winds from horizontal wind measurements. It is an axisymmetric, steady state model that uses shaping functions to satisfy the mass continuity equation and simulate boundary layer effects. The model is defined through four model variables: the radius and altitude of the maximum horizontal wind, a shaping function variable, and a scale factor. The model closely agrees with a high fidelity analytical model and measured data, particularly in the radial direction and at lower altitudes. At higher altitudes, the model tends to overestimate the wind magnitude relative to the measured data.

  19. Factors Influencing Outcomes after Ulnar Nerve Stability-Based Surgery for Cubital Tunnel Syndrome: A Prospective Cohort Study

    PubMed Central

    Kang, Ho Jung; Oh, Won Taek; Koh, Il Hyun; Kim, Sungmin

    2016-01-01

    Purpose Simple decompression of the ulnar nerve has outcomes similar to anterior transposition for cubital tunnel syndrome; however, there is no consensus on the proper technique for patients with an unstable ulnar nerve. We hypothesized that 1) simple decompression or anterior ulnar nerve transposition, depending on nerve stability, would be effective for cubital tunnel syndrome and that 2) there would be determining factors of the clinical outcome at two years. Materials and Methods Forty-one patients with cubital tunnel syndrome underwent simple decompression (n=30) or anterior transposition (n=11) according to an assessment of intra-operative ulnar nerve stability. Clinical outcome was assessed using grip and pinch strength, two-point discrimination, the mean of the disabilities of arm, shoulder, and hand (DASH) survey, and the modified Bishop Scale. Results Preoperatively, two patients were rated as mild, another 20 as moderate, and the remaining 19 as severe according to the Dellon Scale. At 2 years after operation, mean grip/pinch strength increased significantly from 19.4/3.2 kg to 31.1/4.1 kg, respectively. Two-point discrimination improved from 6.0 mm to 3.2 mm. The DASH score improved from 31.0 to 14.5. All but one patient scored good or excellent according to the modified Bishop Scale. Correlations were found between the DASH score at two years and age, pre-operative grip strength, and two-point discrimination. Conclusion An ulnar nerve stability-based approach to surgery selection for cubital tunnel syndrome was effective based on 2-year follow-up data. Older age, worse preoperative grip strength, and worse two-point discrimination were associated with worse outcomes at 2 years. PMID:26847300

  20. Exploring item and higher order factor structure with the Schmid-Leiman solution: syntax codes for SPSS and SAS.

    PubMed

    Wolff, Hans-Georg; Preising, Katja

    2005-02-01

    To ease the interpretation of higher order factor analysis, the direct relationships between variables and higher order factors may be calculated by the Schmid-Leiman solution (SLS; Schmid & Leiman, 1957). This simple transformation of higher order factor analysis orthogonalizes first-order and higher order factors and thereby allows the interpretation of the relative impact of factor levels on variables. The Schmid-Leiman solution may also be used to facilitate theorizing and scale development. The rationale for the procedure is presented, supplemented by syntax codes for SPSS and SAS, since the transformation is not part of most statistical programs. Syntax codes may also be downloaded from www.psychonomic.org/archive/.
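
    The transformation itself is a small matrix computation: direct loadings on the general factor are the product of the first-order pattern and the higher-order loadings, and the residualized group-factor loadings scale each first-order column by sqrt(1 − γ²). A NumPy sketch (illustrative, not the SPSS/SAS syntax the article distributes):

```python
import numpy as np

def schmid_leiman(first_order, higher_order):
    """Schmid-Leiman transformation for a single higher-order factor.
    first_order: (p, k) loadings of p variables on k first-order factors.
    higher_order: (k, 1) loadings of the first-order factors on the general factor.
    Returns (general, resid): direct general-factor loadings and residualized
    (orthogonalized) group-factor loadings."""
    general = first_order @ higher_order                            # (p, 1)
    resid = first_order * np.sqrt(1.0 - higher_order.ravel() ** 2)  # scale each column
    return general, resid
```

    A useful check: each variable's communality is preserved, since λ²γ² + λ²(1 − γ²) = λ² for every loading, so the transformation only redistributes variance between the general and group levels.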

  1. Simple scaling of cooperation in donor-recipient games.

    PubMed

    Berger, Ulrich

    2009-09-01

    We present a simple argument which proves a general version of the scaling phenomenon recently observed in donor-recipient games by Tanimoto [Tanimoto, J., 2009. A simple scaling of the effectiveness of supporting mutual cooperation in donor-recipient games by various reciprocity mechanisms. BioSystems 96, 29-34].

  2. The "ick" factor, anticipated regret, and willingness to become an organ donor.

    PubMed

    O'Carroll, Ronan E; Foster, Catherine; McGeechan, Grant; Sandford, Kayleigh; Ferguson, Eamonn

    2011-03-01

    This research tested the role of traditional rational-cognitive factors and emotional barriers to posthumous organ donation. An example of an emotional barrier is the "ick" factor, a basic disgust reaction to the idea of organ donation. We also tested the potential role of manipulating anticipated regret to increase intention to donate in people who are not yet registered organ donors. In three experiments involving 621 members of the United Kingdom general public, participants were invited to complete questionnaire measures tapping potential emotional affective attitude barriers such as the "ick" factor, the desire to retain bodily integrity after death, and medical mistrust. Registered posthumous organ donors were compared with nondonors. In Experiments 2 and 3, nondonors were then allocated to a simple anticipated-regret manipulation versus a control condition, and the impact on intention to donate was tested. The outcome measures were self-reported emotional barriers and intention to donate in the future. Traditional rational-cognitive factors such as knowledge, attitude, and subjective norm failed to distinguish donors from nondonors. However, in all three experiments, nondonors scored significantly higher than donors on the emotional "ick" factor and bodily integrity scales. A simple anticipated-regret manipulation led to a significant increase in intention to register as an organ donor in the future. Negative affective attitudes are thus crucial barriers to people registering as organ donors. A simple anticipated-regret manipulation has the potential to significantly increase organ donation rates. (c) 2011 APA, all rights reserved

  3. Measurement of ethical food choice motives.

    PubMed

    Lindeman, M; Väänänen, M

    2000-02-01

    The two studies describe the development of three complementary scales to the Food Choice Questionnaire developed by Steptoe, Pollard & Wardle (1995). The new items address various ethical food choice motives and were derived from previous studies on vegetarianism and ethical food choice. The items were factor analysed in Study 1 (N=281) and the factor solution was confirmed in Study 2 (N=125), in which simple validity criteria were also included. Furthermore, test-retest reliability was assessed with a separate sample of subjects (N=36). The results indicated that the three new scales, Ecological Welfare (including subscales for Animal Welfare and Environment Protection), Political Values and Religion, are reliable and valid instruments for a brief screening of ethical food choice reasons. Copyright 2000 Academic Press.

  4. The factor structure and screening utility of the Social Interaction Anxiety Scale.

    PubMed

    Rodebaugh, Thomas L; Woods, Carol M; Heimberg, Richard G; Liebowitz, Michael R; Schneier, Franklin R

    2006-06-01

    The widely used Social Interaction Anxiety Scale (SIAS; R. P. Mattick & J. C. Clarke, 1998) possesses favorable psychometric properties, but questions remain concerning its factor structure and item properties. Analyses included 445 people with social anxiety disorder and 1,689 undergraduates. Simple unifactorial models fit poorly, and models that accounted for differences due to item wording (i.e., reverse scoring) provided superior fit. It was further found that clients and undergraduates approached some items differently, and the SIAS may be somewhat overly conservative in selecting analogue participants from an undergraduate sample. Overall, this study provides support for the excellent properties of the SIAS's straightforwardly worded items, although questions remain regarding its reverse-scored items. Copyright 2006 APA, all rights reserved.

  5. Do people trust dentists? Development of the Dentist Trust Scale.

    PubMed

    Armfield, J M; Ketting, M; Chrisopoulos, S; Baker, S R

    2017-09-01

    This study aimed to adapt a measure of trust in physicians to trust in dentists and to assess the reliability and validity of the measure. Questionnaire data were collected from a simple random sample of 596 Australian adults. The 11-item General Trust in Physicians Scale was modified to apply to dentists. The Dentist Trust Scale (DTS) had good internal consistency (α = 0.92) and exploratory factor analysis revealed a single-factor solution. Lower DTS scores were associated with less trust in the dentist last visited, having previously changed dentists due to unhappiness with the care received, currently having dental pain, usual visiting frequency, dental avoidance, and with past experiences of discomfort, gagging, fainting, embarrassment and personal problems with the dentist. The majority of people appear to exhibit trust in dentists. The DTS shows promising reliability and validity evidence. © 2017 Australian Dental Association.

  6. A cooperation and competition based simple cell receptive field model and study of feed-forward linear and nonlinear contributions to orientation selectivity.

    PubMed

    Bhaumik, Basabi; Mathur, Mona

    2003-01-01

    We present a model for the development of orientation selectivity in layer IV simple cells. Receptive field (RF) development in the model is determined by diffusive cooperation and resource-limited competition guiding axonal growth and retraction in the geniculocortical pathway. The simulated cortical RFs resemble experimental RFs. The receptive field model is incorporated in a three-layer visual pathway model consisting of retina, LGN and cortex. We have studied the effect of activity-dependent synaptic scaling on the orientation tuning of cortical cells. The mean value of hwhh (half width at half the height of maximum response) in simulated cortical cells is 58 degrees when we consider only the linear excitatory contribution from the LGN. We observe a mean improvement of 22.8 degrees in tuning response due to non-linear spiking mechanisms that include the effects of threshold voltage and the synaptic scaling factor.

  7. Proportion of general factor variance in a hierarchical multiple-component measuring instrument: a note on a confidence interval estimation procedure.

    PubMed

    Raykov, Tenko; Zinbarg, Richard E

    2011-05-01

    A confidence interval construction procedure for the proportion of explained variance by a hierarchical, general factor in a multi-component measuring instrument is outlined. The method provides point and interval estimates for the proportion of total scale score variance that is accounted for by the general factor, which could be viewed as common to all components. The approach may also be used for testing composite (one-tailed) or simple hypotheses about this proportion, and is illustrated with a pair of examples. ©2010 The British Psychological Society.
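
    The point estimate underlying such a procedure is commonly the hierarchical omega coefficient: the squared sum of general-factor loadings divided by the total scale-score variance. A minimal sketch of the point estimate only (the article's interval construction is not reproduced; the numbers in the test are illustrative):

```python
def omega_hierarchical(general_loadings, total_score_variance):
    """Proportion of total scale-score variance explained by the general factor:
    (sum of general-factor loadings)^2 / Var(total score)."""
    return sum(general_loadings) ** 2 / total_score_variance
```

    With loadings [0.6, 0.5, 0.7] and a total score variance of 5.0, the proportion is 1.8² / 5.0 ≈ 0.65.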

  8. Scaled position-force tracking for wireless teleoperation of miniaturized surgical robotic system.

    PubMed

    Guo, Jing; Liu, Chao; Poignet, Philippe

    2014-01-01

    Miniaturized surgical robotic systems present a promising trend for reducing invasiveness during operations. However, the cables used for power and communication may affect their performance. In this paper we chose Zigbee wireless communication as a means to replace the communication cables of a miniaturized surgical robot. Nevertheless, the time delay caused by wireless communication presents a new challenge to the performance and stability of the teleoperation system. We propose a bilateral wireless teleoperation architecture that takes into consideration the effect of position-force scaling between operator and slave. Optimal position-force tracking performance is obtained, and the overall system is shown to be passive provided a simple condition on the scaling factors is satisfied. Simulation studies verify the efficiency of the proposed scaled wireless teleoperation scheme.

  9. Confirmatory Factor Analysis of the Malay Version of the Confusion, Hubbub and Order Scale (CHAOS-6) among Myocardial Infarction Survivors in a Malaysian Cardiac Healthcare Facility.

    PubMed

    Ganasegeran, Kurubaran; Selvaraj, Kamaraj; Rashid, Abdul

    2017-08-01

    The six-item Confusion, Hubbub and Order Scale (CHAOS-6) has been validated as a reliable tool to measure levels of household disorder. We aimed to investigate the goodness of fit and reliability of a new Malay version of the CHAOS-6. The original English version of the CHAOS-6 underwent forward-backward translation into the Malay language. The finalised Malay version was administered to 105 myocardial infarction survivors in a Malaysian cardiac health facility. We performed confirmatory factor analyses (CFAs) using structural equation modelling. A path diagram and fit statistics were yielded to determine the Malay version's validity. Composite reliability was tested to determine the scale's reliability. All 105 myocardial infarction survivors participated in the study. The CFA yielded a six-item, one-factor model with excellent fit statistics. Composite reliability for the single-factor CHAOS-6 was 0.65, confirming that the scale is reliable for Malay speakers. The Malay version of the CHAOS-6 was reliable and showed the best fit statistics for our study sample. We thus offer a simple, brief, validated, reliable and novel instrument to measure chaos, the Skala Kecelaruan, Keriuhan & Tertib Terubahsuai (CHAOS-6), for the Malaysian population.

  10. Lord-Wingersky Algorithm Version 2.0 for Hierarchical Item Factor Models with Applications in Test Scoring, Scale Alignment, and Model Fit Testing. CRESST Report 830

    ERIC Educational Resources Information Center

    Cai, Li

    2013-01-01

    Lord and Wingersky's (1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined…
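
    For dichotomous items, the original recursion builds the likelihood of each summed score at a fixed ability value by folding items in one at a time. A minimal sketch of the unidimensional dichotomous case (the multidimensional extension discussed in the report is not shown):

```python
def summed_score_likelihood(probs):
    """Lord-Wingersky recursion for dichotomous items.
    probs: P(correct | theta) for each item at a fixed theta.
    Returns a list L where L[s] = P(summed score = s | theta)."""
    like = [1.0]  # with zero items, score 0 has probability 1
    for p in probs:
        new = [0.0] * (len(like) + 1)
        for s, l in enumerate(like):
            new[s] += l * (1.0 - p)  # item answered incorrectly: score unchanged
            new[s + 1] += l * p      # item answered correctly: score + 1
        like = new
    return like
```

    For two items each with P(correct) = 0.5, the summed-score distribution is [0.25, 0.5, 0.25]; by construction the probabilities always sum to 1.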

  11. Grave prognosis on spontaneous intracerebral haemorrhage: GP on STAGE score.

    PubMed

    Poungvarin, Niphon; Suwanwela, Nijasri C; Venketasubramanian, Narayanaswamy; Wong, Lawrence K S; Navarro, Jose C; Bitanga, Ester; Yoon, Byung Woo; Chang, Hui M; Alam, Sardar M

    2006-11-01

    Spontaneous intracerebral haemorrhage (ICH) is more common in Asia than in western countries and has a high mortality rate. A simple prognostic score for predicting a grave prognosis of ICH is lacking. Our objective was to develop a simple and reliable score usable by most physicians. ICH patients from seven Asian countries were enrolled between May 2000 and April 2002 for a prospective study. Clinical features such as headache and vomiting, vascular risk factors, Glasgow coma scale (GCS), body temperature (BT), blood pressure on arrival, location and size of haematoma, intraventricular haemorrhage (IVH), hydrocephalus, need for surgical treatment, medical treatment, length of hospital stay and other complications were analyzed to determine the outcome using a modified Rankin scale (MRS). Grave prognosis (defined as MRS of 5-6) was judged on the discharge date. 995 patients (mean age 59.5 +/- 14.3 years) were analyzed after excluding 87 patients with incomplete data. 402 patients (40.4%) were in the grave prognosis group (MRS 5-6). Univariable and then multivariable analysis showed only four statistically significant predictors of a grave outcome of ICH: fever (BT > or = 37.8 degrees C), low GCS, large haematoma and IVH. The grave prognosis on spontaneous intracerebral haemorrhage (GP on STAGE) score was derived from these four factors using a multiple logistic model. A simple and pragmatic prognostic score for ICH outcome has been developed with high sensitivity (82%) and specificity (82%). Furthermore, it can be administered by most general practitioners. Validation in other populations is now required.

  12. A psychometric evaluation of the four-item version of the Control Attitudes Scale for patients with cardiac disease and their partners.

    PubMed

    Årestedt, Kristofer; Ågren, Susanna; Flemme, Inger; Moser, Debra K; Strömberg, Anna

    2015-08-01

    The four-item Control Attitudes Scale (CAS) was developed to measure control perceived by patients with cardiac disease and their family members, but extensive psychometric evaluation has not been performed. The aim was to translate, culturally adapt and psychometrically evaluate the CAS in a Swedish sample of implantable cardioverter defibrillator (ICD) recipients, heart failure (HF) patients and their partners. A sample (n=391) of ICD recipients, HF patients and partners was used. Descriptive statistics, item-total and inter-item correlations, exploratory factor analysis, ordinal regression modelling and Cronbach's alpha were used to validate the CAS. The findings from the factor analyses revealed that the CAS is a multidimensional scale including two factors, Control and Helplessness. The internal consistency was satisfactory for all scales (α=0.74-0.85), except the family version total scale (α=0.62). No differential item functioning was detected, which implies that the CAS can be used to make invariant comparisons between groups of different age and sex. The psychometric properties, together with the simple and short format of the CAS, make it a useful tool for measuring perceived control among patients with cardiac diseases and their family members. When using the CAS, subscale scores should be preferred. © The European Society of Cardiology 2014.

  13. [ETAP: A smoking scale for Primary Health Care].

    PubMed

    González Romero, Pilar María; Cuevas Fernández, Francisco Javier; Marcelino Rodríguez, Itahisa; Rodríguez Pérez, María Del Cristo; Cabrera de León, Antonio; Aguirre-Jaime, Armando

    2016-05-01

    To obtain a scale of tobacco exposure to address smoking cessation. Follow-up of a cohort. Scale validation. Primary Care Research Unit, Tenerife. A total of 6729 participants from the "CDC de Canarias" cohort. A scale was constructed under the assumption that the time of exposure to tobacco is the key factor expressing accumulated risk. Discriminant validity was tested on prevalent cases of acute myocardial infarction (AMI; n=171), and its best cut-off for preventive screening was obtained. Its predictive validity was tested with incident cases of AMI (n=46), comparing its predictive power with markers (age, sex) and classic risk factors for AMI (hypertension, diabetes, dyslipidaemia), including the pack-years index (PYI). The resulting scale was three times the number of years the participant had smoked, plus the years exposed to smoke at home and at work. The frequency of AMI increased with the values of the scale, and 20 years of exposure was the most appropriate cut-off for preventive action, as it provided adequate predictive values for incident AMI. The scale surpassed the PYI in predicting AMI, and competed with the known markers and risk factors. The proposed scale allows a valid measurement of exposure to smoking and provides a useful and simple approach that can help promote a willingness to change, as well as prevention. Its validity against other smoking-related problems remains to be demonstrated. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.
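
    A minimal sketch of the scale as described (three times the years smoked, plus years of passive exposure at home and at work, with the reported 20-year cut-off). The function names, and the assumption that the cut-off is applied inclusively (≥), are ours:

```python
def etap_score(years_smoked, years_home, years_work):
    # As described in the abstract: three times the years smoked plus the
    # years of passive exposure at home and at work.
    return 3 * years_smoked + years_home + years_work

CUTOFF = 20  # reported best cut-off (years of exposure) for preventive action

def flag_for_prevention(years_smoked, years_home, years_work):
    # Assumes the cut-off is inclusive; the abstract does not specify.
    return etap_score(years_smoked, years_home, years_work) >= CUTOFF

print(etap_score(10, 5, 2))          # 37
print(flag_for_prevention(0, 8, 4))  # False (score 12 < 20)
```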

  14. The serial use of child neurocognitive tests: development versus practice effects.

    PubMed

    Slade, Peter D; Townes, Brenda D; Rosenbaum, Gail; Martins, Isabel P; Luis, Henrique; Bernardo, Mario; Martin, Michael D; Derouen, Timothy A

    2008-12-01

    When serial neurocognitive assessments are performed, 2 main factors are of importance: test-retest reliability and practice effects. With children, however, there is a third, developmental factor, which occurs as a result of maturation. Child tests recognize this factor through the provision of age-corrected scaled scores. Thus, a ready-made method for estimating the relative contribution of developmental versus practice effects is the comparison of raw (developmental and practice) and scaled (practice only) scores. Data from a pool of 507 Portuguese children enrolled in a study of dental amalgams (T. A. DeRouen, B. G. Leroux, et al., 2002; T. A. DeRouen, M. D. Martin, et al., 2006) showed that practice effects over a 5-year period varied on 8 neurocognitive tests. Simple regression equations are provided for calculating individual retest scores from initial test scores. (c) 2008 APA, all rights reserved.

  15. Italian cross-cultural adaptation and validation of three different scales for the evaluation of shoulder pain and dysfunction after neck dissection: University of California - Los Angeles (UCLA) Shoulder Scale, Shoulder Pain and Disability Index (SPADI) and Simple Shoulder Test (SST).

    PubMed

    Marchese, C; Cristalli, G; Pichi, B; Manciocco, V; Mercante, G; Pellini, R; Marchesi, P; Sperduti, I; Ruscito, P; Spriano, G

    2012-02-01

    Shoulder syndrome after neck dissection is a well known entity, but its incidence and prognostic factors influencing recovery have not been clearly assessed due to the heterogeneity of possible evaluations. The University of California - Los Angeles (UCLA) Shoulder Scale, the Shoulder Pain and Disability Index (SPADI) and the Simple Shoulder Test (SST) are three English-language questionnaires commonly used to test shoulder impairment. An Italian version of these scales is not available. The aim of the present study was to translate, culturally adapt and validate an Italian version of the UCLA Shoulder Scale, SPADI and SST. Translation and cross-cultural adaptation of the SPADI, the UCLA shoulder scale and the SST were performed according to the international guidelines. Sixty-six patients treated with neck dissection for head and neck cancer were asked to complete these scales. Forty patients completed the same questionnaires a second time one week after the first to test the reproducibility of the Italian versions. All the English-speaking Italian patients (n = 11) were asked to complete both the English and the Italian versions of the three questionnaires to validate the scales. No major problems regarding the content or the language were found during the translation of the 3 questionnaires. For all three scales, Cronbach's α was > 0.89. The Pearson correlation coefficient was r > 0.91. With respect to validity, there was a significant correlation between the Italian and the English versions of all three scales. This study shows that the Italian versions of the UCLA Shoulder Scale, SPADI and SST are valid instruments for the evaluation of shoulder dysfunction after neck dissection in Italian patients.

  16. Disability: a model and measurement technique.

    PubMed Central

    Williams, R G; Johnston, M; Willis, L A; Bennett, A E

    1976-01-01

    Current methods of ranking or scoring disability tend to be arbitrary. A new method is put forward on the hypothesis that disability progresses in regular, cumulative patterns. A model of disability is defined and tested with the use of Guttman scale analysis. Its validity is indicated by data from a community survey and from postsurgical patients, and some factors involved in scale variation are identified. The model provides a simple measurement technique and has implications for the assessment of individual disadvantage, for the prediction of progress in recovery or deterioration, and for the evaluation of the outcome of treatment regimes. PMID:953379

  17. Evaluation of the reliability and validity for X16 balance testing scale for the elderly.

    PubMed

    Ju, Jingjuan; Jiang, Yu; Zhou, Peng; Li, Lin; Ye, Xiaolei; Wu, Hongmei; Shen, Bin; Zhang, Jialei; He, Xiaoding; Niu, Chunjin; Xia, Qinghua

    2018-05-10

    Balance performance is considered an indicator of functional status in the elderly; large-scale population screening and evaluation in the community context, followed by proper interventions, would be of great significance at the public health level. However, no suitable balance testing scale has been available for large-scale studies in the unique community context of urban China. A balance scale named the X16 balance testing scale was developed, composed of 3 domains and 16 items. The balance abilities of 1985 functionally independent and active community-dwelling elderly adults were tested using the X16 scale. The internal consistency, split-half reliability, content validity, construct validity and discriminant validity of the X16 balance testing scale were evaluated. Factor analysis was performed to identify an alternative factor structure. The eigenvalues of factors 1, 2, and 3 were 8.53, 1.79, and 1.21, respectively, and their cumulative contribution to the total variance reached 72.0%. These 3 factors mainly represented the domains of static balance, postural stability, and dynamic balance. The Cronbach alpha coefficient for the scale was 0.933. The Spearman correlation coefficients between items and their corresponding domains ranged from 0.538 to 0.964. The correlation coefficient between each item and its corresponding domain was higher than the coefficients between that item and the other domains. With increasing age, the scores for overall balance performance and for the static balance, postural stability, and dynamic balance domains declined gradually (P < 0.001), as did the proportion of the elderly with intact balance performance (P < 0.001). The reliability and validity of the X16 balance testing scale are both adequate and acceptable. Because it is simple and quick to administer, it is practical for repeated and routine use, especially in community settings and in large-scale screening.
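
    The reported cumulative contribution follows from the eigenvalues: for 16 standardized items the total variance is 16, so the three retained factors explain (8.53 + 1.79 + 1.21)/16 of it. A quick check:

```python
# Eigenvalues reported for the three retained factors of the 16-item scale.
eigenvalues = [8.53, 1.79, 1.21]
n_items = 16  # total variance of 16 standardized items

cumulative = sum(eigenvalues) / n_items
print(round(cumulative, 3))  # 0.721, matching the reported 72.0% up to rounding
```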

  18. Teacher Reporting Attitudes Scale (TRAS): confirmatory and exploratory factor analyses with a Malaysian sample.

    PubMed

    Choo, Wan Yuen; Walsh, Kerryann; Chinna, Karuthan; Tey, Nai Peng

    2013-01-01

    The Teacher Reporting Attitude Scale (TRAS) is a newly developed tool to assess teachers' attitudes toward reporting child abuse and neglect. This article reports on an investigation of the factor structure and psychometric properties of the short form Malay version of the TRAS. A self-report cross-sectional survey was conducted with 667 teachers in 14 randomly selected schools in Selangor state, Malaysia. Analyses were conducted in a 3-stage process using both confirmatory (stages 1 and 3) and exploratory factor analyses (stage 2) to test, modify, and confirm the underlying factor structure of the TRAS in a non-Western teacher sample. Confirmatory factor analysis did not support a 3-factor model previously reported in the original TRAS study. Exploratory factor analysis revealed an 8-item, 4-factor structure. Further confirmatory factor analysis demonstrated appropriateness of the 4-factor structure. Reliability estimates for the four factors (commitment, value, concern, and confidence) were moderate. The modified short form TRAS (Malay version) has potential to be used as a simple tool for relatively quick assessment of teachers' attitudes toward reporting child abuse and neglect. Cross-cultural differences in attitudes toward reporting may exist and the transferability of newly developed instruments to other populations should be evaluated.

  19. The Problem Behaviour Checklist: short scale to assess challenging behaviours

    PubMed Central

    Nagar, Jessica; Evans, Rosie; Oliver, Patricia; Bassett, Paul; Liedtka, Natalie; Tarabi, Aris

    2016-01-01

    Background Challenging behaviour, especially in intellectual disability, covers a wide range that is in need of further evaluation. Aims To develop a short but comprehensive instrument for all aspects of challenging behaviour. Method In the first part of a two-stage enquiry, a 28-item scale was constructed to examine the components of challenging behaviour. Following a simple factor analysis this was developed further to create a new short scale, the Problem Behaviour Checklist (PBCL). The scale was subsequently used in a randomised controlled trial and tested for interrater reliability. Scores were also compared with a standard scale, the Modified Overt Aggression Scale (MOAS). Results Seven identified factors – personal violence, violence against property, self-harm, sexually inappropriate, contrary, demanding and disappearing behaviour – were scored on a 5-point scale. A subsequent factor analysis with the second population showed demanding, violent and contrary behaviour to account for most of the variance. Interrater reliability using weighted kappa showed good agreement (0.91; 95% CI 0.83–0.99). Good agreement was also shown with scores on the MOAS, and a score of 1 on the PBCL showed high sensitivity (97%) and specificity (85%) for a threshold MOAS score of 4. Conclusions The PBCL appears to be a suitable and practical scale for assessing all aspects of challenging behaviour. Declaration of interest None. Copyright and usage © 2016 The Royal College of Psychiatrists. This is an open access article distributed under the terms of the Creative Commons Non-Commercial, No Derivatives (CC BY-NC-ND) licence. PMID:27703753

  20. Validation of the Weight Concerns Scale Applied to Brazilian University Students.

    PubMed

    Dias, Juliana Chioda Ribeiro; da Silva, Wanderson Roberto; Maroco, João; Campos, Juliana Alvares Duarte Bonini

    2015-06-01

    The aim of this study was to evaluate the validity and reliability of the Portuguese version of the Weight Concerns Scale (WCS) when applied to Brazilian university students. The scale was completed by 1084 university students from Brazilian public education institutions. A confirmatory factor analysis was conducted. The stability of the model in independent samples was assessed through multigroup analysis, and the invariance was estimated. Convergent, concurrent, divergent, and criterion validities as well as internal consistency were estimated. Results indicated that the one-factor model presented an adequate fit to the sample and values of convergent validity. The concurrent validity with the Body Shape Questionnaire and divergent validity with the Maslach Burnout Inventory for Students were adequate. Internal consistency was adequate, and the factorial structure was invariant in independent subsamples. The results present a simple and short instrument capable of precisely and accurately assessing concerns with weight among Brazilian university students. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Strengths use as a secret of happiness: Another dimension of visually impaired individuals' psychological state.

    PubMed

    Matsuguma, Shinichiro; Kawashima, Motoko; Negishi, Kazuno; Sano, Fumiya; Mimura, Masaru; Tsubota, Kazuo

    2018-01-01

    It is well recognized that visual impairments (VI) worsen individuals' mental condition. However, little is known about the positive aspects, including subjective happiness, positive emotions, and strengths. Therefore, the purpose of this study was to investigate the positive aspects of persons with VI, including their subjective happiness, positive emotions, and strengths use. These were measured using the Subjective Happiness Scale (SHS), the Scale of Positive and Negative Experience-Balance (SPANE-B), and the Strengths Use Scale (SUS). A cross-sectional analysis was utilized to examine personal information in a Tokyo sample (N = 44). Simple regression analysis found significant relationships between the SHS or SPANE-B and the SUS; in contrast, VI-related variables were not correlated with them. A multiple regression analysis confirmed that the SUS was a significant factor associated with both the SHS and the SPANE-B. Strengths use might be a protective factor against the negative effects of VI.

  2. Density scaling for multiplets

    NASA Astrophysics Data System (ADS)

    Nagy, Á.

    2011-02-01

    Generalized Kohn-Sham equations are presented for lowest-lying multiplets. The way of treating non-integer particle numbers is coupled with an earlier method of the author. The fundamental quantity of the theory is the subspace density. The Kohn-Sham equations are similar to the conventional Kohn-Sham equations. The difference is that the subspace density is used instead of the density and the Kohn-Sham potential is different for different subspaces. The exchange-correlation functional is studied using density scaling. It is shown that there exists a value of the scaling factor ζ for which the correlation energy disappears. Generalized OPM and Krieger-Li-Iafrate (KLI) methods incorporating correlation are presented. The ζKLI method, being as simple as the original KLI method, is proposed for multiplets.

  3. HOW GALACTIC ENVIRONMENT REGULATES STAR FORMATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meidt, Sharon E.

    2016-02-10

    In a new simple model I reconcile two contradictory views on the factors that determine the rate at which molecular clouds form stars—internal structure versus external, environmental influences—providing a unified picture for the regulation of star formation in galaxies. In the presence of external pressure, the pressure gradient set up within a self-gravitating turbulent (isothermal) cloud leads to a non-uniform density distribution. Thus the local environment of a cloud influences its internal structure. In the simple equilibrium model, the fraction of gas at high density in the cloud interior is determined simply by the cloud surface density, which is itself inherited from the pressure in the immediate surroundings. This idea is tested using measurements of the properties of local clouds, which are found to show remarkable agreement with the simple equilibrium model. The model also naturally predicts the star formation relation observed on cloud scales and at the same time provides a mapping between this relation and the closer-to-linear molecular star formation relation measured on larger scales in galaxies. The key is that pressure regulates not only the molecular content of the ISM but also the cloud surface density. I provide a straightforward prescription for the pressure regulation of star formation that can be directly implemented in numerical models. Predictions for the dense gas fraction and star formation efficiency measured on large-scales within galaxies are also presented, establishing the basis for a new picture of star formation regulated by galactic environment.

  4. Groundwater development stress: Global-scale indices compared to regional modeling

    USGS Publications Warehouse

    Alley, William; Clark, Brian R.; Ely, Matt; Faunt, Claudia

    2018-01-01

    The increased availability of global datasets and technologies such as global hydrologic models and the Gravity Recovery and Climate Experiment (GRACE) satellites have resulted in a growing number of global-scale assessments of water availability using simple indices of water stress. Developed initially for surface water, such indices are increasingly used to evaluate global groundwater resources. We compare indices of groundwater development stress for three major agricultural areas of the United States to information available from regional water budgets developed from detailed groundwater modeling. These comparisons illustrate the potential value of regional-scale analyses to supplement global hydrological models and GRACE analyses of groundwater depletion. Regional-scale analyses allow assessments of water stress that better account for scale effects, the dynamics of groundwater flow systems, the complexities of irrigated agricultural systems, and the laws, regulations, engineering, and socioeconomic factors that govern groundwater use. Strategic use of regional-scale models with global-scale analyses would greatly enhance knowledge of the global groundwater depletion problem.

  5. NGA-West 2 GMPE average site coefficients for use in earthquake-resistant design

    USGS Publications Warehouse

    Borcherdt, Roger D.

    2015-01-01

    Site coefficients corresponding to those in tables 11.4–1 and 11.4–2 of Minimum Design Loads for Buildings and Other Structures published by the American Society of Civil Engineers (Standard ASCE/SEI 7-10) are derived from four of the Next Generation Attenuation West2 (NGA-W2) Ground-Motion Prediction Equations (GMPEs). The resulting coefficients are compared with those derived by other researchers and those derived from the NGA-West1 database. The derivation of the NGA-W2 average site coefficients provides a simple procedure to update site coefficients with each update in the Maximum Considered Earthquake Response (MCER) maps. The simple procedure yields average site coefficients consistent with those derived for site-specific design purposes. The NGA-W2 GMPEs provide simple scale factors to reduce conservatism in current simplified design procedures.

  6. The evolution of the small x gluon TMD

    NASA Astrophysics Data System (ADS)

    Zhou, Jian

    2016-06-01

    We study the evolution of the small x gluon transverse momentum dependent (TMD) distribution in the dilute limit. The calculation has been carried out in the Ji-Ma-Yuan scheme using a simple quark target model. As expected, we find that the resulting small x gluon TMD simultaneously satisfies both the Collins-Soper (CS) evolution equation and the Balitsky-Fadin-Kuraev-Lipatov (BFKL) evolution equation. We thus confirmed the earlier finding that the high energy factorization (HEF) and the TMD factorization should be jointly employed to resum the different type large logarithms in a process where three relevant scales are well separated.

  7. Enhancement of orientation gradients during simple shear deformation by application of simple compression

    NASA Astrophysics Data System (ADS)

    Jahedi, Mohammad; Ardeljan, Milan; Beyerlein, Irene J.; Paydar, Mohammad Hossein; Knezevic, Marko

    2015-06-01

    We use a multi-scale, polycrystal plasticity micromechanics model to study the development of orientation gradients within crystals deforming by slip. At the largest scale, the model is a full-field crystal plasticity finite element model with explicit 3D grain structures created by DREAM.3D, and at the finest scale, at each integration point, slip is governed by a dislocation density based hardening law. For deformed polycrystals, the model predicts intra-granular misorientation distributions that follow well the scaling law seen experimentally by Hughes et al., Acta Mater. 45(1), 105-112 (1997), independent of strain level and deformation mode. We reveal that the application of a simple compression step prior to simple shearing significantly enhances the development of intra-granular misorientations compared to simple shearing alone for the same amount of total strain. We rationalize that the changes in crystallographic orientation and shape evolution when going from simple compression to simple shearing increase the local heterogeneity in slip, leading to the boost in intra-granular misorientation development. In addition, the analysis finds that simple compression introduces additional crystal orientations that are prone to developing intra-granular misorientations, which also help to increase intra-granular misorientations. Many metal working techniques for refining grain sizes involve a preliminary or concurrent application of compression with severe simple shearing. Our finding reveals that a pre-compression deformation step can, in fact, serve as another processing variable for improving the rate of grain refinement during the simple shearing of polycrystalline metals.

  8. Simple scale interpolator facilitates reading of graphs

    NASA Technical Reports Server (NTRS)

    Fetterman, D. E., Jr.

    1965-01-01

    Simple transparent overlay with interpolation scale facilitates accurate, rapid reading of graph coordinate points. This device can be used for enlarging drawings and locating points on perspective drawings.

  9. Confirmatory Factor Analysis of the Malay Version of the Confusion, Hubbub and Order Scale (CHAOS-6) among Myocardial Infarction Survivors in a Malaysian Cardiac Healthcare Facility

    PubMed Central

    Ganasegeran, Kurubaran; Selvaraj, Kamaraj; Rashid, Abdul

    2017-01-01

    Background The six item Confusion, Hubbub and Order Scale (CHAOS-6) has been validated as a reliable tool to measure levels of household disorder. We aimed to investigate the goodness of fit and reliability of a new Malay version of the CHAOS-6. Methods The original English version of the CHAOS-6 underwent forward-backward translation into the Malay language. The finalised Malay version was administered to 105 myocardial infarction survivors in a Malaysian cardiac health facility. We performed confirmatory factor analyses (CFAs) using structural equation modelling. A path diagram and fit statistics were yielded to determine the Malay version’s validity. Composite reliability was tested to determine the scale’s reliability. Results All 105 myocardial infarction survivors participated in the study. The CFA yielded a six-item, one-factor model with excellent fit statistics. Composite reliability for the single factor CHAOS-6 was 0.65, confirming that the scale is reliable for Malay speakers. Conclusion The Malay version of the CHAOS-6 was reliable and showed the best fit statistics for our study sample. We thus offer a simple, brief, validated, reliable and novel instrument to measure chaos, the Skala Kecelaruan, Keriuhan & Tertib Terubahsuai (CHAOS-6), for the Malaysian population. PMID:28951688

  10. The Swedish version of the multidimensional scale of perceived social support (MSPSS)--a psychometric evaluation study in women with hirsutism and nursing students.

    PubMed

    Ekbäck, Maria; Benzein, Eva; Lindberg, Magnus; Arestedt, Kristofer

    2013-10-10

    The Multidimensional Scale of Perceived Social Support (MSPSS) is a short instrument, developed to assess perceived social support. The original English version has been widely used. The original scale has demonstrated satisfactory psychometric properties in different settings, but no validated Swedish version has been available. The aim was therefore to translate, adapt and psychometrically evaluate the Multidimensional Scale of Perceived Social Support for use in a Swedish context. In total, 281 participants agreed to take part in the study: a main sample of 127 women with hirsutism and a reference sample of 154 nursing students. The MSPSS was translated and culturally adapted according to the rigorous official process approved by WHO. The psychometric evaluation included item analysis, evaluation of factor structure, known-group validity, internal consistency and reproducibility. The original three-factor structure was reproduced in the main sample of women with hirsutism. An equivalent factor structure was demonstrated in a cross-validation, based on the reference sample of nursing students. Known-group validity was supported and internal consistency was good for all scales (α = 0.91-0.95). The test-retest showed acceptable to very good reproducibility for the items (κw = 0.58-0.85) and the scales (ICC = 0.89-0.92; CCC = 0.89-0.92). The Swedish version of the MSPSS is a multidimensional scale with sound psychometric properties in the present study sample. The simple and short format makes it a useful tool for measuring perceived social support.

  11. Unsupervised Discovery of Demixed, Low-Dimensional Neural Dynamics across Multiple Timescales through Tensor Component Analysis.

    PubMed

    Williams, Alex H; Kim, Tony Hyun; Wang, Forea; Vyas, Saurabh; Ryu, Stephen I; Shenoy, Krishna V; Schnitzer, Mark; Kolda, Tamara G; Ganguli, Surya

    2018-06-27

    Perceptions, thoughts, and actions unfold over millisecond timescales, while learned behaviors can require many days to mature. While recent experimental advances enable large-scale and long-term neural recordings with high temporal fidelity, it remains a formidable challenge to extract unbiased and interpretable descriptions of how rapid single-trial circuit dynamics change slowly over many trials to mediate learning. We demonstrate that a simple tensor component analysis (TCA) can meet this challenge by extracting three interconnected, low-dimensional descriptions of neural data: neuron factors, reflecting cell assemblies; temporal factors, reflecting rapid circuit dynamics mediating perceptions, thoughts, and actions within each trial; and trial factors, describing both long-term learning and trial-to-trial changes in cognitive state. We demonstrate the broad applicability of TCA by revealing insights into diverse datasets derived from artificial neural networks, large-scale calcium imaging of rodent prefrontal cortex during maze navigation, and multielectrode recordings of macaque motor cortex during brain machine interface learning. Copyright © 2018 Elsevier Inc. All rights reserved.
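
    The neuron/temporal/trial factorization can be illustrated with a toy CP-style example in NumPy; the data and variable names below are synthetic sketches, not the authors' implementation or datasets:

```python
import numpy as np

# CP/TCA writes a (neurons x times x trials) data tensor as a sum of components,
# each the outer product of a neuron factor, a temporal factor, and a trial factor.
rng = np.random.default_rng(0)
neuron = rng.random(5)    # cell-assembly loadings
time_f = rng.random(20)   # within-trial dynamics
trial = rng.random(8)     # slow across-trial modulation (e.g. learning)

# Build a rank-1 tensor from the three factors.
X = np.einsum('i,j,k->ijk', neuron, time_f, trial)

# For rank-1 data, the leading left singular vector of the neuron-mode
# unfolding recovers the neuron factor up to sign and scale.
u = np.linalg.svd(X.reshape(5, -1), full_matrices=False)[0][:, 0]
cos = abs(u @ neuron) / np.linalg.norm(neuron)
print(np.isclose(cos, 1.0))  # True: neuron factor recovered
```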

  12. An ultrasensitive strain sensor with a wide strain range based on graphene armour scales.

    PubMed

    Yang, Yi-Fan; Tao, Lu-Qi; Pang, Yu; Tian, He; Ju, Zhen-Yi; Wu, Xiao-Ming; Yang, Yi; Ren, Tian-Ling

    2018-06-12

    An ultrasensitive strain sensor with a wide strain range based on graphene armour scales is demonstrated in this paper. The sensor shows an ultra-high gauge factor (GF, up to 1054) and a wide strain range (ε = 26%), both of which present an advantage compared to most other flexible sensors. Moreover, the sensor is developed by a simple fabrication process. Due to the excellent performance, this strain sensor can meet the demands of subtle, large and complex human motion monitoring, which indicates its tremendous application potential in health monitoring, mechanical control, real-time motion monitoring and so on.

  13. A comprehensive surface-groundwater flow model

    NASA Astrophysics Data System (ADS)

    Arnold, Jeffrey G.; Allen, Peter M.; Bernhardt, Gilbert

    1993-02-01

    In this study, a simple groundwater flow and height model was added to an existing basin-scale surface water model. The linked model is: (1) watershed scale, allowing the basin to be subdivided; (2) designed to accept readily available inputs to allow general use over large regions; (3) continuous in time to allow simulation of land management, including such factors as climate and vegetation changes, pond and reservoir management, groundwater withdrawals, and stream and reservoir withdrawals. The model is described, and is validated on a 471 km 2 watershed near Waco, Texas. This linked model should provide a comprehensive tool for water resource managers in development and planning.

  14. Calculation of stochastic broadening due to low mn magnetic perturbation in the simple map in action-angle coordinates

    NASA Astrophysics Data System (ADS)

    Hinton, Courtney; Punjabi, Alkesh; Ali, Halima

    2009-11-01

    The simple map is the simplest map that has the topology of divertor tokamaks [A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140-145 (2007)]. Recently, the action-angle coordinates for the simple map were analytically calculated, and the simple map was constructed in action-angle coordinates [O. Kerwin, A. Punjabi, and H. Ali, Phys. Plasmas 15, 072504 (2008)]. Action-angle coordinates for the simple map cannot be inverted to real space coordinates (R,Z). Because there is a logarithmic singularity on the ideal separatrix, trajectories cannot cross the separatrix [op. cit.]. The simple map in action-angle coordinates is applied to calculate stochastic broadening due to low mn magnetic perturbation with mode numbers m=1 and n=±1. The width of the stochastic layer near the X-point scales as the 0.63 power of the amplitude δ of the low mn perturbation, the toroidal flux loss scales as the 1.16 power of δ, and the poloidal flux loss scales as the 1.26 power of δ. The scaling of the width deviates from the Boozer-Rechester scaling by 26% [A. Boozer, and A. Rechester, Phys. Fluids 21, 682 (1978)]. This work is supported by US Department of Energy grants DE-FG02-07ER54937, DE-FG02-01ER54624 and DE-FG02-04ER54793.
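
    The quoted scalings are power laws, so each exponent is a slope on log-log axes. A synthetic check: the amplitude range and prefactor below are hypothetical; only the reported exponent 0.63 comes from the abstract:

```python
import numpy as np

# Synthetic power law w = c * delta**p for the stochastic layer width,
# using the reported exponent p = 0.63 and an arbitrary prefactor c = 2.0.
delta = np.logspace(-4, -2, 10)   # hypothetical perturbation amplitudes
width = 2.0 * delta ** 0.63

# On log-log axes the exponent is the slope of a straight-line fit.
slope, intercept = np.polyfit(np.log(delta), np.log(width), 1)
print(round(slope, 2))  # 0.63
```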

  15. Connecting the molecular scale to the continuum scale for diffusion processes in smectite-rich porous media.

    PubMed

    Bourg, Ian C; Sposito, Garrison

    2010-03-15

    In this paper, we address the manner in which the continuum-scale diffusive properties of smectite-rich porous media arise from their molecular- and pore-scale features. Our starting point is a successful model of the continuum-scale apparent diffusion coefficient for water tracers and cations, which decomposes it as a sum of pore-scale terms describing diffusion in macropore and interlayer "compartments." We then apply molecular dynamics (MD) simulations to determine molecular-scale diffusion coefficients D_interlayer of water tracers and representative cations (Na+, Cs+, Sr2+) in Na-smectite interlayers. We find that a remarkably simple expression relates D_interlayer to the pore-scale parameter δ_nanopore ≤ 1, a constrictivity factor that accounts for the lower mobility in interlayers as compared to macropores: δ_nanopore = D_interlayer/D_0, where D_0 is the diffusion coefficient in bulk liquid water. Using this scaling expression, we can accurately predict the apparent diffusion coefficients of the tracers H2O, Na+, Sr2+, and Cs+ in compacted Na-smectite-rich materials.
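
    The scaling expression is a simple ratio; a minimal sketch, where the diffusion-coefficient values are illustrative placeholders rather than the paper's MD results:

```python
def constrictivity(d_interlayer, d_bulk):
    # delta_nanopore = D_interlayer / D_0: ratio of the interlayer diffusion
    # coefficient to the bulk-liquid-water value (expected to be <= 1).
    return d_interlayer / d_bulk

# Illustrative values (units of 1e-9 m^2/s), not the paper's results:
D0_water = 2.3        # approximate self-diffusion of bulk liquid water
D_interlayer = 1.0    # hypothetical interlayer value
print(round(constrictivity(D_interlayer, D0_water), 2))  # 0.43
```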

  16. The Theory and Practice of Bayesian Image Labeling

    DTIC Science & Technology

    1988-08-01

    simple. The intensity images are the results of many confounding factors - lighting, surface geometry, surface reflectance, and camera characteristics... related through the geometry of the surfaces in view. They are conditionally independent in the following sense: P(g, O | f) = P(g | f) P(O | f). (6.6a) ... different spatial resolution and projection geometry, or using DOG-type filters of various scales. We believe that the success of visual integration at

  17. Development of a patient reported outcome scale for fatigue in multiple sclerosis: The Neurological Fatigue Index (NFI-MS)

    PubMed Central

    2010-01-01

    Background Fatigue is a common and debilitating symptom in multiple sclerosis (MS). Best-practice guidelines suggest that health services should repeatedly assess fatigue in persons with MS. Several fatigue scales are available but concern has been expressed about their validity. The objective of this study was to examine the reliability and validity of a new scale for MS fatigue, the Neurological Fatigue Index (NFI-MS). Methods Qualitative analysis of 40 MS patient interviews had previously contributed to a coherent definition of fatigue, and a potential 52 item set representing the salient themes. A draft questionnaire was mailed out to 1223 people with MS, and the resulting data subjected to both factor and Rasch analysis. Results Data from 635 (51.9% response) respondents were split randomly into an 'evaluation' and 'validation' sample. Exploratory factor analysis identified four potential subscales: 'physical', 'cognitive', 'relief by diurnal sleep or rest' and 'abnormal nocturnal sleep and sleepiness'. Rasch analysis led to further item reduction and the generation of a Summary scale comprising items from the Physical and Cognitive subscales. The scales were shown to fit Rasch model expectations, across both the evaluation and validation samples. Conclusion A simple 10-item Summary scale, together with scales measuring the physical and cognitive components of fatigue, were validated for MS fatigue. PMID:20152031

  18. A simple way to synthesize large-scale Cu2O/Ag nanoflowers for ultrasensitive surface-enhanced Raman scattering detection

    NASA Astrophysics Data System (ADS)

    Zou, Junyan; Song, Weijia; Xie, Weiguang; Huang, Bo; Yang, Huidong; Luo, Zhi

    2018-03-01

    Here, we report a simple strategy to prepare highly sensitive surface-enhanced Raman spectroscopy (SERS) substrates based on Ag-decorated Cu2O nanoparticles by combining two common techniques, viz., thermal oxidation growth of Cu2O nanoparticles and magnetron sputtering fabrication of an Ag nanoparticle film. Methylene blue is used as the Raman analyte for the SERS study, and the substrates fabricated under optimized conditions have very good sensitivity (analytical enhancement factor ∼10^8), stability, and reproducibility. A linear dependence of the SERS intensities on the concentration was obtained with an R^2 value >0.9. These excellent properties indicate that the substrate has great potential in the detection of biological and chemical substances.

  19. Thai venous stroke prognostic score: TV-SPSS.

    PubMed

    Poungvarin, Niphon; Prayoonwiwat, Naraporn; Ratanakorn, Disya; Towanabut, Somchai; Tantirittisak, Tassanee; Suwanwela, Nijasri; Phanthumchinda, Kamman; Tiamkoa, Somsak; Chankrachang, Siwaporn; Nidhinandana, Samart; Laptikultham, Somsak; Limsoontarakul, Sansern; Udomphanthuruk, Suthipol

    2009-11-01

    The prognosis of cerebral venous sinus thrombosis (CVST) has never been studied in Thailand, and a simple prognostic score to predict poor prognosis of CVST has also never been reported. The authors aimed to establish a simple and reliable prognostic score for this condition. In this retrospective study, the medical records of CVST patients treated at eight neurological training centers in Thailand between April 1993 and September 2005 were reviewed. Clinical features including headache, seizure, stroke risk factors, Glasgow coma scale (GCS), blood pressure on arrival, papilledema, hemiparesis, meningeal irritation sign, location of occluded venous sinuses, hemorrhagic infarction, cerebrospinal fluid opening pressure, treatment options, length of stay, and other complications were analyzed to determine the outcome using the modified Rankin scale (mRS). Poor prognosis (defined as mRS of 3-6) was determined on the discharge date. One hundred ninety-four patients' records, 127 females (65.5%) and mean age of 36.6 +/- 14.4 years, were analyzed. Fifty-one patients (26.3%) were in the poor outcome group (mRS 3-6). Overall mortality was 8.4%. Univariate and then multivariate analysis using SPSS version 11.5 revealed only four statistically significant predictors influencing the outcome of CVST: underlying malignancy, low GCS, presence of hemorrhagic infarction (for poor outcome), and involvement of the lateral sinus (for good outcome). The Thai venous stroke prognostic score (TV-SPSS) was derived from these four factors using a multiple logistic model. A simple and pragmatic prognostic score for CVST outcome has been developed with high sensitivity (93%) but low specificity (33%). The next study should focus on validation of this score in other, prospective populations.
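    The abstract names the four predictors but not the fitted model, so the sketch below only illustrates the general shape of a multiple logistic score built from them; every coefficient is a hypothetical placeholder, not a TV-SPSS value:

```python
import math

# Hypothetical four-predictor logistic score in the spirit of TV-SPSS.
# The predictors come from the abstract; the weights are invented.
WEIGHTS = {
    "malignancy": 1.2,
    "low_gcs": 1.0,
    "hemorrhagic_infarction": 0.8,
    "lateral_sinus": -0.9,  # associated with good outcome, hence negative
}
INTERCEPT = -2.0

def poor_outcome_probability(patient):
    """Logistic model: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    z = INTERCEPT + sum(WEIGHTS[k] * patient.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

p = poor_outcome_probability({"malignancy": 1, "low_gcs": 1})
```

Each binary predictor shifts the log-odds by its weight, so the probability of a poor outcome grows monotonically as risk factors accumulate, which is the "linear growth" behavior such additive scores exhibit on the logit scale.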

  20. Calibration of Gimbaled Platforms: The Solar Dynamics Observatory High Gain Antennas

    NASA Technical Reports Server (NTRS)

    Hashmall, Joseph A.

    2006-01-01

    Simple parameterization of gimbaled platform pointing produces a complete set of 13 calibration parameters: 9 misalignment angles, 2 scale factors, and 2 biases. By modifying the parameter representation, redundancy can be eliminated and a minimum set of 9 independent parameters defined, consisting of 5 misalignment angles, 2 scale factors, and 2 biases. Of these, only 4 misalignment angles and 2 biases are significant for the Solar Dynamics Observatory (SDO) High Gain Antennas (HGAs). An algorithm to determine these parameters after launch has been developed and tested with simulated SDO data. The algorithm consists of a direct minimization of the root-sum-square of the differences between expected power and measured power. The results show that sufficient parameter accuracy can be attained even when time-dependent thermal distortions are present, if measurements from a pattern of intentional offset pointing positions are included.
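    The direct-minimization idea can be sketched in a toy one-parameter form. The Gaussian antenna pattern and all numbers below are assumptions for illustration; the actual SDO algorithm fits 4 misalignment angles and 2 biases against its own antenna model:

```python
import numpy as np

# Toy calibration: find the gimbal bias that minimizes the root-sum-square
# difference between measured and modeled antenna power.
def model_power(angles, bias):
    # Assumed Gaussian antenna pattern centered at the (biased) boresight.
    return np.exp(-((angles - bias) ** 2) / 2.0)

true_bias = 0.15                      # hypothetical bias to recover
angles = np.linspace(-2.0, 2.0, 81)   # offset pointing positions
measured = model_power(angles, true_bias)

candidates = np.linspace(-1.0, 1.0, 2001)
rss = [np.sqrt(np.sum((measured - model_power(angles, b)) ** 2))
       for b in candidates]
best_bias = candidates[int(np.argmin(rss))]
```

A grid search stands in here for a proper multidimensional minimizer; the point is that sweeping intentional offsets makes the power-vs-pointing curve observable, which is what makes the bias identifiable.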

  1. Scaling images using their background ratio. An application in statistical comparisons of images.

    PubMed

    Kalemis, A; Binnie, D; Bailey, D L; Flower, M A; Ott, R J

    2003-06-07

    Comparison of two medical images often requires image scaling as a pre-processing step. This is usually done with the scaling-to-the-mean or scaling-to-the-maximum techniques, which under certain circumstances may introduce significant bias in quantitative applications. In this paper, we present a simple scaling method which assumes only that the most predominant values in the corresponding images belong to their background structure. The ratio of the two images to be compared is calculated and its frequency histogram is plotted. The scaling factor is given by the position of the peak in this histogram which belongs to the background structure. The method was tested against the traditional scaling-to-the-mean technique on simulated planar gamma-camera images which were compared using pixelwise statistical parametric tests. Both sensitivity and specificity for each condition were measured over a range of different contrasts and sizes of inhomogeneity for the two scaling techniques. The new method was found to preserve sensitivity in all cases, while the traditional technique resulted in significant degradation of sensitivity in certain cases.
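    A minimal sketch of the background-ratio method on synthetic images; the array sizes, true scale factor, and noise level are invented for illustration:

```python
import numpy as np

# Two synthetic images: b is a globally scaled copy of a, plus a small
# structure present only in b. The method assumes most pixels belong to
# the background, so the peak of the ratio histogram recovers the scale.
rng = np.random.default_rng(0)
img_a = np.full((64, 64), 10.0)
img_a[20:30, 20:30] = 40.0              # structure present in both images
img_b = 2.5 * img_a                     # true scale factor to recover
img_b[40:45, 40:45] += 30.0             # structure present only in image b
img_b += rng.normal(0.0, 0.01, img_b.shape)  # slight measurement noise

ratio = img_b / img_a
counts, edges = np.histogram(ratio, bins=200)
peak = int(np.argmax(counts))
scale_factor = 0.5 * (edges[peak] + edges[peak + 1])  # peak bin center
```

Because the extra structure occupies few pixels, it contributes only a minor secondary bump in the histogram, and the dominant background peak gives the scaling factor without being biased by the "hot" regions, which is precisely the failure mode of scaling to the mean.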

  2. Cross-scale morphology

    USGS Publications Warehouse

    Allen, Craig R.; Holling, Crawford S.; Garmestani, Ahjond S.; El-Shaarawi, Abdel H.; Piegorsch, Walter W.

    2013-01-01

    The scaling of physical, biological, ecological and social phenomena is a major focus of efforts to develop simple representations of complex systems. Much of the attention has been on discovering universal scaling laws that emerge from simple physical and geometric processes. However, there are regular patterns of departures both from those scaling laws and from continuous distributions of attributes of systems. Those departures often demonstrate the development of self-organized interactions between living systems and physical processes over narrower ranges of scale.

  3. Accelerating universe with time variation of G and Λ

    NASA Astrophysics Data System (ADS)

    Darabi, F.

    2012-03-01

    We study a gravitational model in which scale transformations play the key role in obtaining dynamical G and Λ. We take a non-scale-invariant gravitational action with a cosmological constant and a gravitational coupling constant. Then, by a scale transformation through a dilaton field, we obtain a new action containing cosmological and gravitational coupling terms which are dynamically dependent on the dilaton field with a Higgs-type potential. The vacuum expectation value of this dilaton field, through spontaneous symmetry breaking on the basis of the anthropic principle, determines the time variations of G and Λ. The relevance of these time variations to the current acceleration of the universe, the coincidence problem, Mach's cosmological coincidence, and those problems of standard cosmology addressed by inflationary models is discussed. The current acceleration of the universe is shown to be a result of a phase transition from the radiation- toward the matter-dominated era. No real coincidence problem between matter and vacuum energy densities exists in this model, and this apparent coincidence together with Mach's cosmological coincidence are shown to be simple consequences of a new kind of scale-factor dependence of the energy-momentum density, ρ ∼ a^-4. This model also provides the possibility of a super-fast expansion of the scale factor in the very early universe by introducing exotic matter such as cosmic strings.

  4. Evolution of cooperation on complex networks with synergistic and discounted group interactions

    NASA Astrophysics Data System (ADS)

    Zhou, Lei; Li, Aming; Wang, Long

    2015-06-01

    In the real world individuals often engage in group interactions, and their payoffs are determined by many factors, including the typical nonlinear interactions, i.e., synergy and discounting. The previous literature assumes that individual payoffs are either synergistically enhanced or discounted with additional cooperators. Such settings ignore the interplay of these two factors, which is in sharp contrast with the fact that they ubiquitously coexist. Here we investigate how the coexistence and periodic switching of synergistic and discounted group interactions affect the evolution of cooperation on various complex networks. We show that scale-free networks facilitate the emergence of cooperation in terms of fixation probability for group interactions. With nonlinear interactions the heterogeneity of the degree acts as a double-edged sword: below the neutral drift it is best for cooperation, while above the neutral drift it instead provides the least opportunity for cooperators to be fixed. The advantages of the heterogeneity fade as interactive attributes switch between synergy and discounting, which suggests that the heterogeneity of population structures cannot favor cooperators in group interactions even with simple nonlinear interactions. Nonetheless, scale-free networks always guarantee cooperators the fastest rate of fixation. Our work implies that even very simple nonlinear group interactions can greatly shape the fixation probability and fixation time of cooperators in structured populations described by complex networks.

  5. A simple model for prediction postpartum PTSD in high-risk pregnancies.

    PubMed

    Shlomi Polachek, Inbal; Dulitzky, Mordechai; Margolis-Dorfman, Lilia; Simchen, Michal J

    2016-06-01

    This study aimed to examine the prevalence and possible antepartum risk factors of complete and partial post-traumatic stress disorder (PTSD) among women with complicated pregnancies and to define a predictive model for postpartum PTSD in this population. Women attending the high-risk pregnancy outpatient clinics at Sheba Medical Center completed the Edinburgh Postnatal Depression Scale (EPDS) and a questionnaire regarding demographic variables, history of psychological and psychiatric treatment, previous trauma, previous childbirth, current pregnancy medical and emotional complications, fears from childbirth, and expected pain. One month after delivery, women were requested to repeat the EPDS and complete the Post-traumatic Stress Diagnostic Scale (PDS) via telephone interview. The prevalence rates of postpartum PTSD (9.9 %) and partial PTSD (11.9 %) were relatively high. PTSD and partial PTSD were associated with sadness or anxiety during past pregnancy or childbirth, previous very difficult birth experiences, preference for cesarean section in future childbirth, emotional crises during pregnancy, increased fear of childbirth, higher expected intensity of pain, and depression during pregnancy. We created a prediction model for postpartum PTSD which shows a linear growth in the probability for developing postpartum PTSD when summing these seven antenatal risk factors. Postpartum PTSD is extremely prevalent after complicated pregnancies. A simple questionnaire may aid in identifying at-risk women before childbirth. This presents a potential for preventing or minimizing postpartum PTSD in this population.

  6. Purification and Autoactivation Method for Recombinant Coagulation Factor VII.

    PubMed

    Granovski, Vladimir; Freitas, Marcela C C; Abreu-Neto, Mario Soares; Covas, Dimas T

    2018-01-01

    Recombinant coagulation factor VII is a very important and complex protein employed for the treatment of hemophilia patients (hemophilia A/B) who develop inhibitor antibodies to conventional treatments (FVIII and FIX). rFVII is a glycosylated molecule and circulates in plasma as a zymogen of 50 kDa. When activated, the molecule is cleaved to 20-30 kDa and has a half-life of about 3 h, so it must be processed quickly and efficiently through to freeze-drying. Here, we describe a very simple and fast purification sequence for rFVII using affinity FVII Select resin and a dialysis system that can be easily scaled up.

  7. Methodological Issues in Questionnaire Design.

    PubMed

    Song, Youngshin; Son, Youn Jung; Oh, Doonam

    2015-06-01

    The process of designing a questionnaire is complicated. Many questionnaires on nursing phenomena have been developed and used by nursing researchers. The purpose of this paper was to discuss questionnaire design and the factors that should be considered when using existing scales. Methodological issues were discussed, such as factors in the design of questions, steps in developing questionnaires, wording and formatting methods for items, and administration methods. How to use existing scales, how to facilitate cultural adaptation, and how to prevent socially desirable responding were discussed, and the triangulation method in questionnaire development was introduced. Recommended steps for designing questions include appropriately operationalizing key concepts for the target population, clearly formatting response options, generating items and confirming final items through face or content validity, sufficiently piloting the questionnaire using item analysis, demonstrating reliability and validity, finalizing the scale, and training the administrator. Psychometric properties and cultural equivalence should be evaluated prior to administration when using an existing questionnaire and performing cultural adaptation. In the context of well-defined nursing phenomena, logical and systematic methods will contribute to the development of simple and precise questionnaires.

  8. Measuring epistemic curiosity and its diversive and specific components.

    PubMed

    Litman, Jordan A; Spielberger, Charles D

    2003-02-01

    A questionnaire constructed to assess epistemic curiosity (EC) and perceptual curiosity (PC) was administered to 739 undergraduates (546 women, 193 men) ranging in age from 18 to 65. The study participants also responded to the trait anxiety, anger, depression, and curiosity scales of the State-Trait Personality Inventory (STPI; Spielberger et al., 1979) and selected subscales of the Sensation Seeking (SSS; Zuckerman, Kolin, Price, & Zoob, 1964) and Novelty Experiencing (NES; Pearson, 1970) scales. Factor analyses of the curiosity items with oblique rotation identified EC and PC factors with clear simple structure. Subsequent analyses of the EC items provided the basis for developing an EC scale, with Diversive and Specific Curiosity subscales. Moderately high correlations of the EC scale and subscales with other measures of curiosity provided strong evidence of convergent validity. Divergent validity was demonstrated by minimal correlations with trait anxiety and the sensation-seeking measures, and essentially zero correlations with the STPI trait anger and depression scales. Male participants had significantly higher scores on the EC scale and the NES External Cognition subscale (effect sizes of r = .16 and .21, respectively), indicating that they were more interested than female participants in solving problems and discovering how things work. Male participants also scored significantly higher than female participants on the SSS Thrill-and-Adventure and NES External Sensation subscales (r = .14 and .22, respectively), suggesting that they were more likely to engage in sensation-seeking activities.

  9. Complexity-aware simple modeling.

    PubMed

    Gómez-Schiavon, Mariana; El-Samad, Hana

    2018-02-26

    Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models as well as their assumption of modularity and insulation make them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach-complexity-aware simple modeling-that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. Scaling of energy absorbing composite plates

    NASA Technical Reports Server (NTRS)

    Jackson, Karen; Morton, John; Traffanstedt, Catherine; Boitnott, Richard

    1992-01-01

    The energy absorption response and crushing characteristics of geometrically scaled graphite-Kevlar epoxy composite plates were investigated. Three different trigger mechanisms including chamfer, notch, and steeple geometries were incorporated into the plate specimens to initiate crushing. Sustained crushing was achieved with a simple test fixture which provided lateral support to prevent global buckling. Values of specific sustained crushing stress (SSCS) were obtained which were comparable to values reported for tube specimens from previously published data. Two sizes of hybrid plates were fabricated; a baseline or model plate, and a full-scale plate with in-plane dimensions scaled by a factor of two. The thickness dimension of the full-scale plates was increased using two different techniques; the ply-level method in which each ply orientation in the baseline laminate stacking sequence is doubled, and the sublaminate technique in which the baseline laminate stacking sequence is repeated as a group. Results indicated that the SSCS is independent of trigger mechanism geometry. However, a reduction in the SSCS of 10-25 percent was observed for the full-scale plates as compared with the baseline specimens, indicating a scaling effect in the crushing response.

  12. Optical zero-differential pressure switch and its evaluation in a multiple pressure measuring system

    NASA Technical Reports Server (NTRS)

    Powell, J. A.

    1977-01-01

    The design of a clamped-diaphragm pressure switch is described in which diaphragm motion is detected by a simple fiber-optic displacement sensor. The switch was evaluated in a pressure measurement system where it detected the zero crossing of the differential pressure between a static test pressure and a tank pressure that was periodically ramped from near zero to full-scale gage pressure. With a ramping frequency of 1 hertz and a full-scale tank pressure of 69 N/sq cm gage (100 psig), the switch delay was as long as 2 milliseconds. Pressure measurement accuracies were 0.25 to 0.75 percent of full scale. Factors affecting switch performance are also discussed.

  13. Implementing the DC Mode in Cosmological Simulations with Supercomoving Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gnedin, Nickolay Y; Kravtsov, Andrey V; Rudd, Douglas H

    2011-06-02

    As emphasized by previous studies, proper treatment of the density fluctuation on the fundamental scale of a cosmological simulation volume - the 'DC mode' - is critical for accurate modeling of spatial correlations on scales ≳10% of the simulation box size. We provide further illustration of the effects of the DC mode on the abundance of halos in small boxes and show that it is straightforward to incorporate this mode in cosmological codes that use the 'supercomoving' variables. The equations governing the evolution of dark matter and baryons recast with these variables are particularly simple and include the expansion factor, and hence the effect of the DC mode, explicitly only in the Poisson equation.

  14. Finite volume solution of the compressible boundary-layer equations

    NASA Technical Reports Server (NTRS)

    Loyd, B.; Murman, E. M.

    1986-01-01

    A box-type finite volume discretization is applied to the integral form of the compressible boundary-layer equations. Boundary-layer scaling is introduced through the grid construction: streamwise grid lines follow eta = y/h = const., where y is the normal coordinate and h(x) is a scale factor proportional to the boundary-layer thickness. With this grid, similarity can be applied explicitly to calculate initial conditions. The finite volume method preserves the physical transparency of the integral equations in the discrete approximation. The resulting scheme is accurate, efficient, and conceptually simple. Computations for similar and non-similar flows show excellent agreement with tabulated results, solutions computed with Keller's Box scheme, and experimental data.

  15. The PHOBOS perspective on discoveries at RHIC

    NASA Astrophysics Data System (ADS)

    Back, B. B.; Baker, M. D.; Ballintijn, M.; Barton, D. S.; Becker, B.; Betts, R. R.; Bickley, A. A.; Bindel, R.; Budzanowski, A.; Busza, W.; Carroll, A.; Chai, Z.; Decowski, M. P.; García, E.; Gburek, T.; George, N. K.; Gulbrandsen, K.; Gushue, S.; Halliwell, C.; Hamblen, J.; Harrington, A. S.; Hauer, M.; Heintzelman, G. A.; Henderson, C.; Hofman, D. J.; Hollis, R. S.; Hołyński, R.; Holzman, B.; Iordanova, A.; Johnson, E.; Kane, J. L.; Katzy, J.; Khan, N.; Kucewicz, W.; Kulinich, P.; Kuo, C. M.; Lee, J. W.; Lin, W. T.; Manly, S.; McLeod, D.; Mignerey, A. C.; Nouicer, R.; Olszewski, A.; Pak, R.; Park, I. C.; Pernegger, H.; Reed, C.; Remsberg, L. P.; Reuter, M.; Roland, C.; Roland, G.; Rosenberg, L.; Sagerer, J.; Sarin, P.; Sawicki, P.; Seals, H.; Sedykh, I.; Skulski, W.; Smith, C. E.; Stankiewicz, M. A.; Steinberg, P.; Stephans, G. S. F.; Sukhanov, A.; Tang, J.-L.; Tonjes, M. B.; Trzupek, A.; Vale, C. M.; van Nieuwenhuizen, G. J.; Vaurynovich, S. S.; Verdier, R.; Veres, G. I.; Wenger, E.; Wolfs, F. L. H.; Wosiek, B.; Woźniak, K.; Wuosmaa, A. H.; Wysłouch, B.; Zhang, J.; Phobos Collaboration

    2005-08-01

    This paper describes the conclusions that can be drawn from the data taken thus far with the PHOBOS detector at RHIC. In the most central Au + Au collisions at the highest beam energy, evidence is found for the formation of a very high energy density system whose description in terms of simple hadronic degrees of freedom is inappropriate. Furthermore, the constituents of this novel system are found to undergo a significant level of interaction. The properties of particle production at RHIC energies are shown to follow a number of simple scaling behaviors, some of which continue trends found at lower energies or in simpler systems. As a function of centrality, the total number of charged particles scales with the number of participating nucleons. When comparing Au + Au at different centralities, the dependence of the yield on the number of participants at higher p_T (∼4 GeV/c) is very similar to that at low transverse momentum. The measured values of charged-particle pseudorapidity density and elliptic flow were found to be independent of energy over a broad range of pseudorapidities when effectively viewed in the rest frame of one of the colliding nuclei, a property we describe as "extended longitudinal scaling". Finally, the centrality and energy dependences of several observables were found to factorize to a surprising degree.

  16. Spatial scaling of net primary productivity using subpixel landcover information

    NASA Astrophysics Data System (ADS)

    Chen, X. F.; Chen, Jing M.; Ju, Wei M.; Ren, L. L.

    2008-10-01

    Gridding the land surface into coarse homogeneous pixels may cause important biases in ecosystem model estimates of carbon budget components at local, regional and global scales. These biases result from overlooking subpixel variability of land surface characteristics. Vegetation heterogeneity is an important factor introducing biases in regional ecological modeling, especially when the modeling is done on large grids. This study suggests a simple algorithm that uses subpixel information on the spatial variability of land cover type to correct net primary productivity (NPP) estimates made at coarse spatial resolutions, where the land surface is considered homogeneous within each pixel. The algorithm operates in such a way that NPP values obtained from calculations made at coarse spatial resolutions are multiplied by simple functions that attempt to reproduce the effects of subpixel variability of land cover type on NPP. Its application to estimates from a coupled carbon-hydrology model (BEPS-TerrainLab) made at a 1-km resolution over a watershed (the Baohe River Basin) located in the southwestern part of the Qinling Mountains, Shaanxi Province, China, improved estimates of average NPP as well as its spatial variability.
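    The correction idea (multiplying a coarse-pixel NPP estimate by a function of the subpixel land-cover fractions) can be sketched as follows; the cover classes, NPP rates, and fraction map are invented for illustration, not values from the study:

```python
# Sketch of a subpixel land-cover correction: NPP computed for a coarse
# pixel assuming its dominant cover type is rescaled by a factor built
# from the subpixel cover fractions. All numbers are illustrative.
npp_rate = {"forest": 800.0, "grass": 350.0, "crop": 500.0}  # gC/m^2/yr

def correction_factor(fractions, dominant):
    """Ratio of fraction-weighted NPP to the homogeneous (dominant-cover) NPP."""
    mixed = sum(npp_rate[cover] * f for cover, f in fractions.items())
    return mixed / npp_rate[dominant]

fractions = {"forest": 0.6, "grass": 0.3, "crop": 0.1}  # subpixel makeup
coarse_npp = npp_rate["forest"]  # homogeneous 1-km estimate, forest assumed
corrected = coarse_npp * correction_factor(fractions, "forest")
```

When the dominant class is the most productive one, the correction factor is below 1, illustrating how treating a mixed pixel as homogeneous overestimates NPP.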

  17. Scaling of Magnetic Reconnection in Relativistic Collisionless Pair Plasmas

    NASA Technical Reports Server (NTRS)

    Liu, Yi-Hsin; Guo, Fan; Daughton, William; Li, Hui; Hesse, Michael

    2015-01-01

    Using fully kinetic simulations, we study the scaling of the inflow speed of collisionless magnetic reconnection in electron-positron plasmas from the non-relativistic to ultra-relativistic limit. In the anti-parallel configuration, the inflow speed increases with the upstream magnetization parameter sigma and approaches the speed of light when sigma is greater than O(100), leading to an enhanced reconnection rate. In all regimes, the divergence of the pressure tensor is the dominant term responsible for breaking the frozen-in condition at the x-line. The observed scaling agrees well with a simple model that accounts for the Lorentz contraction of the plasma passing through the diffusion region. The results demonstrate that the aspect ratio of the diffusion region, modified by the compression factor of proper density, remains approximately 0.1 in both the non-relativistic and relativistic limits.

  18. Bunching phase and constraints on echo enabled harmonic generation

    NASA Astrophysics Data System (ADS)

    Hemsing, E.

    2018-05-01

    A simple mathematical description is developed for the bunching spectrum in echo enabled harmonic generation (EEHG) that incorporates the effect of additional electron beam energy modulations. Under common assumptions, they are shown to contribute purely through the phase of the longitudinal bunching factor, which allows the spectral moments of the bunching to be calculated directly from the known energy modulations. In particular, the second moment (spectral bandwidth) serves as simple constraint on the amplitude of the energy modulations to maintain a transform-limited seed. We show that, in general, the impact on the spectrum of energy distortions that develop between the EEHG chicanes scales like the harmonic number compared to distortions that occur upstream. This may limit the parameters that will allow EEHG to reach short wavelengths in high brightness FELs.

  19. Development of the competency scale for primary care managers in Thailand: Scale development.

    PubMed

    Kitreerawutiwong, Keerati; Sriruecha, Chanaphol; Laohasiriwong, Wongsa

    2015-12-09

    The complexity of the primary care system requires a competent manager to achieve high-quality healthcare. The existing literature in the field yields little evidence of tools to assess the competency of primary care administrators. This study aimed to develop and examine the psychometric properties of a competency scale for primary care managers in Thailand. The scale was developed using in-depth interviews and focus group discussions among policy makers, managers, practitioners, village health volunteers, and clients. The specific dimensions were extracted from 35 participants, and 123 items were generated from the evidence and qualitative data. Content validity was established through the evaluation of seven experts, and the original 123 items were reduced to 84 items. The pilot testing was conducted on a simple random sample of 487 primary care managers. Item analysis, reliability testing, and exploratory factor analysis were applied to establish the scale's reliability and construct validity. Exploratory factor analysis identified nine dimensions with 48 items using a five-point Likert scale; together, the dimensions accounted for greater than 58.61% of the total variance. The scale had strong content validity (index = 0.85), and Cronbach's alpha for each dimension ranged from 0.70 to 0.88. Based on these analyses, this instrument demonstrated sound psychometric properties and is therefore considered an effective tool for assessment of primary care manager competencies. The results can be used to improve competency requirements of primary care managers, with implications for health service management workforce development.

  20. Q-factor control of multilayer micromembrane using PZT composite material

    NASA Astrophysics Data System (ADS)

    Čekas, Elingas; Janušas, Giedrius; Palevicius, Arvydas; Janušas, Tomas; Ciganas, Justas

    2018-02-01

    Cantilever- and membrane-based sensors, which are capable of providing accurate detection of target analytes, have always been an important research topic in the medical diagnostics, food testing, and environmental monitoring fields. Here, mechanical detection is achieved by micro- and nano-scale cantilevers for stress sensing and mass sensing, or by micro- and nano-scale plates or membranes. High sensitivity is a major requirement for the active element, and it can be achieved via an increased Q-factor. The ability to control the Q-factor expands the range of applications of the device and allows more accurate results to be achieved. The aim of this paper is to investigate the mechanical and electrical properties, as well as the ability to control the Q-factor, of the membrane with PZT nanocomposite. This multilayered membrane was fabricated on an n-type <100> silicon substrate using Low Pressure Chemical Vapor Deposition (LPCVD), photolithography with a photomask of defined dimensions, deep etching, and e-beam evaporation. Dynamic and electrical characteristics of the membrane were numerically investigated using COMSOL Multiphysics software. Applications of the multilayered membrane can range from simple monitoring of particle concentrations in a closed environment to inspecting glucose levels in human fluids (blood, tears, sweat, etc.).

  1. Impact of viscous droplets on different wettable surfaces: Impact phenomena, the maximum spreading factor, spreading time and post-impact oscillation.

    PubMed

    Lin, Shiji; Zhao, Binyu; Zou, Song; Guo, Jianwei; Wei, Zheng; Chen, Longquan

    2018-04-15

    In this paper, we experimentally investigated the impact dynamics of different viscous droplets on solid surfaces with diverse wettabilities. We show that the outcome of an impinging droplet is dependent on the physical property of the droplet and the wettability of the surface. Whereas only deposition was observed on lyophilic surfaces, more impact phenomena were identified on lyophobic and superlyophobic surfaces. It was found that none of the existing theoretical models can well describe the maximum spreading factor, revealing the complexity of the droplet impact dynamics and suggesting that more factors need to be considered in the theory. By using the modified capillary-inertial time, which considers the effects of liquid viscosity and surface wettability on droplet spreading, a universal scaling law describing the spreading time was obtained. Finally, we analyzed the post-impact droplet oscillation with the theory for damped harmonic oscillators and interpreted the effects of liquid viscosity and surface wettability on the oscillation by simple scaling analyses. Copyright © 2017 Elsevier Inc. All rights reserved.
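For orientation, the classical (unmodified) capillary-inertial time and the impact Weber and Reynolds numbers for a representative water droplet can be computed in a few lines; all property values below are assumptions for illustration, and the paper's modified time additionally folds in viscosity and wettability:

```python
import math

# Illustrative water droplet (assumed values, not from the paper)
rho   = 998.0      # density, kg/m^3
sigma = 0.072      # surface tension, N/m
mu    = 1.0e-3     # dynamic viscosity, Pa.s
R     = 1.0e-3     # droplet radius, m
v     = 1.0        # impact speed, m/s

tau_ci = math.sqrt(rho * R**3 / sigma)    # capillary-inertial time, s
We = rho * v**2 * (2 * R) / sigma         # Weber number (inertia/capillarity)
Re = rho * v * (2 * R) / mu               # Reynolds number (inertia/viscosity)
```

For this millimetric drop the capillary-inertial time is a few milliseconds, which sets the natural clock against which spreading times are normalized.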

  2. Observation and modelling of urban dew

    NASA Astrophysics Data System (ADS)

    Richards, Katrina

    Despite its relevance to many aspects of urban climate and to several practical questions, urban dew has largely been ignored. Here, simple observations, an out-of-doors scale model, and numerical simulation are used to investigate patterns of dewfall and surface moisture (dew + guttation) in urban environments. Observations and modelling were undertaken in Vancouver, B.C., primarily during the summers of 1993 and 1996. Surveys at several scales (0.02-25 km) show that the main controls on dew are weather, location and site configuration (geometry and surface materials). Weather effects are discussed using an empirical factor, FW. Maximum dew accumulation (up to ~0.2 mm per night) is seen on nights with moist air and high FW, i.e., cloudless conditions with light winds. Favoured sites are those with a high sky view factor (ψsky) and surfaces which cool rapidly after sunset, e.g., grass and well-insulated roofs. A 1/8-scale model is designed, constructed, and run at an out-of-doors site to study dew patterns in an urban residential landscape consisting of house lots, a street and an open grassed park. The Internal Thermal Mass (ITM) approach is used to scale the thermal inertia of buildings. The model is validated using data from full-scale sites in Vancouver. Patterns in the model agree with those seen at the full scale, i.e., dew distribution is governed by weather, site geometry and substrate conditions. Correlation is shown between ψsky and surface moisture accumulation. The feasibility of using a numerical model to simulate urban dew is investigated using a modified version of a rural dew model. Results for simple isolated surfaces (a deciduous tree leaf and an asphalt shingle roof) show promise, especially for built surfaces.

  3. Scale dependence of deuteron electrodisintegration

    NASA Astrophysics Data System (ADS)

    More, S. N.; Bogner, S. K.; Furnstahl, R. J.

    2017-11-01

    Background: Isolating nuclear structure properties from knock-out reactions in a process-independent manner requires a controlled factorization, which is always to some degree scale and scheme dependent. Understanding this dependence is important for robust extractions from experiment, to correctly use the structure information in other processes, and to understand the impact of approximations for both. Purpose: We seek insight into scale dependence by exploring a model calculation of deuteron electrodisintegration, which provides a simple and clean theoretical laboratory. Methods: By considering various kinematic regions of the longitudinal structure function, we can examine how the components—the initial deuteron wave function, the current operator, and the final-state interactions (FSIs)—combine at different scales. We use the similarity renormalization group to evolve each component. Results: When evolved to different resolutions, the ingredients are all modified, but how they combine depends strongly on the kinematic region. In some regions, for example, the FSIs are largely unaffected by evolution, while elsewhere FSIs are greatly reduced. For certain kinematics, the impulse approximation at a high renormalization group resolution gives an intuitive picture in terms of a one-body current breaking up a short-range correlated neutron-proton pair, although FSIs distort this simple picture. With evolution to low resolution, however, the cross section is unchanged but a very different and arguably simpler intuitive picture emerges, with the evolved current efficiently represented at low momentum through derivative expansions or low-rank singular value decompositions. Conclusions: The underlying physics of deuteron electrodisintegration is scale dependent and not just kinematics dependent. As a result, intuition about physics such as the role of short-range correlations or D -state mixing in particular kinematic regimes can be strongly scale dependent. 
Understanding this dependence is crucial in making use of extracted properties.

  4. Understanding the origins of uncertainty in landscape-scale variations of emissions of nitrous oxide

    NASA Astrophysics Data System (ADS)

    Milne, Alice; Haskard, Kathy; Webster, Colin; Truan, Imogen; Goulding, Keith

    2014-05-01

    Nitrous oxide is a potent greenhouse gas which is over 300 times more radiatively effective than carbon dioxide. In the UK, the agricultural sector is estimated to be responsible for over 80% of nitrous oxide emissions, with these emissions resulting from livestock and farmers adding nitrogen fertilizer to soils. For the purposes of reporting emissions to the IPCC, the estimates are calculated using simple models whereby readily-available national or international statistics are combined with IPCC default emission factors. The IPCC emission factor for direct emissions of nitrous oxide from soils has a very large uncertainty. This is primarily because the variability of nitrous oxide emissions in space is large and this results in uncertainty that may be regarded as sample noise. To both reduce uncertainty through improved modelling, and to communicate an understanding of this uncertainty, we must understand the origins of the variation. We analysed data on nitrous oxide emission rate and some other soil properties collected from a 7.5-km transect across contrasting land uses and parent materials in eastern England. We investigated the scale-dependence and spatial uniformity of the correlations between soil properties and emission rates from farm to landscape scale using wavelet analysis. The analysis revealed a complex pattern of scale-dependence. Emission rates were strongly correlated with a process-specific function of the water-filled pore space at the coarsest scale and nitrate at intermediate and coarsest scales. We also found significant correlations between pH and emission rates at the intermediate scales. The wavelet analysis showed that these correlations were not spatially uniform and that at certain scales changes in parent material coincided with significant changes in correlation. 
Our results indicate that, at the landscape scale, nitrate content and water-filled pore space are key soil properties for predicting nitrous oxide emissions and should therefore be incorporated into process models and emission factors for inventory calculations.
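The scale-by-scale correlation idea behind the wavelet analysis can be illustrated with a one-level-at-a-time Haar decomposition of two synthetic transects that share only a coarse-scale signal (a toy sketch, unrelated to the study's data or its particular wavelet):

```python
import math
import random

def haar_split(x):
    """One-level Haar transform: (approximation, detail) coefficients."""
    a = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]
    d = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]
    return a, d

def corr(u, v):
    """Pearson correlation of two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    su = math.sqrt(sum((t - mu) ** 2 for t in u))
    sv = math.sqrt(sum((t - mv) ** 2 for t in v))
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (su * sv)

# Two synthetic transects sharing a slowly varying (coarse-scale) component
random.seed(1)
nitrate  = [math.sin(i / 32.0) + random.gauss(0, 0.3) for i in range(256)]
emission = [math.sin(i / 32.0) + random.gauss(0, 0.3) for i in range(256)]

# Correlate the detail coefficients scale by scale
scale_corr = []
x, y = nitrate[:], emission[:]
for level in range(4):
    x, dx = haar_split(x)
    y, dy = haar_split(y)
    scale_corr.append(corr(dx, dy))
```

The fine-scale details are noise-dominated and nearly uncorrelated, while the coarse approximations (where the shared signal lives) correlate strongly, which is the pattern the abstract describes for nitrate and water-filled pore space at the coarsest scales.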

  5. Ice Accretion with Varying Surface Tension

    NASA Technical Reports Server (NTRS)

    Bilanin, Alan J.; Anderson, David N.

    1995-01-01

    During an icing encounter of an aircraft in flight, super-cooled water droplets impinging on an airfoil may splash before freezing. This paper reports tests performed to determine if this effect is significant and uses the results to develop an improved scaling method for use in icing test facilities. Simple laboratory tests showed that drops splash on impact at the Reynolds and Weber numbers typical of icing encounters. Further confirmation of droplet splash came from icing tests performed in the NASA Lewis Icing Research Tunnel (IRT) with a surfactant added to the spray water to reduce the surface tension. The resulting ice shapes were significantly different from those formed when no surfactant was added to the water. These results suggested that the droplet Weber number must be kept constant to properly scale icing test conditions. Finally, the paper presents a Weber-number-based scaling method and reports results from scaling tests in the IRT in which model size was reduced by up to a factor of 3. Scale and reference ice shapes are shown which confirm the effectiveness of this new scaling method.
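The constant-Weber-number constraint can be sketched numerically: if the drop diameter shrinks with the model scale, the test speed must rise as the square root of the scale factor to hold We = ρv²d/σ fixed (all reference values below are illustrative assumptions, not the paper's test matrix):

```python
import math

rho_w = 1000.0     # water density, kg/m^3
sigma = 0.076      # surface tension of supercooled water, N/m (assumed)

def weber(v, d):
    """Droplet Weber number from impact speed v (m/s) and drop diameter d (m)."""
    return rho_w * v**2 * d / sigma

# Full-scale reference condition (illustrative numbers only)
v_full, d_full = 67.0, 20e-6           # airspeed m/s, 20-micron drops
scale = 3.0                            # model reduced by a factor of 3

d_scale = d_full / scale               # drop size follows the model scale
v_scale = v_full * math.sqrt(scale)    # speed chosen to keep We constant
```

Since We scales as v²d, halving-type reductions in d are exactly offset by a sqrt(scale) increase in v, so `weber(v_scale, d_scale)` matches `weber(v_full, d_full)`.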

  6. Does the Assessment of Recovery Capital scale reflect a single or multiple domains?

    PubMed

    Arndt, Stephan; Sahker, Ethan; Hedden, Suzy

    2017-01-01

    The goal of this study was to determine whether the 50-item Assessment of Recovery Capital scale represents a single general measure or whether multiple domains might be psychometrically useful for research or clinical applications. Data are from a cross-sectional de-identified existing program evaluation information data set with 1,138 clients entering substance use disorder treatment. Principal components and iterated factor analysis were used on the domain scores. Multiple group factor analysis provided a quasi-confirmatory factor analysis. The solution accounted for 75.24% of the total variance, suggesting that 10 factors provide a reasonably good fit. However, Tucker's congruence coefficients between the factor structure and defining weights (0.41-0.52) suggested a poor fit to the hypothesized 10-domain structure. Principal components of the 10-domain scores yielded one factor whose eigenvalue was greater than one (5.93), accounting for 75.8% of the common variance. A few domains had perceptible but small unique variance components suggesting that a few of the domains may warrant enrichment. Our findings suggest that there is one general factor, with a caveat. Using the 10 measures inflates the chance for Type I errors. Using one general measure avoids this issue, is simple to interpret, and could reduce the number of items. However, those seeking to maximally predict later recovery success may need to use the full instrument and all 10 domains.

  7. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.

    PubMed

    Lin, Johnny; Bentler, Peter M

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and the Satorra-Bentler mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of the Satorra-Bentler statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.
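The moment-matching idea can be sketched as follows. This is a hypothetical toy version: it takes the statistic's estimated mean and skewness as given, picks the degrees of freedom so that the chi-square skewness sqrt(8/df) matches the observed skewness, and rescales the statistic so its mean matches df; the actual method estimates these quantities from the model and data.

```python
def third_moment_adjust(T, mean_T, skew_T):
    """
    Toy sketch of a mean-and-skewness (third moment) adjustment.
    T:      observed test statistic
    mean_T: estimated mean of the statistic's distribution
    skew_T: estimated skewness of the statistic's distribution
    Returns (adjusted statistic, adjusted degrees of freedom).
    """
    df_adj = 8.0 / skew_T**2       # chi-square skewness is sqrt(8/df)
    scale = df_adj / mean_T        # chi-square mean equals df
    return T * scale, df_adj

# Example: statistic of 12.0 whose distribution has mean 10 and skewness 1
T_adj, df_adj = third_moment_adjust(12.0, 10.0, 1.0)
```

Here the reference distribution becomes chi-square with 8 degrees of freedom and the statistic is rescaled to 9.6, so both the first and third moments of the reference match the statistic's estimated distribution.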

  8. Development of a simple measurement scale to evaluate the severity of non-specific low back pain for industrial ergonomics.

    PubMed

    Higuchi, Yoshiyuki; Izumi, Hiroyuki; Kumashiro, Mashaharu

    2010-06-01

    This study developed an assessment scale that hierarchically classifies degrees of low back pain severity. This assessment scale consists of two subscales: 1) pain intensity; 2) pain interference. First, the assessment scale devised by the authors was used to administer a self-administered questionnaire to 773 male workers in the car manufacturing industry. Subsequently, the validity of the measurement items was examined and some of them were revised. Next, the corrected low back pain scale was used in a self-administered questionnaire, the subjects of which were 5053 ordinary workers. The hierarchical validity between the measurement items was checked based on the results of Mokken Scale analysis. Finally, a low back pain assessment scale consisting of seven items was perfected. Quantitative assessment is made possible by scoring the items and low back pain severity can be classified into four hierarchical levels: none; mild; moderate; severe. STATEMENT OF RELEVANCE: The use of this scale devised by the authors allows a more detailed assessment of the degree of risk factor effect and also should prove useful both in selecting remedial measures for occupational low back pain and evaluating their efficacy.

  9. Evidence of complex contagion of information in social media: An experiment using Twitter bots.

    PubMed

    Mønsted, Bjarke; Sapieżyński, Piotr; Ferrara, Emilio; Lehmann, Sune

    2017-01-01

    It has recently become possible to study the dynamics of information diffusion in techno-social systems at scale, due to the emergence of online platforms, such as Twitter, with millions of users. One question that systematically recurs is whether information spreads according to simple or complex dynamics: does each exposure to a piece of information have an independent probability of a user adopting it (simple contagion), or does this probability depend instead on the number of sources of exposure, increasing above some threshold (complex contagion)? Most studies to date are observational and, therefore, unable to disentangle the effects of confounding factors such as social reinforcement, homophily, limited attention, or network community structure. Here we describe a novel controlled experiment that we performed on Twitter using 'social bots' deployed to carry out coordinated attempts at spreading information. We propose two Bayesian statistical models describing simple and complex contagion dynamics, and test the competing hypotheses. We provide experimental evidence that the complex contagion model describes the observed information diffusion behavior more accurately than simple contagion. Future applications of our results include more effective defenses against malicious propaganda campaigns on social media, improved marketing and advertisement strategies, and design of effective network intervention techniques.
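The two competing hypotheses can be written down as adoption-probability functions of the number of distinct exposures k (an illustrative parameterization, not the paper's Bayesian models):

```python
def p_simple(k, p=0.1):
    """Simple contagion: each of k exposures independently converts
    with probability p, so adoption is 1 - (1-p)^k."""
    return 1 - (1 - p) ** k

def p_complex(k, threshold=3, lo=0.02, hi=0.5):
    """Complex contagion (stylized): adoption probability jumps once
    the number of distinct exposure sources crosses a threshold."""
    return hi if k >= threshold else lo

# Adoption curves for 1..5 exposures under each hypothesis
simple_curve  = [p_simple(k) for k in range(1, 6)]
complex_curve = [p_complex(k) for k in range(1, 6)]
```

The simple-contagion curve rises smoothly and concavely with k, while the complex-contagion curve is flat below the threshold and jumps at it; distinguishing these shapes from observed diffusion data is exactly what the controlled bot experiment enables.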

  10. Hydrological and geomorphological controls of malaria transmission

    NASA Astrophysics Data System (ADS)

    Smith, M. W.; Macklin, M. G.; Thomas, C. J.

    2013-01-01

    Malaria risk is linked inextricably to the hydrological and geomorphological processes that form vector breeding sites. Yet environmental controls of malaria transmission are often represented by temperature and rainfall amounts, ignoring hydrological and geomorphological influences altogether. Continental-scale studies incorporate hydrology implicitly through simple minimum rainfall thresholds, while community-scale coupled hydrological and entomological models do not represent the actual diversity of the mosquito vector breeding sites. The greatest range of malaria transmission responses to environmental factors is observed at the catchment scale where seemingly contradictory associations between rainfall and malaria risk can be explained by hydrological and geomorphological processes that govern surface water body formation and persistence. This paper extends recent efforts to incorporate ecological factors into malaria-risk models, proposing that the same detailed representation be afforded to hydrological and, at longer timescales relevant for predictions of climate change impacts, geomorphological processes. We review existing representations of environmental controls of malaria and identify a range of hydrologically distinct vector breeding sites from existing literature. We illustrate the potential complexity of interactions among hydrology, geomorphology and vector breeding sites by classifying a range of water bodies observed in a catchment in East Africa. Crucially, the mechanisms driving surface water body formation and destruction must be considered explicitly if we are to produce dynamic spatial models of malaria risk at catchment scales.

  11. Longitudinal tests of competing factor structures for the Rosenberg Self-Esteem Scale: traits, ephemeral artifacts, and stable response styles.

    PubMed

    Marsh, Herbert W; Scalas, L Francesca; Nagengast, Benjamin

    2010-06-01

    Self-esteem, typically measured by the Rosenberg Self-Esteem Scale (RSE), is one of the most widely studied constructs in psychology. Nevertheless, there is broad agreement that a simple unidimensional factor model, consistent with the original design and typical application in applied research, does not provide an adequate explanation of RSE responses. However, there is no clear agreement about what alternative model is most appropriate-or even a clear rationale for how to test competing interpretations. Three alternative interpretations exist: (a) 2 substantively important trait factors (positive and negative self-esteem), (b) 1 trait factor and ephemeral method artifacts associated with positively or negatively worded items, or (c) 1 trait factor and stable response-style method factors associated with item wording. We have posited 8 alternative models and structural equation model tests based on longitudinal data (4 waves of data across 8 years with a large, representative sample of adolescents). Longitudinal models provide no support for the unidimensional model, undermine support for the 2-factor model, and clearly refute claims that wording effects are ephemeral, but they provide good support for models positing 1 substantive (self-esteem) factor and response-style method factors that are stable over time. This longitudinal methodological approach has not only resolved these long-standing issues in self-esteem research but also has broad applicability to most psychological assessments based on self-reports with a mix of positively and negatively worded items.

  12. Optical components damage parameters database system

    NASA Astrophysics Data System (ADS)

    Tao, Yizheng; Li, Xinglan; Jin, Yuquan; Xie, Dongmei; Tang, Dingyong

    2012-10-01

    Optical components are key elements of large-scale laser devices: their load (damage) capacity directly determines the output capability of the device, and that capacity depends on many factors. By digitizing the various factors affecting load capacity into an optical-component damage-parameter database, a scientific data basis is provided for evaluating the load capacity of optical components. Using business-process analysis and a model-driven approach, an information model and a database system for component damage parameters were established. Application of the system shows that it meets the business-process and data-management requirements of optical-component damage testing; its parameters are flexible and configurable, and the system is simple and easy to use, improving the efficiency of optical-component damage testing.

  13. Rearranging Pionless Effective Field Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin Savage; Silas Beane

    2001-11-19

    We point out a redundancy in the operator structure of the pionless effective field theory which dramatically simplifies computations. This redundancy is best exploited by using dibaryon fields as fundamental degrees of freedom. In turn, this suggests a new power counting scheme which sums range corrections to all orders. We explore this method with a few simple observables: the deuteron charge form factor, n p -> d gamma, and Compton scattering from the deuteron. Higher dimension operators involving electroweak gauge fields are not renormalized by the s-wave strong interactions, and therefore do not scale with inverse powers of the renormalization scale. Thus, naive dimensional analysis of these operators is sufficient to estimate their contribution to a given process.

  14. A rapid and robust gradient measurement technique using dynamic single-point imaging.

    PubMed

    Jang, Hyungseok; McMillan, Alan B

    2017-09-01

    We propose a new gradient measurement technique based on dynamic single-point imaging (SPI), which allows simple, rapid, and robust measurement of k-space trajectory. To enable gradient measurement, we utilize the variable field-of-view (FOV) property of dynamic SPI, which is dependent on gradient shape. First, one-dimensional (1D) dynamic SPI data are acquired from a targeted gradient axis, and then relative FOV scaling factors between 1D images or k-spaces at varying encoding times are found. These relative scaling factors are the relative k-space position that can be used for image reconstruction. The gradient measurement technique also can be used to estimate the gradient impulse response function for reproducible gradient estimation as a linear time invariant system. The proposed measurement technique was used to improve reconstructed image quality in 3D ultrashort echo, 2D spiral, and multi-echo bipolar gradient-echo imaging. In multi-echo bipolar gradient-echo imaging, measurement of the k-space trajectory allowed the use of a ramp-sampled trajectory for improved acquisition speed (approximately 30%) and more accurate quantitative fat and water separation in a phantom. The proposed dynamic SPI-based method allows fast k-space trajectory measurement with a simple implementation and no additional hardware for improved image quality. Magn Reson Med 78:950-962, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
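The trajectory-from-gradient relationship underlying the measurement can be sketched numerically: under the standard assumption k(t) = γ ∫ G(t) dt, integrating a nominal gradient waveform gives the k-space position at each encoding time, which is what the relative FOV scaling factors recover in practice (illustrative waveform and values, not the paper's implementation):

```python
# Cumulative k-space position from a nominal trapezoidal gradient.
gamma = 42.577e6          # gyromagnetic ratio of 1H, Hz/T
dt = 4e-6                 # gradient raster time, s

# Trapezoid: 50-sample ramp to 10 mT/m, then plateau (values are assumptions)
G = [0.01 * min(i / 50, 1.0) for i in range(100)] + [0.01] * 100  # T/m

k = []                    # k-space position (cycles/m) at each encoding time
acc = 0.0
for g in G:
    acc += gamma * g * dt # rectangle-rule integration of gamma * G
    k.append(acc)
```

With a measured (rather than nominal) G, the same integral yields the actual trajectory, which is why ramp-sampled readouts can be reconstructed accurately once the gradient is characterized.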

  15. Correcting the SIMPLE Model of Free Recall

    ERIC Educational Resources Information Center

    Lee, Michael D.; Pooley, James P.

    2013-01-01

    The scale-invariant memory, perception, and learning (SIMPLE) model developed by Brown, Neath, and Chater (2007) formalizes the theoretical idea that scale invariance is an important organizing principle across numerous cognitive domains and has made an influential contribution to the literature dealing with modeling human memory. In the context…

  16. A high-performance dual-scale porous electrode for vanadium redox flow batteries

    NASA Astrophysics Data System (ADS)

    Zhou, X. L.; Zeng, Y. K.; Zhu, X. B.; Wei, L.; Zhao, T. S.

    2016-09-01

    In this work, we present a simple and cost-effective method to form a dual-scale porous electrode by KOH activation of the fibers of carbon papers. The large pores (∼10 μm), formed between carbon fibers, serve as the macroscopic pathways for high electrolyte flow rates, while the small pores (∼5 nm), formed on carbon fiber surfaces, act as active sites for rapid electrochemical reactions. It is shown that the Brunauer-Emmett-Teller specific surface area of the carbon paper is increased by a factor of 16 while maintaining the same hydraulic permeability as that of the original carbon paper electrode. We then apply the dual-scale electrode to a vanadium redox flow battery (VRFB) and demonstrate an energy efficiency ranging from 82% to 88% at current densities of 200-400 mA cm-2, the highest performance of a VRFB reported in the open literature.

  17. Characterization of a subwavelength-scale 3D void structure using the FDTD-based confocal laser scanning microscopic image mapping technique.

    PubMed

    Choi, Kyongsik; Chon, James W; Gu, Min; Lee, Byoungho

    2007-08-20

    In this paper, a simple confocal laser scanning microscopic (CLSM) image mapping technique based on the finite-difference time domain (FDTD) calculation has been proposed and evaluated for characterization of a subwavelength-scale three-dimensional (3D) void structure fabricated inside a polymer matrix. The FDTD simulation method adopts a focused Gaussian beam incident wave, Berenger's perfectly matched layer absorbing boundary condition, and the angular spectrum analysis method. Through the well-matched simulation and experimental results of the xz-scanned 3D void structure, we first characterize the exact position and the topological shape factor of the subwavelength-scale void structure, which was fabricated by a tightly focused ultrashort pulse laser. The proposed FDTD-based CLSM image mapping technique can be widely applied, from 3D near-field microscopic imaging, optical trapping, and evanescent-wave phenomena to state-of-the-art bio- and nanophotonics.

  18. Geometry and Reynolds-Number Scaling on an Iced Business-Jet Wing

    NASA Technical Reports Server (NTRS)

    Lee, Sam; Ratvasky, Thomas P.; Thacker, Michael; Barnhart, Billy P.

    2005-01-01

    A study was conducted to develop a method to scale the effect of ice accretion on a full-scale business jet wing model to a 1/12-scale model at greatly reduced Reynolds number. Full-scale, 5/12-scale, and 1/12-scale models of identical airfoil section were used in this study. Three types of ice accretion were studied: 22.5-minute ice protection system failure shape, 2-minute initial ice roughness, and a runback shape that forms downstream of a thermal anti-ice system. The results showed that the 22.5-minute failure shape could be scaled from full-scale to 1/12-scale through simple geometric scaling. The 2-minute roughness shape could be scaled by choosing an appropriate grit size. The runback ice shape exhibited greater Reynolds number effects and could not be scaled by simple geometric scaling of the ice shape.

  19. CosmoCalc: An Excel add-in for cosmogenic nuclide calculations

    NASA Astrophysics Data System (ADS)

    Vermeesch, Pieter

    2007-08-01

    As dating methods using Terrestrial Cosmogenic Nuclides (TCN) become more popular, the need arises for general-purpose, easy-to-use data reduction software. The CosmoCalc Excel add-in calculates TCN production rate scaling factors (using Lal, Stone, Dunai, and Desilets methods); topographic, snow, and self-shielding factors; and exposure ages, erosion rates, and burial ages and visualizes the results on banana-style plots. It uses an internally consistent TCN production equation that is based on the quadruple exponential approach of Granger and Smith (2000). CosmoCalc was designed to be as user-friendly as possible. Although the user interface is extremely simple, the program is also very flexible, and nearly all default parameter values can be changed. To facilitate the comparison of different scaling factors, a set of converter tools is provided, allowing the user to easily convert cut-off rigidities to magnetic inclinations, elevations to atmospheric depths, and so forth. Because it is important to use a consistent set of scaling factors for the sample measurements and the production rate calibration sites, CosmoCalc defines the production rates implicitly, as a function of the original TCN concentrations of the calibration site. The program is best suited for 10Be, 26Al, 3He, and 21Ne calculations, although basic functionality for 36Cl and 14C is also provided. CosmoCalc can be downloaded along with a set of test data from http://cosmocalc.googlepages.com.
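The core exposure-age calculation can be sketched with the textbook single-exponential buildup equation N = (P/λ)(1 − e^(−λt)), inverted for t. This is a simplification for illustration: CosmoCalc itself uses the Granger and Smith (2000) quadruple-exponential production equation, and the production rate P below is an assumed site-scaled value.

```python
import math

def exposure_age(N, P, lam):
    """
    Zero-erosion exposure age (years) from the standard buildup equation
    N = (P / lam) * (1 - exp(-lam * t)), solved for t.
    N:   measured nuclide concentration, atoms/g
    P:   site-scaled production rate, atoms/g/yr
    lam: decay constant, 1/yr
    """
    return -math.log(1.0 - N * lam / P) / lam

# 10Be example with assumed values
lam_be10 = math.log(2) / 1.387e6   # decay constant from 1.387 Myr half-life
P = 4.0                            # atoms/g/yr (assumed site production rate)
N = 5.0e4                          # atoms/g measured

t = exposure_age(N, P, lam_be10)   # ~12.5 kyr for these inputs
```

For young surfaces (t much less than the half-life) the result is close to the naive N/P; the logarithm matters as concentrations approach secular equilibrium at N = P/λ.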

  20. A class of simple bouncing and late-time accelerating cosmologies in f(R) gravity

    NASA Astrophysics Data System (ADS)

    Kuiroukidis, A.

    We consider the field equations for a flat FRW cosmological model, given by Eq. (??), in an a priori generic f(R) gravity model and cast them into a completely normalized and dimensionless system of ODEs for the scale factor and the function f(R), with respect to the scalar curvature R. It is shown that, under reasonable assumptions, namely a power-law functional form for the f(R) gravity model, one can produce simple analytical and numerical solutions describing bouncing cosmological models that are, in addition, late-time accelerating. The power-law form for the f(R) gravity model is typically considered in the literature as the most concrete, reasonable, practical and viable assumption [see S. D. Odintsov and V. K. Oikonomou, Phys. Rev. D 90 (2014) 124083, arXiv:1410.8183 [gr-qc

  1. Martian subsurface properties and crater formation processes inferred from fresh impact crater geometries

    NASA Astrophysics Data System (ADS)

    Stewart, Sarah T.; Valiant, Gregory J.

    2006-10-01

    The geometry of simple impact craters reflects the properties of the target materials, and the diverse range of fluidized morphologies observed in Martian ejecta blankets are controlled by the near-surface composition and the climate at the time of impact. Using the Mars Orbiter Laser Altimeter (MOLA) data set, quantitative information about the strength of the upper crust and the dynamics of Martian ejecta blankets may be derived from crater geometry measurements. Here, we present the results from geometrical measurements of fresh craters 3-50 km in rim diameter in selected highland (Lunae and Solis Plana) and lowland (Acidalia, Isidis, and Utopia Planitiae) terrains. We find large, resolved differences between the geometrical properties of the freshest highland and lowland craters. Simple lowland craters are 1.5-2.0 times deeper (≥5σo difference) with >50% larger cavities (≥2σo) compared to highland craters of the same diameter. Rim heights and the volume of material above the preimpact surface are slightly greater in the lowlands over most of the size range studied. The different shapes of simple highland and lowland craters indicate that the upper ˜6.5 km of the lowland study regions are significantly stronger than the upper crust of the highland plateaus. Lowland craters collapse to final volumes of 45-70% of their transient cavity volumes, while highland craters preserve only 25-50%. The effective yield strength of the upper crust in the lowland regions falls in the range of competent rock, approximately 9-12 MPa, and the highland plateaus may be weaker by a factor of 2 or more, consistent with heavily fractured Noachian layered deposits. The measured volumes of continuous ejecta blankets and uplifted surface materials exceed the predictions from standard crater scaling relationships and Maxwell's Z model of crater excavation by a factor of 3. 
The excess volume of fluidized ejecta blankets on Mars cannot be explained by concentration of ejecta through nonballistic emplacement processes and/or bulking. The observations require a modification of the scaling laws and are well fit using a scaling factor of ˜1.4 from the transient crater surface diameter to the final crater rim diameter, and an excavation flow originating from a depth of one projectile diameter with Z = 2.7. The refined excavation model provides the first observationally constrained set of initial parameters for study of the formation of fluidized ejecta blankets on Mars.
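As a quick illustration of the geometric bookkeeping above, a minimal sketch (the function names are hypothetical; the ~1.4 diameter factor and the 45-70% / 25-50% retention ranges come from the abstract):

```python
# Scaling factor reported in the refined excavation model:
# final rim diameter ~ 1.4 x transient crater surface diameter.
FINAL_TO_TRANSIENT = 1.4

def transient_diameter(rim_diameter_km: float) -> float:
    """Estimate transient crater surface diameter from final rim diameter."""
    return rim_diameter_km / FINAL_TO_TRANSIENT

def retained_volume_fraction(transient_volume: float, final_volume: float) -> float:
    """Fraction of the transient cavity volume preserved after collapse
    (45-70% for the lowland craters, 25-50% for highland craters)."""
    return final_volume / transient_volume
```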

  2. Psychometric testing of the Caregiver Quality of Life Index-Cancer scale in an Iranian sample of family caregivers to newly diagnosed breast cancer women.

    PubMed

    Khanjari, Sedigheh; Oskouie, Fatemeh; Langius-Eklöf, Ann

    2012-02-01

To translate and test the reliability and validity of the Persian version of the Caregiver Quality of Life Index-Cancer scale. Research across many countries has examined the quality of life of cancer patients, but few attempts have been made to measure the quality of life of family caregivers of patients with breast cancer. The Caregiver Quality of Life Index-Cancer scale was developed for this purpose, but until now, it has not been translated into or tested in the Persian language. Methodological research design. After standard translation, the 35-item Caregiver Quality of Life Index-Cancer scale was administered to 166 Iranian family caregivers of patients with breast cancer. A confirmatory factor analysis was carried out using LISREL to test the scale's construct validity. Further, the internal consistency and convergent validity of the instrument were tested. For convergent validity, four instruments were used in the study: the sense of coherence scale, spirituality perspective scale, health index and brief religious coping scale. The confirmatory factor analysis resulted in the same four-factor structure as the original, though with somewhat different item loadings. The Persian version of the Caregiver Quality of Life Index-Cancer scale had satisfactory internal consistency (0.72-0.90). Tests of convergent validity showed that all hypotheses were confirmed. A hierarchical multiple regression analysis additionally confirmed the convergent validity between the total Caregiver Quality of Life Index-Cancer score and sense of coherence (β = 0.34), negative religious coping (β = -0.21), education (β = 0.24) and the more severe stage of breast cancer (β = 0.23), in total explaining 41% of the variance. The Persian version of the Caregiver Quality of Life Index-Cancer scale could be a reliable and valid measure in Iranian family caregivers of patients with breast cancer.
The Persian version of the Caregiver Quality of Life Index-Cancer scale is simple to administer and will help nurses to identify the nursing needs of family caregivers. © 2011 Blackwell Publishing Ltd.

  3. Effective theory of squeezed correlation functions

    NASA Astrophysics Data System (ADS)

    Mirbabayi, Mehrdad; Simonović, Marko

    2016-03-01

Various inflationary scenarios can often be distinguished from one another by looking at the squeezed-limit behavior of correlation functions. Therefore, it is useful to have a framework designed to study this limit in a more systematic and efficient way. We propose using an expansion in terms of weakly coupled super-horizon degrees of freedom, which is argued to generically exist in a near de Sitter space-time. The modes have a simple factorized form which leads to factorization of the squeezed-limit correlation functions with power-law behavior in k_long/k_short. This approach reproduces the known results in single-, quasi-single-, and multi-field inflationary models. However, it is applicable even if, unlike the above examples, the additional degrees of freedom are not weakly coupled at sub-horizon scales. Stronger results are derived in two-field (or sufficiently symmetric multi-field) inflationary models. We discuss the observability of the non-Gaussian 3-point function in large-scale structure surveys, and argue that the squeezed-limit behavior has a higher chance of detection than equilateral behavior when it scales as (k_long/k_short)^Δ with Δ < 1, where local non-Gaussianity corresponds to Δ = 0.
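Schematically (a hedged paraphrase of the scaling described above, not an equation quoted from the paper), the squeezed limit takes the factorized power-law form:

```latex
% Factorized squeezed limit of the three-point function, with power-law
% behavior in the ratio of long to short wavenumbers:
\lim_{k_{\mathrm{long}} \ll k_{\mathrm{short}}}
  \langle \zeta_{k_{\mathrm{long}}} \zeta_{k_{\mathrm{short}}} \zeta_{k_{\mathrm{short}}} \rangle
  \propto P(k_{\mathrm{long}})\, P(k_{\mathrm{short}})
  \left( \frac{k_{\mathrm{long}}}{k_{\mathrm{short}}} \right)^{\!\Delta},
\qquad \Delta = 0 \ \text{for local non-Gaussianity.}
```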

  4. Universal binding energy relations in metallic adhesion

    NASA Technical Reports Server (NTRS)

    Ferrante, J.; Smith, J. R.; Rose, J. J.

    1984-01-01

Rose, Smith, and Ferrante have discovered scaling relations which map the adhesive binding energies calculated by Ferrante and Smith onto a single universal binding energy curve. These binding energies are calculated for all combinations of Al(111), Zn(0001), Mg(0001), and Na(110) in contact. The scaling involves normalizing the energy by the maximum binding energy and normalizing distances by a suitable combination of Thomas-Fermi screening lengths. Rose et al. have also found that the calculated cohesive energies of K, Ba, Cu, Mo, and Sm scale by similar simple relations, suggesting the universal relation may be more general than for the simple free electron metals for which it was derived. In addition, the scaling length was defined more generally in order to relate it to measurable physical properties. Further, this universality can be extended to chemisorption. A simple and yet quite accurate prediction of a zero-temperature equation of state (volume as a function of pressure for metals and alloys) is presented. Thermal expansion coefficients and melting temperatures are predicted by simple, analytic expressions, and the results compare favorably with experiment for a broad range of metals.
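The scaling described above is often summarized by the Rydberg-like universal function associated with Rose, Smith, and Ferrante. The sketch below uses that commonly quoted form as an assumption, not a formula taken verbatim from this abstract: E(a) = ΔE·E*(a*), with scaled separation a* = (a - a_m)/l for equilibrium spacing a_m and scaling length l.

```python
import math

def e_star(a_star: float) -> float:
    """Dimensionless universal binding-energy curve, E*(a*) = -(1 + a*) e^{-a*}.
    Minimum of -1 at the scaled equilibrium separation a* = 0."""
    return -(1.0 + a_star) * math.exp(-a_star)

def binding_energy(a: float, a_m: float, length: float, de: float) -> float:
    """Binding energy at separation a: well depth de times the universal curve
    evaluated at the scaled separation (a - a_m) / length."""
    return de * e_star((a - a_m) / length)
```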

  5. Depressive symptoms and cardiovascular health by the American Heart Association's definition in the Reasons for Geographic and Racial Differences in Stroke (REGARDS) study.

    PubMed

    Kronish, Ian M; Carson, April P; Davidson, Karina W; Muntner, Paul; Safford, Monika M

    2012-01-01

Depressive symptoms are associated with an increased risk of incident and recurrent cardiovascular events. In 2010, the American Heart Association published Life's Simple 7, a metric for assessing cardiovascular health as measured by 4 health behaviors (smoking, physical activity, body mass index, diet) and 3 biological measures (cholesterol, blood pressure, glucose). The association between depressive symptoms and Life's Simple 7 has not yet been explored. Data from 20,093 participants ≥45 years of age who enrolled in the Reasons for Geographic and Racial Differences in Stroke (REGARDS) study between 2003 and 2007 and who had complete data available on Life's Simple 7 components were used for these analyses. The prevalence of ideal, intermediate, and poor health on each Life's Simple 7 component and total Life's Simple 7 scores were compared between participants with and without depressive symptoms. Depressive symptoms were measured using the 4-item Center for Epidemiologic Studies Depression (CES-D) scale. Participants with depressive symptoms were more likely to have poor levels on each of the Life's Simple 7 components other than cholesterol [adjusted prevalence ratios (95% CI): smoking 1.41 (1.29-1.55); physical activity 1.38 (1.31-1.46); body mass index 1.09 (1.04-1.15); diet 1.08 (1.06-1.10); blood pressure 1.11 (1.02-1.21); glucose 1.24 (1.09-1.41)]. There was a graded association between increasing depressive symptoms and a lower total Life's Simple 7 score. Depressive symptoms are associated with worse cardiovascular health on the overall Life's Simple 7 and on individual components representing both health behaviors and biological factors.

  6. A Randomized Study Comparing the Sniffing Position with Simple Head Extension for Glottis Visualization and Difficulty in Intubation during Direct Laryngoscopy.

    PubMed

    Akhtar, Mehmooda; Ali, Zulfiqar; Hassan, Nelofar; Mehdi, Saqib; Wani, Gh Mohammad; Mir, Aabid Hussain

    2017-01-01

Proper positioning of the head and neck is important for optimal laryngeal visualization. Traditionally, the sniffing position (SP) is recommended to provide superior glottic visualization during direct laryngoscopy, enhancing the ease of intubation. Various studies in the last decade have challenged this belief and the need for the sniffing position during intubation. We conducted a prospective study comparing the sniffing head position with simple head extension with respect to the laryngoscopic view and intubation difficulty during direct laryngoscopy. Five hundred patients were included in this study and randomly allocated to SP or simple head extension. In the sniffing group, an incompressible head ring was placed under the head to raise its height by 7 cm from the neutral plane, followed by maximal extension of the head. In the simple extension group, no headrest was placed under the head; however, maximal head extension was applied at the time of laryngoscopy. Various factors such as the ability to mask ventilate, laryngoscopic visualization, intubation difficulty, and the posture of the anesthesiologist during laryngoscopy and tracheal intubation were noted. The incidence of difficult laryngoscopy (Cormack Grade III and IV) and the Intubation Difficulty Scale (IDS) score were compared between the two groups. There was no significant difference between the two groups in Cormack grades. The IDS score differed significantly between the sniffing group and the simple extension group (P < 0.001), with increased difficulty during intubation in the simple head extension group. Patients with simple head extension needed more lifting force, increased use of external laryngeal manipulation, and increased use of alternate techniques during intubation when compared to SP. We conclude that, compared to the simple head extension position, the SP should be used as the standard head position for intubation attempts under general anesthesia.

  7. Factorized Runge-Kutta-Chebyshev Methods

    NASA Astrophysics Data System (ADS)

    O'Sullivan, Stephen

    2017-05-01

The second-order extended stability Factorized Runge-Kutta-Chebyshev (FRKC2) explicit schemes for the integration of large systems of PDEs with diffusive terms are presented. The schemes are simple to implement through ordered sequences of forward Euler steps with complex stepsizes, and easily parallelised for large scale problems on distributed architectures. Preserving 7 digits of accuracy at 16-digit precision, the schemes are theoretically capable of maintaining internal stability for acceleration factors in excess of 6000 with respect to standard explicit Runge-Kutta methods. The extent of the stability domain is approximately the same as that of RKC schemes, and a third longer than in the case of RKL2 schemes. Extension of FRKC methods to fourth-order, by both complex splitting and Butcher composition techniques, is also discussed. A publicly available implementation of FRKC2 schemes may be obtained from maths.dit.ie/frkc
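The factorized idea, an extended-stability step executed as an ordered sequence of forward Euler sub-steps, can be sketched as below. For simplicity this sketch uses the first-order shifted Chebyshev stability polynomial T_s(1 + z/s²), whose roots give real sub-stepsizes; the FRKC2 schemes themselves are second order and use complex stepsizes, so this is an illustration of the principle only.

```python
import numpy as np

def chebyshev_super_step(y, rhs, h, s):
    """One extended-stability step of size h realised as s forward Euler
    sub-steps. Sub-stepsizes are -h/z_j for the roots z_j of the stability
    polynomial T_s(1 + z/s^2); the product of the Euler amplification
    factors then equals the Chebyshev polynomial, stable for h*|lambda| <= 2*s**2."""
    j = np.arange(1, s + 1)
    # Roots z_j of T_s(1 + z/s^2), all on the negative real axis:
    z = s**2 * (np.cos((2 * j - 1) * np.pi / (2 * s)) - 1.0)
    for zj in z:
        y = y + (-h / zj) * rhs(y)   # forward Euler sub-step of size -h/z_j
    return y
```

For y' = -100 y, a 10-stage super-step remains stable up to h·100 ≤ 200, versus h·100 ≤ 2 for plain forward Euler, which is the source of the large acceleration factors quoted above.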

  8. [Upper limb functional assessment scale for children with Duchenne muscular dystrophy and Spinal muscular atrophy].

    PubMed

    Escobar, Raúl G; Lucero, Nayadet; Solares, Carmen; Espinoza, Victoria; Moscoso, Odalie; Olguín, Polín; Muñoz, Karin T; Rosas, Ricardo

    2016-08-16

Duchenne muscular dystrophy (DMD) and Spinal muscular atrophy (SMA) cause significant disability and progressive functional impairment. Readily available instruments that assess functionality, especially in advanced stages of the disease, are required to monitor the progress of the disease and the impact of therapeutic interventions. To describe the development of a scale to evaluate upper limb (UL) function in patients with DMD and SMA, and to describe its validation process, which includes self-training for evaluators. The development of the scale included a review of published scales, an exploratory application of a pilot scale in healthy children and those with DMD, self-training of evaluators in applying the scale using a handbook and video tutorial, and assessment of a group of children with DMD and SMA using the final scale. Reliability was assessed using Cronbach and Kendall concordance and with intra- and inter-rater test-retest, and validity with concordance and factorial analysis. A high level of reliability was observed, with high internal consistency (Cronbach α = 0.97), and inter-rater (Kendall W = 0.96) and intra-rater concordance (r = 0.97 to 0.99). The validity was demonstrated by the absence of significant differences between the results of different evaluators and an expert evaluator (F = 0.023, P > .5), and by the factor analysis, which showed that four factors account for 85.44% of total variance. This scale is a reliable and valid tool for assessing UL functionality in children with DMD and SMA. It is also easily implementable due to the possibility of self-training and the use of simple and inexpensive materials. Copyright © 2016 Sociedad Chilena de Pediatría. Published by Elsevier España, S.L.U. All rights reserved.

  9. Gossip spread in social network models

    NASA Astrophysics Data System (ADS)

    Johansson, Tobias

    2017-04-01

Gossip almost inevitably arises in real social networks. In this article we investigate the relationship between the number of friends of a person and limits on how far gossip about that person can spread in the network. How far gossip travels in a network depends on two sets of factors: (a) factors determining gossip transmission from one person to the next and (b) factors determining network topology. For a simple model where gossip is spread among people who know the victim, it is known that a standard scale-free network model produces a non-monotonic relationship between the number of friends and the expected relative spread of gossip, a pattern that is also observed in real networks (Lind et al., 2007). Here, we study gossip spread in two social network models (Toivonen et al., 2006; Vázquez, 2003) by exploring the parameter space of both models and fitting them to a real Facebook data set. Both models can produce the non-monotonic relationship of real networks more accurately than a standard scale-free model while also exhibiting more realistic variability in gossip spread. Of the two models, the one given in Vázquez (2003) best captures both the expected values and variability of gossip spread.
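A minimal sketch of the kind of gossip model summarized above, in which gossip about a victim passes only between people who also know the victim; the graph representation and function names are illustrative assumptions, not code from the cited studies:

```python
from collections import deque

def gossip_spread(adj, victim, origin):
    """adj: dict node -> set of neighbors. Gossip starts at `origin` (a friend
    of `victim`) and can only pass along edges between the victim's friends.
    Returns the fraction of the victim's friends the gossip reaches."""
    friends = adj[victim]
    reached = {origin}
    queue = deque([origin])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w in friends and w not in reached:
                reached.add(w)
                queue.append(w)
    return len(reached) / len(friends)
```

If the victim's friends form disconnected cliques, gossip is trapped in the clique where it started, which is why spread depends on network topology and not just on the number of friends.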

  10. Patterns and multi-scale drivers of phytoplankton species richness in temperate peri-urban lakes.

    PubMed

    Catherine, Arnaud; Selma, Maloufi; Mouillot, David; Troussellier, Marc; Bernard, Cécile

    2016-07-15

Local species richness (SR) is a key characteristic affecting ecosystem functioning. Yet, the mechanisms regulating phytoplankton diversity in freshwater ecosystems are not fully understood, especially in peri-urban environments where anthropogenic pressures strongly impact the quality of aquatic ecosystems. To address this issue, we sampled the phytoplankton communities of 50 lakes in the Paris area (France) characterized by a large gradient of physico-chemical and catchment-scale characteristics. We used large phytoplankton datasets to describe phytoplankton diversity patterns and applied a machine-learning algorithm to test the degree to which species richness patterns are potentially controlled by environmental factors. Selected environmental factors were studied at two scales: the lake scale (e.g. nutrient concentrations, water temperature, lake depth) and the catchment scale (e.g. catchment, landscape and climate variables). Then, we used a variance partitioning approach to evaluate the interaction between lake-scale and catchment-scale variables in explaining local species richness. Finally, we analysed the residuals of predictive models to identify potential vectors of improvement of phytoplankton species richness predictive models. Lake-scale and catchment-scale drivers provided similar predictive accuracy for local species richness (R^2 = 0.458 and 0.424, respectively). Both models suggested that seasonal temperature variations and nutrient supply strongly modulate local species richness. Integrating lake- and catchment-scale predictors in a single predictive model did not provide increased predictive accuracy, suggesting that the catchment-scale model probably explains observed species richness variations through the impact of catchment-scale variables on in-lake water quality characteristics.
Models based on catchment characteristics, which include simple and easy to obtain variables, provide a meaningful way of predicting phytoplankton species richness in temperate lakes. This approach may prove useful and cost-effective for the management and conservation of aquatic ecosystems. Copyright © 2016 Elsevier B.V. All rights reserved.
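The variance-partitioning arithmetic used above can be sketched as follows. Only the two single-model R² values (0.458 and 0.424) come from the abstract; the combined-model R² in the test is a hypothetical placeholder:

```python
def partition_variance(r2_lake, r2_catchment, r2_combined):
    """Classical two-set variance partitioning: split explained variance into
    the fraction unique to the lake-scale model, the fraction unique to the
    catchment-scale model, and the fraction shared by both."""
    unique_lake = r2_combined - r2_catchment
    unique_catchment = r2_combined - r2_lake
    shared = r2_lake + r2_catchment - r2_combined
    return unique_lake, unique_catchment, shared
```

A combined R² close to either single-model R² yields a large shared fraction, which matches the abstract's conclusion that the two predictor sets carry largely overlapping information.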

  11. Topological structure dynamics revealing collective evolution in active nematics

    PubMed Central

    Shi, Xia-qing; Ma, Yu-qiang

    2013-01-01

Topological defects frequently emerge in active matter such as bacterial colonies, cytoskeleton extracts on substrates, and self-propelled granular or colloidal layers, but their dynamical properties and their relation to large-scale organization and fluctuations in these active systems have seldom been examined. Here we reveal, through a simple model for active nematics using self-driven hard elliptic rods, that the excitation, annihilation and transportation of topological defects differ markedly from those in non-active media. These dynamical processes exhibit strong irreversibility in active nematics in the absence of detailed balance. Moreover, topological defects are the key factors in organizing large-scale dynamic structures and collective flows, resulting in effects across multiple spatial and temporal scales. These findings allow us to control the self-organization of active matter through topological structures. PMID:24346733

  12. Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions.

    PubMed

    Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong

    2016-11-11

Although the GW approximation is recognized as one of the most accurate theories for predicting the excited-state properties of materials, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to GW calculations for 2D materials.

  13. Neural Correlates of Biased Responses: The Negative Method Effect in the Rosenberg Self-Esteem Scale Is Associated with Right Amygdala Volume.

    PubMed

    Wang, Yinan; Kong, Feng; Huang, Lijie; Liu, Jia

    2016-10-01

Self-esteem is a widely studied construct in psychology that is typically measured by the Rosenberg Self-Esteem Scale (RSES). However, a series of cross-sectional and longitudinal studies have suggested that a simple and widely used unidimensional factor model does not provide an adequate explanation of RSES responses due to method effects. To identify the neural correlates of the method effect, we sought to determine whether and how method effects were associated with the RSES and to investigate the neural basis of these effects. Two hundred and eighty Chinese college students (130 males; mean age = 22.64 years) completed the RSES and underwent magnetic resonance imaging (MRI). Behaviorally, method effects were linked to both positively and negatively worded items in the RSES. Neurally, the right amygdala volume negatively correlated with the negative method factor, while the hippocampal volume positively correlated with the general self-esteem factor in the RSES. The neural dissociation between the general self-esteem factor and the negative method factor suggests that different neural mechanisms underlie them. The amygdala is involved in modulating negative affectivity; therefore, the current study sheds light on the nature of method effects related to self-report measures with a mix of positively and negatively worded items. © 2015 Wiley Periodicals, Inc.

  14. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis

    PubMed Central

    Lin, Johnny; Bentler, Peter M.

    2012-01-01

Goodness-of-fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra and Bentler's mean-scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra and Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic. PMID:23144511
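The moment-matching principle behind mean-and-skewness adjustments can be illustrated numerically: choose degrees of freedom d* so that a scaled chi-square matches an empirical skewness (χ²_d has skewness √(8/d)), then scale to match the mean. This is a sketch of the principle only; the paper's statistic is derived analytically from the model, not from resampled statistics.

```python
import numpy as np

def match_scaled_chi2(stats):
    """Return (scale c, adjusted df d*) so that c * chi^2_{d*} matches the
    mean and skewness of the supplied test-statistic sample."""
    x = np.asarray(stats, dtype=float)
    m, sd = x.mean(), x.std()
    skew = np.mean(((x - m) / sd) ** 3)
    d_star = 8.0 / skew**2   # invert skewness(chi^2_d) = sqrt(8/d)
    c = m / d_star           # scale so the mean c * d* matches the sample mean
    return c, d_star
```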

  15. Mathematics and the Internet: A Source of Enormous Confusion and Great Potential

    DTIC Science & Technology

    2009-05-01

free Internet Myth The story recounted below of the scale-free nature of the Internet seems convincing, sound, and almost too good to be true ...models. In fact, much of the initial excitement in the nascent field of network science can be attributed to an early and appealingly simple class...this new class of networks, commonly referred to as scale-free networks. The term scale-free derives from the simple observation that power-law node

  16. Alignment hierarchies: engineering architecture from the nanometre to the micrometre scale.

    PubMed

    Kureshi, Alvena; Cheema, Umber; Alekseeva, Tijna; Cambrey, Alison; Brown, Robert

    2010-12-06

Natural tissues are built of metabolites, soluble proteins and solid extracellular matrix components (largely fibrils), together with cells. These are configured in highly organized hierarchies of structure across length scales from nanometre to millimetre, with alignments that are dominated by anisotropies in their fibrillar matrix. If we are to successfully engineer tissues, these hierarchies need to be mimicked with an understanding of the interactions between them. In particular, the movement of different elements of the tissue (e.g. molecules, cells and bulk fluids) is controlled by matrix structures at distinct scales. We present three novel systems to introduce alignment of collagen fibrils, cells and growth factor gradients within a three-dimensional collagen scaffold using fluid flow, embossing and layering of constructs. Importantly, these can be seen as different parts of the same hierarchy of three-dimensional structure, as they are all formed into dense collagen gels. Fluid flow aligns collagen fibrils at the nanoscale, embossed topographical features provide alignment cues at the microscale, and a layered configuration of three-dimensional collagen scaffolds provides microscale- and mesoscale-aligned pathways for protein factor delivery as well as barriers that confine protein diffusion to specific spatial directions. These seemingly separate methods can be employed to increase the complexity of simple extracellular matrix scaffolds, providing insight into new approaches to directly fabricate complex physical and chemical cues at different hierarchical scales, similar to those in natural tissues.

  17. Adaptation and validation of the Inventory of Family Protective Factors for the Portuguese culture

    PubMed Central

    Augusto, Cláudia Cristina Vieira Carvalho de Oliveira Ferreira; Araújo, Beatriz Rodrigues; Rodrigues, Vítor Manuel Costa Pereira; de Figueiredo, Maria do Céu Aguiar Barbieri

    2014-01-01

OBJECTIVES: To adapt and validate the Inventory of Family Protective Factors (IFPF) for the Portuguese culture. This instrument assesses protective factors that contribute to family resilience. Studies addressing resilience are embedded within the salutogenic paradigm, i.e. they address protective factors of individuals or groups without underestimating risk factors or vulnerability. METHOD: In order to assess the IFPF's linguistic and conceptual equivalence, the instrument was translated, back-translated and the think-aloud protocol was used. We then verified the instrument's sensitivity, reliability and validity to assess its psychometric characteristics. A factor analysis of the principal components with varimax rotation was performed on the scale's items and Cronbach's alpha coefficient was calculated for each dimension. A total of 85 families with disabled children, selected through simple random sampling, self-administered the instrument. RESULTS: The IFPF presents psychometric characteristics that are appropriate for the Portuguese population (Cronbach's alpha = .90). CONCLUSION: The IFPF was adapted and validated for the Portuguese culture and is an instrument to be used in studies intended to assess protective factors of family resilience. PMID:25591096

  18. Adaptation and validation of the Inventory of Family Protective Factors for the Portuguese culture.

    PubMed

    Augusto, Cláudia Cristina Vieira Carvalho de Oliveira Ferreira; Araújo, Beatriz Rodrigues; Rodrigues, Vítor Manuel Costa Pereira; de Figueiredo, Maria do Céu Aguiar Barbieri

    2014-01-01

To adapt and validate the Inventory of Family Protective Factors (IFPF) for the Portuguese culture. This instrument assesses protective factors that contribute to family resilience. Studies addressing resilience are embedded within the salutogenic paradigm, i.e. they address protective factors of individuals or groups without underestimating risk factors or vulnerability. In order to assess the IFPF's linguistic and conceptual equivalence, the instrument was translated, back-translated and the think-aloud protocol was used. We then verified the instrument's sensitivity, reliability and validity to assess its psychometric characteristics. A factor analysis of the principal components with varimax rotation was performed on the scale's items and Cronbach's alpha coefficient was calculated for each dimension. A total of 85 families with disabled children, selected through simple random sampling, self-administered the instrument. The IFPF presents psychometric characteristics that are appropriate for the Portuguese population (Cronbach's alpha = .90). The IFPF was adapted and validated for the Portuguese culture and is an instrument to be used in studies intended to assess protective factors of family resilience.

  19. Worker education level is a factor in self-compliance with dust-preventive methods among small-scale agate industrial workers.

    PubMed

    Aggarwal, Bhagwan D

    2013-01-01

High incidences of silicosis continue to be reported among the agate workers of small-scale household agate processing units in the Khambhat region of Gujarat (India). The objective of this study was to investigate the reasons behind the high prevalence of silicosis and the factors affecting noncompliance with preventive methods among agate workers. The study was conducted using a questionnaire-based structured interview method among 82 agate workers in Khambhat to assess their awareness level about silicosis and preventive methods, existing morbidity, workers' attitudes toward health, and the prevalence of actual use of preventive methods to avoid silica exposure. The majority of the workers (55%) were aware of silicosis and the harmful effects of silica dust exposure (72%) and knew about simple preventive methods to avoid silica dust exposure (80%), but only a minority of the workers (22%) were actually using the simple and available dust-preventive methods. Only 9% of the uneducated workers were using the preventive methods, while usage was higher among educated workers (28%) who had five or more years of schooling, and these workers had fewer health conditions or less morbidity. Gender and job duration had no effect on the usage of dust-preventive methods. The data suggest that noncompliance with dust-preventive methods could be the reason behind the higher prevalence of silicosis and health morbidity in agate workers, and that years of schooling plays a significant role in increased usage of and self-compliance with dust-preventive methods among agate workers.

  20. Assessing diel variation of CH4 flux from rice paddies through temperature patterns

    NASA Astrophysics Data System (ADS)

    Centeno, Caesar Arloo R.; Alberto, Ma Carmelita R.; Wassmann, Reiner; Sander, Bjoern Ole

    2017-10-01

The diel variation in methane (CH4) flux from irrigated rice was characterized during the dry and wet cropping seasons in 2013 and 2014 using the eddy covariance (EC) technique. The EC technique has the advantage of obtaining flux measurements at an extremely high temporal resolution (10 Hz), recording 36,000 measurements per hour. The EC measurements can very well capture the temporal variations of the diel (both diurnal and nocturnal) fluxes of CH4 and the environmental factors (temperature, surface energy flux, and gross ecosystem photosynthesis) at 30-min intervals. The information generated by this technique is important to enhance our mechanistic understanding of the different factors affecting the landscape-scale diel CH4 flux. Distinct diel patterns of CH4 flux were observed when the data were partitioned into different cropping periods (pre-planting, growth, and fallow). The temporal variations of the diel CH4 flux during the dry seasons were more pronounced than during the wet seasons because the latter had so much climatic disturbance from heavy monsoon rains and occasional typhoons. Pearson correlation analysis and the Granger causality test were used to confirm whether the environmental factors evaluated were not only correlated with but also Granger-causing the diel CH4 flux. Soil temperature at 2.5 cm depth (T_s at 2.5 cm) can be used as a simple proxy for predicting diel variations of CH4 fluxes in rice paddies using simple linear regression during both the dry and wet seasons. This simple site-specific temperature response function can be used for gap-filling CH4 flux data to improve estimates of the CH4 source strength from irrigated rice production.
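The gap-filling approach described in the last two sentences reduces to a simple linear regression of CH4 flux on soil temperature at 2.5 cm depth. A minimal sketch; the data and coefficients below are synthetic, not the study's:

```python
import numpy as np

def fit_temperature_proxy(ts, flux):
    """Least-squares fit of flux = a + b * ts. Returns (intercept a, slope b).
    np.polyfit returns coefficients highest degree first, hence the unpacking."""
    b, a = np.polyfit(ts, flux, 1)
    return a, b

def gap_fill(ts_missing, a, b):
    """Predict CH4 flux for periods where only soil temperature is available."""
    return a + b * np.asarray(ts_missing, dtype=float)
```

In practice the regression would be refit per season, since the abstract reports the temperature response to be site- and season-specific.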

  1. Development and application of a screening model for evaluating bioenhanced dissolution in DNAPL source zones

    NASA Astrophysics Data System (ADS)

    Phelan, Thomas J.; Abriola, Linda M.; Gibson, Jenny L.; Smits, Kathleen M.; Christ, John A.

    2015-12-01

    In-situ bioremediation, a widely applied treatment technology for source zones contaminated with dense non-aqueous phase liquids (DNAPLs), has proven economical and reasonably efficient for long-term management of contaminated sites. Successful application of this remedial technology, however, requires an understanding of the complex interaction of transport, mass transfer, and biotransformation processes. The bioenhancement factor, which represents the ratio of DNAPL mass transfer under microbially active conditions to that which would occur under abiotic conditions, is commonly used to quantify the effectiveness of a particular bioremediation remedy. To date, little research has been directed towards the development and validation of methods to predict bioenhancement factors under conditions representative of real sites. This work extends an existing, first-order, bioenhancement factor expression to systems with zero-order and Monod kinetics, representative of many source-zone scenarios. The utility of this model for predicting the bioenhancement factor for previously published laboratory and field experiments is evaluated. This evaluation demonstrates the applicability of these simple bioenhancement factors for preliminary experimental design and analysis, and for assessment of dissolution enhancement in ganglia-contaminated source zones. For ease of application, a set of nomographs is presented that graphically depicts the dependence of bioenhancement factor on physicochemical properties. Application of these nomographs is illustrated using data from a well-documented field site. Results suggest that this approach can successfully capture field-scale, as well as column-scale, behavior. Sensitivity analyses reveal that bioenhanced dissolution will critically depend on in-situ biomass concentrations.
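As an analogue of the first-order bioenhancement factor discussed above, the classical film-theory result for first-order reaction-enhanced mass transfer can be computed as follows. This is the textbook expression (E = Ha/tanh(Ha), with Hatta number Ha), offered as an assumption about the kind of quantity the nomographs encode, not the paper's exact formulation:

```python
import math

def hatta_number(k1, diffusivity, k_mass_transfer):
    """Ha = sqrt(k1 * D) / kL: ratio of reaction rate in the boundary layer
    to the abiotic mass-transfer rate (consistent SI units assumed)."""
    return math.sqrt(k1 * diffusivity) / k_mass_transfer

def enhancement_factor(ha):
    """Film-theory enhancement of interphase mass transfer for a first-order
    reaction: approaches 1 for slow reactions, grows linearly for fast ones."""
    if ha < 1e-8:
        return 1.0   # negligible reaction: no enhancement over abiotic transfer
    return ha / math.tanh(ha)
```

The two limits mirror the bioenhancement concept: biologically inactive conditions give E ≈ 1, while fast in-situ biotransformation (large Ha, e.g. high biomass concentrations) gives E ≈ Ha.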

  2. Psychometric evaluation of the Polish adaptation of the Hill-Bone Compliance to High Blood Pressure Therapy Scale.

    PubMed

    Uchmanowicz, Izabella; Jankowska-Polańska, Beata; Chudiak, Anna; Szymańska-Chabowska, Anna; Mazur, Grzegorz

    2016-05-10

Development of simple instruments for the determination of the level of adherence in patients with high blood pressure is the subject of ongoing research. One such instrument, gaining growing popularity worldwide, is the Hill-Bone Compliance to High Blood Pressure Therapy Scale. The aim of this study was to adapt and test the reliability of the Polish version of the Hill-Bone Compliance to High Blood Pressure Therapy Scale. A standard guideline was used for the translation and cultural adaptation of the English version of the Hill-Bone Compliance to High Blood Pressure Therapy Scale into Polish. The study included 117 Polish patients with hypertension aged between 27 and 90 years, among them 53 men and 64 women. Cronbach's alpha was used for analysing the internal consistency of the scale. The mean score in the reduced sodium intake subscale was M = 5.7 points (standard deviation SD = 1.6 points). The mean score in the appointment-keeping subscale was M = 3.4 points (SD = 1.4 points). The mean score in the medication-taking subscale was M = 11.6 points (SD = 3.3 points). In the principal component analysis, the three-factor system (1 - medication-taking, 2 - appointment-keeping, 3 - reduced sodium intake) accounted for 53 % of total variance. All questions had factor loadings > 0.4. In the medication-taking subscale, most questions (6 out of 9) had their highest loadings on Factor 1; in the appointment-keeping subscale, all questions (2 out of 2) had their highest loadings on Factor 2; in the reduced sodium intake subscale, most questions (2 out of 3) had their highest loadings on Factor 3. Goodness of fit was tested at χ² = 248.87; p < 0.001. The Cronbach's alpha score for the entire questionnaire was 0.851. The Hill-Bone Compliance to High Blood Pressure Therapy Scale proved to be suitable for use in the Polish population. Use of this screening tool for the assessment of adherence to BP treatment is recommended.
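Cronbach's alpha, used above to assess internal consistency, is straightforward to compute from a respondents-by-items matrix; a minimal NumPy sketch with illustrative data (not the study's):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a 2-D array: rows = respondents, cols = items.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)      # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Perfectly correlated items give alpha = 1 (maximum internal consistency)
perfect = np.array([[1, 1], [2, 2], [3, 3], [4, 4]])
print(round(cronbach_alpha(perfect), 2))  # 1.0
```

Values above about 0.8, such as the 0.851 reported for the whole questionnaire, are conventionally read as good internal consistency.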

  3. Simulating and mapping spatial complexity using multi-scale techniques

    USGS Publications Warehouse

    De Cola, L.

    1994-01-01

A central problem in spatial analysis is the mapping of data for complex spatial fields using relatively simple data structures, such as those of a conventional GIS. This complexity can be measured using such indices as multi-scale variance, which reflects spatial autocorrelation, and multi-fractal dimension, which characterizes the values of fields. These indices are computed for three spatial processes: Gaussian noise, a simple mathematical function, and data for a random walk. Fractal analysis is then used to produce a vegetation map of the central region of California based on a satellite image. This analysis suggests that real world data lie on a continuum between the simple and the random, and that a major GIS challenge is the scientific representation and understanding of rapidly changing multi-scale fields.

  4. Temporal variability in phosphorus transfers: classifying concentration-discharge event dynamics

    NASA Astrophysics Data System (ADS)

    Haygarth, P.; Turner, B. L.; Fraser, A.; Jarvis, S.; Harrod, T.; Nash, D.; Halliwell, D.; Page, T.; Beven, K.

The importance of temporal variability in relationships between phosphorus (P) concentration (Cp) and discharge (Q) is linked here to a simple means of classifying Cp-Q relationships in terms of functional types of response. New experimental data at the upstream interface of grassland soil and catchment systems at a range of scales (lysimeters to headwaters) in England and Australia are used to demonstrate the potential of such an approach. Three types of event are defined as Types 1-3, depending on whether the relative change in Q exceeds the relative change in Cp (Type 1), whether Cp and Q are positively inter-related (Type 2) and whether Cp varies while Q is unchanged (Type 3). The classification helps to characterise circumstances that can be explained mechanistically in relation to (i) the scale of the study (with a tendency towards Type 1 in small-scale lysimeters), (ii) the form of P, with a tendency towards Type 1 for soluble (i.e., <0.45 μm) P forms, and (iii) the sources of P, with Type 3 dominant where P availability overrides transport controls. This simple framework provides a basis for development of a more complex and quantitative classification of Cp-Q relationships that can be developed further to contribute to future models of P transfer and delivery from slope to stream. Studies that evaluate the temporal dynamics of P transfer are currently grossly under-represented in comparison with models based on static/spatial factors.
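The three event types can be captured in a small rule-based classifier. The abstract does not give exact decision rules, so the tolerance and the ordering of the checks below are illustrative assumptions (Types 1 and 2 can in principle co-occur; here Type 3 is tested first, then the sign of the Cp-Q relationship):

```python
def classify_event(dQ_rel, dCp_rel, corr, q_tol=0.05):
    """Hedged sketch of the Type 1-3 Cp-Q event classes described above.

    dQ_rel, dCp_rel : relative (fractional) changes in discharge and P conc.
    corr            : sign of the Cp-Q relationship during the event
    q_tol           : assumed threshold for "Q essentially unchanged"
    """
    if abs(dQ_rel) < q_tol and abs(dCp_rel) > q_tol:
        return 3   # Cp varies while Q is essentially unchanged
    if corr > 0:
        return 2   # Cp and Q positively inter-related
    if abs(dQ_rel) > abs(dCp_rel):
        return 1   # relative change in Q exceeds relative change in Cp
    return 0       # unclassified under these simple rules
```

For example, a source-limited event with stable flow but varying concentration returns Type 3, matching the case where P availability overrides transport controls.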

  5. Simple scale interpolator facilitates reading of graphs

    NASA Technical Reports Server (NTRS)

    Fazio, A.; Henry, B.; Hood, D.

    1966-01-01

    Set of cards with scale divisions and a scale finder permits accurate reading of the coordinates of points on linear or logarithmic graphs plotted on rectangular grids. The set contains 34 different scales for linear plotting and 28 single cycle scales for log plots.

  6. Simulated binding of transcription factors to active and inactive regions folds human chromosomes into loops, rosettes and topological domains

    PubMed Central

    Brackley, Chris A.; Johnson, James; Kelly, Steven; Cook, Peter R.; Marenduzzo, Davide

    2016-01-01

    Biophysicists are modeling conformations of interphase chromosomes, often basing the strengths of interactions between segments distant on the genetic map on contact frequencies determined experimentally. Here, instead, we develop a fitting-free, minimal model: bivalent or multivalent red and green ‘transcription factors’ bind to cognate sites in strings of beads (‘chromatin’) to form molecular bridges stabilizing loops. In the absence of additional explicit forces, molecular dynamic simulations reveal that bound factors spontaneously cluster—red with red, green with green, but rarely red with green—to give structures reminiscent of transcription factories. Binding of just two transcription factors (or proteins) to active and inactive regions of human chromosomes yields rosettes, topological domains and contact maps much like those seen experimentally. This emergent ‘bridging-induced attraction’ proves to be a robust, simple and generic force able to organize interphase chromosomes at all scales. PMID:27060145

  7. Switching between simple cognitive tasks: the interaction of top-down and bottom-up factors

    NASA Technical Reports Server (NTRS)

    Ruthruff, E.; Remington, R. W.; Johnston, J. C.

    2001-01-01

    How do top-down factors (e.g., task expectancy) and bottom-up factors (e.g., task recency) interact to produce an overall level of task readiness? This question was addressed by factorially manipulating task expectancy and task repetition in a task-switching paradigm. The effects of expectancy and repetition on response time tended to interact underadditively, but only because the traditional binary task-repetition variable lumps together all switch trials, ignoring variation in task lag. When the task-recency variable was scaled continuously, all 4 experiments instead showed additivity between expectancy and recency. The results indicated that expectancy and recency influence different stages of mental processing. One specific possibility (the configuration-execution model) is that task expectancy affects the time required to configure upcoming central operations, whereas task recency affects the time required to actually execute those central operations.

  8. Emissions of air pollutants from scented candles burning in a test chamber

    NASA Astrophysics Data System (ADS)

    Derudi, Marco; Gelosa, Simone; Sliepcevich, Andrea; Cattaneo, Andrea; Rota, Renato; Cavallo, Domenico; Nano, Giuseppe

    2012-08-01

Burning of scented candles in indoor environments can release a large number of toxic chemicals. However, in spite of the large market penetration of scented candles, very few studies have investigated their organic pollutant emissions. This paper investigates volatile organic compound emissions, with particular reference to the priority indoor pollutants identified by the European Commission, from the burning of scented candles in a laboratory-scale test chamber. It has been found that BTEX and PAH emission factors show large differences among different candles, possibly due to the raw paraffinic material used, while aldehyde emission factors seem more related to the presence of additives. This clearly evidences the need for simple and cheap methodologies to measure the emission factors of commercial candles in order to predict the expected pollutant concentration in a given indoor environment and compare it with health safety standards.
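Turning a measured emission factor into an expected indoor concentration is commonly done with a well-mixed single-zone box model; the sketch below uses that standard screening assumption with illustrative numbers, not values from the paper:

```python
def steady_state_concentration(emission_ug_per_h, volume_m3, ach_per_h):
    """Well-mixed single-zone box model (a common screening assumption):
    steady-state concentration = emission rate / (room volume * air changes).

    emission_ug_per_h : pollutant emission rate of the burning candle [ug/h]
    volume_m3         : room volume [m^3]
    ach_per_h         : air changes per hour [1/h]
    """
    return emission_ug_per_h / (volume_m3 * ach_per_h)

# e.g. a hypothetical candle emitting 50 ug/h of benzene
# in a 30 m^3 room ventilated at 0.5 air changes per hour
print(round(steady_state_concentration(50, 30, 0.5), 2))  # 3.33 ug/m^3
```

The predicted concentration can then be compared directly with indoor air quality guideline values, as the abstract suggests.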

  9. Effect of lensing non-Gaussianity on the CMB power spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Antony; Pratten, Geraint, E-mail: antony@cosmologist.info, E-mail: geraint.pratten@gmail.com

    2016-12-01

Observed CMB anisotropies are lensed, and the lensed power spectra can be calculated accurately assuming the lensing deflections are Gaussian. However, the lensing deflections are actually slightly non-Gaussian due to both non-linear large-scale structure growth and post-Born corrections. We calculate the leading correction to the lensed CMB power spectra from the non-Gaussianity, which is determined by the lensing bispectrum. Assuming no primordial non-Gaussianity, the lowest-order result gives ∼ 0.3% corrections to the BB and EE polarization spectra on small-scales. However we show that the effect on EE is reduced by about a factor of two by higher-order Gaussian lensing smoothing, rendering the total effect safely negligible for the foreseeable future. We give a simple analytic model for the signal expected from skewness of the large-scale lensing field; the effect is similar to a net demagnification and hence a small change in acoustic scale (and therefore out of phase with the dominant lensing smoothing that predominantly affects the peaks and troughs of the power spectrum).

  10. Scaling laws of passive-scalar diffusion in the interstellar medium

    NASA Astrophysics Data System (ADS)

    Colbrook, Matthew J.; Ma, Xiangcheng; Hopkins, Philip F.; Squire, Jonathan

    2017-05-01

    Passive-scalar mixing (metals, molecules, etc.) in the turbulent interstellar medium (ISM) is critical for abundance patterns of stars and clusters, galaxy and star formation, and cooling from the circumgalactic medium. However, the fundamental scaling laws remain poorly understood in the highly supersonic, magnetized, shearing regime relevant for the ISM. We therefore study the full scaling laws governing passive-scalar transport in idealized simulations of supersonic turbulence. Using simple phenomenological arguments for the variation of diffusivity with scale based on Richardson diffusion, we propose a simple fractional diffusion equation to describe the turbulent advection of an initial passive scalar distribution. These predictions agree well with the measurements from simulations, and vary with turbulent Mach number in the expected manner, remaining valid even in the presence of a large-scale shear flow (e.g. rotation in a galactic disc). The evolution of the scalar distribution is not the same as obtained using simple, constant 'effective diffusivity' as in Smagorinsky models, because the scale dependence of turbulent transport means an initially Gaussian distribution quickly develops highly non-Gaussian tails. We also emphasize that these are mean scalings that apply only to ensemble behaviours (assuming many different, random scalar injection sites): individual Lagrangian 'patches' remain coherent (poorly mixed) and simply advect for a large number of turbulent flow-crossing times.
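The Richardson-diffusion argument invoked above implies a scale-dependent diffusivity and superdiffusive spreading, with the ensemble mean-square separation growing as t³ at late times. A small numerical check of that scaling, with illustrative constants (the Richardson constant and dissipation rate here are arbitrary choices, not values from the paper):

```python
import numpy as np

# Richardson diffusion: d<r^2>/dt = g * eps^(1/3) * <r^2>^(2/3),
# whose solution grows as t^3 at late times (superdiffusion).
g, eps = 0.5, 1e-4          # illustrative Richardson constant, dissipation rate
dt, steps = 1e-2, 20000
r2 = 1e-6                   # small initial mean-square separation
ts, r2s = [], []
for i in range(1, steps + 1):
    r2 += dt * g * eps**(1 / 3) * r2**(2 / 3)   # forward-Euler step
    ts.append(i * dt)
    r2s.append(r2)

# Log-log slope over the late-time half of the run: expect a value near 3
log_t = np.log(ts[steps // 2:])
log_r2 = np.log(r2s[steps // 2:])
slope = np.polyfit(log_t, log_r2, 1)[0]
print(round(slope, 2))  # close to 3
```

This t³ growth contrasts with the t¹ growth of a constant "effective diffusivity" Smagorinsky-type closure, which is the distinction the abstract draws.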

  11. Reliability and validity of the work and social adjustment scale in phobic disorders.

    PubMed

    Mataix-Cols, David; Cowley, Amy J; Hankins, Matthew; Schneider, Andreas; Bachofen, Martin; Kenwright, Mark; Gega, Lina; Cameron, Rachel; Marks, Isaac M

    2005-01-01

    The Work and Social Adjustment Scale (WSAS) is a simple widely used 5-item measure of disability whose psychometric properties need more analysis in phobic disorders. The reliability, factor structure, validity, and sensitivity to change of the WSAS were studied in 205 phobic patients (73 agoraphobia, 62 social phobia, and 70 specific phobia) who participated in various open and randomized trials of self-exposure therapy. Internal consistency of the WSAS was excellent in all phobics pooled and in agoraphobics and social phobics separately. Principal components analysis extracted a single general factor of disability. Specific phobics gave less consistent ratings across WSAS items, suggesting that some items were less relevant to their problem. Internal consistency was marginally higher for self-ratings than clinician ratings of the WSAS. Self-ratings and clinician ratings correlated highly though patients tended to rate themselves as more disabled than clinicians did. WSAS total scores reflected differences in phobic severity and improvement with treatment. The WSAS is a valid, reliable, and change-sensitive measure of work/social and other adjustment in phobic disorders, especially in agoraphobia and social phobia.

  12. Study of stress, self-esteem and depression in medical students and effect of music on perceived stress.

    PubMed

    Baste, Vrushali S; Gadkari, Jayashree V

    2014-01-01

Medical students are exposed to many stressors, and stress that is perceived negatively or becomes excessive can adversely affect academic performance and health. The objective of this study was to assess stress, the predominant stressor, and the effect of music on perceived stress. 90 undergraduate students were selected randomly. A written questionnaire covering personal information, stressful factors, ways to cope with stress, the Rosenberg self-esteem scale (Rosenberg, 1965) and the 'Quick Inventory of Depressive Symptomatology' self-rated 16 (QIDS-SR-16) was given. 45.6% of students had mild stress, 7.7% had moderate stress and 1.1% had severe stress. Academic factors were the predominant cause of stress in most students, followed by physical, social and emotional factors. On the Rosenberg self-esteem scale (Rosenberg, 1965), 85.6% of students had high self-esteem, and on the QIDS-SR-16, 50% of students had depression. The effect of music on perceived stress was statistically significant. The medical curriculum is associated with increased stress in students. Music can be used as a simple, inexpensive and effective therapy for stress.

  13. Supersymmetry from typicality: TeV-scale gauginos and PeV-scale squarks and sleptons.

    PubMed

    Nomura, Yasunori; Shirai, Satoshi

    2014-09-12

    We argue that under a set of simple assumptions the multiverse leads to low-energy supersymmetry with the spectrum often called spread or minisplit supersymmetry: the gauginos are in the TeV region with the other superpartners 2 or 3 orders of magnitude heavier. We present a particularly simple realization of supersymmetric grand unified theory using this idea.

  14. Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity

    Treesearch

    Harbin Li; Steven G. McNulty

    2007-01-01

    Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL...
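A simple mass balance for critical acid loads generally balances base-cation inputs against uptake and a critical leaching term. The sketch below is one generic form; term names, units and signs vary between SMBE formulations, so this is an illustrative assumption rather than the exact equation tested in the study:

```python
def critical_acid_load(bc_dep, bc_w, bc_u, anc_le_crit):
    """Generic simple-mass-balance estimate of a critical acid load
    (all terms in eq/ha/yr; illustrative form, see caveats above).

    bc_dep      : non-marine base cation deposition
    bc_w        : base cation supply from mineral weathering
    bc_u        : net base cation uptake by vegetation
    anc_le_crit : critical leaching of acid-neutralizing capacity
    """
    return bc_dep + bc_w - bc_u - anc_le_crit

print(critical_acid_load(200.0, 500.0, 100.0, 300.0))  # 300.0 eq/ha/yr
```

Because each input term carries its own measurement uncertainty, propagating those uncertainties through even this simple sum, as the abstract sets out to do, is essential before applying SMBE at large scales.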

  15. Cross-cultural adaptation of the short-form condom attitude scale: validity assessment in a sub-sample of rural-to-urban migrant workers in Bangladesh

    PubMed Central

    2013-01-01

    Background The reliable and valid measurement of attitudes towards condom use is essential to assist efforts to design population-specific interventions aimed at promoting positive attitudes towards, and increased use of, condoms. Although several studies, mostly in the English-speaking western world, have demonstrated the utility of condom attitude scales, very few culturally relevant condom attitude measures have been developed to date. We have developed a scale and evaluated its psychometric properties in a sub-sample of rural-to-urban migrant workers in Bangladesh. Methods This paper reports mostly on the cross-sectional survey components of a mixed-methods sexual health research project in Bangladesh. The survey sample (n = 878) comprised rural-to-urban migrant taxi drivers (n = 437) and restaurant workers (n = 441) in Dhaka (aged 18–35 years). The study also involved focus group sessions with the same populations to establish the content validity and cultural equivalency of the scale. The current scale was administered with a large sexual health survey questionnaire and consisted of 10 items. Quantitative and qualitative data were assessed with statistical and thematic analysis, respectively, and then presented. Results The participants found the scale simple and easy to understand and use. The internal consistency (α) of the scale was 0.89, with high construct validity (the first component accounted for about 52% of the variance and the second component about 20% of the total variance, with an eigenvalue for both factors greater than one). The test-retest reliability (repeatability) was also found satisfactory, with high inter-item correlations (the majority of the intra-class correlation coefficient values were above 2 and significant for all items on the scale, p < 0.001). The 2-week repeatability assessed by the Pearson product-moment correlation coefficient was 0.75. 
Conclusion The results indicated that the Bengali version of the scale has good metric properties for assessing attitudes toward condom use. The validated scale is a short, simple and reliable instrument for measuring attitudes towards condom use in vulnerable populations like the current study sample. This culturally-customized scale can be used to monitor the progress of condom uptake and promotion activities in Bangladesh or similar settings. PMID:23510383

  16. Cross-cultural adaptation of the short-form condom attitude scale: validity assessment in a sub-sample of rural-to-urban migrant workers in Bangladesh.

    PubMed

    Roy, Tapash; Anderson, Claire; Evans, Catrin; Rahman, Mohammad Shafiqur; Rahman, Mosiur

    2013-03-19

    The reliable and valid measurement of attitudes towards condom use is essential to assist efforts to design population-specific interventions aimed at promoting positive attitudes towards, and increased use of, condoms. Although several studies, mostly in the English-speaking western world, have demonstrated the utility of condom attitude scales, very few culturally relevant condom attitude measures have been developed to date. We have developed a scale and evaluated its psychometric properties in a sub-sample of rural-to-urban migrant workers in Bangladesh. This paper reports mostly on the cross-sectional survey components of a mixed-methods sexual health research project in Bangladesh. The survey sample (n = 878) comprised rural-to-urban migrant taxi drivers (n = 437) and restaurant workers (n = 441) in Dhaka (aged 18-35 years). The study also involved focus group sessions with the same populations to establish the content validity and cultural equivalency of the scale. The current scale was administered with a large sexual health survey questionnaire and consisted of 10 items. Quantitative and qualitative data were assessed with statistical and thematic analysis, respectively, and then presented. The participants found the scale simple and easy to understand and use. The internal consistency (α) of the scale was 0.89, with high construct validity (the first component accounted for about 52% of the variance and the second component about 20% of the total variance, with an eigenvalue for both factors greater than one). The test-retest reliability (repeatability) was also found satisfactory, with high inter-item correlations (the majority of the intra-class correlation coefficient values were above 2 and significant for all items on the scale, p < 0.001). The 2-week repeatability assessed by the Pearson product-moment correlation coefficient was 0.75. The results indicated that the Bengali version of the scale has good metric properties for assessing attitudes toward condom use. 
The validated scale is a short, simple and reliable instrument for measuring attitudes towards condom use in vulnerable populations like the current study sample. This culturally-customized scale can be used to monitor the progress of condom uptake and promotion activities in Bangladesh or similar settings.

  17. Superfluidity in Strongly Interacting Fermi Systems with Applications to Neutron Stars

    NASA Astrophysics Data System (ADS)

    Khodel, Vladimir

    The rotational dynamics and cooling history of neutron stars are influenced by the superfluid properties of nucleonic matter. In this thesis a novel separation technique is applied to the analysis of the gap equation for neutron matter. It is shown that the problem can be recast into two tasks: solving a simple system of linear integral equations for the shape functions of the various components of the gap function, and solving a system of non-linear algebraic equations for their scale factors. Important simplifications result from the fact that the ratio of the gap amplitude to the Fermi energy provides a small parameter in this problem. The relationship between the analytic structure of the shape functions and the density interval for the existence of a superfluid gap is discussed. It is shown that in the 1S0 channel the position of the first zero of the shape function gives an estimate of the upper critical density. The relation between the resonant behavior of the two-neutron interaction in this channel and the density dependence of the gap is established. The behavior of the gap in the limits of low and high densities is analyzed. Various approaches to the calculation of the scale factors are considered: model cases, angular averaging, and perturbation theory. An optimization-based approach is proposed. The shape functions and scale factors for the Argonne v14 and v18 potentials are determined in the singlet and triplet channels. The dependence of the solution on the value of the effective mass and on medium polarization is studied.

  18. Dynamics of Active Layer Depth across Alaskan Tundra Ecosystems

    NASA Astrophysics Data System (ADS)

    Ma, C.; Zhang, X.; Song, X.; Xu, X.

    2016-12-01

    The thickness of the active layer, the near-surface layer of Earth material above permafrost that undergoes seasonal freezing and thawing, is of considerable importance in high-latitude environments because most physical, chemical, and biological processes in the permafrost region take place within it. The dynamics of active layer thickness (ALT) result from a combination of factors including heat transfer, soil water content, soil texture, root density, stem density, moss layer thickness, and organic layer thickness. However, the magnitude and controls of ALT in the permafrost region remain uncertain. The purpose of this study is to improve our understanding of the dynamics of ALT across Alaskan tundra ecosystems and their controls at multiple scales, ranging from plots to the entire state. This study compiled a comprehensive dataset of ALT at site and regional scales across Alaskan tundra ecosystems, and further analyzed ALT dynamics and their hierarchical controls. We found that air temperature played a predominant role in the seasonality of ALT, modulated by other physical and chemical factors including soil texture, moisture, and root density. The structural equation modeling (SEM) analysis confirmed the predominant role of physical controls (dominated by heat and soil properties), followed by chemical and biological factors. A simple empirical model was then developed to reconstruct ALT across Alaska. Comparisons against field observational data show that the method used in this study is robust; the reconstructed time series of ALT across Alaska provides a valuable data source for understanding ALT and validating large-scale ecosystem models.

  19. Martian Atmospheric Modeling of Scale Factors for MarsGRAM 2005 and the MAVEN Project

    NASA Technical Reports Server (NTRS)

    McCullough, Chris

    2011-01-01

    For spacecraft missions to Mars, especially the navigation of Martian orbiters and landers, extensive knowledge of the Martian atmosphere is extremely important. The generally accepted NASA standard for atmospheric modeling is the Mars Global Reference Atmospheric Model (MarsGRAM), which was developed at Marshall Space Flight Center. MarsGRAM is useful for tasks such as aerobraking, performance analysis and operations planning for aerobraking, entry, descent and landing, and aerocapture. Unfortunately, the densities for the Martian atmosphere in MarsGRAM are based on table look-up and not on an analytical algorithm. Also, these values can vary drastically from the densities actually experienced by the spacecraft. This does not have much of an impact on simple integrations but drastically affects its usefulness in other applications, especially those in navigation. For example, the navigation team for the Mars Atmosphere and Volatile EvolutioN (MAVEN) project uses MarsGRAM to target the desired atmospheric density for the orbiter's periapsis passage, its closest approach to the planet. After the satellite's passage through periapsis, the computed density is compared to the MarsGRAM model and a scale factor is assigned to the model to account for the difference. Therefore, large variations in the atmosphere from the model can cause unexpected deviations from the spacecraft's planned trajectory. In order to account for this, an analytic stochastic model of the scale factor's behavior is desired. The development of this model will allow the MAVEN navigation team to determine the probability of various Martian atmospheric variations and their effects on the spacecraft.
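The per-pass density scale factor is just the ratio of experienced to modeled density, and a first-order Gauss-Markov process is one common way to model such a factor stochastically. The sketch below illustrates both; the Gauss-Markov choice and all parameter values are assumptions for illustration, not the MAVEN navigation team's documented design:

```python
import math
import random

def density_scale_factor(rho_observed, rho_model):
    """Scale factor assigned after a periapsis pass: the ratio of the density
    the spacecraft actually experienced to the MarsGRAM table value."""
    return rho_observed / rho_model

def gauss_markov_step(s_prev, dt, tau, sigma):
    """One step of a first-order Gauss-Markov process with mean 1 -- a
    common candidate (assumed here, not taken from MAVEN documentation)
    for the scale factor's pass-to-pass evolution.

    dt    : time between periapsis passes
    tau   : correlation time of atmospheric variability
    sigma : steady-state standard deviation of the scale factor
    """
    phi = math.exp(-dt / tau)
    noise = sigma * math.sqrt(1.0 - phi**2) * random.gauss(0.0, 1.0)
    return 1.0 + phi * (s_prev - 1.0) + noise

print(density_scale_factor(2.0e-11, 1.0e-11))            # 2.0
print(round(gauss_markov_step(1.5, 1.0, 10.0, 0.0), 4))  # deterministic: 1.4524
```

With sigma = 0 the process simply relaxes exponentially toward 1, while a nonzero sigma gives the random pass-to-pass variations the navigation team would assign probabilities to.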

  20. Development and validation of measures to assess prevention and control of AMR in hospitals.

    PubMed

    Flanagan, Mindy; Ramanujam, Rangaraj; Sutherland, Jason; Vaughn, Thomas; Diekema, Daniel; Doebbeling, Bradley N

    2007-06-01

    The rapid spread of antimicrobial resistance (AMR) in US hospitals poses serious quality and safety problems. Expert panels, identifying strategies for optimizing antibiotic use and preventing AMR spread, have recommended that hospitals undertake efforts to implement specific evidence-based practices. Our objective was to develop and validate a measurement scale for assessing hospitals' efforts to implement recommended AMR prevention and control measures. Surveys were mailed to infection control professionals in a national sample of 670 US hospitals stratified by geographic region, bed size, teaching status, and VA affiliation. Four hundred forty-eight infection control professionals participated (67% response rate). Survey items measured implementation of guideline recommendations, practices for AMR monitoring and feedback, AMR-related outcomes (methicillin-resistant Staphylococcus aureus [MRSA] prevalence and outbreaks), and organizational features. "Derivation" and "validation" samples were randomly selected. Exploratory factor analysis was performed to identify factors underlying AMR prevention and control efforts. Multiple methods were used for validation. We identified 4 empirically distinct factors in AMR prevention and control: (1) practices for antimicrobial prescription/use, (2) information/resources for AMR control, (3) practices for isolating infected patients, and (4) organizational support for infection control policies. The Prevention and Control of Antimicrobial Resistance scale was reliable and had content and construct validity. MRSA prevalence was significantly lower in hospitals with higher resource/information availability and broader organizational support. The Prevention and Control of Antimicrobial Resistance scale offers a simple yet discriminating assessment of AMR prevention and control efforts. Its use should complement assessment methods based exclusively on AMR outcomes.

  1. Semi-automated intra-operative fluoroscopy guidance for osteotomy and external-fixator.

    PubMed

    Lin, Hong; Samchukov, Mikhail L; Birch, John G; Cherkashin, Alexander

    2006-01-01

    This paper outlines a semi-automated intra-operative fluoroscopy guidance and monitoring approach for osteotomy and external-fixator application in orthopedic surgery. The Intra-operative Guidance module is one component of the "LegPerfect Suite" developed for assisting the surgical correction of lower extremity angular deformity. The Intra-operative Guidance module uses information from the preoperative surgical planning module as a guideline to semi-automatically overlay (register) its bone outline with the bone edge from the real-time fluoroscopic C-Arm X-ray image in the operating room. In the registration process, a scaling factor is obtained automatically by matching a fiducial template in the fluoroscopic image to a marker in the module. A triangular metal plate placed on the operating table is used as the fiducial template. The area of the template image within the viewing area of the fluoroscopy machine is obtained by image processing techniques such as edge detection and the Hough transformation, which extract the template from other objects in the fluoroscopic image. The area of the fiducial template from the fluoroscopic image is then compared with the area of the marker from the planning so as to obtain the scaling factor. Once the scaling factor is obtained, the user can shift and rotate the preoperative plan with simple mouse operations to overlay its bone outline with the bone edge from the fluoroscopic image. In this way, osteotomy levels and external-fixator positioning on the limb can be guided by the computerized preoperative plan.
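Comparing the fiducial's segmented area in the image with the marker's known physical area yields the scaling factor directly: area scales with the square of length, so the linear factor is the square root of the area ratio. A minimal sketch of that step (the numbers are illustrative assumptions, not values from the paper):

```python
import math

def scale_from_fiducial(marker_area_mm2, template_area_px):
    """mm-per-pixel scaling factor from the triangular fiducial plate:
    area scales as length squared, so take the square root of the ratio
    of the marker's physical area to the segmented template's pixel area."""
    return math.sqrt(marker_area_mm2 / template_area_px)

# e.g. a hypothetical 400 mm^2 fiducial triangle segmented
# as 10,000 pixels in the fluoroscopic image
print(round(scale_from_fiducial(400.0, 10000.0), 2))  # 0.2 mm per pixel
```

With the scale fixed, the remaining registration reduces to the in-plane shift and rotation that the surgeon adjusts by mouse.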

  2. High throughput exploration of process-property linkages in Al-6061 using instrumented spherical microindentation and microstructurally graded samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weaver, Jordan S.; Khosravani, Ali; Castillo, Andrew

    Recent spherical nanoindentation protocols have proven robust at capturing the local elastic-plastic response of polycrystalline metal samples at length scales much smaller than the grain size. In this work, we extend these protocols to length scales that include multiple grains to recover microindentation stress-strain curves. These new protocols are first established in this paper and then demonstrated for Al-6061 by comparing the measured indentation stress-strain curves with the corresponding measurements from uniaxial tension tests. More specifically, the scaling factor between the uniaxial yield strength and the indentation yield strength was determined to be about 1.9, which is significantly lower than the value of 2.8 used commonly in the literature. Furthermore, the reasons for this difference are discussed. Second, the benefits of these new protocols in facilitating high throughput exploration of process-property relationships are demonstrated through a simple case study.
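The reported scaling factor converts an indentation yield strength to a uniaxial estimate by simple division; the sketch below contrasts the measured factor (~1.9) with the commonly used literature value (2.8). The indentation strength value is illustrative, not taken from the paper:

```python
def uniaxial_yield_from_indentation(sigma_ind_yield_mpa, c=1.9):
    """Estimate uniaxial yield strength [MPa] from indentation yield
    strength using a constraint-type scaling factor c; c = 1.9 is the
    value reported for Al-6061 in this work, vs. 2.8 from the literature."""
    return sigma_ind_yield_mpa / c

sigma_ind = 530.0  # hypothetical indentation yield strength [MPa]
print(round(uniaxial_yield_from_indentation(sigma_ind), 1))       # 278.9 (c = 1.9)
print(round(uniaxial_yield_from_indentation(sigma_ind, 2.8), 1))  # 189.3 (c = 2.8)
```

The roughly 50% spread between the two estimates shows why the choice of scaling factor matters when indentation is used as a high-throughput proxy for tensile testing.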

  3. High throughput exploration of process-property linkages in Al-6061 using instrumented spherical microindentation and microstructurally graded samples

    DOE PAGES

    Weaver, Jordan S.; Khosravani, Ali; Castillo, Andrew; ...

    2016-06-14

    Recent spherical nanoindentation protocols have proven robust at capturing the local elastic-plastic response of polycrystalline metal samples at length scales much smaller than the grain size. In this work, we extend these protocols to length scales that include multiple grains to recover microindentation stress-strain curves. These new protocols are first established in this paper and then demonstrated for Al-6061 by comparing the measured indentation stress-strain curves with the corresponding measurements from uniaxial tension tests. More specifically, the scaling factor between the uniaxial yield strength and the indentation yield strength was determined to be about 1.9, which is significantly lower than the value of 2.8 commonly used in the literature. Furthermore, the reasons for this difference are discussed. Second, the benefits of these new protocols in facilitating high throughput exploration of process-property relationships are demonstrated through a simple case study.

  4. Quick clay and landslides of clayey soils.

    PubMed

    Khaldoun, Asmae; Moller, Peder; Fall, Abdoulaye; Wegdam, Gerard; De Leeuw, Bert; Méheust, Yves; Otto Fossum, Jon; Bonn, Daniel

    2009-10-30

    We study the rheology of quick clay, an unstable soil responsible for many landslides. We show that above a critical stress the material starts flowing abruptly with a very large viscosity decrease caused by the flow. This leads to avalanche behavior that accounts for the instability of quick clay soils. Reproducing landslides on a small scale in the laboratory shows that an additional factor that determines the violence of the slides is the inhomogeneity of the flow. We propose a simple yield stress model capable of reproducing the laboratory landslide data, allowing us to relate landslides to the measured rheology.

  5. Quick Clay and Landslides of Clayey Soils

    NASA Astrophysics Data System (ADS)

    Khaldoun, Asmae; Moller, Peder; Fall, Abdoulaye; Wegdam, Gerard; de Leeuw, Bert; Méheust, Yves; Otto Fossum, Jon; Bonn, Daniel

    2009-10-01

    We study the rheology of quick clay, an unstable soil responsible for many landslides. We show that above a critical stress the material starts flowing abruptly with a very large viscosity decrease caused by the flow. This leads to avalanche behavior that accounts for the instability of quick clay soils. Reproducing landslides on a small scale in the laboratory shows that an additional factor that determines the violence of the slides is the inhomogeneity of the flow. We propose a simple yield stress model capable of reproducing the laboratory landslide data, allowing us to relate landslides to the measured rheology.

  6. Computational and Matrix Isolation Studies of (2- and 3-Furyl)methylene

    DTIC Science & Technology

    1994-01-01

    ynal, (Appendix 3) Simple HF calculations using the 6-31G basis set + ZPE (zero-point energy correction applied) predict 2.2 to be more stable in both... QCISD(T)/6-311G** + ZPE predict the triplet to be more stable by 2.9 kcal/mol. However, calculations using MP4SDTQ/6-311G + ZPE predict the singlet to... calculated frequencies were scaled by a factor of 0.9. Table 2.30: Calculated ZPE for 2-Oxabicyclo[3.1.0]hexa-3,5-diene. Zero Point Energy 49.9 (kcal/mol

  7. Delensing CMB polarization with external datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Kendrick M.; Hanson, Duncan; LoVerde, Marilena

    2012-06-01

    One of the primary scientific targets of current and future CMB polarization experiments is the search for a stochastic background of gravity waves in the early universe. As instrumental sensitivity improves, the limiting factor will eventually be B-mode power generated by gravitational lensing, which can be removed through use of so-called "delensing" algorithms. We forecast prospects for delensing using lensing maps which are obtained externally to CMB polarization: either from large-scale structure observations, or from high-resolution maps of CMB temperature. We conclude that the forecasts in either case are not encouraging, and that significantly delensing large-scale CMB polarization requires high-resolution polarization maps with sufficient sensitivity to measure the lensing B-mode. We also present a simple formalism for including delensing in CMB forecasts which is computationally fast and agrees well with Monte Carlos.

  8. Risk perception in epidemic modeling

    NASA Astrophysics Data System (ADS)

    Bagnoli, Franco; Liò, Pietro; Sguanci, Luca

    2007-12-01

    We investigate the effects of risk perception in a simple model of epidemic spreading. We assume that the perception of the risk of being infected depends on the fraction of neighbors that are ill. The effect of this factor is to decrease the infectivity, which therefore becomes a dynamical component of the model. We study the problem in the mean-field approximation and by numerical simulations for regular, random, and scale-free networks. We show that for homogeneous and random networks, there is always a value of perception that stops the epidemic. In the “worst-case” scenario of a scale-free network with diverging input connectivity, a linear perception cannot stop the epidemic; however, we show that a nonlinear increase of the perception of risk may lead to the extinction of the disease. This transition is discontinuous and is not predicted by the mean-field analysis.
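
The mechanism in this abstract, infectivity decreasing with the fraction of ill neighbors, can be sketched as a toy synchronous SIS update. The exponential perception function, the parameter names `tau` and `J`, and the recovery rule are my illustrative choices, not necessarily the paper's exact model:

```python
import math
import random

def step(adj, infected, tau=0.3, J=2.0, recover=0.1, rng=random):
    """One synchronous update of a toy SIS epidemic with risk perception:
    a susceptible node senses the fraction f of its neighbors that are ill,
    and its per-contact infectivity is reduced to tau * exp(-J * f).
    adj maps node -> list of neighbors; infected is a set of nodes."""
    new = set(infected)
    for node, nbrs in adj.items():
        if node in infected:
            if rng.random() < recover:
                new.discard(node)
        elif nbrs:
            ill = sum(1 for n in nbrs if n in infected)
            if ill:
                f = ill / len(nbrs)
                p = tau * math.exp(-J * f)              # perception lowers infectivity
                if rng.random() < 1 - (1 - p) ** ill:   # independent exposures
                    new.add(node)
    return new
```

Iterating `step` until the infected set stabilizes (or empties) reproduces the qualitative behavior described: raising `J` can drive the epidemic extinct on homogeneous networks.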

  9. Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions

    PubMed Central

    Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong

    2016-01-01

    Although the GW approximation is recognized as one of the most accurate theories for predicting the excited-state properties of materials, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting the results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to the GW calculations for 2D materials. PMID:27833140

  10. Generalized sub-Schawlow-Townes laser linewidths via material dispersion

    NASA Astrophysics Data System (ADS)

    Pillay, Jason Cornelius; Natsume, Yuki; Stone, A. Douglas; Chong, Y. D.

    2014-03-01

    A recent S-matrix-based theory of the quantum-limited linewidth, which is applicable to general lasers, including spatially nonuniform laser cavities operating above threshold, is analyzed in various limits. For broadband gain, a simple interpretation of the Petermann and bad-cavity factors is presented in terms of geometric relations between the zeros and poles of the S matrix. When there is substantial dispersion, on the frequency scale of the cavity lifetime, the theory yields a generalization of the bad-cavity factor, which was previously derived for spatially uniform one-dimensional lasers. This effect can lead to sub-Schawlow-Townes linewidths in lasers with very narrow gain widths. We derive a formula for the linewidth in terms of the lasing mode functions, which has accuracy comparable to the previous formula involving the residue of the lasing pole. These results for the quantum-limited linewidth are valid even in the regime of strong line pulling and spatial hole burning, where the linewidth cannot be factorized into independent Petermann and bad-cavity factors.

  11. Cultural adaptation in measuring common client characteristics with an urban Mainland Chinese sample.

    PubMed

    Song, Xiaoxia; Anderson, Timothy; Beutler, Larry E; Sun, Shijin; Wu, Guohong; Kimpara, Satoko

    2015-01-01

    This study aimed to develop a culturally adapted version of the Systematic Treatment Selection-Innerlife (STS) in China. A total of 300 nonclinical participants from Mainland China and 240 nonclinical US participants were drawn from archival data. A Chinese version of the STS was developed using translation and back-translation procedures. After confirmatory factor analysis (CFA) of the original STS subscales failed in both samples, exploratory factor analysis (EFA) was used to assess whether a simple structure would emerge from the STS treatment items. Parallel analysis and the minimum average partial were used to determine the number of factors to retain. Three cross-cultural factors were found in this study: Internalized Distress, Externalized Distress, and Interpersonal Relations. This supports the view that, regardless of whether one is in the presumably different cultural contexts of the USA or China, psychological distress is expressed through a few basic channels of internalized distress, externalized distress, and interpersonal relations; manifestations that differ across cultures are also discussed.

  12. A Short Note on the Scaling Function Constant Problem in the Two-Dimensional Ising Model

    NASA Astrophysics Data System (ADS)

    Bothner, Thomas

    2018-02-01

    We provide a simple derivation of the constant factor in the short-distance asymptotics of the tau-function associated with the 2-point function of the two-dimensional Ising model. This factor was first computed by Tracy (Commun Math Phys 142:297-311, 1991) via an exponential series expansion of the correlation function. Further simplifications in the analysis are due to Tracy and Widom (Commun Math Phys 190:697-721, 1998) using Fredholm determinant representations of the correlation function and Wiener-Hopf approximation results for the underlying resolvent operator. Our method relies on an action integral representation of the tau-function and asymptotic results for the underlying Painlevé-III transcendent from McCoy et al. (J Math Phys 18:1058-1092, 1977).

  13. Spreading gossip in social networks.

    PubMed

    Lind, Pedro G; da Silva, Luciano R; Andrade, José S; Herrmann, Hans J

    2007-09-01

    We study a simple model of information propagation in social networks, where two quantities are introduced: the spread factor, which measures the average maximal reachability of the neighbors of a given node that interchange information among each other, and the spreading time needed for the information to reach such a fraction of nodes. When the information refers to a particular node at which both quantities are measured, the model can be taken as a model for gossip propagation. In this context, we apply the model to real empirical networks of social acquaintances and compare the underlying spreading dynamics with different types of scale-free and small-world networks. We find that the number of friendship connections strongly influences the probability of being gossiped. Finally, we discuss how the spread factor can be applied to other situations.
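
One reading of the "spread factor" above: gossip about a target node starts at one of its neighbors and can pass only between nodes that are themselves neighbors of the target; the factor is the fraction of those neighbors eventually reached. A BFS sketch under that interpretation (the reading and all names are mine):

```python
from collections import deque

def spread_factor(adj, victim, origin):
    """Fraction of `victim`'s neighbors eventually reached by gossip that
    starts at neighbor `origin` and propagates only along edges between
    nodes that are themselves neighbors of the victim."""
    nbrs = set(adj[victim])
    reached = {origin}
    queue = deque([origin])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w in nbrs and w not in reached:
                reached.add(w)
                queue.append(w)
    return len(reached) / len(nbrs)

# Star-plus-triangle: node 3 knows victim 0 but none of 0's other friends,
# so gossip starting at 1 reaches only {1, 2} out of {1, 2, 3}:
# spread_factor({0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}, 0, 1) -> 2/3
```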

  14. Spreading gossip in social networks

    NASA Astrophysics Data System (ADS)

    Lind, Pedro G.; da Silva, Luciano R.; Andrade, José S., Jr.; Herrmann, Hans J.

    2007-09-01

    We study a simple model of information propagation in social networks, where two quantities are introduced: the spread factor, which measures the average maximal reachability of the neighbors of a given node that interchange information among each other, and the spreading time needed for the information to reach such a fraction of nodes. When the information refers to a particular node at which both quantities are measured, the model can be taken as a model for gossip propagation. In this context, we apply the model to real empirical networks of social acquaintances and compare the underlying spreading dynamics with different types of scale-free and small-world networks. We find that the number of friendship connections strongly influences the probability of being gossiped. Finally, we discuss how the spread factor can be applied to other situations.

  15. Rumination mediates the relationship between overgeneral autobiographical memory and depression in patients with major depressive disorder.

    PubMed

    Liu, Yansong; Yu, Xinnian; Yang, Bixiu; Zhang, Fuquan; Zou, Wenhua; Na, Aiguo; Zhao, Xudong; Yin, Guangzhong

    2017-03-21

    Overgeneral autobiographical memory has been identified as a risk factor for the onset and maintenance of depression. However, little is known about the underlying mechanisms that might explain overgeneral autobiographical memory phenomenon in depression. The purpose of this study was to test the mediation effects of rumination on the relationship between overgeneral autobiographical memory and depressive symptoms. Specifically, the mediation effects of brooding and reflection subtypes of rumination were examined in patients with major depressive disorder. Eighty-seven patients with major depressive disorder completed the 17-item Hamilton Depression Rating Scale, Ruminative Response Scale, and Autobiographical Memory Test. Bootstrap mediation analysis for simple and multiple mediation models through the PROCESS macro was applied. Simple mediation analysis showed that rumination significantly mediated the relationship between overgeneral autobiographical memory and depression symptoms. Multiple mediation analyses showed that brooding, but not reflection, significantly mediated the relationship between overgeneral autobiographical memory and depression symptoms. Our results indicate that global rumination partly mediates the relationship between overgeneral autobiographical memory and depressive symptoms in patients with major depressive disorder. Furthermore, the present results suggest that the mediating role of rumination in the relationship between overgeneral autobiographical memory and depression is mainly due to the maladaptive brooding subtype of rumination.

  16. A Developmental Framework for Complex Plasmodesmata Formation Revealed by Large-Scale Imaging of the Arabidopsis Leaf Epidermis

    PubMed Central

    Fitzgibbon, Jessica; Beck, Martina; Zhou, Ji; Faulkner, Christine; Robatzek, Silke; Oparka, Karl

    2013-01-01

    Plasmodesmata (PD) form tubular connections that function as intercellular communication channels. They are essential for transporting nutrients and for coordinating development. During cytokinesis, simple PDs are inserted into the developing cell plate, while during wall extension, more complex (branched) forms of PD are laid down. We show that complex PDs are derived from existing simple PDs in a pattern that is accelerated when leaves undergo the sink–source transition. Complex PDs are inserted initially at the three-way junctions between epidermal cells but develop most rapidly in the anisocytic complexes around stomata. For a quantitative analysis of complex PD formation, we established a high-throughput imaging platform and constructed PDQUANT, a custom algorithm that detected cell boundaries and PD numbers in different wall faces. For anticlinal walls, the number of complex PDs increased with increasing cell size, while for periclinal walls, the number of PDs decreased. Complex PD insertion was accelerated by up to threefold in response to salicylic acid treatment and challenges with mannitol. In a single 30-min run, we could derive data for up to 11k PDs from 3k epidermal cells. This facile approach opens the door to a large-scale analysis of the endogenous and exogenous factors that influence PD formation. PMID:23371949

  17. Climate change impact assessment on food security in Indonesia

    NASA Astrophysics Data System (ADS)

    Ettema, Janneke; Aldrian, Edvin; de Bie, Kees; Jetten, Victor; Mannaerts, Chris

    2013-04-01

    As Indonesia is the world's fourth most populous country, food security is a persistent challenge. The potential impact of future climate change on the agricultural sector needs to be addressed to allow early implementation of mitigation strategies. The complex island topography and local sea-land-air interactions cannot be adequately represented in large-scale General Climate Models (GCMs) nor visualized by TRMM; downscaling is needed. Using meteorological observations and a simple statistical downscaling tool, local future projections are derived from state-of-the-art, large-scale GCM scenarios provided by the CMIP5 project. To support the agricultural sector, providing information on rainfall and temperature variability in particular is essential. Agricultural production forecasts are influenced by several rainfall and temperature factors, such as the onset, offset, and length of the rainy and dry seasons, but also by daily and monthly minimum and maximum temperatures and rainfall amounts. A simple and an advanced crop model will be used to address the sensitivity of different crops to temperature and rainfall variability, both present-day and future. Java Island is chosen as the case study area: it is the fourth-largest island in Indonesia but contains more than half of the nation's population and dominates it politically and economically. The objective is to identify regions at agricultural risk due to changing patterns in precipitation and temperature.

  18. Modeling Age-Related Differences in Immediate Memory Using SIMPLE

    ERIC Educational Resources Information Center

    Surprenant, Aimee M.; Neath, Ian; Brown, Gordon D. A.

    2006-01-01

    In the SIMPLE model (Scale Invariant Memory and Perceptual Learning), performance on memory tasks is determined by the locations of items in multidimensional space, and better performance is associated with having fewer close neighbors. Unlike most previous simulations with SIMPLE, the ones reported here used measured, rather than assumed,…

  19. Simple Numerical Analysis of Longboard Speedometer Data

    ERIC Educational Resources Information Center

    Hare, Jonathan

    2013-01-01

    Simple numerical data analysis is described, using a standard spreadsheet program, to determine distance, velocity (speed) and acceleration from voltage data generated by a skateboard/longboard speedometer (Hare 2012 "Phys. Educ." 47 409-17). This simple analysis is an introduction to data processing including scaling data as well as…
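
The spreadsheet analysis described here is essentially repeated finite differencing of the distance column. A sketch of the same computation in code (function and variable names are mine, not from the article):

```python
def kinematics_from_distance(t, s):
    """Estimate velocity and acceleration from sampled times t (s) and
    cumulative distances s (m) using central differences, mirroring the
    column-by-column spreadsheet calculation; endpoint entries copy the
    nearest interior estimate. Requires at least three samples."""
    n = len(t)
    v = [0.0] * n
    for i in range(1, n - 1):
        v[i] = (s[i + 1] - s[i - 1]) / (t[i + 1] - t[i - 1])
    v[0], v[-1] = v[1], v[-2]
    a = [0.0] * n
    for i in range(1, n - 1):
        a[i] = (v[i + 1] - v[i - 1]) / (t[i + 1] - t[i - 1])
    a[0], a[-1] = a[1], a[-2]
    return v, a

# For s = t**2 (so v = 2t, a = 2), the interior estimates are exact.
```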

  20. Self-reported leisure time physical activity: a useful assessment tool in everyday health care.

    PubMed

    Rödjer, Lars; Jonsdottir, Ingibjörg H; Rosengren, Annika; Björck, Lena; Grimby, Gunnar; Thelle, Dag S; Lappas, Georgios; Börjesson, Mats

    2012-08-24

    The individual physical activity level is an independent risk factor for cardiovascular disease and death, as well as a possible target for improving health outcome. However, today's widely adopted risk score charts typically do not include the level of physical activity. There is a need for a simple risk assessment tool which includes a reliable assessment of the level of physical activity. The aim of this study was therefore to analyse the association between the self-reported levels of physical activity, according to the Saltin-Grimby Physical Activity Level Scale (SGPALS) question, and cardiovascular risk factors, specifically focusing on the group of individuals with the lowest level of self-reported PA. We used cross-sectional data from the Intergene study, a random sample of inhabitants from the western part of Sweden, totalling 3588 (1685 men and 1903 women, mean age 52 and 51). Metabolic measurements, including serum cholesterol, serum triglycerides, fasting plasma glucose, waist circumference, blood pressure and resting heart rate, as well as smoking and self-reported stress, were related to the self-reported physical activity level, according to the modernized version of the SGPALS 4-level scale. There was a strong negative association between the self-reported physical activity level and smoking, weight, waist circumference, resting heart rate, as well as the levels of fasting plasma glucose, serum triglycerides, low-density lipoproteins (LDL), and self-reported stress, and a positive association with the levels of high-density lipoproteins (HDL). 
The individuals reporting the lowest level of PA (SGPALS, level 1) had the highest odds-ratios (OR) for having pre-defined levels of abnormal risk factors, such as being overweight (men OR 2.19, 95% CI: 1.51-3.19; women OR 2.57, 95% CI: 1.78-3.73), having an increased waist circumference (men OR 3.76, 95% CI: 2.61-5.43; women OR 2.91, 95% CI: 1.94-4.35) and for reporting stress (men OR 3.59, 95% CI: 2.34-5.49; women OR 1.25, 95% CI: 0.79-1.98), compared to the most active individuals, but also showed increased OR for most other risk factors analyzed above. The self-reported PA-level according to the modernized Saltin-Grimby Physical Activity Level Scale, SGPALS, is associated with the presence of many cardiovascular risk factors, with the most inactive individuals having the highest risk factor profile, including self-reported stress. We propose that the present SGPALS may be used as an additional, simple tool in a routine risk assessment in e.g. primary care, to identify inactive individuals, with a higher risk profile.

  1. Self-reported leisure time physical activity: a useful assessment tool in everyday health care

    PubMed Central

    2012-01-01

    Background The individual physical activity level is an independent risk factor for cardiovascular disease and death, as well as a possible target for improving health outcome. However, today's widely adopted risk score charts typically do not include the level of physical activity. There is a need for a simple risk assessment tool which includes a reliable assessment of the level of physical activity. The aim of this study was therefore to analyse the association between the self-reported levels of physical activity, according to the Saltin-Grimby Physical Activity Level Scale (SGPALS) question, and cardiovascular risk factors, specifically focusing on the group of individuals with the lowest level of self-reported PA. Methods We used cross-sectional data from the Intergene study, a random sample of inhabitants from the western part of Sweden, totalling 3588 (1685 men and 1903 women, mean age 52 and 51). Metabolic measurements, including serum cholesterol, serum triglycerides, fasting plasma glucose, waist circumference, blood pressure and resting heart rate, as well as smoking and self-reported stress, were related to the self-reported physical activity level, according to the modernized version of the SGPALS 4-level scale. Results There was a strong negative association between the self-reported physical activity level and smoking, weight, waist circumference, resting heart rate, as well as the levels of fasting plasma glucose, serum triglycerides, low-density lipoproteins (LDL), and self-reported stress, and a positive association with the levels of high-density lipoproteins (HDL). 
The individuals reporting the lowest level of PA (SGPALS, level 1) had the highest odds-ratios (OR) for having pre-defined levels of abnormal risk factors, such as being overweight (men OR 2.19, 95% CI: 1.51-3.19; women OR 2.57, 95% CI: 1.78-3.73), having an increased waist circumference (men OR 3.76, 95% CI: 2.61-5.43; women OR 2.91, 95% CI: 1.94-4.35) and for reporting stress (men OR 3.59, 95% CI: 2.34-5.49; women OR 1.25, 95% CI: 0.79-1.98), compared to the most active individuals, but also showed increased OR for most other risk factors analyzed above. Conclusion The self-reported PA-level according to the modernized Saltin-Grimby Physical Activity Level Scale, SGPALS, is associated with the presence of many cardiovascular risk factors, with the most inactive individuals having the highest risk factor profile, including self-reported stress. We propose that the present SGPALS may be used as an additional, simple tool in a routine risk assessment in e.g. primary care, to identify inactive individuals, with a higher risk profile. PMID:22920914

  2. Women and vulnerability to depression: some personality and clinical factors.

    PubMed

    Carrillo, Jesús M; Rojo, Nieves; Staats, Arthur W

    2004-05-01

    The purpose of this study is to explore the role of sex differences and personality in vulnerability to depression. Sex differences in personality and some clinical variables are described. We also assess the value of the variables that revealed significant sex differences as predictors of vulnerability to depression. In a group of adult participants (N = 112), 50% males and 50% females (mean age = 41.30; SD = 15.09; range 17-67), we studied sex differences in the three-factor personality model, using the Eysenck Personality Questionnaire, Form A (EPQ-A; Eysenck & Eysenck, 1975), and in the Five-Factor Personality Model, with the NEO Personality Inventory (NEO-PI; Costa & McCrae, 1985). The following clinical scales were used: the Beck Depression Inventory (BDI; Beck, Rush, Shaw, & Emery, 1979), the Schizotypy Questionnaire (STQ; Claridge & Broks, 1984; Spanish version, Carrillo & Rojo, 1999), the THARL Scales (Dua, 1989, 1990; Spanish version, Dua & Carrillo, 1994) and the Adjustment Inventory (Bell, 1937; Spanish version, Cerdá, 1980). Subsequently, simple linear regression analyses, with BDI scores as the criterion, were performed to estimate the value of the variables as predictors of vulnerability to depression. The results indicate that a series of personality variables makes women more vulnerable to depression than men and that these variables could be explained by a negative-emotion main factor. Results are discussed within the framework of the psychological behaviorism theory of depression.

  3. What does the Cantril Ladder measure in adolescence?

    PubMed

    Mazur, Joanna; Szkultecka-Dębek, Monika; Dzielska, Anna; Drozd, Mariola; Małkowska-Szkutnik, Agnieszka

    2018-01-01

    The Cantril Scale (CS) is a simple visual scale which makes it possible to assess general life satisfaction. The result may depend on the health, living, and studying conditions, and quality of social relations. The objective of this study is to identify key factors influencing the CS score in Polish adolescents. The survey comprised 1,423 parent-child pairs (54% girls; age range: 10-17; 67.3% urban inhabitants; 89.4% of parents were mothers). Linear and logistic models were estimated; the latter used alternative divisions into "satisfied" and "dissatisfied" with life. In addition to age and gender, child-reported KIDSCREEN-52 quality of life indexes were taken into account, along with some information provided by parents: child physical (CSHCN) and mental (SDQ) health, and family socio-economic conditions. According to the linear model, nine independent predictors, including six dimensions of KIDSCREEN-52, explain 47.2% of the variability of life satisfaction on the Cantril Scale. Self-perception was found to have a dominating influence (ΔR² = 0.301, p < 0.001). Important CS predictors also included Psychological Well-being (ΔR² = 0.088, p < 0.001) and Parent Relations (ΔR² = 0.041, p < 0.001). The impact of socioeconomic factors was more visible in boys and in older adolescents. According to logistic models, the key factors enhancing the chance of higher life satisfaction are Moods and Emotions (cut-off point CS > 5) and School Environment (CS > 8 points). None of the models indicated a relationship between the CS and physical health. The Cantril Scale can be considered a useful measurement tool in a broad approach to psychosocial adolescent health.

  4. Factors affecting healing rates after arthroscopic double-row rotator cuff repair.

    PubMed

    Tashjian, Robert Z; Hollins, Anthony M; Kim, Hyun-Min; Teefey, Sharlene A; Middleton, William D; Steger-May, Karen; Galatz, Leesa M; Yamaguchi, Ken

    2010-12-01

    Double-row arthroscopic rotator cuff repairs were developed to improve initial biomechanical strength of repairs to improve healing rates. Despite biomechanical improvements, failure of healing remains a clinical problem. To evaluate the anatomical results after double-row arthroscopic rotator cuff repair with ultrasound to determine postoperative repair integrity and the effect of various factors on tendon healing. Case series; Level of evidence, 4. Forty-eight patients (49 shoulders) who had a complete arthroscopic rotator cuff repair (double-row technique) were evaluated with ultrasound at a minimum of 6 months after surgery. Outcome was evaluated at a minimum of 1-year follow-up with standardized history and physical examination, visual analog scale for pain, active forward elevation, and preoperative and postoperative shoulder scores according to the system of the American Shoulder and Elbow Surgeons and the Simple Shoulder Test. Quantitative strength was measured postoperatively. Ultrasound and physical examinations were performed at a minimum of 6 months after surgery (mean, 16 months; range, 6 to 36 months) and outcome questionnaire evaluations at a minimum of 12 months after surgery (mean, 29 months; range, 12 to 55 months). Of 49 repairs, 25 (51%) were healed. Healing rates were 67% in single-tendon tears (16 of 24 shoulders) and 36% in multitendon tears (9 of 25 shoulders). Older age and longer duration of follow-up were correlated with poorer tendon healing (P < .03). Visual analog scale for pain, active forward elevation, American Shoulder and Elbow Surgeons scores, and Simple Shoulder Test scores all had significant improvement from baseline after repair (P < .0001). Increased age and longer duration of follow-up were associated with lower healing rates after double-row rotator cuff repair. 
The biological limitation at the repair site, as reflected by the effects of age on healing, appears to be the most important factor influencing tendon healing, even after maximizing repair biomechanical strength with a double-row construct.

  5. Generation of scale invariant magnetic fields in bouncing universes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sriramkumar, L.; Atmjeet, Kumar; Jain, Rajeev Kumar, E-mail: sriram@physics.iitm.ac.in, E-mail: katmjeet@physics.du.ac.in, E-mail: jain@cp3.dias.sdu.dk

    2015-09-01

    We consider the generation of primordial magnetic fields in a class of bouncing models when the electromagnetic action is coupled non-minimally to a scalar field that, say, drives the background evolution. For scale factors that have the power law form at very early times and non-minimal couplings which are simple powers of the scale factor, one can easily show that scale invariant spectra for the magnetic field can arise before the bounce for certain values of the indices involved. It will be interesting to examine if these power spectra retain their shape after the bounce. However, analytical solutions for the Fourier modes of the electromagnetic vector potential across the bounce are difficult to obtain. In this work, with the help of a new time variable that we introduce, which we refer to as the e-N-fold, we investigate these scenarios numerically. Imposing the initial conditions on the modes in the contracting phase, we numerically evolve the modes across the bounce and evaluate the spectra of the electric and magnetic fields at a suitable time after the bounce. As one could have intuitively expected, though the complete spectra depend on the details of the bounce, we find that, under the original conditions, scale invariant spectra of the magnetic fields do arise for wavenumbers much smaller than the scale associated with the bounce. We also show that magnetic fields which correspond to observed strengths today can be generated for specific values of the parameters. But, we find that, at the bounce, the backreaction due to the electromagnetic modes that have been generated can be significantly large, calling into question the viability of the model. We briefly discuss the implications of our results.

  6. A grid of MHD models for stellar mass loss and spin-down rates of solar analogs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, O.; Drake, J. J.

    2014-03-01

    Stellar winds are believed to be the dominant factor in the spin-down of stars over time. However, stellar winds of solar analogs are poorly constrained due to observational challenges. In this paper, we present a grid of magnetohydrodynamic models to study and quantify the values of stellar mass loss and angular momentum loss rates as a function of the stellar rotation period, magnetic dipole component, and coronal base density. We derive simple scaling laws for the loss rates as a function of these parameters, and constrain the possible mass loss rate of stars with thermally driven winds. Despite the success of our scaling law in matching the results of the model, we find a deviation between the 'solar dipole' case and a real case based on solar observations that overestimates the actual solar mass loss rate by a factor of three. This implies that the model for stellar fields might require further investigation with additional complexity. Mass loss rates in general are largely controlled by the magnetic field strength, with the wind density varying in proportion to the confining magnetic pressure B^2. We also find that the mass loss rates obtained using our grid models drop much faster with the increase in rotation period than scaling laws derived using observed stellar activity. For main-sequence solar-like stars, our scaling law for angular momentum loss versus poloidal magnetic field strength retrieves the well-known Skumanich decline of angular velocity with time, Ω_* ∝ t^(−1/2), if the large-scale poloidal magnetic field scales with rotation rate as B_p ∝ Ω_*^2.
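
    The Skumanich decline quoted above follows from a braking law in which the spin-down rate scales as Ω³ (angular momentum loss ∝ B_p ∝ Ω²). A quick numerical check of that limit, as an illustrative sketch only (the constant k and the Euler integration are ours, not the authors' MHD model):

```python
import math

def spin_down(omega0, k, dt, steps):
    # Forward-Euler integration of dΩ/dt = -k Ω³, the braking law implied
    # by angular momentum loss scaling as Ω³ (illustrative constant k).
    omega = omega0
    for _ in range(steps):
        omega -= k * omega ** 3 * dt
    return omega

def analytic(omega0, k, t):
    # Exact solution Ω(t) = Ω0 / sqrt(1 + 2 k Ω0² t), which approaches
    # the Skumanich law Ω ∝ t^(-1/2) at late times.
    return omega0 / math.sqrt(1.0 + 2.0 * k * omega0 ** 2 * t)
```

The numerical and analytic solutions agree closely for a small enough time step, confirming that an Ω³ braking law reproduces the t^(−1/2) decline.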

  7. Knowledge and prevalence of risk factors for arterial hypertension and blood pressure pattern among bankers and traffic wardens in Ilorin, Nigeria.

    PubMed

    Salaudeen, A G; Musa, O I; Babatunde, O A; Atoyebi, O A; Durowade, K A; Omokanye, L O

    2014-09-01

    High job strain, mental stress, a sedentary lifestyle, and increased BMI are among the factors associated with a significantly higher incidence of hypertension. The job of bank employees is sedentary in nature and accompanied by high mental stress. The aim of this study was to assess respondents' level of knowledge of risk factors and to compare the blood pressure patterns of bankers and traffic wardens. The study was a descriptive cross-sectional survey conducted among bankers and traffic wardens in Ilorin to determine the pattern and knowledge of blood pressure. Self-administered questionnaires, a weighing scale (Omron digital scale), a stadiometer and a sphygmomanometer were used as the research instruments. Simple random sampling was used to select the respondents involved in the study. The prevalence of hypertension in this study was 34.4% in bankers and 22.2% in traffic wardens. The risk factors the bankers commonly had knowledge of were alcohol, obesity, high salt intake, certain drugs, stress, emotional problems and family history, while the traffic wardens commonly had knowledge of all of these in addition to cigarette smoking. Also, more bankers (32.2%) than traffic wardens (13.3%) smoked cigarettes, and more of the cigarette smokers who were bankers (17.8%) had elevated blood pressure compared with the traffic wardens (3.3%). Workers in the banking industry as well as traffic wardens should be better educated about the risk factors for hypertension, and bankers should be encouraged to create time for exercise.

  8. Factors affecting metacognition of undergraduate nursing students in a blended learning environment.

    PubMed

    Hsu, Li-Ling; Hsieh, Suh-Ing

    2014-06-01

    This paper is a report of a study to examine the influence of demographic, learning involvement and learning performance variables on the metacognition of undergraduate nursing students in a blended learning environment. A cross-sectional, correlational survey design was adopted. The ninety-nine students invited to participate in the study were enrolled in a professional nursing ethics course at a public nursing college. The blended learning intervention combined classroom learning with online learning. Simple linear regression showed significant associations between the frequency of online dialogues, the Case Analysis Attitude Scale scores, the Case Analysis Self Evaluation Scale scores, the Blended Learning Satisfaction Scale scores, and Metacognition Scale scores. Multiple linear regression indicated that the frequency of online dialogues, the Case Analysis Self Evaluation Scale and the Blended Learning Satisfaction Scale were significant independent predictors of metacognition. Overall, the model accounted for almost half of the variance in metacognition. The blended learning module developed in this study proved successful as a catalyst for the exercise of metacognitive abilities by the sample of nursing students. Learners are able to develop metacognitive ability in comprehension, argumentation, reasoning and various forms of higher order thinking through the blended learning process. © 2013 Wiley Publishing Asia Pty Ltd.
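
    The simple linear regressions reported above are ordinary least squares with a single predictor; a minimal pure-Python sketch (variable names are illustrative, not the study's data):

```python
def ols(x, y):
    # Ordinary least squares for one predictor: slope = Sxy / Sxx,
    # intercept chosen so the fitted line passes through the means.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx
```

For example, `x` could be each student's frequency of online dialogues and `y` the Metacognition Scale score; the slope then estimates the change in metacognition per additional dialogue.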

  9. [Study of functional rating scale for amyotrophic lateral sclerosis: revised ALSFRS(ALSFRS-R) Japanese version].

    PubMed

    Ohashi, Y; Tashiro, K; Itoyama, Y; Nakano, I; Sobue, G; Nakamura, S; Sumino, S; Yanagisawa, N

    2001-04-01

    Amyotrophic lateral sclerosis (ALS) is a progressive, degenerative, fatal disease of the motor neuron. No efficacious therapy is available to slow the progressive loss of function, but several new approaches, including neurotrophic factors, antioxidants and glutamate antagonists, are currently being evaluated as potential therapies. Mortality and/or time to tracheostomy, muscle strength and pulmonary function are used as primary endpoints in clinical trials for the treatment of ALS. The effect of new therapies on the quality of patients' lives is also important, so we sought to develop a rating scale to measure it. The revised ALS Functional Rating Scale (ALSFRS-R), which adds items to the ALSFRS to enhance the ability to assess respiratory symptoms, is an assessment determining the degree of impairment in ALS patients' abilities to function independently in activities of daily living. It consists of 12 items evaluating bulbar function, motor function and respiratory function, and each item is scored from 0 (unable) to 4 (normal). We translated the English scale into Japanese with minor modifications to account for cultural differences, and examined the reliability of the translated scale. As a measure of reliability, the intraclass correlation coefficient (ICC) was evaluated for the total score, and the Kappa coefficient proposed by Cohen and Kraemer was calculated for each item. Moreover, we examined sensitivity to clinical change over time and carried out a factor analysis to analyze the factorial structure. The subjects were 27 ALS patients, each scored twice for reliability or three times for sensitivity by 2 to 5 neurologists and, where possible, nurses. The ICC for the total score was 0.97 (95% C.I.: 0.94-0.98). The extended Kappa coefficients were 0.48 to 1.00 for inter-rater reliability, and the averaged Kappa coefficients were 0.63 to 1.00 for intra-rater reliability. Concerning the factorial structure, the contribution of the first factor (the first principal component) was 53.5% in the principal factor solution. The factor loadings of the items were 0.52-0.91, except for "salivation", and this factor, which was almost equal to the simple sum of all items, was interpreted as the general degree of deterioration. Promax rotation revealed the originally supposed factor structure with 3 factors (groups of items): neuromuscular function, respiratory function and bulbar function. The rating scale correlated with the Global Clinical Impression of Change (GCIC) scored by neurologists and declined with time, indicating its sensitivity to change. On the basis of these results, the ALSFRS-R (Japanese version) is considered sufficiently reliable for clinical use.

  10. An analysis of ratings: A guide to RMRATE

    Treesearch

    Thomas C. Brown; Terry C. Daniel; Herbert W. Schroeder; Glen E. Brink

    1990-01-01

    This report describes RMRATE, a computer program for analyzing rating judgments. RMRATE scales ratings using several scaling procedures, and compares the resulting scale values. The scaling procedures include the median and simple mean, standardized values, scale values based on Thurstone's Law of Categorical Judgment, and regression-based values. RMRATE also...

  11. Fifty years with the Hamilton scales for anxiety and depression. A tribute to Max Hamilton.

    PubMed

    Bech, P

    2009-01-01

    From the moment Max Hamilton started his psychiatric education, he considered psychometrics to be a scientific discipline on a par with biochemistry or pharmacology in clinical research. His clinimetric skills were in operation in the 1950s when randomised clinical trials were established as the method for the evaluation of the clinical effects of psychotropic drugs. Inspired by Eysenck, Hamilton took the long route around factor analysis in order to qualify his scales for anxiety (HAM-A) and depression (HAM-D) as scientific tools. From the moment when, 50 years ago, Hamilton published his first placebo-controlled trial with an experimental anti-anxiety drug, he realized the dialectic problem in using the total score on HAM-A as a sufficient statistic for the measurement of outcome. This dialectic problem has been investigated for more than 50 years with different types of factor analyses without success. Using modern psychometric methods, the solution to this problem is a simple matter of reallocating the Hamilton scale items according to the scientific hypothesis under examination. Hamilton's original intention, to measure the global burden of the symptoms experienced by the patients with affective disorders, is in agreement with the DSM-IV and ICD-10 classification systems. Scale reliability and obtainment of valid information from patients and their relatives were the most important clinimetric innovations to be developed by Hamilton. Max Hamilton therefore belongs to the very exclusive family of eminent physicians celebrated by this journal with a tribute. 2009 S. Karger AG, Basel.

  12. A simple framework for relating variations in runoff to variations in climatic conditions and catchment properties

    NASA Astrophysics Data System (ADS)

    Roderick, Michael L.; Farquhar, Graham D.

    2011-12-01

    We use the Budyko framework to calculate catchment-scale evapotranspiration (E) and runoff (Q) as a function of two climatic factors, precipitation (P) and evaporative demand (Eo = 0.75 times the pan evaporation rate), and a third parameter that encodes the catchment properties (n) and modifies how P is partitioned between E and Q. This simple theory accurately predicted the long-term evapotranspiration (E) and runoff (Q) for the Murray-Darling Basin (MDB) in southeast Australia. We extend the theory by developing a simple and novel analytical expression for the effects on E and Q of small perturbations in P, Eo, and n. The theory predicts that a 10% change in P, with all else constant, would result in a 26% change in Q in the MDB. Future climate scenarios (2070-2099) derived using Intergovernmental Panel on Climate Change AR4 climate model output highlight the diversity of projections for P (±30%) with a correspondingly large range in projections for Q (±80%) in the MDB. We conclude with a qualitative description about the impact of changes in catchment properties on water availability and focus on the interaction between vegetation change, increasing atmospheric [CO2], and fire frequency. We conclude that the modern version of the Budyko framework is a useful tool for making simple and transparent estimates of changes in water availability.
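
    One common analytical form of the Budyko curve with a catchment parameter n is E = P·Eo/(Pⁿ + Eoⁿ)^(1/n). The sketch below (illustrative values, not the MDB calibration) shows how a small relative change in P is amplified into a larger relative change in Q = P − E:

```python
def budyko_E(P, Eo, n):
    # Budyko-type partitioning: evapotranspiration as a function of
    # supply (P), demand (Eo) and a catchment-properties parameter n
    return P * Eo / (P ** n + Eo ** n) ** (1.0 / n)

def runoff(P, Eo, n):
    # long-term water balance: Q = P - E
    return P - budyko_E(P, Eo, n)

def precip_elasticity(P, Eo, n, eps=1e-6):
    # relative change in Q per relative change in P, via a small perturbation
    q0 = runoff(P, Eo, n)
    q1 = runoff(P * (1.0 + eps), Eo, n)
    return (q1 - q0) / q0 / eps
```

Because Q is a small residual of two large terms in dry catchments, the elasticity exceeds 1, which is why a 10% change in P can translate into a far larger percentage change in Q.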

  13. Prehospital Acute Stroke Severity Scale to Predict Large Artery Occlusion: Design and Comparison With Other Scales.

    PubMed

    Hastrup, Sidsel; Damgaard, Dorte; Johnsen, Søren Paaske; Andersen, Grethe

    2016-07-01

    We designed and validated a simple prehospital stroke scale to identify emergent large vessel occlusion (ELVO) in patients with acute ischemic stroke and compared the scale with other published scales for prediction of ELVO. A national historical test cohort of 3127 patients with information on intracranial vessel status (angiography) before reperfusion therapy was identified. The National Institutes of Health Stroke Scale (NIHSS) items with the highest predictive value for occlusion of a large intracranial artery were identified, and the most optimal combination meeting predefined criteria to ensure usefulness in the prehospital phase was determined. The predictive performance of the Prehospital Acute Stroke Severity (PASS) scale was compared with other published scales for ELVO. The PASS scale was composed of 3 NIHSS scores: level of consciousness (month/age), gaze palsy/deviation, and arm weakness. In the derivation of PASS, two-thirds of the test cohort were used, showing an accuracy (area under the curve) of 0.76 for detecting large arterial occlusion. The optimal cut point of ≥2 abnormal scores showed: sensitivity=0.66 (95% CI, 0.62-0.69), specificity=0.83 (0.81-0.85), and area under the curve=0.74 (0.72-0.76). Validation on the remaining one-third of the test cohort showed similar performance. Patients with a large artery occlusion on angiography and PASS ≥2 had a median NIHSS score of 17 (interquartile range=6), as opposed to a median NIHSS score of 6 (interquartile range=5) with PASS <2. The PASS scale showed performance equal to that of other scales predicting ELVO while being simpler. The PASS scale is simple and has promising accuracy for prediction of ELVO in the field. © 2016 American Heart Association, Inc.
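
    As described, PASS is a count of abnormal scores on its three NIHSS items, with ELVO flagged at the ≥2 cut point. A minimal sketch (treating any nonzero item score as "abnormal" is our simplifying assumption, not a detail from the paper):

```python
def pass_score(loc_month_age, gaze, arm_weakness):
    # Count of abnormal items among the three NIHSS components named in
    # the abstract; any nonzero score is treated as abnormal (assumption).
    return sum(1 for item in (loc_month_age, gaze, arm_weakness) if item > 0)

def likely_elvo(loc_month_age, gaze, arm_weakness):
    # The validated cut point: >= 2 abnormal items suggests ELVO.
    return pass_score(loc_month_age, gaze, arm_weakness) >= 2
```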

  14. A simple atomic-level hydrophobicity scale reveals protein interfacial structure.

    PubMed

    Kapcha, Lauren H; Rossky, Peter J

    2014-01-23

    Many amino acid residue hydrophobicity scales have been created in an effort to better understand and rapidly characterize water-protein interactions based only on protein structure and sequence. There is surprisingly low consistency in the ranking of residue hydrophobicity between scales, and their ability to provide insightful characterization varies substantially across subject proteins. All current scales characterize hydrophobicity based on entire amino acid residue units. We introduce a simple binary but atomic-level hydrophobicity scale that allows for the classification of polar and non-polar moieties within single residues, including backbone atoms. This simple scale is first shown to capture the anticipated hydrophobic character for those whole residues that align in classification among most scales. Examination of a set of protein binding interfaces establishes good agreement between residue-based and atomic-level descriptions of hydrophobicity for five residues, while the remaining residues produce discrepancies. We then show that the atomistic scale properly classifies the hydrophobicity of functionally important regions where residue-based scales fail. To illustrate the utility of the new approach, we show that the atomic-level scale rationalizes the hydration of two hydrophobic pockets and the presence of a void in a third pocket within a single protein and that it appropriately classifies all of the functionally important hydrophilic sites within two otherwise hydrophobic pores. We suggest that an atomic level of detail is, in general, necessary for the reliable depiction of hydrophobicity for all protein surfaces. The present formulation can be implemented simply in a manner no more complex than current residue-based approaches. © 2013.
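
    A binary atomic-level classification of the kind described can be sketched by thresholding atomic partial charges; the 0.25 e cutoff and the per-residue summary below are illustrative assumptions, not necessarily the published formulation:

```python
POLAR_CHARGE_CUTOFF = 0.25  # assumed illustrative threshold, in units of e

def classify_atom(partial_charge, cutoff=POLAR_CHARGE_CUTOFF):
    # Binary label: atoms with large partial charge magnitude are polar.
    return "polar" if abs(partial_charge) >= cutoff else "nonpolar"

def residue_polar_fraction(atom_charges):
    # Fraction of polar atoms in a residue, including backbone atoms,
    # so a single residue can contain both polar and non-polar moieties.
    labels = [classify_atom(q) for q in atom_charges]
    return labels.count("polar") / len(labels)
```

The point of the atomic granularity is visible in `residue_polar_fraction`: a residue classed as "hydrophobic" by a whole-residue scale can still expose individual polar atoms at a surface or pocket.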

  15. A Self-Reported Adherence Measure to Screen for Elevated HIV Viral Load in Pregnant and Postpartum Women on Antiretroviral Therapy

    PubMed Central

    Brittain, Kirsty; Mellins, Claude A.; Zerbe, Allison; Remien, Robert H.; Abrams, Elaine J.; Myer, Landon; Wilson, Ira B.

    2016-01-01

    Maternal adherence to antiretroviral therapy (ART) is a concern, and monitoring adherence presents a significant challenge in low-resource settings. We investigated the association between self-reported adherence, measured using a simple three-item scale, and elevated viral load (VL) among HIV-infected pregnant and postpartum women on ART in Cape Town, South Africa. This is the first reported use of this scale in a non-English-speaking setting, and it achieved good psychometric characteristics (Cronbach α = 0.79). Among 452 women included in the analysis, only 12% reported perfect adherence on the self-report scale, while 92% had a VL <1000 copies/mL. Having a raised VL was consistently associated with lower median adherence scores, and the area under the curve for the scale was 0.599, 0.656 and 0.642 using a VL cut-off of ≥50, ≥1000 and ≥10,000 copies/mL, respectively. This simple self-report adherence scale shows potential as a first-stage adherence screener in this setting. Maternal adherence monitoring in low-resource settings requires attention in the era of universal ART, and the value of this simple adherence scale in routine ART care settings warrants further investigation. PMID:27278548
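
    The area-under-the-curve figures quoted above measure how well the adherence scores separate women with elevated VL from those with suppressed VL. The rank-based (Mann-Whitney) formulation of AUC can be computed directly; a sketch with made-up scores, not the study data:

```python
def auc(scores_pos, scores_neg):
    # Probability that a randomly chosen case (e.g. elevated VL) scores
    # higher than a randomly chosen non-case; ties count as half a win.
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.5 means the scale discriminates no better than chance; values around 0.6-0.65, as reported, indicate modest but usable first-stage screening performance.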

  16. A Generalized Simple Formulation of Convective Adjustment ...

    EPA Pesticide Factsheets

    The convective adjustment timescale (τ) for cumulus clouds is one of the most influential parameters controlling parameterized convective precipitation in climate and weather simulation models at global and regional scales. Due to the complex nature of deep convection, a prescribed value or ad hoc representation of τ is used in most global and regional climate/weather models, making it a tunable parameter and still resulting in uncertainties in convective precipitation simulations. In this work, a generalized simple formulation of τ for use in any convection parameterization for shallow and deep clouds is developed to reduce convective precipitation biases at different grid spacings. Unlike other existing methods, our new formulation can be used with field campaign measurements to estimate τ, as demonstrated using data from two different special field campaigns. We then implemented our formulation into a regional model (WRF) for testing and evaluation. Results indicate that our simple τ formulation can give realistic temporal and spatial variations of τ across the continental U.S., as well as grid-scale and subgrid-scale precipitation. We also found that as the grid spacing decreases (e.g., from 36 to 4-km grid spacing), grid-scale precipitation dominates over subgrid-scale precipitation. The generalized τ formulation works for various types of atmospheric conditions (e.g., continental clouds due to heating and large-scale forcing over la

  17. Effects of vegetation heterogeneity and surface topography on spatial scaling of net primary productivity

    NASA Astrophysics Data System (ADS)

    Chen, J. M.; Chen, X.; Ju, W.

    2013-03-01

    Due to the heterogeneous nature of the land surface, spatial scaling is an inevitable issue in the development of land models coupled with low-resolution Earth system models (ESMs) for predicting land-atmosphere interactions and carbon-climate feedbacks. In this study, a simple spatial scaling algorithm is developed to correct errors in net primary productivity (NPP) estimates made at a coarse spatial resolution based on sub-pixel information of vegetation heterogeneity and surface topography. An eco-hydrological model, BEPS-TerrainLab, which considers both vegetation and topographical effects on the vertical and lateral water flows and the carbon cycle, is used to simulate NPP at 30 m and 1 km resolutions for a 5700 km2 watershed with an elevation range from 518 m to 3767 m in the Qinling Mountains, Shaanxi Province, China. Assuming that the NPP simulated at 30 m resolution represents the reality and that at 1 km resolution is subject to errors due to sub-pixel heterogeneity, a spatial scaling index (SSI) is developed to correct the coarse-resolution NPP values pixel by pixel. The agreement between the NPP values at these two resolutions is improved considerably, from R2 = 0.782 to R2 = 0.884, after the correction. The mean bias error (MBE) in NPP modeled at the 1 km resolution is reduced from 14.8 g C m-2 yr-1 to 4.8 g C m-2 yr-1 in comparison with NPP modeled at 30 m resolution, where the mean NPP is 668 g C m-2 yr-1. The range of spatial variations of NPP at 30 m resolution is larger than that at 1 km resolution. Land cover fraction is the most important vegetation factor to be considered in NPP spatial scaling, and slope is the most important topographical factor, especially in mountainous areas, because of its influence on the lateral water redistribution, affecting the water table, soil moisture and plant growth. Other factors, including leaf area index (LAI), elevation and aspect, have small and additive effects on improving the spatial scaling between these two resolutions.
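
    The agreement metrics used above, R² and mean bias error between the coarse- and fine-resolution NPP fields, are straightforward to compute. A pure-Python sketch (the arrays are illustrative, not the watershed data):

```python
import math

def mean(values):
    return sum(values) / len(values)

def mean_bias_error(pred, ref):
    # Average of (predicted - reference), e.g. 1 km NPP versus
    # NPP aggregated from the 30 m simulation.
    return mean([p - r for p, r in zip(pred, ref)])

def r_squared(pred, ref):
    # Squared Pearson correlation between the two NPP fields.
    mp, mr = mean(pred), mean(ref)
    cov = sum((p - mp) * (r - mr) for p, r in zip(pred, ref))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    sr = math.sqrt(sum((r - mr) ** 2 for r in ref))
    return (cov / (sp * sr)) ** 2
```

Applied pixel by pixel, a correction such as the SSI should drive the MBE toward zero and the R² toward one, which is exactly the improvement the study reports.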

  19. How well can regional fluxes be derived from smaller-scale estimates?

    NASA Technical Reports Server (NTRS)

    Moore, Kathleen E.; Fitzjarrald, David R.; Ritter, John A.

    1992-01-01

    Regional surface fluxes are essential lower boundary conditions for large-scale numerical weather and climate models and are the elements of global budgets of important trace gases. Surface properties affecting the exchange of heat, moisture, momentum and trace gases vary on length scales from one meter to hundreds of km. A classical difficulty is that fluxes have been measured directly only at points or along lines. Scaling up observations limited in space and/or time to represent larger areas has been done by assigning properties to surface classes and combining estimated or calculated fluxes using an area-weighted average. It is not clear that a simple area-weighted average is sufficient to produce the large scale from the small scale, chiefly due to the effect of internal boundary layers, nor is it known how important the uncertainty is to large-scale model outcomes. Simultaneous aircraft and tower data obtained in the relatively simple terrain of the western Alaska tundra were used to determine the extent to which surface type variation can be related to fluxes of heat, moisture, and other properties. Surface type was classified as lake or land with an aircraft-borne infrared thermometer, and flight-level heat and moisture fluxes were related to surface type. The magnitude and variety of sampling errors inherent in eddy correlation flux estimation place limits on how well any flux can be known, even in simple geometries.
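
    The area-weighted averaging described above is the standard first-order way of scaling point fluxes up to a region; a minimal sketch (the surface classes and flux values are illustrative):

```python
def area_weighted_flux(class_fluxes, class_areas):
    # Regional flux = sum(f_i * A_i) / sum(A_i) over surface classes,
    # e.g. lake versus land sensible heat flux over a tundra site.
    total_area = sum(class_areas)
    return sum(f * a for f, a in zip(class_fluxes, class_areas)) / total_area
```

As the abstract notes, this simple average ignores internal boundary layers that form at class transitions, so it is an approximation whose error must be assessed against direct (e.g. aircraft) measurements.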

  20. A simple landslide susceptibility analysis for hazard and risk assessment in developing countries

    NASA Astrophysics Data System (ADS)

    Guinau, M.; Vilaplana, J. M.

    2003-04-01

    In recent years, a number of techniques and methodologies have been developed for mitigating natural disasters. The complexity of these methodologies and the scarcity of material and data series justify the need for simple methodologies to obtain the information necessary for minimising the effects of catastrophic natural phenomena. Work with polygonal maps using a GIS allowed us to develop a simple methodology, applied to an area of 473 km² in the Departamento de Chinandega (NW Nicaragua). This area was severely affected by a large number of landslides (mainly debris flows) triggered by the Hurricane Mitch rainfalls in October 1998. With the aid of aerial photography interpretation at 1:40,000 scale, enlarged to 1:20,000, and detailed field work, a landslide map at 1:10,000 scale was constructed. The failure zones of the landslides were digitized to obtain a failure zone digital map. A terrain unit digital map, in which a series of physical-environmental terrain factors are represented, was also used. Dividing the study area into two zones (A and B) with homogeneous physical and environmental characteristics allowed us to develop the proposed methodology and to validate it. In zone A, the failure zone digital map was superimposed onto the terrain unit digital map to establish the relationship between the different terrain factors and the failure zones. The numerical expression of this relationship enables us to classify the terrain by its landslide susceptibility. In zone B, this numerical relationship was employed to obtain a landslide susceptibility map, obviating the need for a failure zone map. The validity of the methodology can be tested in this area by using the degree of superposition of the susceptibility map and the failure zone map.
The implementation of the methodology in tropical countries with physical and environmental characteristics similar to those of the study area allows us to carry out a landslide susceptibility analysis in areas where landslide records do not exist. This analysis is essential to landslide hazard and risk assessment, which is necessary to determine the actions for mitigating landslide effects, e.g. land planning, emergency aid actions, etc.
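
    The "numerical expression" relating terrain units to failure zones can be sketched as a per-unit failure density; this particular formulation is our assumption for illustration, not necessarily the authors' exact index:

```python
def susceptibility_index(unit_areas, failure_areas):
    # Fraction of each terrain unit's area occupied by failure zones
    # (computed in zone A); units with higher values are classed as
    # more susceptible and the ratios can then be applied in zone B.
    return {unit: failure_areas.get(unit, 0.0) / area
            for unit, area in unit_areas.items()}
```

Applying the zone-A ratios to the terrain-unit map of zone B yields a susceptibility map without needing a landslide inventory there, which is the transferability the methodology relies on.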

  1. Characteristics of dental fear among Arabic-speaking children: a descriptive study.

    PubMed

    El-Housseiny, Azza A; Alamoudi, Najlaa M; Farsi, Najat M; El Derwi, Douaa A

    2014-09-22

    Dental fear has not only been linked to poor dental health in children but also, if unaddressed, persists across the lifespan and can continue to affect oral, systemic, and psychological health. The aim of this study was to assess the factor structure of the Arabic version of the Children's Fear Survey Schedule-Dental Subscale (CFSS-DS), and to assess the difference in factor structure between boys and girls. Participants were 220 consecutive paediatric dental patients 6-12 years old seeking dental care at the Faculty of Dentistry, King Abdulaziz University, Saudi Arabia. Participants completed the 15-item Arabic version of the CFSS-DS questionnaire at the end of the visit. Internal consistency was assessed using Cronbach's alpha. Factor analysis (principal components, varimax rotation) was employed to assess the factor structure of the scale. Cronbach's alpha was 0.86. Four factors with eigenvalues above 1.00 were identified, which collectively explained 64.45% of the variance. These factors were as follows: Factor 1, 'fear of usual dental procedures', consisted of eight items such as 'drilling' and 'having to open the mouth'; Factor 2, 'fear of health care personnel and injections', consisted of three items; Factor 3, 'fear of strangers', consisted of two items; and Factor 4, 'fear of general medical aspects of treatment', consisted of two items. Notably, four factors of dental fear were found in girls, while five were found in boys. Four factors of different strength pertaining to dental fear were identified in Arabic-speaking children, indicating a simple structure. Most items loaded highly on the factor related to fear of usual dental procedures. The fear-provoking aspects of dental procedures differed between boys and girls. Use of the scale may enable dentists to determine the items of dental treatment that a given child finds most fear-provoking and to guide the child's behaviour accordingly.

  2. Finite Element Method (FEM) Modeling of Freeze-drying: Monitoring Pharmaceutical Product Robustness During Lyophilization.

    PubMed

    Chen, Xiaodong; Sadineni, Vikram; Maity, Mita; Quan, Yong; Enterline, Matthew; Mantri, Rao V

    2015-12-01

    Lyophilization is an approach commonly undertaken to formulate drugs that are too unstable to be commercialized as ready-to-use (RTU) solutions. One of the important aspects of commercializing a lyophilized product is to transfer the process parameters developed in a lab-scale lyophilizer to commercial scale without a loss in product quality. This is often accomplished by costly engineering runs or through an iterative process at the commercial scale. Here, we highlight a combined computational and experimental approach to predict commercial process parameters for the primary drying phase of lyophilization. Heat and mass transfer coefficients are determined experimentally, either by manometric temperature measurement (MTM) or sublimation tests, and used as inputs for the finite element model (FEM)-based software called PASSAGE, which computes various primary drying parameters such as primary drying time and product temperature. The heat and mass transfer coefficients will vary at different lyophilization scales; hence, we present an approach that uses appropriate factors while scaling up from lab scale to commercial scale. As a result, one can predict commercial-scale primary drying time based on these parameters. Additionally, the model-based approach presented in this study provides a process to monitor pharmaceutical product robustness and accidental process deviations during lyophilization to support commercial supply chain continuity. The approach presented here provides a robust lyophilization scale-up strategy; because of its simple and minimalistic nature, it will also be a less capital-intensive path with minimal use of expensive drug substance/active material.
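
    At its simplest, primary-drying prediction rests on a quasi-steady sublimation model in which the vapor flux is driven by the pressure difference across the dried layer and limited by a product resistance. The sketch below is a heavily simplified illustration of that idea; the function names, units, and the constant-resistance assumption are ours, not PASSAGE's internals:

```python
def sublimation_rate(vial_area, p_ice, p_chamber, resistance):
    # Quasi-steady sublimation rate: sublimation area times the vapor
    # pressure difference (ice surface vs. chamber), divided by the
    # dried-layer product resistance.
    return vial_area * (p_ice - p_chamber) / resistance

def primary_drying_time(ice_mass, vial_area, p_ice, p_chamber, resistance):
    # Drying time assuming a constant rate -- a coarse simplification,
    # since resistance actually grows as the dried layer thickens.
    return ice_mass / sublimation_rate(vial_area, p_ice, p_chamber, resistance)
```

In a scale-up workflow, the experimentally determined heat and mass transfer coefficients (from MTM or sublimation tests) replace the illustrative inputs here, and scale-specific correction factors account for the differences between lab and commercial equipment.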

  3. Dispersion/dilution enhances phytoplankton blooms in low-nutrient waters

    NASA Astrophysics Data System (ADS)

    Lehahn, Yoav; Koren, Ilan; Sharoni, Shlomit; D'Ovidio, Francesco; Vardi, Assaf; Boss, Emmanuel

    2017-03-01

    Spatial characteristics of phytoplankton blooms often reflect the horizontal transport properties of the oceanic turbulent flow in which they are embedded. Classically, bloom response to horizontal stirring is regarded in terms of generation of patchiness following large-scale bloom initiation. Here, using satellite observations from the North Pacific Subtropical Gyre and a simple ecosystem model, we show that the opposite scenario of turbulence dispersing and diluting fine-scale (~1-100 km) nutrient-enriched water patches has the critical effect of regulating the dynamics of nutrients-phytoplankton-zooplankton ecosystems and enhancing accumulation of photosynthetic biomass in low-nutrient oceanic environments. A key factor in determining ecological and biogeochemical consequences of turbulent stirring is the horizontal dilution rate, which depends on the effective eddy diffusivity and surface area of the enriched patches. Implementation of the notion of horizontal dilution rate quantitatively explains plankton responses to turbulence and improves our ability to represent ecological and biogeochemical processes in oligotrophic oceans.

  4. Micropropagation of African violet (Saintpaulia ionantha Wendl.).

    PubMed

    Shukla, Mukund; Sullivan, J Alan; Jain, Shri Mohan; Murch, Susan J; Saxena, Praveen K

    2013-01-01

    Micropropagation is an important tool for rapid multiplication and the creation of genetic variability in African violets (Saintpaulia ionantha Wendl.). Successful in vitro propagation depends on the specific requirements and precise manipulation of various factors such as the type of explants used, physiological state of the mother plant, plant growth regulators in the culture medium, and growth conditions. Development of cost-effective protocols with a high rate of multiplication is a crucial requirement for commercial application of micropropagation. The current chapter describes an optimized protocol for micropropagation of African violets using leaf explants obtained from in vitro grown plants. In this process, plant regeneration occurs via both somatic embryogenesis and shoot organogenesis simultaneously in the explants induced with the growth regulator thidiazuron (TDZ; N-phenyl-N'-1,2,3-thidiazol-5-ylurea). The protocol is simple, rapid, and efficient for large-scale propagation of African violet and the dual routes of regeneration allow for multiple applications of the technology from simple clonal propagation to induction or selection of variants to the production of synthetic seeds.

  5. Immersion frying for the thermal drying of sewage sludge: an economic assessment.

    PubMed

    Peregrina, Carlos; Rudolph, Victor; Lecomte, Didier; Arlabosse, Patricia

    2008-01-01

    This paper presents an economic study of a novel thermal fry-drying technology which transforms sewage sludge and recycled cooking oil (RCO) into a solid fuel. The process is shown to have a significant potential advantage in terms of capital costs (by a factor of several) and comparable operating costs. Three potential variants of the process have been simulated and costed in terms of both capital and operating requirements for a commercial scale of operation. The differences lie in the energy recovery systems, which include a simple condensation of the evaporated water and two different heat pump configurations. Simple condensation provides the simplest process, but the energy-efficiency gain of an open heat pump offsets this, making it economically somewhat more attractive. In terms of operating costs, current sludge dryers are dominated by maintenance and energy requirements, while for fry-drying these are comparatively small. Fry-drying running costs are dominated by the provision of makeup waste oil. Cost reduction could focus on cheaper waste oil, e.g. from grease trap waste.

  6. Economic decision making and the application of nonparametric prediction models

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2007-01-01

    Sustained increases in energy prices have focused attention on gas resources in low permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are large. Planning and development decisions for extraction of such resources must be area-wide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm the decision to enter such plays depends on reconnaissance level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional scale cost functions. The context of the worked example is the Devonian Antrim shale gas play, Michigan Basin. One finding relates to selection of the resource prediction model to be used with economic models. Models which can best predict aggregate volume over larger areas (many hundreds of sites) may lose granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined by extraneous factors. The paper also shows that when these simple prediction models are used to strategically order drilling prospects, the gain in gas volume over volumes associated with simple random site selection amounts to 15 to 20 percent. It also discusses why the observed benefit of updating predictions from results of new drilling, as opposed to following static predictions, is somewhat smaller. 
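
    A nonparametric local predictor of the kind the abstract describes can be sketched as a nearest-neighbor average over drilled sites. All names and data below are hypothetical stand-ins for the paper's local regression models, kept deliberately crude.

```python
import numpy as np

def local_estimate(x, y, sites, values, k=3):
    """Predict recoverable volume at an untested site (x, y) as the mean volume
    of the k nearest drilled sites; a crude stand-in for local regression."""
    dist = np.hypot(sites[:, 0] - x, sites[:, 1] - y)
    nearest = np.argsort(dist)[:k]
    return values[nearest].mean()
```

    Ranking untested prospects by such an estimate, rather than drilling in random order, is the kind of strategic ordering the paper credits with a 15 to 20 percent gain in recovered gas volume.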
Copyright 2007, Society of Petroleum Engineers.

  7. BDA special care case mix model.

    PubMed

    Bateman, P; Arnold, C; Brown, R; Foster, L V; Greening, S; Monaghan, N; Zoitopoulos, L

    2010-04-10

    Routine dental care provided in special care dentistry is complicated by patient-specific factors which increase the time taken and costs of treatment. The BDA has developed and conducted a field trial of a case mix tool to measure this complexity. For each episode of care the case mix tool assesses the following on a four-point scale: 'ability to communicate', 'ability to cooperate', 'medical status', 'oral risk factors', 'access to oral care' and 'legal and ethical barriers to care'. The tool is reported to be easy to use and captures sufficient detail to discriminate between types of service and special care dentistry provided. It offers potential as a simple-to-use and clinically relevant source of performance management and commissioning data. This paper describes the model, demonstrates how it is currently being used, and considers future developments in its use.

  8. Large Scale Synthesis of Colloidal Si Nanocrystals and their Helium Plasma Processing into Spin-On, Carbon-Free Nanocrystalline Si Films.

    PubMed

    Mohapatra, Pratyasha; Mendivelso-Perez, Deyny; Bobbitt, Jonathan M; Shaw, Santosh; Yuan, Bin; Tian, Xinchun; Smith, Emily A; Cademartiri, Ludovico

    2018-05-30

    This paper describes a simple approach to the large scale synthesis of colloidal Si nanocrystals and their processing by He plasma into spin-on carbon-free nanocrystalline Si films. We further show that the RIE etching rate in these films is 1.87 times faster than for single crystalline Si, consistent with a simple geometric argument that accounts for the nanoscale roughness caused by the nanoparticle shape.

  9. Factors associated with quality of life in active elderly.

    PubMed

    Alexandre, Tiago da Silva; Cordeiro, Renata Cereda; Ramos, Luiz Roberto

    2009-08-01

    To analyze whether quality of life in active, healthy elderly individuals is influenced by functional status and sociodemographic characteristics, as well as psychological parameters. Study conducted in a sample of 120 active elderly subjects recruited from two open universities of the third age in the cities of São Paulo and São José dos Campos (Southeastern Brazil) between May 2005 and April 2006. Quality of life was measured using the abbreviated Brazilian version of the World Health Organization Quality of Life (WHOQOL-bref) questionnaire. Sociodemographic, clinical and functional variables were measured through cross-culturally validated assessments: the Mini Mental State Examination, Geriatric Depression Scale, Functional Reach, One-Leg Balance Test, Timed Up and Go Test, Six-Minute Walk Test, Human Activity Profile, and a complementary questionnaire. Simple descriptive analyses, Pearson's correlation coefficient, Student's t-test for non-related samples, analyses of variance, linear regression analyses and variance inflation factor were performed. The significance level for all statistical tests was set at 0.05. Linear regression analysis showed an independent correlation without collinearity between depressive symptoms measured by the Geriatric Depression Scale and four domains of the WHOQOL-bref. Not having a conjugal life implied greater perception in the social domain; developing leisure activities and having an income over five minimum wages implied greater perception in the environment domain. Functional status had no influence on the Quality of Life variable in the analysis models in the active elderly. In contrast, psychological factors, as assessed by the Geriatric Depression Scale, and sociodemographic characteristics, such as marital status, income and leisure activities, had an impact on quality of life.

  10. A single scaling parameter as a first approximation to describe the rainfall pattern of a place: application on Catalonia

    NASA Astrophysics Data System (ADS)

    Casas-Castillo, M. Carmen; Llabrés-Brustenga, Alba; Rius, Anna; Rodríguez-Solà, Raúl; Navarro, Xavier

    2018-02-01

    As in other natural processes, it has frequently been observed that the phenomenon arising from the rainfall generation process exhibits statistical fractal self-similarity, and rainfall series therefore generally show scaling properties. Based on this fact, a methodology known as simple scaling is quite broadly used to derive or reproduce the intensity-duration-frequency curves of a place. In the present work, the relationship between the simple scaling parameter and the characteristic rainfall pattern of the study area has been investigated. The scaling parameter was calculated from 147 selected daily rainfall series covering the period between 1883 and 2016 over the Catalonian territory (Spain) and its nearby surroundings. A discussion is presented of the relationship between the spatial distribution of the scaling parameter and the rainfall pattern, as well as of trends in this parameter over the past decades, possibly due to climate change.
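
    The simple-scaling idea can be illustrated with a moment-based fit: simple scaling implies E[I_d] = E[I_D] * (d/D)**H, so mean intensity is log-log linear in duration and the slope gives the scaling exponent H. The intensities below are invented for illustration, not Catalonian data.

```python
import numpy as np

# Hypothetical mean annual-maximum intensities (mm/h) at several durations (h).
durations_h = np.array([1.0, 2.0, 6.0, 12.0, 24.0])
mean_intensity = np.array([42.0, 26.0, 12.0, 7.5, 4.6])

# Under simple scaling, log E[I_d] is linear in log d with slope H.
H, log_c = np.polyfit(np.log(durations_h), np.log(mean_intensity), 1)
# H is negative: intensity decreases as duration grows.
```

    Checking that higher-order moments scale with exponents q*H (wide-sense simple scaling) is the usual validation step before using H to build IDF curves.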

  11. Validation of a new simple scale to measure symptoms in atrial fibrillation: the Canadian Cardiovascular Society Severity in Atrial Fibrillation scale.

    PubMed

    Dorian, Paul; Guerra, Peter G; Kerr, Charles R; O'Donnell, Suzan S; Crystal, Eugene; Gillis, Anne M; Mitchell, L Brent; Roy, Denis; Skanes, Allan C; Rose, M Sarah; Wyse, D George

    2009-06-01

    Atrial fibrillation (AF) is commonly associated with impaired quality of life. There is no simple validated scale to quantify the functional illness burden of AF. The Canadian Cardiovascular Society Severity in Atrial Fibrillation (CCS-SAF) scale is a bedside scale that ranges from class 0 to 4, from no effect on functional quality of life to a severe effect on life quality. This study was performed to validate the scale. In 484 patients with documented AF (62.2+/-12.5 years of age, 67% men; 62% paroxysmal and 38% persistent/permanent), the SAF class was assessed and 2 validated quality-of-life questionnaires were administered: the SF-36 generic scale and the disease-specific AFSS (University of Toronto Atrial Fibrillation Severity Scale). There is a significant linear graded correlation between the SAF class and measures of symptom severity, physical and emotional components of quality of life, general well-being, and health care consumption related to AF. Patients with SAF class 0 had age- and sex-standardized SF-36 scores of 0.15+/-0.16 and -0.04+/-0.31 (SD units), that is, units away from the mean population score for the mental and physical summary scores, respectively. For each unit increase in SAF class, there is a 0.36 and 0.40 SD unit decrease in the SF-36 score for the physical and mental components. As the SAF class increases from 0 to 4, the symptom severity score (range, 0 to 35) increases from 4.2+/-5.0 to 18.4+/-7.8 (P<0.0001). The CCS-SAF scale is a simple semiquantitative scale that closely approximates patient-reported subjective measures of quality of life in AF and may be practical for clinical use.

  12. Neutrino masses from neutral top partners

    NASA Astrophysics Data System (ADS)

    Batell, Brian; McCullough, Matthew

    2015-10-01

    We present theories of "natural neutrinos" in which neutral fermionic top partner fields are simultaneously the right-handed neutrinos (RHN), linking seemingly disparate aspects of the Standard Model structure: (a) The RHN top partners are responsible for the observed small neutrino masses, (b) they help ameliorate the tuning in the weak scale and address the little hierarchy problem, and (c) the factor of 3 arising from Nc in the top-loop Higgs mass corrections is countered by a factor of 3 from the number of vectorlike generations of RHN. The RHN top partners may arise in pseudo-Nambu-Goldstone-Boson Higgs models such as the twin Higgs, as well as more general composite, little, and orbifold Higgs scenarios, and three simple example models are presented. This framework firmly predicts a TeV-scale seesaw, as the RHN masses are bounded to be below the TeV scale by naturalness. The generation of light neutrino masses relies on a collective breaking of the lepton number, allowing for comparatively large neutrino Yukawa couplings and a rich associated phenomenology. The structure of the neutrino mass mechanism realizes in certain limits the inverse or linear classes of seesaw. Natural neutrino models are testable at a variety of current and future experiments, particularly in tests of lepton universality, searches for lepton flavor violation, and precision electroweak and Higgs coupling measurements possible at high energy e+e- and hadron colliders.

  13. Dynamical Mass Measurements of Contaminated Galaxy Clusters Using Support Distribution Machines

    NASA Astrophysics Data System (ADS)

    Ntampaka, Michelle; Trac, Hy; Sutherland, Dougal; Fromenteau, Sebastien; Poczos, Barnabas; Schneider, Jeff

    2018-01-01

    We study dynamical mass measurements of galaxy clusters contaminated by interlopers and show that a modern machine learning (ML) algorithm can predict masses by better than a factor of two compared to a standard scaling relation approach. We create two mock catalogs from Multidark’s publicly available N-body MDPL1 simulation, one with perfect galaxy cluster membership information and the other where a simple cylindrical cut around the cluster center allows interlopers to contaminate the clusters. In the standard approach, we use a power-law scaling relation to infer cluster mass from galaxy line-of-sight (LOS) velocity dispersion. Assuming perfect membership knowledge, this unrealistic case produces a wide fractional mass error distribution, with a width E=0.87. Interlopers introduce additional scatter, significantly widening the error distribution further (E=2.13). We employ the support distribution machine (SDM) class of algorithms to learn from distributions of data to predict single values. Applied to distributions of galaxy observables such as LOS velocity and projected distance from the cluster center, SDM yields better than a factor-of-two improvement (E=0.67) for the contaminated case. Remarkably, SDM applied to contaminated clusters is better able to recover masses than even the scaling relation approach applied to uncontaminated clusters. We show that the SDM method more accurately reproduces the cluster mass function, making it a valuable tool for employing cluster observations to evaluate cosmological models.
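
    The "standard scaling relation approach" used as the baseline is a power-law fit of mass to velocity dispersion in log space. A minimal sketch with a hypothetical toy catalog (not the MDPL1 mocks):

```python
import numpy as np

# Toy catalog: LOS velocity dispersions (km/s) and cluster masses (Msun).
sigma = np.array([400.0, 550.0, 700.0, 900.0, 1100.0])
mass = np.array([8e13, 2e14, 4e14, 9e14, 1.6e15])

# Fit M = norm * sigma**alpha by linear regression in log10 space.
alpha, log_norm = np.polyfit(np.log10(sigma), np.log10(mass), 1)

def predict_mass(sig_kms):
    """Infer cluster mass (Msun) from an observed velocity dispersion."""
    return 10.0 ** (alpha * np.log10(sig_kms) + log_norm)
```

    For a virialized population one expects alpha near 3; the scatter of individual clusters about this relation is what the SDM approach reduces.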

  14. Is the Jeffreys' scale a reliable tool for Bayesian model comparison in cosmology?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nesseris, Savvas; García-Bellido, Juan, E-mail: savvas.nesseris@uam.es, E-mail: juan.garciabellido@uam.es

    2013-08-01

    We are entering an era where progress in cosmology is driven by data, and alternative models will have to be compared and ruled out according to some consistent criterion. The most conservative and widely used approach is Bayesian model comparison. In this paper we explicitly calculate the Bayes factors for all models that are linear with respect to their parameters. We do this in order to test the so-called Jeffreys' scale and determine analytically how accurate its predictions are in a simple case where we fully understand and can calculate everything analytically. We also discuss the case of nested models, e.g. one with M{sub 1} and another with M{sub 2} superset of M{sub 1} parameters, and we derive analytic expressions for both the Bayes factor and the figure of merit, defined as the inverse area of the model parameter's confidence contours. With all this machinery and the use of an explicit example we demonstrate that the threshold nature of Jeffreys' scale is not a "one size fits all" reliable tool for model comparison and that it may lead to biased conclusions. Furthermore, we discuss the importance of choosing the right basis in the context of models that are linear with respect to their parameters and how that basis affects the parameter estimation and the derived constraints.
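
    For reference, the threshold character of the Jeffreys' scale that the paper critiques can be written as a simple lookup on the log Bayes factor. The cut points below follow one common convention; they are not universal, which is precisely the caveat the paper raises.

```python
def jeffreys_verdict(ln_bayes_factor):
    """Qualitative evidence strength for model 1 over model 2 from ln(B12).
    Thresholds follow one common convention and vary across the literature."""
    b = ln_bayes_factor
    if b < 1.0:
        return "inconclusive"
    if b < 2.5:
        return "weak evidence"
    if b < 5.0:
        return "moderate evidence"
    return "strong evidence"
```

    The paper's point is that a fixed table like this ignores model dimensionality and prior choices, so the same ln(B12) can mean very different things in different comparisons.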

  15. Measuring Networking as an Outcome Variable in Undergraduate Research Experiences

    PubMed Central

    Hanauer, David I.; Hatfull, Graham

    2015-01-01

    The aim of this paper is to propose, present, and validate a simple survey instrument to measure student conversational networking. The tool consists of five items that cover personal and professional social networks, and its basic principle is the self-reporting of degrees of conversation, with a range of specific discussion partners. The networking instrument was validated in three studies. The basic psychometric characteristics of the scales were established by conducting a factor analysis and evaluating internal consistency using Cronbach’s alpha. The second study used a known-groups comparison and involved comparing outcomes for networking scales between two different undergraduate laboratory courses (one involving a specific effort to enhance networking). The final study looked at potential relationships between specific networking items and the established psychosocial variable of project ownership through a series of binary logistic regressions. Overall, the data from the three studies indicate that the networking scales have high internal consistency (α = 0.88), consist of a unitary dimension, can significantly differentiate between research experiences with low and high networking designs, and are related to project ownership scales. The ramifications of the networking instrument for student retention, the enhancement of public scientific literacy, and the differentiation of laboratory courses are discussed. PMID:26538387

  16. A natural language screening measure for motivation to change.

    PubMed

    Miller, William R; Johnson, Wendy R

    2008-09-01

    Client motivation for change, a topic of high interest to addiction clinicians, is multidimensional and complex, and many different approaches to measurement have been tried. The current effort drew on psycholinguistic research on natural language that is used by clients to describe their own motivation. Seven addiction treatment sites participated in the development of a simple scale to measure client motivation. Twelve items were drafted to represent six potential dimensions of motivation for change that occur in natural discourse. The maximum self-rating of motivation (10 on a 0-10 scale) was the median score on all items, and 43% of respondents rated 10 on all 12 items - a substantial ceiling effect. From 1035 responses, three factors emerged representing importance, ability, and commitment - constructs that are also reflected in several theoretical models of motivation. A 3-item version of the scale, with one marker item for each of these constructs, accounted for 81% of variance in the full scale. The three items are: 1. It is important for me to . . . 2. I could . . . and 3. I am trying to . . . This offers a quick (1-minute) assessment of clients' self-reported motivation for change.

  17. Ultra-high-Q phononic resonators on-chip at cryogenic temperatures

    NASA Astrophysics Data System (ADS)

    Kharel, Prashanta; Chu, Yiwen; Power, Michael; Renninger, William H.; Schoelkopf, Robert J.; Rakich, Peter T.

    2018-06-01

    Long-lived, high-frequency phonons are valuable for applications ranging from optomechanics to emerging quantum systems. For scientific as well as technological impact, we seek high-performance oscillators that offer a path toward chip-scale integration. Confocal bulk acoustic wave resonators have demonstrated an immense potential to support long-lived phonon modes in crystalline media at cryogenic temperatures. So far, these devices have been macroscopic with cm-scale dimensions. However, as we push these oscillators to high frequencies, we have an opportunity to radically reduce the footprint as a basis for classical and emerging quantum technologies. In this paper, we present novel design principles and simple microfabrication techniques to create high performance chip-scale confocal bulk acoustic wave resonators in a wide array of crystalline materials. We tailor the acoustic modes of such resonators to efficiently couple to light, permitting us to perform a non-invasive laser-based phonon spectroscopy. Using this technique, we demonstrate an acoustic Q-factor of 2.8 × 107 (6.5 × 106) for chip-scale resonators operating at 12.7 GHz (37.8 GHz) in crystalline z-cut quartz (x-cut silicon) at cryogenic temperatures.
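
    A quoted acoustic Q-factor translates directly into a phonon energy-decay time via tau = Q / (2*pi*f). A quick check using the paper's own numbers:

```python
import math

def phonon_lifetime_us(q_factor, freq_hz):
    """Energy decay time tau = Q / (2*pi*f), converted to microseconds."""
    return q_factor / (2.0 * math.pi * freq_hz) * 1e6

# Q = 2.8e7 at 12.7 GHz corresponds to a lifetime of roughly 350 microseconds.
tau = phonon_lifetime_us(2.8e7, 12.7e9)
```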

  18. A premier analysis of supersymmetric closed string tachyon cosmology

    NASA Astrophysics Data System (ADS)

    Vázquez-Báez, V.; Ramírez, C.

    2018-04-01

    From a previously found worldline supersymmetric formulation for the effective action of the closed string tachyon in a FRW background, the Hamiltonian of the theory is constructed by means of the Dirac procedure and written in a quantum version. Using the supersymmetry algebra, we are able to find solutions to the Wheeler-DeWitt equation via a simpler set of first-order differential equations. Finally, for the k = 0 case, we compute the expectation value of the scale factor with a suitable potential also favored in the current literature. We give some interpretations of the results and outline future lines of work on this matter.

  19. In-place recalibration technique applied to a capacitance-type system for measuring rotor blade tip clearance

    NASA Technical Reports Server (NTRS)

    Barranger, J. P.

    1978-01-01

    The rotor blade tip clearance measurement system consists of a capacitance sensing probe with self-contained tuning elements, a connecting coaxial cable, and remotely located electronics. Tests show that the accuracy of the system suffers from a strong dependence on probe tip temperature and humidity. A novel in-place recalibration technique is presented which partly overcomes this problem through a simple modification of the electronics that permits a scale factor correction. This technique, when applied to a commercial system, significantly reduced errors under varying conditions of humidity and temperature. Equations were also found that characterize the important cable and probe design quantities.

  20. Towards a model of pion generalized parton distributions from Dyson-Schwinger equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moutarde, H.

    2015-04-10

    We compute the pion quark Generalized Parton Distribution H{sup q} and Double Distributions F{sup q} and G{sup q} in a coupled Bethe-Salpeter and Dyson-Schwinger approach. We use simple algebraic expressions inspired by the numerical resolution of Dyson-Schwinger and Bethe-Salpeter equations. We explicitly check the support and polynomiality properties, and the behavior under charge conjugation or time invariance of our model. We derive analytic expressions for the pion Double Distributions and Generalized Parton Distribution at vanishing pion momentum transfer at a low scale. Our model compares very well to experimental pion form factor or parton distribution function data.

  1. Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics

    NASA Astrophysics Data System (ADS)

    Abe, Sumiyoshi

    2014-11-01

    The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.

  2. Magnetometer bias determination and attitude determination for near-earth spacecraft

    NASA Technical Reports Server (NTRS)

    Lerner, G. M.; Shuster, M. D.

    1979-01-01

    A simple linear-regression algorithm is used to determine simultaneously magnetometer biases, misalignments, and scale factor corrections, as well as the dependence of the measured magnetic field on magnetic control systems. This algorithm has been applied to data from the Seasat-1 and the Atmosphere Explorer Mission-1/Heat Capacity Mapping Mission (AEM-1/HCMM) spacecraft. Results show that complete inflight calibration as described here can improve significantly the accuracy of attitude solutions obtained from magnetometer measurements. This report discusses the difficulties involved in obtaining attitude information from three-axis magnetometers, briefly derives the calibration algorithm, and presents numerical results for the Seasat-1 and AEM-1/HCMM spacecraft.
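
    The single-axis core of such a linear-regression calibration can be sketched as follows; the field values are synthetic, not Seasat-1 or AEM-1/HCMM data.

```python
import numpy as np

# Synthetic single-axis data: modeled geomagnetic field vs measured field (nT),
# generated with a known scale factor (1.02) and bias (150 nT).
B_model = np.array([12000.0, -5000.0, 30000.0, -22000.0, 8000.0])
B_meas = 1.02 * B_model + 150.0

# Least squares on B_meas = scale * B_model + bias recovers both parameters.
A = np.column_stack([B_model, np.ones_like(B_model)])
(scale, bias), *_ = np.linalg.lstsq(A, B_meas, rcond=None)
```

    The full algorithm works with all three axes at once, adding columns to the design matrix for misalignments and for coupling to the magnetic control systems.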

  3. Risk perceptions of arsenic in tap water and consumption of bottled water

    NASA Astrophysics Data System (ADS)

    Jakus, Paul M.; Shaw, W. Douglass; Nguyen, To N.; Walker, Mark

    2009-05-01

    The demand for bottled water has increased rapidly over the past decade, but bottled water is extremely costly compared to tap water. The convenience of bottled water surely matters to consumers, but are other factors at work? This manuscript examines whether purchases of bottled water are associated with the perceived risk of tap water. All of the past studies on bottled water consumption have used simple scale measures of perceived risk that do not correspond to risk measures used by risk analysts. We elicit a probability-based measure of risk and find that as perceived risks rise, expenditures for bottled water rise.

  4. Competing Thermodynamic and Dynamic Factors Select Molecular Assemblies on a Gold Surface

    NASA Astrophysics Data System (ADS)

    Haxton, Thomas K.; Zhou, Hui; Tamblyn, Isaac; Eom, Daejin; Hu, Zonghai; Neaton, Jeffrey B.; Heinz, Tony F.; Whitelam, Stephen

    2013-12-01

    Controlling the self-assembly of surface-adsorbed molecules into nanostructures requires understanding physical mechanisms that act across multiple length and time scales. By combining scanning tunneling microscopy with hierarchical ab initio and statistical mechanical modeling of 1,4-substituted benzenediamine (BDA) molecules adsorbed on a gold (111) surface, we demonstrate that apparently simple nanostructures are selected by a subtle competition of thermodynamics and dynamics. Of the collection of possible BDA nanostructures mechanically stabilized by hydrogen bonding, the interplay of intermolecular forces, surface modulation, and assembly dynamics select at low temperature a particular subset: low free energy oriented linear chains of monomers and high free energy branched chains.

  5. Pyrotechnic modeling for the NSI and pin puller

    NASA Technical Reports Server (NTRS)

    Powers, Joseph M.; Gonthier, Keith A.

    1993-01-01

    A discussion concerning the modeling of pyrotechnically driven actuators is presented in viewgraph format. The following topics are discussed: literature search, constitutive data for full-scale model, simple deterministic model, observed phenomena, and results from simple model.

  6. THE DEPENDENCE OF STELLAR MASS AND ANGULAR MOMENTUM LOSSES ON LATITUDE AND THE INTERACTION OF ACTIVE REGION AND DIPOLAR MAGNETIC FIELDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garraffo, Cecilia; Drake, Jeremy J.; Cohen, Ofer

    Rotation evolution of late-type stars is dominated by magnetic braking and the underlying factors that control this angular momentum loss are important for the study of stellar spin-down. In this work, we study angular momentum loss as a function of two different aspects of magnetic activity using a calibrated Alfvén wave-driven magnetohydrodynamic wind model: the strengths of magnetic spots and their distribution in latitude. By driving the model using solar and modified solar surface magnetograms, we show that the topology of the field arising from the net interaction of both small-scale and large-scale field is important for spin-down rates and that angular momentum loss is not a simple function of large-scale magnetic field strength. We find that changing the latitude of magnetic spots can modify mass and angular momentum loss rates by a factor of two. The general effect that causes these differences is the closing down of large-scale open field at mid- and high-latitudes by the addition of the small-scale field. These effects might give rise to modulation of mass and angular momentum loss through stellar cycles, and present a problem for ab initio attempts to predict stellar spin-down based on wind models. For all the magnetogram cases considered here, from dipoles to various spotted distributions, we find that angular momentum loss is dominated by the mass loss at mid-latitudes. The spin-down torque applied by magnetized winds therefore acts at specific latitudes and is not evenly distributed over the stellar surface, though this aspect is unlikely to be important for understanding spin-down and surface flows on stars.

  7. A clinimetric approach to assessing quality of life in epilepsy.

    PubMed

    Cramer, J A

    1993-01-01

    Clinimetrics is a concept involving the use of rating scales for clinical phenomena ranging from physical examinations to functional performance. Clinimetric or rating scales can be used for defining patient status and changes that occur during long-term observation. The scores derived from such scales can be used as guidelines for intervention, treatment, or prediction of outcome. In epilepsy, clinimetric scales have been developed for assessing seizure frequency, seizure severity, adverse effects related to antiepileptic drugs (AEDs), and quality of life after surgery for epilepsy. The VA Epilepsy Cooperative Study seizure rating scale combines frequency and severity in a weighted scoring system for simple and complex partial and generalized tonic-clonic seizures, summing all items in a total seizure score. Similarly, the rating scales for systemic toxicity and neurotoxicity use scores weighted for severity for assessing specific adverse effects typically related to AEDs. A composite score, obtained by adding the scores for seizures, systemic toxicity, and neurotoxicity, represents the overall status of the patient at a given time. The Chalfont Seizure Severity Scale also applies scores relative to the impact of a given item on the patient, without factoring in seizure frequency. The Liverpool Seizure Severity Scale is a patient questionnaire covering perceived seizure severity and the impact of ictal and postictal events. The UCLA Epilepsy Surgery Inventory (ESI-55) assesses quality of life for patients who have undergone surgery for epilepsy using generic health status instruments with additional epilepsy-specific items.(ABSTRACT TRUNCATED AT 250 WORDS)

  8. Beyond greening and browning: the need for an integrated understanding of Arctic change

    NASA Astrophysics Data System (ADS)

    Gamon, J. A.; Huemmrich, K. F.; Hmimina, G.; Yu, R.

    2017-12-01

    Satellite records and field observations provide contradictory evidence for "greening" or "browning" of Arctic tundra. Large-scale observations of apparent greening have been based on satellite vegetation indices (e.g., NDVI). However, a clear interpretation of these trends is confounded by changing snow cover and surface hydrology, both of which influence NDVI and are known to be changing independently of any direct vegetation response. Field studies have demonstrated greening in some areas, but not others, and have also documented changing permafrost depth, surface hydrology and snow cover. Together, these confounding factors can explain some of the contradictory evidence regarding greening and browning. Given the multiple influences on Arctic NDVI, simple conclusions regarding greening and browning from satellite data alone can be incorrect; when these confounding factors are taken into account, some areas that show apparent greening in the satellite record appear to be undergoing productivity declines due to surface drying. These contradictory interpretations have profound implications for our understanding of changing surface energy balance, biogeochemistry, and surface-atmosphere feedbacks. To better address Arctic ecosystem responses to a changing climate, an integrated, multi-scale, multivariate approach that considers hydrology, permafrost, snow cover and vegetation is needed.

  9. Pharmacokinetics and effects on serum cholinesterase activities of organophosphorus pesticides acephate and chlorpyrifos in chimeric mice transplanted with human hepatocytes.

    PubMed

    Suemizu, Hiroshi; Sota, Shigeto; Kuronuma, Miyuki; Shimizu, Makiko; Yamazaki, Hiroshi

    2014-11-01

    Organophosphorus pesticides acephate and chlorpyrifos in foods have the potential to impact human health. The aim of the current study was to investigate the pharmacokinetics of acephate and chlorpyrifos orally administered at lowest-observed-adverse-effect-level doses in chimeric mice transplanted with human hepatocytes. Absorbed acephate and its metabolite methamidophos were detected in serum from wild-type mice and chimeric mice orally administered 150 mg/kg. Approximately 70% inhibition of cholinesterase was evident in plasma of chimeric mice with humanized liver (which have higher serum cholinesterase activities than wild-type mice) 1 day after oral administration of acephate. Adjusted animal biomonitoring equivalents from chimeric mice studies were scaled to human biomonitoring equivalents using known species allometric scaling factors and in vitro metabolic clearance data with a simple physiologically based pharmacokinetic (PBPK) model. Estimated plasma concentrations of acephate and chlorpyrifos in humans were consistent with reported concentrations. Acephate cleared similarly in humans and chimeric mice, but at accidental/incidental overdose levels chlorpyrifos (whose clearance depends on liver metabolism) cleared more slowly from plasma in humans than in mice. The data presented here illustrate how chimeric mice transplanted with human hepatocytes, in combination with a simple PBPK model, can assist evaluations of the toxicological potential of organophosphorus pesticides.
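The species scaling step described above can be illustrated with a generic allometric clearance conversion. This is a sketch only: the 0.75 body-weight exponent is the conventional allometric assumption, and the body weights in the example are illustrative, not values from this study.

```python
def scale_clearance(cl_animal, bw_animal, bw_human, exponent=0.75):
    """Generic allometric scaling of clearance between species:
    CL_human = CL_animal * (BW_human / BW_animal)**exponent.
    The 0.75 exponent is a conventional assumption, not a fitted value."""
    return cl_animal * (bw_human / bw_animal) ** exponent

# Illustrative: mouse clearance 10 mL/min, 0.025 kg mouse, 70 kg human.
cl_human = scale_clearance(10.0, 0.025, 70.0)
```

Real PBPK scaling would combine this with the in vitro metabolic clearance data mentioned in the abstract; the one-liner above captures only the body-weight term.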

  10. Simple processes drive unpredictable differences in estuarine fish assemblages: Baselines for understanding site-specific ecological and anthropogenic impacts

    NASA Astrophysics Data System (ADS)

    Sheaves, Marcus

    2016-03-01

    Predicting patterns of abundance and composition of biotic assemblages is essential to our understanding of key ecological processes, and our ability to monitor, evaluate and manage assemblages and ecosystems. Fish assemblages often vary from estuary to estuary in apparently unpredictable ways, making it challenging to develop a general understanding of the processes that determine assemblage composition. This makes it problematic to transfer understanding from one estuary situation to another and therefore difficult to assemble effective management plans or to assess the impacts of natural and anthropogenic disturbance. Although system-to-system variability is a common property of ecological systems, rather than being random it is the product of complex interactions of multiple causes and effects at a variety of spatial and temporal scales. I investigate the drivers of differences in estuary fish assemblages, to develop a simple model explaining the diversity and complexity of observed estuary-to-estuary differences, and explore its implications for management and conservation. The model attributes apparently unpredictable differences in fish assemblage composition from estuary to estuary to the interaction of species-specific, life history-specific and scale-specific processes. In explaining innate faunal differences among estuaries without the need to invoke complex ecological or anthropogenic drivers, the model provides a baseline against which the effects of additional natural and anthropogenic factors can be evaluated.

  11. Spectroscopic factors in the N = 20 island of inversion: The Nilsson strong-coupling limit

    NASA Astrophysics Data System (ADS)

    Macchiavelli, A. O.; Crawford, H. L.; Campbell, C. M.; Clark, R. M.; Cromaz, M.; Fallon, P.; Jones, M. D.; Lee, I. Y.; Richard, A. L.; Salathe, M.

    2017-11-01

    Spectroscopic factors, extracted from one-neutron knockout and Coulomb dissociation reactions, for transitions from the ground state of 33Mg to the ground-state rotational band in 32Mg, and from 32Mg to low-lying negative-parity states in 31Mg, are interpreted within the rotational model. Associating the ground state of 33Mg and the negative-parity states in 31Mg with the 3/2⁻[321] Nilsson level, the strong-coupling limit gives simple expressions that relate the amplitudes (C_jℓ) of this wave function to the measured cross sections and derived spectroscopic factors (S_jℓ). To obtain consistent agreement with the data within this framework, we find that one requires a modified 3/2⁻[321] wave function with an increased contribution from the spherical 2p3/2 orbit compared to a standard Nilsson calculation. This is consistent with the findings of large-scale shell model calculations and can be traced to weak binding effects that lower the energy of low-ℓ orbitals.

  12. On the nonlinearity of spatial scales in extreme weather attribution statements

    NASA Astrophysics Data System (ADS)

    Angélil, Oliver; Stone, Daíthí; Perkins-Kirkpatrick, Sarah; Alexander, Lisa V.; Wehner, Michael; Shiogama, Hideo; Wolski, Piotr; Ciavarella, Andrew; Christidis, Nikolaos

    2018-04-01

    In the context of ongoing climate change, extreme weather events are drawing increasing attention from the public and news media. A question often asked is how the likelihood of extremes might have been changed by anthropogenic greenhouse-gas emissions. Answers to the question are strongly influenced by the model used, duration, spatial extent, and geographic location of the event—factors that are often overlooked. Using output from four global climate models, we provide attribution statements characterised by a change in probability of occurrence due to anthropogenic greenhouse-gas emissions, for rainfall and temperature extremes occurring at seven discretised spatial scales and three temporal scales. An understanding of the sensitivity of attribution statements to a range of spatial and temporal scales of extremes allows for the scaling of attribution statements, rendering them relevant to other extremes having similar but non-identical characteristics. This is a procedure simple enough to approximate timely estimates of the anthropogenic contribution to the event probability. Furthermore, since real extremes do not have well-defined physical borders, scaling can help quantify uncertainty around attribution results due to uncertainty around the event definition. Results suggest that the sensitivity of attribution statements to spatial scale is similar across models and that the sensitivity of attribution statements to the model used is often greater than the sensitivity to a doubling or halving of the spatial scale of the event. The use of a range of spatial scales allows us to identify a nonlinear relationship between the spatial scale of the event studied and the attribution statement.
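The attribution statements described above are characterised by a change in probability of occurrence; a standard way to express such a change is the risk ratio, often accompanied by the fraction of attributable risk. This is a generic sketch of those two quantities, not code from the study, and the probabilities in the example are illustrative.

```python
def risk_ratio(p_actual, p_natural):
    """Change in probability of occurrence: RR = p_actual / p_natural.
    RR > 1 means the event became more likely in the actual climate."""
    return p_actual / p_natural

def far(p_actual, p_natural):
    """Fraction of attributable risk, a standard companion quantity:
    FAR = 1 - p_natural / p_actual."""
    return 1.0 - p_natural / p_actual

# Illustrative: an event with 2% annual probability now vs 1% in a
# counterfactual natural climate.
rr = risk_ratio(0.02, 0.01)
```

The paper's point about spatial scale can be read as: rr computed for an event defined over one box size need not equal rr for the same event at double or half that size, and the relationship between the two is nonlinear.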

  13. On the nonlinearity of spatial scales in extreme weather attribution statements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angélil, Oliver; Stone, Daíthí; Perkins-Kirkpatrick, Sarah

    In the context of continuing climate change, extreme weather events are drawing increasing attention from the public and news media. A question often asked is how the likelihood of extremes might have been changed by anthropogenic greenhouse-gas emissions. Answers to the question are strongly influenced by the model used, duration, spatial extent, and geographic location of the event—factors that are often overlooked. Using output from four global climate models, we provide attribution statements characterised by a change in probability of occurrence due to anthropogenic greenhouse-gas emissions, for rainfall and temperature extremes occurring at seven discretised spatial scales and three temporal scales. An understanding of the sensitivity of attribution statements to a range of spatial and temporal scales of extremes allows for the scaling of attribution statements, rendering them relevant to other extremes having similar but non-identical characteristics. This is a procedure simple enough to approximate timely estimates of the anthropogenic contribution to the event probability. Furthermore, since real extremes do not have well-defined physical borders, scaling can help quantify uncertainty around attribution results due to uncertainty around the event definition. Results suggest that the sensitivity of attribution statements to spatial scale is similar across models and that the sensitivity of attribution statements to the model used is often greater than the sensitivity to a doubling or halving of the spatial scale of the event. The use of a range of spatial scales allows us to identify a nonlinear relationship between the spatial scale of the event studied and the attribution statement.

  14. On the nonlinearity of spatial scales in extreme weather attribution statements

    DOE PAGES

    Angélil, Oliver; Stone, Daíthí; Perkins-Kirkpatrick, Sarah; ...

    2017-06-17

    In the context of continuing climate change, extreme weather events are drawing increasing attention from the public and news media. A question often asked is how the likelihood of extremes might have been changed by anthropogenic greenhouse-gas emissions. Answers to the question are strongly influenced by the model used, duration, spatial extent, and geographic location of the event—factors that are often overlooked. Using output from four global climate models, we provide attribution statements characterised by a change in probability of occurrence due to anthropogenic greenhouse-gas emissions, for rainfall and temperature extremes occurring at seven discretised spatial scales and three temporal scales. An understanding of the sensitivity of attribution statements to a range of spatial and temporal scales of extremes allows for the scaling of attribution statements, rendering them relevant to other extremes having similar but non-identical characteristics. This is a procedure simple enough to approximate timely estimates of the anthropogenic contribution to the event probability. Furthermore, since real extremes do not have well-defined physical borders, scaling can help quantify uncertainty around attribution results due to uncertainty around the event definition. Results suggest that the sensitivity of attribution statements to spatial scale is similar across models and that the sensitivity of attribution statements to the model used is often greater than the sensitivity to a doubling or halving of the spatial scale of the event. The use of a range of spatial scales allows us to identify a nonlinear relationship between the spatial scale of the event studied and the attribution statement.

  15. Improved technique that allows the performance of large-scale SNP genotyping on DNA immobilized by FTA technology.

    PubMed

    He, Hongbin; Argiro, Laurent; Dessein, Helia; Chevillard, Christophe

    2007-01-01

    FTA technology is a novel method designed to simplify the collection, shipment, archiving and purification of nucleic acids from a wide variety of biological sources. The number of punches that can normally be obtained from a single specimen card is, however, often insufficient for the testing of the large numbers of loci required to identify genetic factors that control human susceptibility or resistance to multifactorial diseases. In this study, we propose an improved technique to perform large-scale SNP genotyping. We applied a whole genome amplification method to amplify DNA from buccal cell samples stabilized using FTA technology. The results show that using the improved technique it is possible to perform up to 15,000 genotypes from one buccal cell sample. Furthermore, the procedure is simple. We consider this improved technique to be a promising method for performing large-scale SNP genotyping because the FTA technology simplifies the collection, shipment, archiving and purification of DNA, while whole genome amplification of FTA card bound DNA produces sufficient material for the determination of thousands of SNP genotypes.

  16. Constraining the Turbulence Scale and Mixing of a Crushed Pulsar Wind Nebula

    NASA Astrophysics Data System (ADS)

    Ng, Chi Yung; Ma, Y. K.; Bucciantini, Niccolo; Slane, Patrick O.; Gaensler, Bryan M.; Temim, Tea

    2016-04-01

    Pulsar wind nebulae (PWNe) are synchrotron-emitting nebulae resulting from the interaction between pulsars' relativistic particle outflows and the ambient medium. The Snail PWN in supernova remnant G327.1-1.1 is a rare system that has recently been crushed by the supernova reverse shock. We carried out radio polarization observations with the Australia Telescope Compact Array and found highly ordered magnetic field structure in the nebula. This result is surprising, given the turbulent environment expected from hydrodynamical simulations. We developed a toy model and compared simple simulations with observations to constrain the characteristic turbulence scale in the PWN and the mixing with supernova ejecta. We estimate that the turbulence scale is about one-eighth to one-sixth of the nebula radius, and that the pulsar wind filling factor is 50-75%. The latter implies substantial mixing of the pulsar wind with the surrounding supernova ejecta. This work is supported by an ECS grant of the Hong Kong Government under HKU 709713P. The Australia Telescope is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO.

  17. On the penetration of a hot diapir through a strongly temperature-dependent viscosity medium

    NASA Technical Reports Server (NTRS)

    Daly, S. F.; Raefsky, A.

    1985-01-01

    The ascent of a hot spherical body through a fluid with a strongly temperature-dependent viscosity has been studied using an axisymmetric finite element method. Numerical solutions span Peclet numbers of 0.1-1000, from constant viscosity up to viscosity variations of 100,000. Both rigid and stress-free boundary conditions were applied at the surface of the sphere. The dependence of drag on viscosity variation was shown to be independent of the stress boundary condition except for a Stokes-flow scaling factor. A Nusselt number parameterization based on the stress-free constant viscosity functional dependence on the Peclet number, scaled by a parameter depending on the viscosity structure, fits both stress-free and rigid boundary condition data above viscosity variations of 100. The temperature scale height was determined as a function of sphere radius. For the simple physical model studied in this paper, pre-heating is required to reduce the ambient viscosity of the country rock to less than 10^22 cm^2/s in order for a 10 km diapir to penetrate a distance of several radii.
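The Stokes-flow scaling factor between the two boundary conditions can be sketched with the classical constant-viscosity drag laws: 6πμRU for a rigid sphere versus 4πμRU for a stress-free (bubble-like) surface. This is a textbook result used for illustration, not code from the study.

```python
import math

def stokes_drag(mu, radius, speed, rigid=True):
    """Constant-viscosity Stokes drag on a sphere.
    Rigid surface:       F = 6 * pi * mu * R * U
    Stress-free surface: F = 4 * pi * mu * R * U
    Only the constant prefactor differs between the two cases."""
    coeff = 6.0 if rigid else 4.0
    return coeff * math.pi * mu * radius * speed
```

The ratio of the two drags is a constant 3/2, which is the sense in which the boundary condition enters only through a scaling factor in the constant-viscosity limit.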

  18. Scaling relations in mountain streams: colluvial and Quaternary controls

    NASA Astrophysics Data System (ADS)

    Brardinoni, Francesco; Hassan, Marwan; Church, Michael

    2010-05-01

    In coastal British Columbia, Canada, the glacial palimpsest profoundly affects the geomorphic structure of mountain drainage basins. In this context, by combining remotely sensed, field- and GIS-based data, we examine the scaling behavior of bankfull width and depth with contributing area in a process-based framework. We propose a novel approach that, by detailing interactions between colluvial and fluvial processes, provides new insights on the geomorphic functioning of mountain channels. This approach evaluates the controls exerted by a parsimonious set of governing factors on channel size. Results indicate that systematic deviations from simple power-law trends in bankfull width and depth are common. Deviations are modulated by interactions between the inherited glacial and paraglacial topography (imposed slope), coarse grain-size fraction, and chiefly the rate of colluvial sediment delivery to streams. Cumulatively, departures produce distal cross-sections that are typically narrower and shallower than expected. These outcomes, while reinforcing the notion that mountain drainage basins in formerly glaciated systems are out of balance with current environmental conditions, show that cross-sectional scaling relations are useful metrics for understanding colluvial-alluvial interactions.
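The "simple power-law trends" against which the deviations above are measured are hydraulic-geometry relations of the form w = a * A**b. A minimal log-log least-squares fit, shown here with synthetic data (this is an illustrative sketch, not the authors' method or data):

```python
import math

def fit_power_law(area, width):
    """Fit w = a * A**b by ordinary least squares in log-log space,
    the standard way to estimate downstream hydraulic geometry."""
    xs = [math.log(a) for a in area]
    ys = [math.log(w) for w in width]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = math.exp(ybar - b * xbar)
    return a, b

# Synthetic data lying exactly on w = 2 * A**0.5
a, b = fit_power_law([1.0, 10.0, 100.0], [2.0, 2.0 * 10.0 ** 0.5, 20.0])
```

Systematic residuals from such a fit, rather than the fitted exponents themselves, are what the abstract interprets in terms of colluvial sediment delivery and inherited glacial topography.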

  19. Toward industrial scale synthesis of ultrapure singlet nanoparticles with controllable sizes in a continuous gas-phase process

    NASA Astrophysics Data System (ADS)

    Feng, Jicheng; Biskos, George; Schmidt-Ott, Andreas

    2015-10-01

    Continuous gas-phase synthesis of nanoparticles is associated with rapid agglomeration, which can be a limiting factor for numerous applications. In this report, we challenge this paradigm by providing experimental evidence to support that gas-phase methods can be used to produce ultrapure non-agglomerated “singlet” nanoparticles having tunable sizes at room temperature. By controlling the temperature in the particle growth zone to guarantee complete coalescence of colliding entities, the size of singlets in principle can be regulated from that of single atoms to any desired value. We assess our results in the context of a simple analytical model to explore the dependence of singlet size on the operating conditions. Agreement of the model with experimental measurements shows that these methods can be effectively used for producing singlets that can be processed further by many alternative approaches. Combined with the capabilities of up-scaling and unlimited mixing that spark ablation enables, this study provides an easy-to-use concept for producing the key building blocks for low-cost industrial-scale nanofabrication of advanced materials.

  20. An optimal modification of a Kalman filter for time scales

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    2003-01-01

    The Kalman filter in question, which was implemented in the time scale algorithm TA(NIST), produces time scales with poor short-term stability. A simple modification of the error covariance matrix allows the filter to produce time scales with good stability at all averaging times, as verified by simulations of clock ensembles.

  1. How Darcy's equation is linked to the linear reservoir at catchment scale

    NASA Astrophysics Data System (ADS)

    Savenije, Hubert H. G.

    2017-04-01

    In groundwater hydrology two simple linear equations exist that describe the relation between groundwater flow and the gradient that drives it: Darcy's equation and the linear reservoir. Both equations are empirical at heart: Darcy's equation at the laboratory scale and the linear reservoir at the watershed scale. Although at first sight they show similarity, without having detailed knowledge of the structure of the underlying aquifers it is not trivial to upscale Darcy's equation to the watershed scale. In this paper, a relatively simple connection is provided between the two, based on the assumption that the groundwater system is organized by an efficient drainage network, a mostly invisible pattern that has evolved over geological time scales. This drainage network provides equally distributed resistance to flow along the streamlines that connect the active groundwater body to the stream, much like a leaf is organized to provide all stomata access to moisture at equal resistance.
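The two empirical linear relations being connected can be written side by side. The parameter values in the example are illustrative assumptions (a hydraulic conductivity, a head gradient, and a reservoir timescale chosen only for demonstration):

```python
def darcy_flux(K, dh, dx):
    """Darcy's law at the laboratory scale: specific discharge
    q = -K * dh/dx, with K the hydraulic conductivity."""
    return -K * dh / dx

def linear_reservoir(storage, k):
    """The linear reservoir at the watershed scale: outflow Q = S / k,
    with S the active groundwater storage and k the reservoir timescale."""
    return storage / k

# Illustrative: K = 1e-4 m/s, head drop of 0.5 m over 10 m
q = darcy_flux(1e-4, -0.5, 10.0)
# Illustrative: 2e6 m^3 of storage draining on a 30-day timescale
Q = linear_reservoir(2e6, 30 * 86400.0)
```

Both are linear in their driving quantity; the paper's contribution is the physical argument (an efficient drainage network providing equal resistance along streamlines) for why the laboratory-scale law aggregates to the watershed-scale one.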

  2. Effects of fear factors in disease propagation

    NASA Astrophysics Data System (ADS)

    Wang, Yubo; Xiao, Gaoxi; Wong, Limsoon; Fu, Xiuju; Ma, Stefan; Hiang Cheng, Tee

    2011-09-01

    Upon an outbreak of a dangerous infectious disease, people generally tend to reduce their contacts with others in fear of getting infected. Such typical actions apparently help slow down the spreading of infection. Thanks to today's broad public media coverage, the fear factor may even contribute to preventing an outbreak from happening. We are motivated to study such effects by adopting a complex network approach. First we evaluate the simple case where connections between individuals are randomly removed due to the fear factor. Then we consider a different case where each individual keeps at least a few connections after contact reduction. Such a case is arguably more realistic since people may choose to keep a few social contacts, e.g., with their family members and closest friends, at any cost. Finally, a study is conducted on the case where connection removals are carried out dynamically while the infection is spreading out. Analytical and simulation results show that the fear factor may not easily prevent an epidemic outbreak from happening in scale-free networks. However, it significantly reduces the fraction of the nodes ever getting infected during the outbreak.
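The contact-reduction cases described above can be sketched as random edge removal with a minimum-contacts floor. This is a toy illustration of the scenario, not the authors' analytical model; the graph, removal fraction, and floor are arbitrary choices.

```python
import random

def prune_edges(adj, remove_frac, k_min=0, rng=None):
    """Remove a random fraction of contacts (the 'fear factor'), but
    never drop a node below k_min contacts -- the case where each
    individual keeps a few close contacts at any cost.
    adj maps each node to a set of neighbours; removal is symmetric.
    Returns the number of edges actually removed."""
    rng = rng or random.Random(0)
    edges = sorted({tuple(sorted((u, v))) for u in adj for v in adj[u]})
    rng.shuffle(edges)
    target = int(len(edges) * remove_frac)
    removed = 0
    for u, v in edges:
        if removed >= target:
            break
        # Skip removals that would strand either endpoint below the floor.
        if len(adj[u]) > k_min and len(adj[v]) > k_min:
            adj[u].discard(v)
            adj[v].discard(u)
            removed += 1
    return removed

# Example: complete graph on 5 nodes, removing about half the contacts
# while guaranteeing everyone keeps at least 2.
adj = {i: {j for j in range(5) if j != i} for i in range(5)}
n_removed = prune_edges(adj, 0.5, k_min=2)
```

With k_min = 0 this reduces to the simple random-removal case; the paper's dynamic case would interleave such removals with the spreading process itself.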

  3. Disease risk curves.

    PubMed

    Hughes, G; Burnett, F J; Havis, N D

    2013-11-01

    Disease risk curves are simple graphical relationships between the probability of need for treatment and evidence related to risk factors. In the context of the present article, our focus is on factors related to the occurrence of disease in crops. Risk is the probability of adverse consequences; specifically in the present context it denotes the chance that disease will reach a threshold level at which crop protection measures can be justified. This article describes disease risk curves that arise when risk is modeled as a function of more than one risk factor, and when risk is modeled as a function of a single factor (specifically the level of disease at an early disease assessment). In both cases, disease risk curves serve as calibration curves that allow the accumulated evidence related to risk to be expressed on a probability scale. When risk is modeled as a function of the level of disease at an early disease assessment, the resulting disease risk curve provides a crop loss assessment model in which the downside is denominated in terms of risk rather than in terms of yield loss.
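A disease risk curve of the kind described, mapping accumulated evidence onto a probability scale, can be sketched as a logistic calibration. The coefficients below are placeholder assumptions, not fitted values from the article.

```python
import math

def risk_curve(evidence, b0=-2.0, b1=1.0):
    """Logistic disease risk curve: the probability that disease reaches
    the treatment threshold, as a function of evidence (e.g., disease
    level at an early assessment). b0 and b1 are illustrative placeholders."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * evidence)))
```

In practice b0 and b1 would be estimated from historical crop data, and treatment would be recommended where the curve exceeds an action threshold chosen from the economics of intervention.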

  4. Early flight test experience with Cockpit Displayed Traffic Information (CDTI)

    NASA Technical Reports Server (NTRS)

    Abbott, T. S.; Moen, G. C.; Person, L. H., Jr.; Keyser, G. L., Jr.; Yenni, K. R.; Garren, J. F., Jr.

    1980-01-01

    Coded symbology, based on the results of early human factors studies, was displayed on the electronic horizontal situation indicator and flight tested on an advanced research aircraft in order to subject the coded traffic symbology to a realistic flight environment and to assess its value by means of a direct comparison with simple, uncoded traffic symbology. The tests consisted of 28 curved, decelerating approaches, flown by research-pilot flight crews. The traffic scenarios involved both conflict-free and blunder situations. Subjective pilot commentary was obtained through the use of a questionnaire and extensive pilot debriefing sessions. The results of these debriefing sessions fall under one of two categories: display factors or task performance. A major item under the display factor category was the problem of display clutter. The primary contributors to clutter were the use of large map-scale factors, the use of traffic data blocks, and the presentation of more than a few aircraft. In terms of task performance, the cockpit displayed traffic information was found to provide excellent overall situation awareness.

  5. Early working memory as a racially and ethnically neutral measure of outcome in extremely preterm children at 18-22 months

    PubMed Central

    Lowe, Jean R.; Duncan, Andrea Freeman; Bann, Carla M.; Fuller, Janell; Hintz, Susan R.; Das, Abhik; Higgins, Rosemary D.; Watterberg, Kristi L.

    2013-01-01

    Background Difficulties with executive function have been found in preterm children, resulting in difficulties with learning and school performance. Aim This study evaluated the relationship of early working memory as measured by object permanence items to the cognitive and language scores on the Bayley Scales-III in a cohort of children born extremely preterm. Study Design Logistic regression models were conducted to compare object permanence scores derived from the Bayley Scales-III by race/ethnicity and maternal education, controlling for medical covariates. Subjects Extremely preterm toddlers (526), who were part of a Eunice Kennedy Shriver National Institute of Child Health and Human Development Neonatal Research Network's multi-center study, were evaluated at 18-22 months corrected age. Outcome Measures Object permanence scores derived from the Bayley Developmental Scales were compared by race/ethnicity and maternal education, controlling for medical covariates. Results There were no significant differences in object permanence mastery and scores among the treatment groups after controlling for medical and social variables, including maternal education and race/ethnicity. Males and children with intraventricular hemorrhage, retinopathy of prematurity, and bronchopulmonary dysplasia were less likely to demonstrate object permanence mastery and had lower object permanence scores. Children who attained object permanence mastery had significantly higher Bayley Scales-III cognitive and language scores after controlling for medical and socio-economic factors. Conclusions Our measure of object permanence is free of influence from race, ethnic and socio-economic factors. Adding this simple task to current clinical practice could help detect early executive function difficulties in young children. PMID:23993309

  6. Early working memory as a racially and ethnically neutral measure of outcome in extremely preterm children at 18-22 months.

    PubMed

    Lowe, Jean R; Duncan, Andrea Freeman; Bann, Carla M; Fuller, Janell; Hintz, Susan R; Das, Abhik; Higgins, Rosemary D; Watterberg, Kristi L

    2013-12-01

    Difficulties with executive function have been found in preterm children, resulting in difficulties with learning and school performance. This study evaluated the relationship of early working memory as measured by object permanence items to the cognitive and language scores on the Bayley Scales-III in a cohort of children born extremely preterm. Logistic regression models were conducted to compare object permanence scores derived from the Bayley Scales-III by race/ethnicity and maternal education, controlling for medical covariates. Extremely preterm toddlers (526), who were part of a Eunice Kennedy Shriver National Institute of Child Health and Human Development Neonatal Research Network's multi-center study, were evaluated at 18-22 months corrected age. Object permanence scores derived from the Bayley Developmental Scales were compared by race/ethnicity and maternal education, controlling for medical covariates. There were no significant differences in object permanence mastery and scores among the treatment groups after controlling for medical and social variables, including maternal education and race/ethnicity. Males and children with intraventricular hemorrhage, retinopathy of prematurity, and bronchopulmonary dysplasia were less likely to demonstrate object permanence mastery and had lower object permanence scores. Children who attained object permanence mastery had significantly higher Bayley Scales-III cognitive and language scores after controlling for medical and socio-economic factors. Our measure of object permanence is free of influence from race, ethnic and socio-economic factors. Adding this simple task to current clinical practice could help detect early executive function difficulties in young children.

  7. D-Light on promoters: a client-server system for the analysis and visualization of cis-regulatory elements

    PubMed Central

    2013-01-01

    Background The binding of transcription factors to DNA plays an essential role in the regulation of gene expression. Numerous experiments elucidated binding sequences which subsequently have been used to derive statistical models for predicting potential transcription factor binding sites (TFBS). The rapidly increasing number of genome sequence data requires sophisticated computational approaches to manage and query experimental and predicted TFBS data in the context of other epigenetic factors and across different organisms. Results We have developed D-Light, a novel client-server software package to store and query large amounts of TFBS data for any number of genomes. Users can add small-scale data to the server database and query them in a large scale, genome-wide promoter context. The client is implemented in Java and provides simple graphical user interfaces and data visualization. Here we also performed a statistical analysis showing what a user can expect for certain parameter settings and we illustrate the usage of D-Light with the help of a microarray data set. Conclusions D-Light is an easy to use software tool to integrate, store and query annotation data for promoters. A public D-Light server, the client and server software for local installation and the source code under GNU GPL license are available at http://biwww.che.sbg.ac.at/dlight. PMID:23617301

  8. Lord-Wingersky Algorithm Version 2.0 for Hierarchical Item Factor Models with Applications in Test Scoring, Scale Alignment, and Model Fit Testing.

    PubMed

    Cai, Li

    2015-06-01

    Lord and Wingersky's (Appl Psychol Meas 8:453-461, 1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined on a grid formed by direct products of quadrature points. However, the increase in computational burden remains exponential in the number of dimensions, making the implementation of the recursive algorithm cumbersome for truly high-dimensional models. In this paper, a dimension reduction method that is specific to the Lord-Wingersky recursions is developed. This method can take advantage of the restrictions implied by hierarchical item factor models, e.g., the bifactor model, the testlet model, or the two-tier model, such that a version of the Lord-Wingersky recursive algorithm can operate on a dramatically reduced set of quadrature points. For instance, in a bifactor model, the dimension of integration is always equal to 2, regardless of the number of factors. The new algorithm not only provides an effective mechanism to produce summed score to IRT scaled score translation tables properly adjusted for residual dependence, but leads to new applications in test scoring, linking, and model fit checking as well. Simulated and empirical examples are used to illustrate the new applications.
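The core recursion for dichotomous items can be sketched as follows. This is a minimal unidimensional version at a single ability value (one quadrature point); the hierarchical dimension-reduction extension that is the paper's contribution is beyond this sketch.

```python
def lord_wingersky(p_correct):
    """Lord-Wingersky recursion: the distribution of the summed score
    over dichotomous items, given each item's probability of a correct
    response at a fixed ability value. Returns a list L with
    L[s] = P(summed score = s)."""
    L = [1.0]  # with zero items, the summed score is 0 with probability 1
    for p in p_correct:
        nxt = [0.0] * (len(L) + 1)
        for s, prob in enumerate(L):
            nxt[s] += prob * (1.0 - p)   # item answered incorrectly
            nxt[s + 1] += prob * p       # item answered correctly
        L = nxt
    return L
```

Summed-score likelihoods built this way at each quadrature point, weighted by the prior, give the posteriors used for summed score to IRT scaled score translation tables.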

  9. Allometric scaling law in a simple oxygen exchanging network: possible implications on the biological allometric scaling laws.

    PubMed

    Santillán, Moisés

    2003-07-21

    A simple model of an oxygen exchanging network is presented and studied. This network's task is to transfer a given oxygen rate from a source to an oxygen consuming system. It consists of a pipeline that interconnects the oxygen consuming system and the reservoir, and of a fluid, the active oxygen transporting element, moving through the pipeline. The network's optimal design (total pipeline surface) and dynamics (volumetric flow of the oxygen transporting fluid), which minimize the energy rate expended in moving the fluid, are calculated in terms of the oxygen exchange rate, the pipeline length, and the pipeline cross-section. After the oxygen exchanging network is optimized, the energy converting system is shown to satisfy a 3/4-like allometric scaling law, based upon the assumption that its performance regime is scale invariant, as well as on some feasible geometric scaling assumptions. Finally, the possible implications of this result for the allometric scaling properties observed elsewhere in living beings are discussed.

  10. Simplifying the use of prognostic information in traumatic brain injury. Part 1: The GCS-Pupils score: an extended index of clinical severity.

    PubMed

    Brennan, Paul M; Murray, Gordon D; Teasdale, Graham M

    2018-06-01

    OBJECTIVE Glasgow Coma Scale (GCS) scores and pupil responses are key indicators of the severity of traumatic brain damage. The aim of this study was to determine what information would be gained by combining these indicators into a single index and to explore the merits of different ways of achieving this. METHODS Information about early GCS scores, pupil responses, late outcomes on the Glasgow Outcome Scale, and mortality was obtained at the individual patient level by reviewing data from the CRASH (Corticosteroid Randomisation After Significant Head Injury; n = 9045) study and the IMPACT (International Mission for Prognosis and Clinical Trials in TBI; n = 6855) database. These data were combined into a pooled data set for the main analysis. Methods of combining the Glasgow Coma Scale and pupil response data varied in complexity from using a simple arithmetic score (GCS score [range 3-15] minus the number of nonreacting pupils [0, 1, or 2]), which we call the GCS-Pupils score (GCS-P; range 1-15), to treating each factor as a separate categorical variable. The information about patient outcome in each of these models was evaluated using Nagelkerke's R². RESULTS Separately, the GCS score and pupil response were each related to outcome. Adding information about the pupil response to the GCS score increased the information yield. The performance of the simple GCS-P was similar to the performance of more complex methods of evaluating traumatic brain damage. The relationship between decreases in the GCS-P and deteriorating outcome was seen across the complete range of possible scores. The additional 2 lowest points offered by the GCS-Pupils scale (GCS-P 1 and 2) extended the information about injury severity from a mortality rate of 51% and an unfavorable outcome rate of 70% at GCS score 3 to a mortality rate of 74% and an unfavorable outcome rate of 90% at GCS-P 1. 
The paradoxical finding that GCS score 4 was associated with a worse outcome than GCS score 3 was not seen when using the GCS-P. CONCLUSIONS A simple arithmetic combination of the GCS score and pupillary response, the GCS-P, extends the information provided about patient outcome to an extent comparable to that obtained using more complex methods. The greater range of injury severities that are identified and the smoothness of the stepwise pattern of outcomes across the range of scores may be useful in evaluating individual patients and identifying patient subgroups. The GCS-P may be a useful platform onto which information about other key prognostic features can be added in a simple format likely to be useful in clinical practice.
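    The arithmetic combination described above is simple enough to write down directly. This is a sketch of the stated formula (GCS score minus the number of nonreacting pupils); the range checks are an illustrative assumption, not part of the paper:

```python
def gcs_pupils(gcs_score, nonreacting_pupils):
    """GCS-P = GCS score (3-15) minus number of nonreacting pupils (0-2),
    giving a range of 1-15. Input validation is illustrative only."""
    if not 3 <= gcs_score <= 15:
        raise ValueError("GCS score must be between 3 and 15")
    if nonreacting_pupils not in (0, 1, 2):
        raise ValueError("nonreacting pupils must be 0, 1, or 2")
    return gcs_score - nonreacting_pupils

assert gcs_pupils(3, 2) == 1    # lowest possible GCS-P
assert gcs_pupils(15, 0) == 15  # highest possible GCS-P
```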

  11. Small-Scale and Low Cost Electrodes for "Standard" Reduction Potential Measurements

    ERIC Educational Resources Information Center

    Eggen, Per-Odd; Kvittingen, Lise

    2007-01-01

    The construction of three simple and inexpensive electrodes (hydrogen, chlorine, and copper) is described. This simple method encourages students to construct their own electrodes and helps them better understand precipitation and other electrochemistry concepts.

  12. Flight Research into Simple Adaptive Control on the NASA FAST Aircraft

    NASA Technical Reports Server (NTRS)

    Hanson, Curtis E.

    2011-01-01

    A series of simple adaptive controllers with varying levels of complexity were designed, implemented and flight tested on the NASA Full-Scale Advanced Systems Testbed (FAST) aircraft. Lessons learned from the development and flight testing are presented.

  13. Understanding relationships among ecosystem services across spatial scales and over time

    NASA Astrophysics Data System (ADS)

    Qiu, Jiangxiao; Carpenter, Stephen R.; Booth, Eric G.; Motew, Melissa; Zipper, Samuel C.; Kucharik, Christopher J.; Loheide, Steven P., II; Turner, Monica G.

    2018-05-01

    Sustaining ecosystem services (ES), mitigating their tradeoffs and avoiding unfavorable future trajectories are pressing social-environmental challenges that require enhanced understanding of their relationships across scales. Current knowledge of ES relationships is often constrained to one spatial scale or one snapshot in time. In this research, we integrated biophysical modeling with future scenarios to examine changes in relationships among eight ES indicators from 2001–2070 across three spatial scales—grid cell, subwatershed, and watershed. We focused on the Yahara Watershed (Wisconsin) in the Midwestern United States—an exemplar for many urbanizing agricultural landscapes. Relationships among ES indicators changed over time; some relationships exhibited high interannual variations (e.g. drainage vs. food production, nitrate leaching vs. net ecosystem exchange) and even reversed signs over time (e.g. perennial grass production vs. phosphorus yield). Robust patterns were detected for relationships among some regulating services (e.g. soil retention vs. water quality) across three spatial scales, but other relationships lacked simple scaling rules. This was especially true for relationships of food production vs. water quality, and drainage vs. number of days with runoff >10 mm, which differed substantially across spatial scales. Our results also showed that local tradeoffs between food production and water quality do not necessarily scale up, so reducing local tradeoffs may be insufficient to mitigate such tradeoffs at the watershed scale. We further synthesized these cross-scale patterns into a typology of factors that could drive changes in ES relationships across scales: (1) effects of biophysical connections, (2) effects of dominant drivers, (3) combined effects of biophysical linkages and dominant drivers, and (4) artificial scale effects, and concluded with management implications. 
Our study highlights the importance of taking a dynamic perspective and accounting for spatial scales in monitoring and management to sustain future ES.

  14. Self-Organized Criticality and Scaling in Lifetime of Traffic Jams

    NASA Astrophysics Data System (ADS)

    Nagatani, Takashi

    1995-01-01

    The deterministic cellular automaton 184 (the one-dimensional asymmetric simple-exclusion model with parallel dynamics) is extended to take into account injection or extraction of particles. The model represents traffic flow on a highway with inflow or outflow of cars. Introducing injection or extraction of particles into the asymmetric simple-exclusion model drives the system asymptotically into a steady state exhibiting self-organized criticality. The typical lifetime ⟨m⟩ of traffic jams scales as ⟨m⟩ ≅ L^ν with ν = 0.65 ± 0.04. It is shown that the cumulative distribution N_m(L) of lifetimes satisfies the finite-size scaling form N_m(L) ≅ L⁻¹ f(m/L^ν).

  15. Phonon scattering in nanoscale systems: lowest order expansion of the current and power expressions

    NASA Astrophysics Data System (ADS)

    Paulsson, Magnus; Frederiksen, Thomas; Brandbyge, Mads

    2006-04-01

    We use the non-equilibrium Green's function method to describe the effects of phonon scattering on the conductance of nano-scale devices. Useful and accurate approximations are developed that both provide (i) computationally simple formulas for large systems and (ii) simple analytical models. In addition, the simple models can be used to fit experimental data and provide physical parameters.

  16. Construction and validation of a measure of integrative well-being in seven languages: the Pemberton Happiness Index.

    PubMed

    Hervás, Gonzalo; Vázquez, Carmelo

    2013-04-22

    We introduce the Pemberton Happiness Index (PHI), a new integrative measure of well-being in seven languages, detailing the validation process and presenting psychometric data. The scale includes eleven items related to different domains of remembered well-being (general, hedonic, eudaimonic, and social well-being) and ten items related to experienced well-being (i.e., positive and negative emotional events that may have happened the day before); the sum of these items produces a combined well-being index. A distinctive characteristic of this study is that, to construct the scale, an initial pool of items covering the remembered and experienced well-being domains was subjected to a complete selection and validation process. These items were based on widely used scales (e.g., PANAS, Satisfaction With Life Scale, Subjective Happiness Scale, and Psychological Well-Being Scales). Both the initial items and reference scales were translated into seven languages and completed via Internet by participants (N = 4,052) aged 16 to 60 years from nine countries (Germany, India, Japan, Mexico, Russia, Spain, Sweden, Turkey, and USA). Results from this initial validation study provided very good support for the psychometric properties of the PHI (i.e., internal consistency, a single-factor structure, and convergent and incremental validity). Given the PHI's good psychometric properties, this simple and integrative index could be used as an instrument to monitor changes in well-being. We discuss the utility of this integrative index to explore well-being in individuals and communities.

  17. Enhanced sensitivity in a butterfly gyroscope with a hexagonal oblique beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Dingbang; Cao, Shijie; Hou, Zhanqiang, E-mail: houzhanqiang@nudt.edu.cn

    2015-04-15

    A new approach to improve the performance of a butterfly gyroscope is developed. The methodology provides a simple way to improve the gyroscope's sensitivity and stability by reducing the resonant frequency mismatch between the drive and sense modes. This method was verified by simulations and theoretical analysis. The size of the hexagonal-section oblique beam is the major factor that influences the resonant frequency mismatch. A prototype, which has an appropriately sized oblique beam, was fabricated using precise, time-controlled multilayer pre-buried masks. The performance of this prototype was compared with a non-tuned gyroscope. The scale factor of the prototype reaches 30.13 mV/(°/s), which is 15 times larger than that obtained from the non-tuned gyroscope. The bias stability of the prototype is 0.8 °/h, which is better than the 5.2 °/h of the non-tuned devices.

  18. Transverse momentum in double parton scattering: factorisation, evolution and matching

    NASA Astrophysics Data System (ADS)

    Buffing, Maarten G. A.; Diehl, Markus; Kasemets, Tomas

    2018-01-01

    We give a description of double parton scattering with measured transverse momenta in the final state, extending the formalism for factorisation and resummation developed by Collins, Soper and Sterman for the production of colourless particles. After a detailed analysis of their colour structure, we derive and solve evolution equations in rapidity and renormalisation scale for the relevant soft factors and double parton distributions. We show how in the perturbative regime, transverse momentum dependent double parton distributions can be expressed in terms of simpler nonperturbative quantities and compute several of the corresponding perturbative kernels at one-loop accuracy. We then show how the coherent sum of single and double parton scattering can be simplified for perturbatively large transverse momenta, and we discuss to which order resummation can be performed with presently available results. As an auxiliary result, we derive a simple form for the square root factor in the Collins construction of transverse momentum dependent parton distributions.

  19. Estimating evapotranspiration in natural and constructed wetlands

    USGS Publications Warehouse

    Lott, R. Brandon; Hunt, Randall J.

    2001-01-01

    Difficulties in accurately calculating evapotranspiration (ET) in wetlands can lead to inaccurate water balances—information important for many compensatory mitigation projects. Simple meteorological methods or off-site ET data often are used to estimate ET, but these approaches do not include potentially important site-specific factors such as plant community, root-zone water levels, and soil properties. The objective of this study was to compare a commonly used meteorological estimate of potential evapotranspiration (PET) with direct measurements of ET (lysimeters and water-table fluctuations) and small-scale root-zone geochemistry in a natural and constructed wetland system. Unlike what has been commonly noted, the results of the study demonstrated that the commonly used Penman combination method of estimating PET underestimated the ET that was measured directly in the natural wetland over most of the growing season. This result is likely due to surface heterogeneity and related roughness effects not included in the simple PET estimate. The meteorological method more closely approximated season-long measured ET rates in the constructed wetland but may overestimate the ET rate late in the growing season. ET rates also were temporally variable in wetlands over a range of time scales because they can be influenced by the relation of the water table to the root zone and the timing of plant senescence. Small-scale geochemical sampling of the shallow root zone was able to provide an independent evaluation of ET rates, supporting the identification of higher ET rates in the natural wetlands and differences in temporal ET rates due to the timing of senescence. These discrepancies illustrate potential problems with extrapolating off-site estimates of ET or single measurements of ET from a site over space or time.

  20. A simple index of stand density for Douglas-fir.

    Treesearch

    R.O. Curtis

    1982-01-01

    The expression RD = G/√Dg, where G is basal area and Dg is quadratic mean stand diameter, provides a simple and convenient scale of relative stand density for Douglas-fir, equivalent to other generally accepted diameter-based stand density measures.
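    As a minimal sketch of the index above (function and variable names are illustrative; units follow whatever system G and Dg are measured in, which the record does not restate):

```python
import math

def relative_density(basal_area, quadratic_mean_diameter):
    """Curtis's relative density index RD = G / sqrt(Dg),
    where G is stand basal area and Dg is quadratic mean diameter."""
    if quadratic_mean_diameter <= 0:
        raise ValueError("quadratic mean diameter must be positive")
    return basal_area / math.sqrt(quadratic_mean_diameter)

# example with made-up values: G = 40, Dg = 16 gives RD = 40/4 = 10
assert relative_density(40.0, 16.0) == 10.0
```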

  1. The Effectiveness of the Component Impact Test Method for the Side Impact Injury Assessment of the Door Trim

    NASA Astrophysics Data System (ADS)

    Youn, Younghan; Koo, Jeong-Seo

    The complete evaluation of the side vehicle structure and occupant protection is possible only by means of a full-scale side impact crash test. However, auto-part manufacturers such as door trim makers cannot conduct this test, especially while the vehicle is still under development. The main objective of this study is to obtain design guidelines from a simple component-level impact test. The relationship between the target absorption energy and the impactor speed was examined using the energy absorbed by the door trim, since each vehicle type requires a different energy level at the door trim. A simple impact test method was developed to estimate abdominal injury by measuring the reaction force of the impactor; the reaction force is converted to an energy level by the proposed formula. The target absorption energy for the door trim alone and the impact speed of the simple impactor are derived theoretically from the conservation of energy. With the calculated speed of the dummy and the effective mass of the abdomen, the energy allocated to the abdominal area of the door trim was calculated. The impactor speed can be calculated from the equivalent energy absorbed by the door trim during the full-scale crash test. The proposed design procedure for the door trim using the simple impact test method was demonstrated by evaluating abdominal injury. This paper also describes a study conducted to determine the sensitivity of several design factors for reducing abdominal injury values using an orthogonal array matrix. In conclusion, with theoretical considerations and empirical test data, the main objective, a standardized door trim design process using the simple impact test method, was established.

  2. Size Distribution of Sea-Salt Emissions as a Function of Relative Humidity

    NASA Astrophysics Data System (ADS)

    Zhang, K. M.; Knipping, E. M.; Wexler, A. S.; Bhave, P. V.; Tonnesen, G. S.

    2004-12-01

    Here we introduce a simple method for correcting sea-salt particle-size distributions as a function of relative humidity. Distinct from previous approaches, our derivation uses particle size at formation as the reference state rather than dry particle size. The correction factors, corresponding to the size at formation and the size at 80% RH, are given as polynomial functions of local relative humidity and are straightforward to implement. Without major compromises, the correction factors are thermodynamically accurate and can be applied between 0.45 and 0.99 RH. Since the thermodynamic properties of sea-salt electrolytes are only weakly dependent on ambient temperature, these factors can be regarded as temperature independent. The correction factor with respect to the size at 80% RH is in excellent agreement with those from Fitzgerald's and Gerber's growth equations, while the correction factor with respect to the size at formation has the advantage of being independent of dry size and of relative humidity at formation. The resultant sea-salt emissions can be used directly in atmospheric model simulations at urban, regional and global scales without further correction. Application of this method to several common open-ocean and surf-zone sea-salt-particle source functions is described.

  3. Factors associated with stress among adolescents in the city of Nawabshah, Pakistan.

    PubMed

    Parpio, Yasmin; Farooq, Salima; Gulzar, Saleema; Tharani, Ambreen; Javed, Fawad; Ali, Tazeen Saeed

    2012-11-01

    To identify the risk factors of stress among school-going adolescents in rural Nawabshah, Pakistan. The cross-sectional study was conducted in 2005, comprising 800 school-going children of 10-16 years of age in Nawabshah, selected through simple random sampling. Data were collected using a structured questionnaire to assess the potential risk factors of stress. A modified version of the Perceived Stress Scale was utilized to measure stress level. SPSS 12 was used for statistical analysis, and multiple linear regression analysis was run to identify the factors associated with stress in the study population. Of the total, 529 (66%) children belonged to state-run schools while 271 (34%) were studying at private facilities. The mean age was 13.7 ± 1.3 years. The level of stress was positively associated with the number of siblings, parental conflicts, the age of the mother and the number of rooms in the household. Stress levels were lower among female adolescents (n=474; 59.3%) who had prior information about pubertal body changes than among the boys (n=326; 40.8%). The study showed that stress among adolescents can be reduced by modifying socio-economic and demographic factors.

  4. Towards a better prediction of peak concentration, volume of distribution and half-life after oral drug administration in man, using allometry.

    PubMed

    Sinha, Vikash K; Vaarties, Karin; De Buck, Stefan S; Fenu, Luca A; Nijsen, Marjoleen; Gilissen, Ron A H J; Sanderson, Wendy; Van Uytsel, Kelly; Hoeben, Eva; Van Peer, Achiel; Mackie, Claire E; Smit, Johan W

    2011-05-01

    It is imperative that new drugs demonstrate adequate pharmacokinetic properties, allowing an optimal safety margin and convenient dosing regimens in clinical practice, which then lead to better patient compliance. Such pharmacokinetic properties include suitable peak (maximum) plasma drug concentration (C(max)), area under the plasma concentration-time curve (AUC) and a suitable half-life (t(½)). The C(max) and t(½) following oral drug administration are functions of the oral clearance (CL/F) and apparent volume of distribution during the terminal phase by the oral route (V(z)/F), each of which may be predicted and combined to estimate C(max) and t(½). Allometric scaling is a widely used methodology in the pharmaceutical industry to predict human pharmacokinetic parameters such as clearance and volume of distribution. In our previous published work, we have evaluated the use of allometry for prediction of CL/F and AUC. In this paper we describe the evaluation of different allometric scaling approaches for the prediction of C(max), V(z)/F and t(½) after oral drug administration in man. Twenty-nine compounds developed at Janssen Research and Development (a division of Janssen Pharmaceutica NV), covering a wide range of physicochemical and pharmacokinetic properties, were selected. The C(max) following oral dosing of a compound was predicted using (i) simple allometry alone; (ii) simple allometry along with correction factors such as plasma protein binding (PPB), maximum life-span potential or brain weight (reverse rule of exponents, unbound C(max) approach); and (iii) an indirect approach using allometrically predicted CL/F and V(z)/F and absorption rate constant (k(a)). The k(a) was estimated from (i) in vivo pharmacokinetic experiments in preclinical species; and (ii) predicted effective permeability in man (P(eff)), using a Caco-2 permeability assay. The V(z)/F was predicted using allometric scaling with or without PPB correction. 
The t(½) was estimated from the allometrically predicted parameters CL/F and V(z)/F. Predictions were deemed adequate when errors were within a 2-fold range. C(max) and t(½) could be predicted within a 2-fold error range for 59% and 66% of the tested compounds, respectively, using allometrically predicted CL/F and V(z)/F. The best predictions for C(max) were obtained when k(a) values were calculated from the Caco-2 permeability assay. The V(z)/F was predicted within a 2-fold error range for 72% of compounds when PPB correction was applied as the correction factor for scaling. We conclude that (i) C(max) and t(½) are best predicted by indirect scaling approaches (using allometrically predicted CL/F and V(z)/F and accounting for k(a) derived from permeability assay); and (ii) the PPB is an important correction factor for the prediction of V(z)/F by using allometric scaling. Furthermore, additional work is warranted to understand the mechanisms governing the processes underlying determination of C(max) so that the empirical approaches can be fine-tuned further.

  5. A PORTRAIT OF COLD GAS IN GALAXIES AT 60 pc RESOLUTION AND A SIMPLE METHOD TO TEST HYPOTHESES THAT LINK SMALL-SCALE ISM STRUCTURE TO GALAXY-SCALE PROCESSES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leroy, Adam K.; Hughes, Annie; Schruba, Andreas

    2016-11-01

    The cloud-scale density, velocity dispersion, and gravitational boundedness of the interstellar medium (ISM) vary within and among galaxies. In turbulent models, these properties play key roles in the ability of gas to form stars. New high-fidelity, high-resolution surveys offer the prospect to measure these quantities across galaxies. We present a simple approach to make such measurements and to test hypotheses that link small-scale gas structure to star formation and galactic environment. Our calculations capture the key physics of the Larson scaling relations, and we show good correspondence between our approach and a traditional “cloud properties” treatment. However, we argue that our method is preferable in many cases because of its simple, reproducible characterization of all emission. Using low-J ¹²CO data from recent surveys, we characterize the molecular ISM at 60 pc resolution in the Antennae, the Large Magellanic Cloud (LMC), M31, M33, M51, and M74. We report the distributions of surface density, velocity dispersion, and gravitational boundedness at 60 pc scales and show galaxy-to-galaxy and intragalaxy variations in each. The distribution of flux as a function of surface density appears roughly lognormal with a 1σ width of ∼0.3 dex, though the center of this distribution varies from galaxy to galaxy. The 60 pc resolution line width and molecular gas surface density correlate well, which is a fundamental behavior expected for virialized or free-falling gas. Varying the measurement scale for the LMC and M31, we show that the molecular ISM has higher surface densities, lower line widths, and more self-gravity at smaller scales.

  6. Investigation of shear damage considering the evolution of anisotropy

    NASA Astrophysics Data System (ADS)

    Kweon, S.

    2013-12-01

    The damage that occurs in shear deformations is investigated in view of anisotropy evolution. It is widely believed in the mechanics research community that damage (or porosity) does not evolve (increase) in shear deformations, since the hydrostatic stress in shear is zero. This paper proves that this statement can be false in large deformations of simple shear. The simulation using the proposed anisotropic ductile fracture model (macro-scale) in this study indicates that the hydrostatic stress becomes nonzero and thus porosity evolves (increases or decreases) in the simple shear deformation of anisotropic (orthotropic) materials. A simple shear simulation using a crystal-plasticity-based damage model (meso-scale) shows the same physics as the macro-scale model: porosity evolves due to grain-to-grain interaction, i.e., due to the evolution of anisotropy. Through a series of simple shear simulations, this study investigates the effect of the evolution of anisotropy, i.e., the rotation of the orthotropic axes, on the evolution of damage (porosity). The effects of the evolution of void orientation and void shape on damage (porosity) evolution are investigated as well. It is found that the interaction among porosity, matrix anisotropy, and void orientation/shape plays a crucial role in the ductile damage of porous materials.

  7. Defining Simple nD Operations Based on Prismatic nD Objects

    NASA Astrophysics Data System (ADS)

    Arroyo Ohori, K.; Ledoux, H.; Stoter, J.

    2016-10-01

    An alternative to the traditional approaches to model separately 2D/3D space, time, scale and other parametrisable characteristics in GIS lies in the higher-dimensional modelling of geographic information, in which a chosen set of non-spatial characteristics, e.g. time and scale, are modelled as extra geometric dimensions perpendicular to the spatial ones, thus creating a higher-dimensional model. While higher-dimensional models are undoubtedly powerful, they are also hard to create and manipulate due to our lack of an intuitive understanding of dimensions higher than three. As a solution to this problem, this paper proposes a methodology that makes nD object generation easier by splitting the creation and manipulation process into three steps: (i) constructing simple nD objects based on nD prismatic polytopes (analogous to prisms in 3D), (ii) defining simple modification operations at the vertex level, and (iii) simple postprocessing to fix errors introduced in the model. As a use case, we show how two sets of operations can be defined and implemented in a dimension-independent manner using this methodology: the most common transformations (i.e. translation, scaling and rotation) and the collapse of objects. The nD objects generated in this manner can then be used as a basis for an nD GIS.

  8. The London handicap scale: a re-evaluation of its validity using standard scoring and simple summation.

    PubMed

    Jenkinson, C; Mant, J; Carter, J; Wade, D; Winner, S

    2000-03-01

    To assess the validity of the London handicap scale (LHS) using a simple unweighted scoring system compared with traditional weighted scoring, 323 patients admitted to hospital with acute stroke were followed up by interview 6 months after their stroke as part of a trial looking at the impact of a family support organiser. Outcome measures included the six-item LHS, the Dartmouth COOP charts, the Frenchay activities index, the Barthel index, and the hospital anxiety and depression scale. Patients' handicap scores were calculated both using the standard (weighted) procedure for the LHS and using a simple summation procedure without weighting (U-LHS). Construct validity of both the LHS and U-LHS was assessed by testing their correlations with the other outcome measures. Cronbach's alpha for the LHS was 0.83. The U-LHS was highly correlated with the LHS (r=0.98). Correlation of the U-LHS with the other outcome measures gave very similar results to correlation of the LHS with these measures. Simple summation scoring of the LHS does not lead to any change in the measurement properties of the instrument compared with standard weighted scoring. Unweighted scores are easier to calculate and interpret, so it is recommended that these are used.

  9. A New Technique for Personality Scale Construction. Preliminary Findings.

    ERIC Educational Resources Information Center

    Schaffner, Paul E.; Darlington, Richard B.

    Most methods of personality scale construction have clear statistical disadvantages. A hybrid method (Darlington and Bishop, 1966) was found to increase scale validity more than any other method, with large item pools. A simple modification of the Darlington-Bishop method (algebraically and conceptually similar to ridge regression, but…

  10. Regimes of stability and scaling relations for the removal time in the asteroid belt: a simple kinetic model and numerical tests

    NASA Astrophysics Data System (ADS)

    Cubrovic, Mihailo

    2005-02-01

    We report on our theoretical and numerical results concerning the transport mechanisms in the asteroid belt. We first derive a simple kinetic model of chaotic diffusion and show how it gives rise to some simple correlations (but not laws) between the removal time (the time for an asteroid to experience a qualitative change of dynamical behavior and enter a wide chaotic zone) and the Lyapunov time. The correlations are shown to arise in two different regimes, characterized by exponential and power-law scalings. We also show how the so-called “stable chaos” (exponential regime) is related to anomalous diffusion. Finally, we check our results numerically and discuss their possible applications in analyzing the motion of particular asteroids.

  11. SimpleBox 4.0: Improving the model while keeping it simple….

    PubMed

    Hollander, Anne; Schoorl, Marian; van de Meent, Dik

    2016-04-01

    Chemical behavior in the environment is often modeled with multimedia fate models. SimpleBox is one often-used multimedia fate model, first developed in 1986. Since then, two updated versions have been published. Based on recent scientific developments and experience with SimpleBox 3.0, a new version of SimpleBox was developed and is made public here: SimpleBox 4.0. In this new model, eight major changes were implemented: removal of the local scale and vegetation compartments; addition of lake compartments and deep ocean compartments (including the thermohaline circulation); implementation of intermittent rain instead of drizzle and of depth-dependent soil concentrations; and adjustment of the partitioning behavior for organic acids and bases as well as of the value for the enthalpy of vaporization. In this paper, the effects of the model changes in SimpleBox 4.0 on the predicted steady-state concentrations of chemical substances were explored for different substance groups (neutral organic substances, acids, bases, metals) in a standard emission scenario. In general, the largest differences between the predicted concentrations in the new and the old model are caused by the implementation of layered ocean compartments. Undesirably high model complexity caused by the vegetation compartments and the local scale was removed to increase the simplicity and user friendliness of the model.

  12. Validity and Reliability of the Turkish Chronic Pain Acceptance Questionnaire

    PubMed

    Akmaz, Hazel Ekin; Uyar, Meltem; Kuzeyli Yıldırım, Yasemin; Akın Korhan, Esra

    2018-05-29

    Pain acceptance is the process of giving up the struggle with pain and learning to live a worthwhile life despite it. In assessing patients with chronic pain in Turkey, making a diagnosis and tracking the effectiveness of treatment is done with scales that have been translated into Turkish. However, there is as yet no valid and reliable scale in Turkish to assess the acceptance of pain. To validate a Turkish version of the Chronic Pain Acceptance Questionnaire developed by McCracken and colleagues. Methodological and cross sectional study. A simple randomized sampling method was used in selecting the study sample. The sample was composed of 201 patients, more than 10 times the number of items examined for validity and reliability in the study, which totaled 20. A patient identification form, the Chronic Pain Acceptance Questionnaire, and the Brief Pain Inventory were used to collect data. Data were collected by face-to-face interviews. In the validity testing, the content validity index was used to evaluate linguistic equivalence, content validity, construct validity, and expert views. In reliability testing of the scale, Cronbach’s α coefficient was calculated, and item analysis and split-test reliability methods were used. Principal component analysis and varimax rotation were used in factor analysis and to examine factor structure for construct concept validity. The item analysis established that the scale, all items, and item-total correlations were satisfactory. The mean total score of the scale was 21.78. The internal consistency coefficient was 0.94, and the correlation between the two halves of the scale was 0.89. The Chronic Pain Acceptance Questionnaire, which is intended to be used in Turkey upon confirmation of its validity and reliability, is an evaluation instrument with sufficient validity and reliability, and it can be reliably used to examine patients’ acceptance of chronic pain.
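The reliability statistics reported above (Cronbach's α and the correlation between the two halves of the scale) are straightforward to compute from an item-score matrix. A minimal NumPy sketch, assuming a respondents-by-items array of numeric item scores; the function names and the Spearman-Brown-corrected odd/even split are illustrative conventions, not details taken from the study:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

def split_half(items: np.ndarray) -> float:
    """Split-half reliability: correlate odd- and even-item half scores,
    then apply the Spearman-Brown correction for full test length."""
    odd = items[:, 0::2].sum(axis=1)
    even = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)
```

With a 201-respondent, 20-item matrix such as the one described in the abstract, both statistics fall between 0 and 1, and values near 0.9 and above indicate high internal consistency.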

  13. Validity and Reliability of the Turkish Chronic Pain Acceptance Questionnaire

    PubMed Central

    Akmaz, Hazel Ekin; Uyar, Meltem; Kuzeyli Yıldırım, Yasemin; Akın Korhan, Esra

    2018-01-01

    Background: Pain acceptance is the process of giving up the struggle with pain and learning to live a worthwhile life despite it. In assessing patients with chronic pain in Turkey, making a diagnosis and tracking the effectiveness of treatment is done with scales that have been translated into Turkish. However, there is as yet no valid and reliable scale in Turkish to assess the acceptance of pain. Aims: To validate a Turkish version of the Chronic Pain Acceptance Questionnaire developed by McCracken and colleagues. Study Design: Methodological and cross sectional study. Methods: A simple randomized sampling method was used in selecting the study sample. The sample was composed of 201 patients, more than 10 times the number of items examined for validity and reliability in the study, which totaled 20. A patient identification form, the Chronic Pain Acceptance Questionnaire, and the Brief Pain Inventory were used to collect data. Data were collected by face-to-face interviews. In the validity testing, the content validity index was used to evaluate linguistic equivalence, content validity, construct validity, and expert views. In reliability testing of the scale, Cronbach’s α coefficient was calculated, and item analysis and split-test reliability methods were used. Principal component analysis and varimax rotation were used in factor analysis and to examine factor structure for construct concept validity. Results: The item analysis established that the scale, all items, and item-total correlations were satisfactory. The mean total score of the scale was 21.78. The internal consistency coefficient was 0.94, and the correlation between the two halves of the scale was 0.89. Conclusion: The Chronic Pain Acceptance Questionnaire, which is intended to be used in Turkey upon confirmation of its validity and reliability, is an evaluation instrument with sufficient validity and reliability, and it can be reliably used to examine patients’ acceptance of chronic pain. 
PMID:29843496

  14. Graphene thickness dependent adhesion force and its correlation to surface roughness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pourzand, Hoorad; Tabib-Azar, Massood, E-mail: azar.m@utah.edu; Biomedical Engineering, University of Utah, Salt Lake City, Utah 84112

    2014-04-28

    In this paper, the adhesion force of graphene layers on 300 nm silicon oxide is studied. A simple model for measuring the adhesion force of a flat surface with sub-nanometer roughness was developed, and it is shown that small surface roughness decreases the adhesion force while large roughness results in an effectively larger adhesion force. We also show that surface roughness over scales comparable to the tip radius increases the effective adhesion force measured by atomic force microscopy by nearly a factor of two. Thus, we demonstrate that surface roughness is an important parameter that should be taken into account in analyzing adhesion force measurements.

  15. Mechanisms of jamming in the Nagel-Schreckenberg model for traffic flow.

    PubMed

    Bette, Henrik M; Habel, Lars; Emig, Thorsten; Schreckenberg, Michael

    2017-01-01

    We study the Nagel-Schreckenberg cellular automata model for traffic flow by both simulations and analytical techniques. To better understand the nature of the jamming transition, we analyze the fraction of stopped cars P(v=0) as a function of the mean car density. We present a simple argument that yields an estimate for the free density where jamming occurs, and show satisfying agreement with simulation results. We demonstrate that the fraction of jammed cars P(v∈{0,1}) can be decomposed into the three factors (jamming rate, jam lifetime, and jam size) for which we derive, from random walk arguments, exponents that control their scaling close to the critical density.
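The Nagel-Schreckenberg update rule (parallel acceleration, braking to the gap ahead, random slowdown, then movement) is compact enough to sketch directly. A minimal ring-road implementation in Python; the parameter values and helper names are illustrative, not taken from the paper:

```python
import random

def nasch_step(pos, vel, L, vmax=5, p=0.3, rng=random):
    """One parallel update of the Nagel-Schreckenberg cellular automaton
    on a ring of L cells. pos/vel are per-car lists; returns new lists."""
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])   # cars in spatial order
    new_vel = vel[:]
    for k, i in enumerate(order):
        ahead = order[(k + 1) % n]                   # next car around the ring
        gap = (pos[ahead] - pos[i] - 1) % L          # empty cells in front
        v = min(vel[i] + 1, vmax)                    # 1. accelerate
        v = min(v, gap)                              # 2. brake to avoid collision
        if v > 0 and rng.random() < p:               # 3. random slowdown
            v -= 1
        new_vel[i] = v
    new_pos = [(pos[i] + new_vel[i]) % L for i in range(n)]
    return new_pos, new_vel

def stopped_fraction(vel):
    """P(v = 0): the fraction of stopped cars analyzed in the paper."""
    return sum(v == 0 for v in vel) / len(vel)
```

In the deterministic limit p = 0, a dilute ring relaxes to free flow with no stopped cars, while a fully packed ring has every car blocked, bracketing the jamming transition studied in the abstract.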

  16. Mechanisms of jamming in the Nagel-Schreckenberg model for traffic flow

    NASA Astrophysics Data System (ADS)

    Bette, Henrik M.; Habel, Lars; Emig, Thorsten; Schreckenberg, Michael

    2017-01-01

    We study the Nagel-Schreckenberg cellular automata model for traffic flow by both simulations and analytical techniques. To better understand the nature of the jamming transition, we analyze the fraction of stopped cars P(v=0) as a function of the mean car density. We present a simple argument that yields an estimate for the free density where jamming occurs, and show satisfying agreement with simulation results. We demonstrate that the fraction of jammed cars P(v∈{0,1}) can be decomposed into the three factors (jamming rate, jam lifetime, and jam size) for which we derive, from random walk arguments, exponents that control their scaling close to the critical density.

  17. Air-tolerant Fabrication and Enhanced Thermoelectric Performance of n-Type Single-walled Carbon Nanotubes Encapsulating 1,1'-Bis(diphenylphosphino)ferrocene.

    PubMed

    Nonoguchi, Yoshiyuki; Iihara, Yu; Ohashi, Kenji; Murayama, Tomoko; Kawai, Tsuyoshi

    2016-09-06

    The thermally-triggered n-type doping of single-walled carbon nanotubes is demonstrated using 1,1'-bis(diphenylphosphino)ferrocene, a novel n-type dopant. Through a simple thermal vacuum process, the phosphine compounds are moderately encapsulated inside single-walled carbon nanotubes. The encapsulation into SWNTs is carefully characterized using Raman/X-ray spectroscopy and transmission electron microscopy. This easy-to-handle doping with air-stable precursors for n-type SWNTs enables the large-scale fabrication of thermoelectric materials showing an excellent power factor exceeding approximately 240 μW m(-1) K(-2). © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Scaling up Effects in the Organic Laboratory

    ERIC Educational Resources Information Center

    Persson, Anna; Lindstrom, Ulf M.

    2004-01-01

    A simple and effective way of exposing chemistry students to some of the effects of scaling up an organic reaction is described. It gives students an experience of what they may encounter in an industrial setting.

  19. Extracting near-surface QL between 1-4 Hz from higher-order noise correlations in the Euroseistest area, Greece

    NASA Astrophysics Data System (ADS)

    Haendel, A.; Ohrnberger, M.; Krüger, F.

    2016-11-01

    Knowledge of the quality factor of near-surface materials is of fundamental interest in various applications. Attenuation can be very strong close to the surface and thus needs to be properly assessed. In recent years, several researchers have studied the retrieval of attenuation coefficients from the cross correlation of ambient seismic noise. Yet, the determination of exact amplitude information from noise-correlation functions is, in contrast to the extraction of traveltimes, not trivial. Most of the studies estimated attenuation coefficients on the regional scale and within the microseism band. In this paper, we investigate the possibility to derive attenuation coefficients from seismic noise at much shallower depths and higher frequencies (>1 Hz). The Euroseistest area in northern Greece offers ideal conditions to study quality factor retrieval from ambient noise for different rock types. Correlations are computed between the stations of a small scale array experiment (station spacings <2 km) that was carried out in the Euroseistest area in 2011. We employ the correlation of the coda of the correlation (C3) method instead of simple cross correlations to mitigate the effect of uneven noise source distributions on the correlation amplitude. Transient removal and temporal flattening are applied instead of 1-bit normalization in order to retain relative amplitudes. The C3 method leads to improved correlation results (higher signal-to-noise ratio and improved time symmetry) compared to simple cross correlations. The C3 functions are rotated from the ZNE to the ZRT system and we focus on Love wave arrivals on the transverse component and on Love wave quality factors QL. The analysis is performed for selected stations being either situated on soft soil or on weathered rock. Phase slowness is extracted using a slant-stack method. Attenuation parameters are inferred by inspecting the relative amplitude decay of Love waves with increasing interstation distance. 
We observe that the attenuation coefficient γ and QL can be reliably extracted for stations situated on soft soil, whereas the derivation of attenuation parameters is more problematic for stations located on weathered rock. The results are in acceptable agreement with theoretical Love wave attenuation curves computed using 1-D shear wave velocity and quality factor profiles from the Euroseistest area.

  20. Dynamical Mass Measurements of Contaminated Galaxy Clusters Using Machine Learning

    NASA Astrophysics Data System (ADS)

    Ntampaka, M.; Trac, H.; Sutherland, D. J.; Fromenteau, S.; Póczos, B.; Schneider, J.

    2016-11-01

    We study dynamical mass measurements of galaxy clusters contaminated by interlopers and show that a modern machine learning algorithm can predict masses by better than a factor of two compared to a standard scaling relation approach. We create two mock catalogs from Multidark’s publicly available N-body MDPL1 simulation, one with perfect galaxy cluster membership information and the other where a simple cylindrical cut around the cluster center allows interlopers to contaminate the clusters. In the standard approach, we use a power-law scaling relation to infer cluster mass from galaxy line-of-sight (LOS) velocity dispersion. Assuming perfect membership knowledge, this unrealistic case produces a wide fractional mass error distribution, with a width of {{Δ }}ε ≈ 0.87. Interlopers introduce additional scatter, significantly widening the error distribution further ({{Δ }}ε ≈ 2.13). We employ the support distribution machine (SDM) class of algorithms to learn from distributions of data to predict single values. Applied to distributions of galaxy observables such as LOS velocity and projected distance from the cluster center, SDM yields better than a factor-of-two improvement ({{Δ }}ε ≈ 0.67) for the contaminated case. Remarkably, SDM applied to contaminated clusters is better able to recover masses than even the scaling relation approach applied to uncontaminated clusters. We show that the SDM method more accurately reproduces the cluster mass function, making it a valuable tool for employing cluster observations to evaluate cosmological models.
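The baseline method described above, a power-law scaling relation between line-of-sight velocity dispersion and cluster mass, amounts to a straight-line fit in log-log space. A minimal sketch with synthetic numbers only; the function names are illustrative and the actual MDPL1 calibration is not reproduced here:

```python
import numpy as np

def fit_scaling_relation(sigma_v, mass):
    """Least-squares fit of log10(M) = alpha * log10(sigma_v) + beta,
    the standard power-law scaling-relation approach."""
    alpha, beta = np.polyfit(np.log10(sigma_v), np.log10(mass), 1)
    return alpha, beta

def predict_mass(sigma_v, alpha, beta):
    """Infer cluster mass from line-of-sight velocity dispersion."""
    return 10.0 ** (alpha * np.log10(sigma_v) + beta)
```

Given a catalog that follows an exact power law, the fit recovers the slope and normalization; with real (and especially contaminated) clusters, scatter about this line is what widens the fractional mass error distribution discussed in the abstract.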

  1. Estimation of root zone storage capacity at the catchment scale using improved Mass Curve Technique

    NASA Astrophysics Data System (ADS)

    Zhao, Jie; Xu, Zongxue; Singh, Vijay P.

    2016-09-01

    The root zone storage capacity (Sr) greatly influences runoff generation, soil water movement, and vegetation growth, and is hence an important variable for ecological and hydrological modelling. However, due to the great heterogeneity in soil texture and structure, there is presently no effective approach to monitor or estimate Sr at the catchment scale. To fill the gap, in this study the Mass Curve Technique (MCT) was improved by incorporating a snowmelt module for the estimation of Sr at the catchment scale in different climatic regions. The "range of perturbation" method was also used to generate different scenarios for determining the sensitivity of the improved MCT-derived Sr to its influencing factors, after evaluating the plausibility of Sr derived from the improved MCT. The results can be summarized as follows: (i) Sr estimates of different catchments varied greatly from ∼10 mm to ∼200 mm with changes in climatic conditions and underlying surface characteristics. (ii) The improved MCT is a simple but powerful tool for Sr estimation in different climatic regions of China, and incorporation of more catchments into Sr comparisons can further improve our knowledge of the variability of Sr. (iii) Variation of Sr values is an integrated consequence of variations in rainfall, snowmelt water, and evapotranspiration; Sr values are most sensitive to variations in ecosystem evapotranspiration. Moreover, Sr values with a longer return period are more stable than those with a shorter return period when affected by fluctuations in influencing factors.

  2. Motor impairment predicts falls in specialized Alzheimer care units.

    PubMed

    Camicioli, Richard; Licis, Lisa

    2004-01-01

    We sought to identify clinical risk factors for falls in people with advanced Alzheimer disease (AD) in a prospective longitudinal observational study set in specialized AD care units. Forty-two patients with probable or possible AD were recruited. Age, sex, Mini-Mental State Examination (MMSE), Clinical Dementia Rating Scale, Neuropsychiatric Inventory/Nursing Home, Morse Fall Scale (MFS), modified Unified Parkinson's Disease Rating Scale (mUPDRS), and gait parameters using a GAITRite Gold Walkway System with and without dual-task performance were examined. Time to a first fall was the primary outcome measure, and independent risk factors were identified. Participating subjects were old (non-fallers: 82.3 +/- 6.7 years; fallers: 83.1 +/- 9.6 years; p = 0.76) and predominantly women (36 female/6 male). Fallers did not differ from non-fallers on any parameter except the MFS (non-fallers: 35.6 +/- 26.1; fallers: 54.4 +/- 29.8; p = 0.04), the mUPDRS (non-fallers: 4.75 +/- 3.98; fallers: 7.61 +/- 4.3; p = 0.03), and cadence (steps per minute: non-fallers: 102.3 +/- 12.3; fallers: 91.7 +/- 16; p = 0.02). Fallers and non-fallers were equally affected by dual-task performance. The hazard ratios for the MFS, mUPDRS, and cadence were not affected by adjusting for age, sex, MMSE, or NPI scores. In conclusion, falls in advanced AD can be predicted using simple clinical measures of motor impairment or cadence. These measures may be useful for targeting interventions.

  3. Turbulent Plume Dispersion over Two-dimensional Idealized Urban Street Canyons

    NASA Astrophysics Data System (ADS)

    Wong, C. C. C.; Liu, C. H.

    2012-04-01

    Human activities are the primary pollutant sources degrading the quality of life in the current era of dense and compact cities. A simple and reasonably accurate pollutant dispersion model is helpful for reducing pollutant concentrations at the city or neighborhood scale by refining architectural design or urban planning. The conventional method to estimate the pollutant concentration from point/line sources is the Gaussian plume model with empirical dispersion coefficients. It is reasonably accurate in rural areas. However, the dispersion coefficients account only for atmospheric stability and streamwise distance, overlooking the roughness of urban surfaces. Large-scale buildings erected in urban areas significantly modify the surface roughness, which in turn affects pollutant transport in the urban canopy layer (UCL). We hypothesize that the aerodynamic resistance is another factor governing the dispersion coefficient in the UCL. This study is thus conceived to study the effects of urban roughness on pollutant dispersion coefficients and plume behavior. Large-eddy simulations (LESs) are carried out to examine plume dispersion from a ground-level pollutant source over idealized 2D street canyons in neutral stratification. Computations with a wide range of aspect ratios (ARs), covering the skimming flow to isolated flow regimes, are conducted. The vertical profiles of pollutant distribution for different values of the friction factor are compared; all reach a self-similar Gaussian shape. Preliminary results show that the pollutant dispersion is closely related to the friction factor. For relatively small roughness, the dispersion coefficients vary linearly with the friction factor until the roughness exceeds a certain level. When the friction factor is large, its effect on the dispersion coefficient is less significant.
Since the linear region covers at least one-third of the full range of the friction factor in our empirical analysis, urban roughness is a major factor in the dispersion coefficient. The downstream air quality could then be a function of both atmospheric stability and urban roughness.
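The conventional Gaussian plume model that the abstract takes as its starting point has a closed form: a point source of strength Q at effective height H, advected by a mean wind u, spreads laterally and vertically with distance-dependent coefficients σy(x), σz(x), with an image source accounting for ground reflection. A minimal sketch; the power-law dispersion coefficients below are hypothetical stand-ins for the empirical stability-class values:

```python
import math

def gaussian_plume(x, y, z, Q, u, H, sigma_y, sigma_z):
    """Concentration at (x, y, z) downwind of a point source of strength Q
    at effective height H in mean wind u, with reflection at the ground."""
    sy, sz = sigma_y(x), sigma_z(x)
    lateral = math.exp(-y * y / (2 * sy * sy))
    vertical = (math.exp(-(z - H) ** 2 / (2 * sz * sz))
                + math.exp(-(z + H) ** 2 / (2 * sz * sz)))  # image-source term
    return Q / (2 * math.pi * u * sy * sz) * lateral * vertical

# Hypothetical rural-type dispersion coefficients, sigma = a * x**b.
sigma_y = lambda x: 0.08 * x ** 0.9
sigma_z = lambda x: 0.06 * x ** 0.85
```

The abstract's point is that σy and σz parameterized only by stability and distance miss urban surface roughness; in this framing, an aerodynamic-resistance or friction-factor dependence would enter through the σ(x) functions.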

  4. How fast do living organisms move: Maximum speeds from bacteria to elephants and whales

    NASA Astrophysics Data System (ADS)

    Meyer-Vernet, Nicole; Rospars, Jean-Pierre

    2015-08-01

    Despite their variety and complexity, living organisms obey simple scaling laws due to the universality of the laws of physics. In the present paper, we study the scaling between maximum speed and size, from bacteria to the largest mammals. While the preferred speed has been widely studied in the framework of Newtonian mechanics, the maximum speed has rarely attracted the interest of physicists, despite its remarkable scaling property; it is roughly proportional to length throughout nearly the whole range of running and swimming organisms. We propose a simple order-of-magnitude interpretation of this ubiquitous relationship, based on physical properties shared by life forms of very different body structure and varying by more than 20 orders of magnitude in body mass.

  5. Simple scaling of catastrophic landslide dynamics.

    PubMed

    Ekström, Göran; Stark, Colin P

    2013-03-22

    Catastrophic landslides involve the acceleration and deceleration of millions of tons of rock and debris in response to the forces of gravity and dissipation. Their unpredictability and frequent location in remote areas have made observations of their dynamics rare. Through real-time detection and inverse modeling of teleseismic data, we show that landslide dynamics are primarily determined by the length scale of the source mass. When combined with geometric constraints from satellite imagery, the seismically determined landslide force histories yield estimates of landslide duration, momenta, potential energy loss, mass, and runout trajectory. Measurements of these dynamical properties for 29 teleseismogenic landslides are consistent with a simple acceleration model in which height drop and rupture depth scale with the length of the failing slope.

  6. Scaling up digital circuit computation with DNA strand displacement cascades.

    PubMed

    Qian, Lulu; Winfree, Erik

    2011-06-03

    To construct sophisticated biochemical circuits from scratch, one needs to understand how simple the building blocks can be and how robustly such circuits can scale up. Using a simple DNA reaction mechanism based on a reversible strand displacement process, we experimentally demonstrated several digital logic circuits, culminating in a four-bit square-root circuit that comprises 130 DNA strands. These multilayer circuits include thresholding and catalysis within every logical operation to perform digital signal restoration, which enables fast and reliable function in large circuits with roughly constant switching time and linear signal propagation delays. The design naturally incorporates other crucial elements for large-scale circuitry, such as general debugging tools, parallel circuit preparation, and an abstraction hierarchy supported by an automated circuit compiler.

  7. Active earth pressure model tests versus finite element analysis

    NASA Astrophysics Data System (ADS)

    Pietrzak, Magdalena

    2017-06-01

    The purpose of the paper is to compare failure mechanisms observed in small-scale model tests on a granular sample in the active state with those simulated by the finite element method (FEM) using Plaxis 2D software. Small-scale model tests were performed on a rectangular granular sample retained by a rigid wall. Deformation of the sample resulted from simple wall translation in the direction "from the soil" (the active earth pressure state). A simple Coulomb-Mohr model for soil can be helpful in interpreting experimental findings for granular materials. It was found that the general alignment of the strain localization pattern (failure mechanism) may be a macro-scale feature dominated by the test boundary conditions rather than by the nature of the granular sample.

  8. Scale invariance of temporal order discrimination using complex, naturalistic events

    PubMed Central

    Kwok, Sze Chai; Macaluso, Emiliano

    2015-01-01

    Recent demonstrations of scale invariance in cognitive domains prompted us to investigate whether a scale-free pattern might exist in retrieving the temporal order of events from episodic memory. We present four experiments using an encoding-retrieval paradigm with naturalistic stimuli (movies or video clips). Our studies show that temporal order judgement retrieval times were negatively correlated with the temporal separation between two events in the movie. This relation held, irrespective of whether temporal distances were on the order of tens of minutes (Exp 1−2) or just a few seconds (Exp 3−4). Using the SIMPLE model, we factored in the retention delays between encoding and retrieval (delays of 24 h, 15 min, 1.5–2.5 s, and 0.5 s for Exp 1–4, respectively) and computed a temporal similarity score for each trial. We found a positive relation between similarity and retrieval times; that is, the more temporally similar two events, the slower the retrieval of their temporal order. Using Bayesian analysis, we confirmed the equivalence of the RT/similarity relation across all experiments, which included a vast range of temporal distances and retention delays. These results provide evidence for scale invariance during the retrieval of temporal order of episodic memories. PMID:25909581
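In the SIMPLE model, as commonly formulated, the similarity of two memory traces depends on their distance on a logarithmic time scale, which is what makes the prediction scale invariant: rescaling both retention delays by a common factor leaves similarity unchanged. A hedged one-function sketch (c is a free scaling parameter, not a value fitted in the study):

```python
import math

def simple_similarity(t1, t2, c=1.0):
    """SIMPLE-style temporal similarity of two memory traces at retention
    delays t1, t2 (> 0): exp(-c * distance on a logarithmic time scale)."""
    return math.exp(-c * abs(math.log(t1) - math.log(t2)))
```

Because only the ratio t1/t2 matters, two events separated by 30 s at a 60 s delay are as confusable as two events separated by 300 s at a 600 s delay, mirroring the equivalence across delays of 0.5 s to 24 h reported above.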

  9. Applying the scientific method to small catchment studies: A review of the Panola Mountain experience

    USGS Publications Warehouse

    Hooper, R.P.

    2001-01-01

    A hallmark of the scientific method is its iterative application to a problem to increase and refine the understanding of the underlying processes controlling it. A successful iterative application of the scientific method to catchment science (including the fields of hillslope hydrology and biogeochemistry) has been hindered by two factors. First, the scale at which controlled experiments can be performed is much smaller than the scale of the phenomenon of interest. Second, computer simulation models generally have not been used as hypothesis-testing tools as rigorously as they might have been. Model evaluation often has gone only so far as evaluation of goodness of fit, rather than a full structural analysis, which is more useful when treating the model as a hypothesis. An iterative application of a simple mixing model to the Panola Mountain Research Watershed is reviewed to illustrate the increase in understanding gained by this approach and to discern general principles that may be applicable to other studies. The lessons learned include the need for an explicitly stated conceptual model of the catchment, the definition of objective measures of its applicability, and a clear linkage between the scale of observations and the scale of predictions. Published in 2001 by John Wiley & Sons. Ltd.
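The simple mixing model referred to above is, in its two-component form, a direct mass balance on a conservative tracer: streamflow concentration is a weighted mixture of two end-member concentrations, so the mixing fraction follows algebraically. A minimal sketch under that two-end-member assumption (the function and argument names are illustrative):

```python
def mixing_fraction(c_stream, c_end1, c_end2):
    """Fraction f of end-member 1 in streamflow from a conservative tracer:
    c_stream = f * c_end1 + (1 - f) * c_end2
    =>  f = (c_stream - c_end2) / (c_end1 - c_end2)."""
    return (c_stream - c_end2) / (c_end1 - c_end2)
```

Treating such a model as a hypothesis, as the review advocates, means checking not just goodness of fit but whether the inferred fractions stay within [0, 1] and whether the assumed end-members are chemically constant.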

  10. Translation, cultural adaptation, cross-validation of the Turkish diabetes quality-of-life (DQOL) measure.

    PubMed

    Yildirim, Aysegul; Akinci, Fevzi; Gozu, Hulya; Sargin, Haluk; Orbay, Ekrem; Sargin, Mehmet

    2007-06-01

    The aim of this study was to test the validity and reliability of the Turkish version of the diabetes quality of life (DQOL) questionnaire for use with patients with diabetes. The Turkish version of the generic quality of life (QoL) scale 15D and the DQOL, together with socio-demographic and clinical parameter characteristics, were administered to 150 patients with type 2 diabetes. Study participants were randomly sampled from the Endocrinology and Diabetes Outpatient Department of Dr. Lutfi Kirdar Kartal Education and Research Hospital in Istanbul, Turkey. The Cronbach alpha coefficient of the overall DQOL scale was 0.89; the Cronbach alpha coefficients of the subscales ranged from 0.80 to 0.94. Distress, discomfort and its symptoms, depression, mobility, usual activities, and vitality on the 15D scale had statistically significant correlations with social/vocational worry and diabetes-related worry on the DQOL scale, indicating good convergent validity. Factor analysis identified four subscales: "satisfaction", "impact", "diabetes-related worry", and "social/vocational worry". Statistical analyses showed that the Turkish version of the DQOL is a valid and reliable instrument to measure disease-related QoL in patients with diabetes. It is a simple and quick screening tool, with an administration time of about 15 +/- 5.8 min, for measuring QoL in this population.

  11. Assessing health-related quality of life in adolescents: some psychometric properties of the first Norwegian version of KINDL.

    PubMed

    Helseth, Sølvi; Lund, Thorleif

    2005-06-01

    The study presented in this paper is part of a larger Norwegian investigation among adolescents, where the overall aim is to develop methods to promote their quality of life (QoL), to discover risk factors or threats to adolescents' well-being, and finally to prevent the negative effects of such factors. An adequate generic health-related quality of life (HR-QoL) measure is therefore needed. However, only a limited number of well-validated instruments that measure HR-QoL in adolescents exist, and to date only a few have been translated into Norwegian. The purpose of this study was therefore to examine some psychometric properties of the first Norwegian version of a simple, generic, German HR-QoL questionnaire for adolescents, KINDL. The instrument consists of 24 items, distributed over six subscales, which correspond to six domains of adolescents' HR-QoL. Based on a sample of 239 healthy adolescents, the internal consistency reliability is satisfactory for both the total scale and the subscales 'Self-esteem' and 'Family', fairly good for the 'Emotional' subscale, but lower for the subscales 'Physical', 'Friends', and 'School'. Factor analyses, which concern construct validity, yielded interpretable solutions. The factor solutions at the item level were interpreted to be in line with the original subscales, while factor analysis at the subscale level indicated that a common QoL core is involved. To conclude, the Norwegian version of KINDL appears, in general, to be a psychometrically acceptable method of measuring HR-QoL in healthy adolescents. However, the alpha values of some of the subscales are not optimal, and these scales should be used with caution in research and professional practice. Still, KINDL-N is considered suitable for screening purposes in the public health area and especially within school health care.

  12. Maximum entropy production allows a simple representation of heterogeneity in semiarid ecosystems.

    PubMed

    Schymanski, Stanislaus J; Kleidon, Axel; Stieglitz, Marc; Narula, Jatin

    2010-05-12

    Feedbacks between water use, biomass and infiltration capacity in semiarid ecosystems have been shown to lead to the spontaneous formation of vegetation patterns in a simple model. The formation of patterns permits the maintenance of larger overall biomass at low rainfall rates compared with homogeneous vegetation. This results in a bias of models run at larger scales neglecting subgrid-scale variability. In the present study, we investigate the question whether subgrid-scale heterogeneity can be parameterized as the outcome of optimal partitioning between bare soil and vegetated area. We find that a two-box model reproduces the time-averaged biomass of the patterns emerging in a 100 x 100 grid model if the vegetated fraction is optimized for maximum entropy production (MEP). This suggests that the proposed optimality-based representation of subgrid-scale heterogeneity may be generally applicable to different systems and at different scales. The implications for our understanding of self-organized behaviour and its modelling are discussed.

  13. Development of a molecular method for testing the effectiveness of UV systems on-site.

    PubMed

    Nizri, Limor; Vaizel-Ohayon, Dalit; Ben-Amram, Hila; Sharaby, Yehonatan; Halpern, Malka; Mamane, Hadas

    2017-12-15

    We established a molecular method for quantifying ultraviolet (UV) disinfection efficacy using total bacterial DNA in a water sample. To evaluate UV damage to the DNA, we developed the "DNA damage" factor, which is a novel cultivation-independent approach that reveals UV-exposure efficiency by applying a simple PCR amplification method. The study's goal was to prove the feasibility of this method for demonstrating the efficiency of UV systems in the field using flow-through UV reactors. In laboratory-based experiments using seeded bacteria, the DNA damage tests demonstrated a good correlation between PCR products and UV dose. In the field, natural groundwater sampled before and after being subjected to the full-scale UV reactors was filtered, and the DNA extracted from the filtrate was subjected to PCR amplification for a 900-bp fragment of the 16S rRNA gene with initial DNA concentrations of 0.1 and 1 ng/μL. In both cases, the UV dose predicted and explained a significant proportion of the variance in the log inactivation ratio and DNA damage factor. Log inactivation ratio was very low, as expected in groundwater due to low initial bacterial counts, whereas the DNA damage factor was within the range of values obtained in the laboratory-based experiments. Consequently, the DNA damage factor reflected the true performance of the full-scale UV system during operational water flow by using the indigenous bacterial array present in a water sample. By applying this method, we were able to predict with high confidence, the UV reactor inactivation potential. For method validation, laboratory and field iterations are required to create a practical field calibration curve that can be used to determine the expected efficiency of the full-scale UV system in the field under actual operation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Reliability and Validity of the Visual, Musculoskeletal, and Balance Complaints Questionnaire.

    PubMed

    Lundqvist, Lars-Olov; Zetterlund, Christina; Richter, Hans O

    2016-09-01

    To evaluate the reliability and validity of the 15-item Visual, Musculoskeletal, and Balance Complaints Questionnaire (VMB) for people with visual impairments, using confirmatory factor analysis (CFA) and Rasch analysis, for use as an outcome measure. Two studies evaluated the VMB. In Study 1, VMB data were collected from 1249 out of 3063 individuals between 18 and 104 years old who were registered at a low vision center. CFA evaluated VMB factor structure and Rasch analysis evaluated VMB scale properties. In Study 2, a subsample of 52 individuals between 27 and 67 years old with visual impairments underwent further measurements. Visual clinical assessments, neck/scapular pain, and balance assessments were collected to evaluate the convergent validity of the VMB (i.e. the domain relationship with other, theoretically predicted measures). CFA supported the a priori three-factor structure of the VMB. The factor loadings of the items on their respective domains were all statistically significant. Rasch analysis indicated disordered categories and the original 10-point scale was subsequently replaced with a 5-point scale. Each VMB domain fitted the Rasch model, showing good metric properties, including unidimensionality (explained variances ≥66% and eigenvalues <1.9), person separation (1.86 to 2.29), reliability (0.87 to 0.94), item fit (infit MnSq's >0.72 and outfit MnSq's <1.47), targeting (0.30 to 0.50 logits), and insignificant differential item functioning (all DIFs but one <0.50 logits) from gender, age, and visual status. The three VMB domains correlated significantly with relevant visual, musculoskeletal, and balance assessments, demonstrating adequate convergent validity of the VMB. The VMB is a simple, inexpensive, and quick yet reliable and valid way to screen and evaluate concurrent visual, musculoskeletal, and balance complaints, with contribution to epidemiological and intervention research and potential clinical implications for the field of health services and low vision rehabilitation.

  15. Reliability and Validity of the Visual, Musculoskeletal, and Balance Complaints Questionnaire

    PubMed Central

    Lundqvist, Lars-Olov; Zetterlund, Christina; Richter, Hans O.

    2016-01-01

    ABSTRACT Purpose To evaluate the reliability and validity of the 15-item Visual, Musculoskeletal, and Balance Complaints Questionnaire (VMB) for people with visual impairments, using confirmatory factor analysis (CFA) and Rasch analysis, for use as an outcome measure. Methods Two studies evaluated the VMB. In Study 1, VMB data were collected from 1249 out of 3063 individuals between 18 and 104 years old who were registered at a low vision center. CFA evaluated VMB factor structure and Rasch analysis evaluated VMB scale properties. In Study 2, a subsample of 52 individuals between 27 and 67 years old with visual impairments underwent further measurements. Visual clinical assessments, neck/scapular pain, and balance assessments were collected to evaluate the convergent validity of the VMB (i.e. the domain relationship with other, theoretically predicted measures). Results CFA supported the a priori three-factor structure of the VMB. The factor loadings of the items on their respective domains were all statistically significant. Rasch analysis indicated disordered categories and the original 10-point scale was subsequently replaced with a 5-point scale. Each VMB domain fitted the Rasch model, showing good metric properties, including unidimensionality (explained variances ≥66% and eigenvalues <1.9), person separation (1.86 to 2.29), reliability (0.87 to 0.94), item fit (infit MnSq's >0.72 and outfit MnSq's <1.47), targeting (0.30 to 0.50 logits), and insignificant differential item functioning (all DIFs but one <0.50 logits) from gender, age, and visual status. The three VMB domains correlated significantly with relevant visual, musculoskeletal, and balance assessments, demonstrating adequate convergent validity of the VMB. Conclusions The VMB is a simple, inexpensive, and quick yet reliable and valid way to screen and evaluate concurrent visual, musculoskeletal, and balance complaints, with contribution to epidemiological and intervention research and potential clinical implications for the field of health services and low vision rehabilitation. PMID:27309524

  16. Novel, compact, and simple Nd:YVO4 laser with 12 W of CW optical output power and good beam quality

    NASA Astrophysics Data System (ADS)

    Zimer, H.; Langer, B.; Wittrock, U.; Heine, F.; Hildebrandt, U.; Seel, S.; Lange, R.

    2017-11-01

    We present first, promising experiments with a novel, compact and simple Nd:YVO4 slab laser with 12 W of 1.06 μm optical output power and a beam quality factor M2 of 2.5. The laser is made of a diffusion-bonded YVO4/Nd:YVO4 composite crystal that exhibits two unique features. First, it ensures one-dimensional heat removal from the laser crystal, which leads to a temperature profile without detrimental influence on the laser beam. Thus, the induced thermo-optical aberrations to the laser field are low, allowing power scaling with good beam quality. Second, the composite crystal itself acts as a waveguide for the 809 nm pump light that is supplied from a diode laser bar. Pump-light shaping optics, e.g. fast- or slow-axis collimators, can be omitted, reducing the complexity of the system. Pump-light redundancy can be easily achieved. Eventually, the investigated slab laser might be suitable for distortion-free, high-gain amplification of weak optical signals.

  17. Entanglement-Assisted Weak Value Amplification

    NASA Astrophysics Data System (ADS)

    Pang, Shengshi; Dressel, Justin; Brun, Todd A.

    2014-07-01

    Large weak values have been used to amplify the sensitivity of a linear response signal for detecting changes in a small parameter, which has also enabled a simple method for precise parameter estimation. However, producing a large weak value requires a low postselection probability for an ancilla degree of freedom, which limits the utility of the technique. We propose an improvement to this method that uses entanglement to increase the efficiency. We show that by entangling and postselecting n ancillas, the postselection probability can be increased by a factor of n while keeping the weak value fixed (compared to n uncorrelated attempts with one ancilla), which is the optimal scaling with n that is expected from quantum metrology. Furthermore, we show the surprising result that the quantum Fisher information about the detected parameter can be almost entirely preserved in the postselected state, which allows the sensitive estimation to approximately saturate the relevant quantum Cramér-Rao bound. To illustrate this protocol we provide simple quantum circuits that can be implemented using current experimental realizations of three entangled qubits.

  18. DBI-essence

    NASA Astrophysics Data System (ADS)

    Martin, Jérôme; Yamaguchi, Masahide

    2008-06-01

    Models where the dark energy is a scalar field with a nonstandard Dirac-Born-Infeld (DBI) kinetic term are investigated. Scaling solutions are studied and proven to be attractors. The corresponding shapes of the brane tension and of the potential are also determined and found to be, as in the standard case, either exponential or power-law functions of the DBI field. In these scenarios, in contrast to the standard situation, the vacuum expectation value of the field at small redshifts can be small in comparison to the Planck mass, which could be an advantage from the model-building point of view. This situation arises when the present-day value of the Lorentz factor is large, a property that is interesting in itself. Serious shortcomings are also present, such as the fact that, for simple potentials, the equation of state appears to be too far from the observationally favored value of -1. Another problem is that, although simple stringy-inspired models precisely lead to the power-law shape that has been shown to possess a tracking behavior, the power index turns out to have the wrong sign. Possible solutions to these issues are discussed.

  19. Interspecies scaling and prediction of human clearance: comparison of small- and macro-molecule drugs

    PubMed Central

    Huh, Yeamin; Smith, David E.; Feng, Meihua Rose

    2014-01-01

    Human clearance prediction for small- and macro-molecule drugs was evaluated and compared using various scaling methods and statistical analysis. Human clearance is generally well predicted using single or multiple species simple allometry for macro- and small-molecule drugs excreted renally. The prediction error is higher for hepatically eliminated small-molecules using single or multiple species simple allometry scaling, and it appears that the prediction error is mainly associated with drugs with low hepatic extraction ratio (Eh). The error in human clearance prediction for hepatically eliminated small-molecules was reduced using scaling methods with a correction of maximum life span (MLP) or brain weight (BRW). Human clearance of both small- and macro-molecule drugs is well predicted using the monkey liver blood flow method. Predictions using liver blood flow from other species did not work as well, especially for the small-molecule drugs. PMID:21892879
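
    Single-species simple allometry, as evaluated in this record, can be sketched as follows. The exponent, body weights, and the monkey clearance value are illustrative assumptions (0.75 is the conventional allometric default), not values fitted in the cited study:

```python
def simple_allometry_cl(cl_animal, bw_animal_kg, bw_human_kg=70.0, b=0.75):
    """Predict human clearance from one species via CL = a * BW^b.
    Solve for the coefficient a from the animal data, then evaluate
    at the human body weight."""
    a = cl_animal / bw_animal_kg ** b
    return a * bw_human_kg ** b

# Hypothetical example: monkey clearance of 2 L/h at 5 kg body weight,
# scaled to a 70 kg human.
cl_human = simple_allometry_cl(2.0, 5.0)
print(round(cl_human, 2))
```

    The MLP and BRW corrections mentioned above modify this same power law by multiplying clearance with maximum life span or brain weight before fitting, which the record reports reduces the error for hepatically eliminated small molecules.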

  20. A simple scaling approach to produce climate scenarios of local precipitation extremes for the Netherlands

    NASA Astrophysics Data System (ADS)

    Lenderink, Geert; Attema, Jisk

    2015-08-01

    Scenarios of future changes in small scale precipitation extremes for the Netherlands are presented. These scenarios are based on a new approach whereby changes in precipitation extremes are set proportional to the change in water vapor amount near the surface as measured by the 2m dew point temperature. This simple scaling framework allows the integration of information derived from: (i) observations, (ii) a new unprecedentedly large 16 member ensemble of simulations with the regional climate model RACMO2 driven by EC-Earth, and (iii) short term integrations with a non-hydrostatic model Harmonie. Scaling constants are based on subjective weighting (expert judgement) of the three different information sources taking also into account previously published work. In all scenarios local precipitation extremes increase with warming, yet with broad uncertainty ranges expressing incomplete knowledge of how convective clouds and the atmospheric mesoscale circulation will react to climate change.
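
    The scaling step at the core of this approach can be sketched numerically. The 7%/K constant below is an assumed Clausius-Clapeyron-like rate chosen for illustration; the scenario constants in the paper come from expert weighting of observations and model ensembles, and super-CC rates around 14%/K are also discussed in the literature:

```python
def scaled_extreme(p_now_mm, d_dewpoint_K, scaling_per_K=0.07):
    """Scale a present-day precipitation extreme proportionally to the
    change in 2m dew point temperature, compounding a fixed fractional
    increase per kelvin of dew-point warming."""
    return p_now_mm * (1.0 + scaling_per_K) ** d_dewpoint_K

# A 30 mm/h local extreme under 2 K of dew-point warming:
print(round(scaled_extreme(30.0, 2.0), 1))
```

    The broad uncertainty ranges in the scenarios would correspond here to varying `scaling_per_K` between low and high plausible values.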

  1. Characteristic Sizes of Life in the Oceans, from Bacteria to Whales.

    PubMed

    Andersen, K H; Berge, T; Gonçalves, R J; Hartvig, M; Heuschele, J; Hylander, S; Jacobsen, N S; Lindemann, C; Martens, E A; Neuheimer, A B; Olsson, K; Palacz, A; Prowe, A E F; Sainmont, J; Traving, S J; Visser, A W; Wadhwa, N; Kiørboe, T

    2016-01-01

    The size of an individual organism is a key trait to characterize its physiology and feeding ecology. Size-based scaling laws may have a limited size range of validity or undergo a transition from one scaling exponent to another at some characteristic size. We collate and review data on size-based scaling laws for resource acquisition, mobility, sensory range, and progeny size for all pelagic marine life, from bacteria to whales. Further, we review and develop simple theoretical arguments for observed scaling laws and the characteristic sizes of a change or breakdown of power laws. We divide life in the ocean into seven major realms based on trophic strategy, physiology, and life history strategy. Such a categorization represents a move away from a taxonomically oriented description toward a trait-based description of life in the oceans. Finally, we discuss life forms that transgress the simple size-based rules and identify unanswered questions.

  2. In silico mining and PCR-based approaches to transcription factor discovery in non-model plants: gene discovery of the WRKY transcription factors in conifers.

    PubMed

    Liu, Jun-Jun; Xiang, Yu

    2011-01-01

    WRKY transcription factors are key regulators of numerous biological processes in plant growth and development, as well as plant responses to abiotic and biotic stresses. Research on biological functions of plant WRKY genes has focused in the past on model plant species or species with largely characterized transcriptomes. However, a variety of non-model plants, such as forest conifers, are essential as feed, biofuel, and wood or for sustainable ecosystems. Identification of WRKY genes in these non-model plants is equally important for understanding the evolutionary and function-adaptive processes of this transcription factor family. Because of limited genomic information, the rarity of regulatory gene mRNAs in transcriptomes, and the sequence divergence to model organism genes, identification of transcription factors in non-model plants using methods similar to those generally used for model plants is difficult. This chapter describes a gene family discovery strategy for identification of WRKY transcription factors in conifers by a combination of in silico-based prediction and PCR-based experimental approaches. Compared to traditional cDNA library screening or EST sequencing at transcriptome scales, this integrated gene discovery strategy provides fast, simple, reliable, and specific methods to unveil the WRKY gene family at both genome and transcriptome levels in non-model plants.

  3. On estimating scale invariance in stratocumulus cloud fields

    NASA Technical Reports Server (NTRS)

    Seze, Genevieve; Smith, Leonard A.

    1990-01-01

    Examination of cloud radiance fields derived from satellite observations sometimes indicates the existence of a range of scales over which the statistics of the field are scale invariant. Many methods have been developed in geophysics to quantify this scaling behavior. The usefulness of such techniques depends both on the physics of the process being robust over a wide range of scales and on the availability of high-resolution, low-noise observations over these scales. These techniques (the area-perimeter relation, the distribution of areas, estimation of the capacity d0 through box counting, and the correlation exponent) are applied to the high-resolution satellite data taken during the FIRE experiment, and initial estimates of the quality of data required are provided by analyzing simple test sets. The results for the observed fields are contrasted with those for images of objects with known characteristics (e.g., dimension), where the details of the constructed image simulate current observational limits. Throughout, when cloud elements and cloud boundaries are mentioned, it should be clearly understood that these refer to structures in the radiance field: all the boundaries considered are defined by simple threshold arguments.

  4. Quantum Critical Higgs

    NASA Astrophysics Data System (ADS)

    Bellazzini, Brando; Csáki, Csaba; Hubisz, Jay; Lee, Seung J.; Serra, Javi; Terning, John

    2016-10-01

    The appearance of the light Higgs boson at the LHC is difficult to explain, particularly in light of naturalness arguments in quantum field theory. However, light scalars can appear in condensed matter systems when parameters (like the amount of doping) are tuned to a critical point. At zero temperature these quantum critical points are directly analogous to the finely tuned standard model. In this paper, we explore a class of models with a Higgs near a quantum critical point that exhibits non-mean-field behavior. We discuss the parametrization of the effects of a Higgs emerging from such a critical point in terms of form factors, and present two simple realistic scenarios based on either generalized free fields or a 5D dual in anti-de Sitter space. For both of these models, we consider the processes gg → ZZ and gg → hh, which can be used to gain information about the Higgs scaling dimension and IR transition scale from the experimental data.

  5. Trends in Department of Defense hospital efficiency.

    PubMed

    Ozcan, Y A; Bannick, R R

    1994-04-01

    This study employs a simple cross-sectional design using longitudinal data to explore the underlying factors associated with differences in hospital technical efficiency, using data envelopment analysis (DEA) in the Department of Defense (DOD) sector across three service components: the Army, Air Force, and Navy. The results suggest that the services do not differ significantly in hospital efficiency. Nor does hospital efficiency appear to differ over time. With respect to the efficient use of input resources, the services experienced a general decline in excessive usage of various inputs over the three years. Analysis of returns to scale identifies opportunities for planners to change the relative mix of output and input slacks to increase a hospital's efficiency. That is, policy makers would get more immediate "bang per buck" by emphasizing improvements in the efficiency of hospitals with higher returns to scale. Findings also suggest a significant degree of comparability between the DEA measure and those measures often used to indicate efficiency.

  6. Validity of the Malay version of the Internet Addiction Test: a study on a group of medical students in Malaysia.

    PubMed

    Guan, Ng Chong; Isa, Saramah Mohammed; Hashim, Aili Hanim; Pillai, Subash Kumar; Harbajan Singh, Manveen Kaur

    2015-03-01

    The use of the Internet has been increasing dramatically over the decade in Malaysia. Excessive usage of the Internet has led to a phenomenon called Internet addiction. There is a need for a reliable, valid, and simple-to-use scale to measure Internet addiction in the Malaysian population for clinical practice and research purposes. The aim of this study was to validate the Malay version of the Internet Addiction Test, using a sample of 162 medical students. The instrument displayed good internal consistency (Cronbach's α = .91), parallel reliability (intraclass coefficient = .88, P < .001), and concurrent validity with the Compulsive Internet Use Scale (Pearson's correlation = .84, P < .001). Receiver operating characteristic analysis showed that 43 was the optimal cutoff score to discriminate students with and without Internet dependence. Principal component analysis with varimax rotation identified a 5-factor model. The Malay version of the Internet Addiction Test appeared to be a valid instrument for assessing Internet addiction in Malaysian university students. © 2012 APJPH.

  7. GRAVITATIONAL WAVE BACKGROUND FROM BINARY MERGERS AND METALLICITY EVOLUTION OF GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakazato, Ken’ichiro; Sago, Norichika; Niino, Yuu, E-mail: nakazato@artsci.kyushu-u.ac.jp

    The cosmological evolution of the binary black hole (BH) merger rate and the energy density of the gravitational wave (GW) background are investigated. To evaluate the redshift dependence of the BH formation rate, BHs are assumed to originate from low-metallicity stars, and the relations between the star formation rate, metallicity and stellar mass of galaxies are combined with the stellar mass function at each redshift. As a result, it is found that when the energy density of the GW background is scaled with the merger rate at the local universe, the scaling factor does not depend on the critical metallicity for the formation of BHs. Also taking into account the merger of binary neutron stars, a simple formula to express the energy spectrum of the GW background is constructed for the inspiral phase. The relation between the local merger rate and the energy density of the GW background will be examined by future GW observations.

  8. Irreversible Local Markov Chains with Rapid Convergence towards Equilibrium.

    PubMed

    Kapfer, Sebastian C; Krauth, Werner

    2017-12-15

    We study the continuous one-dimensional hard-sphere model and present irreversible local Markov chains that mix on faster time scales than the reversible heat bath or Metropolis algorithms. The mixing time scales appear to fall into two distinct universality classes, both faster than for reversible local Markov chains. The event-chain algorithm, the infinitesimal limit of one of these Markov chains, belongs to the class presenting the fastest decay. For the lattice-gas limit of the hard-sphere model, reversible local Markov chains correspond to the symmetric simple exclusion process (SEP) with periodic boundary conditions. The two universality classes for irreversible Markov chains are realized by the totally asymmetric SEP (TASEP), and by a faster variant (lifted TASEP) that we propose here. We discuss how our irreversible hard-sphere Markov chains generalize to arbitrary repulsive pair interactions and carry over to higher dimensions through the concept of lifted Markov chains and the recently introduced factorized Metropolis acceptance rule.

  9. Irreversible Local Markov Chains with Rapid Convergence towards Equilibrium

    NASA Astrophysics Data System (ADS)

    Kapfer, Sebastian C.; Krauth, Werner

    2017-12-01

    We study the continuous one-dimensional hard-sphere model and present irreversible local Markov chains that mix on faster time scales than the reversible heat bath or Metropolis algorithms. The mixing time scales appear to fall into two distinct universality classes, both faster than for reversible local Markov chains. The event-chain algorithm, the infinitesimal limit of one of these Markov chains, belongs to the class presenting the fastest decay. For the lattice-gas limit of the hard-sphere model, reversible local Markov chains correspond to the symmetric simple exclusion process (SEP) with periodic boundary conditions. The two universality classes for irreversible Markov chains are realized by the totally asymmetric SEP (TASEP), and by a faster variant (lifted TASEP) that we propose here. We discuss how our irreversible hard-sphere Markov chains generalize to arbitrary repulsive pair interactions and carry over to higher dimensions through the concept of lifted Markov chains and the recently introduced factorized Metropolis acceptance rule.

  10. The time scale of quasifission process in reactions with heavy ions

    NASA Astrophysics Data System (ADS)

    Knyazheva, G. N.; Itkis, I. M.; Kozulin, E. M.

    2014-05-01

    The study of mass-energy distributions of binary fragments obtained in the reactions of 36S, 48Ca, 58Fe and 64Ni ions with 232Th, 238U, 244Pu and 248Cm targets at energies below and above the Coulomb barrier is presented. These data have been measured with the two-arm time-of-flight spectrometer CORSET. The mass resolution of the spectrometer for these measurements was about 3 u, which allows the features of the mass distributions to be investigated with good accuracy. The properties of the mass and TKE of quasifission (QF) fragments as functions of interaction energy have been investigated and compared with the characteristics of the fusion-fission process. To describe the quasifission mass distribution, a simple method has been proposed, based on the driving potential of the system and a time-dependent mass drift. This procedure allows the QF time scale to be estimated from the measured mass distributions. It has been found that the QF time decreases exponentially as the reaction Coulomb factor Z1Z2 increases.
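
    The Coulomb factor mentioned in the final sentence is simply the product of the projectile and target atomic numbers; for the reactions listed in this record it can be tabulated directly:

```python
# Atomic numbers of the projectiles and targets named in the abstract.
Z = {"36S": 16, "48Ca": 20, "58Fe": 26, "64Ni": 28,
     "232Th": 90, "238U": 92, "244Pu": 94, "248Cm": 96}

def coulomb_factor(projectile, target):
    """Reaction Coulomb factor Z1 * Z2."""
    return Z[projectile] * Z[target]

for p in ("36S", "48Ca", "58Fe", "64Ni"):
    print(p, "+ 248Cm :", coulomb_factor(p, "248Cm"))
```

    Heavier projectiles on the same target give a larger Z1Z2, which in this study corresponds to an exponentially shorter quasifission time.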

  11. Methodology to predict delayed failure due to slow crack growth in ceramic tubular components using data from simple specimens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jadaan, O.M.; Tressler, R.E.

    1993-04-01

    The methodology to predict the lifetime of sintered α-silicon carbide (SASC) tubes subjected to slow crack growth (SCG) conditions involved the experimental determination of the SCG parameters of that material and a scaling analysis to project the stress rupture data from small specimens to large components. Dynamic fatigue testing of O-ring and compressed C-ring specimens, taking into account the effect of the threshold stress intensity factor, was used to obtain the SCG parameters. These SCG parameters were in excellent agreement with those published in the literature and extracted from stress rupture tests of tensile and bend specimens. Two methods were used to predict the lifetimes of internally heated and pressurized SASC tubes. The first is a fracture mechanics approach that is well known in the literature. The second method used a scaling analysis in which the stress rupture distribution (lifetime) of any specimen configuration can be predicted from stress rupture data of another.

  12. Self-assembled diatom substrates with plasmonic functionality

    NASA Astrophysics Data System (ADS)

    Kwon, Sun Yong; Park, Sehyun; Nichols, William T.

    2014-04-01

    Marine diatoms have an exquisitely complex exoskeleton that is promising for engineered surfaces such as sensors and catalysts. For such applications, creating uniform arrays of diatom frustules across centimeter scales will be necessary. Here, we present a simple, low-cost floating interface technique to self-assemble the diatom frustules. We show that well-prepared diatoms form floating hexagonal close-packed arrays at the air-water interface that can be transferred directly to a substrate. We functionalize the assembled diatom surfaces with gold and characterize the plasmonic functionality by using surface enhanced Raman scattering (SERS). Thin gold films conform to the complex, hierarchical diatom structure and produce a SERS enhancement factor of 2 × 104. Small gold nanoparticles attached to the diatom's surface produce a higher enhancement of 7 × 104 due to stronger localization of the surface plasmons. Taken together, the large-scale assembly and plasmonic functionalization represent a promising platform to control the energy and the material flows at a complex surface for applications such as sensors and plasmonic enhanced catalysts.
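
    The quoted enhancement factors follow the standard SERS definition of signal per molecule on the functionalized surface relative to a bare reference. The intensities and molecule counts below are hypothetical, chosen only to reproduce the order of magnitude reported for the nanoparticle-decorated frustules:

```python
def sers_enhancement(i_sers, n_sers, i_ref, n_ref):
    """SERS enhancement factor EF = (I_SERS / N_SERS) / (I_ref / N_ref):
    Raman signal per molecule on the plasmonic surface divided by the
    signal per molecule from the non-enhanced reference."""
    return (i_sers / n_sers) / (i_ref / n_ref)

# Hypothetical measurement giving an EF of the order reported above
# for gold nanoparticles on the diatom surface (~7e4):
print(sers_enhancement(i_sers=7e3, n_sers=1e4, i_ref=1e2, n_ref=1e7))
```

    The gap between the film (2 × 10^4) and nanoparticle (7 × 10^4) values then reflects the stronger localization of surface plasmons at the particles.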

  13. Large-Scale Wind-Tunnel Tests of Exhaust Ingestion Due to Thrust Reversal on a Four-Engine Jet Transport during Ground Roll

    NASA Technical Reports Server (NTRS)

    Tolhurst, William H., Jr.; Hickey, David H.; Aoyagi, Kiyoshi

    1961-01-01

    Wind-tunnel tests have been conducted on a large-scale model of a swept-wing jet transport type airplane to study the factors affecting exhaust gas ingestion into the engine inlets when thrust reversal is used during ground roll. The model was equipped with four small jet engines mounted in nacelles beneath the wing. The tests included studies of both cascade and target type reversers. The data obtained included the free-stream velocity at the occurrence of exhaust gas ingestion in the outboard engine and the increment of drag due to thrust reversal for various modifications of thrust reverser configuration. Motion picture films of smoke flow studies were also obtained to supplement the data. The results show that the free-stream velocity at which ingestion occurred in the outboard engines could be reduced considerably, by simple modifications to the reversers, without reducing the effective drag due to reversed thrust.

  14. Relative influence upon microwave emissivity of fine-scale stratigraphy, internal scattering, and dielectric properties

    USGS Publications Warehouse

    England, A.W.

    1976-01-01

    The microwave emissivity of relatively low-loss media such as snow, ice, frozen ground, and lunar soil is strongly influenced by fine-scale layering and by internal scattering. Radiometric data, however, are commonly interpreted using a model of emission from a homogeneous, dielectric halfspace whose emissivity derives exclusively from dielectric properties. Conclusions based upon these simple interpretations can be erroneous. Examples are presented showing that the emission from fresh or hardpacked snow over either frozen or moist soil is governed dominantly by the size distribution of ice grains in the snowpack. Similarly, the thickness of seasonally frozen soil and the concentration of rock clasts in lunar soil noticeably affect, respectively, the emissivities of northern latitude soils in winter and of the lunar regolith. Petrophysical data accumulated in support of the geophysical interpretation of microwave data must include measurements of not only dielectric properties, but also of geometric factors such as fine-scale layering and the size distributions of grains, inclusions, and voids. © 1976 Birkhäuser Verlag.

  15. On the context-dependent scaling of consumer feeding rates.

    PubMed

    Barrios-O'Neill, Daniel; Kelly, Ruth; Dick, Jaimie T A; Ricciardi, Anthony; MacIsaac, Hugh J; Emmerson, Mark C

    2016-06-01

    The stability of consumer-resource systems can depend on the form of feeding interactions (i.e. functional responses). Size-based models predict interactions - and thus stability - based on consumer-resource size ratios. However, little is known about how interaction contexts (e.g. simple or complex habitats) might alter scaling relationships. Addressing this, we experimentally measured interactions between a large size range of aquatic predators (4-6400 mg over 1347 feeding trials) and an invasive prey that transitions among habitats: from the water column (3D interactions) to simple and complex benthic substrates (2D interactions). Simple and complex substrates mediated successive reductions in capture rates - particularly around the unimodal optimum - and promoted prey population stability in model simulations. Many real consumer-resource systems transition between 2D and 3D interactions, and along complexity gradients. Thus, Context-Dependent Scaling (CDS) of feeding interactions could represent an unrecognised aspect of food webs, and quantifying the extent of CDS might enhance predictive ecology. © The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.

  16. Accurate Modeling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron; Scoccimarro, Roman

    2015-01-01

    The large-scale distribution of galaxies can be explained fairly simply by assuming (i) a cosmological model, which determines the dark matter halo distribution, and (ii) a simple connection between galaxies and the halos they inhabit. This conceptually simple framework, called the halo model, has been remarkably successful at reproducing the clustering of galaxies on all scales, as observed in various galaxy redshift surveys. However, none of these previous studies have carefully modeled the systematics and thus truly tested the halo model in a statistically rigorous sense. We present a new accurate and fully numerical halo model framework and test it against clustering measurements from two luminosity samples of galaxies drawn from the SDSS DR7. We show that the simple ΛCDM cosmology + halo model is not able to simultaneously reproduce the galaxy projected correlation function and the group multiplicity function. In particular, the more luminous sample shows significant tension with theory. We discuss the implications of our findings and how this work paves the way for constraining galaxy formation by accurate simultaneous modeling of multiple galaxy clustering statistics.

  17. Electricity by intermittent sources: An analysis based on the German situation 2012

    NASA Astrophysics Data System (ADS)

    Wagner, Friedrich

    2014-02-01

    The 2012 data for the German load and for onshore wind, offshore wind, and photovoltaic energy production are used and scaled to the limit of supplying the annual demand (the 100% case). The reference mix of the renewable energy (RE) forms is selected such that the remaining back-up energy is minimised. For the 100% case, the RE power installation has to be about 3 times the present peak load. The back-up system can be reduced by 12% in this case. The surplus energy corresponds to 26% of the demand. The back-up system, and more so the grid, must be able to cope with large power excursions. All components of the electricity supply system operate at low capacity factors. Large-scale storage can hardly be motivated by the effort to further reduce CO2 emissions. Demand-side management will intensify the present periods of high economic activity, and its rigorous implementation would expand economic activity into the weekends. On the basis of a simple criterion, the increase of periods with negative electricity prices in Germany is assessed. It will be difficult with RE to meet the low CO2 emission factors which characterise those European countries that produce electricity mostly by nuclear and hydro power.
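
    The scaling step behind the 100% case can be sketched on a toy hourly series (not the 2012 German data): scale renewable generation so its annual sum equals annual demand, then read off the surplus and back-up energy shares:

```python
def scale_to_demand(re_series, load_series):
    """Scale a renewable generation time series so that its total
    equals total demand (the '100% case'), then return the scaling
    factor and the surplus and back-up energy as shares of demand."""
    f = sum(load_series) / sum(re_series)
    scaled = [f * g for g in re_series]
    surplus = sum(max(g - l, 0.0) for g, l in zip(scaled, load_series))
    backup = sum(max(l - g, 0.0) for g, l in zip(scaled, load_series))
    total = sum(load_series)
    return f, surplus / total, backup / total

# Toy example: constant load, fluctuating renewable generation.
load = [1.0] * 8
re = [0.2, 0.8, 1.6, 0.4, 0.0, 1.2, 2.0, 1.8]
factor, surplus_share, backup_share = scale_to_demand(re, load)
print(round(factor, 3), round(surplus_share, 3), round(backup_share, 3))
```

    Note that once generation is scaled to match demand in total, surplus and back-up shares are equal by construction; the 26% surplus figure in the record reflects how strongly the real profiles mismatch in time.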

  18. Resonances and wave propagation velocity in the subglottal airways.

    PubMed

    Lulich, Steven M; Alwan, Abeer; Arsikere, Harish; Morton, John R; Sommers, Mitchell S

    2011-10-01

    Previous studies of subglottal resonances have reported findings based on relatively few subjects, and the relations between these resonances, subglottal anatomy, and models of subglottal acoustics are not well understood. In this study, accelerometer signals of subglottal acoustics recorded during sustained [a:] vowels of 50 adult native speakers (25 males, 25 females) of American English were analyzed. The study confirms that a simple uniform tube model of subglottal airways, closed at the glottis and open at the inferior end, is appropriate for describing subglottal resonances. The main findings of the study are (1) whereas the walls may be considered rigid in the frequency range of Sg2 and Sg3, they are yielding and resonant in the frequency range of Sg1, with a resulting ~4/3 increase in wave propagation velocity and, consequently, in the frequency of Sg1; (2) the "acoustic length" of the equivalent uniform tube varies between 18 and 23.5 cm, and is approximately equal to the height of the speaker divided by an empirically determined scaling factor; (3) trachea length can also be predicted by dividing height by another empirically determined scaling factor; and (4) differences between the subglottal resonances of males and females can be accounted for by height-related differences. © 2011 Acoustical Society of America
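
The uniform-tube picture above can be made concrete with the quarter-wavelength resonance formula for a tube closed at one end (the glottis) and open at the other. This is a sketch under stated assumptions: the sound speed (~350 m/s in warm, humid air) and the 20 cm tube length in the usage note are illustrative, not values taken from the paper:

```python
def subglottal_resonances(length_cm, c_cm_per_s=35000.0, n=3):
    # Quarter-wavelength resonances of a uniform tube closed at the
    # glottis and open at the inferior end: f_k = (2k - 1) * c / (4 L)
    return [(2 * k - 1) * c_cm_per_s / (4.0 * length_cm) for k in range(1, n + 1)]

def sg1_with_wall_effect(length_cm, c_cm_per_s=35000.0):
    # Yielding, resonant walls raise the effective propagation velocity
    # (and hence Sg1) by a factor of ~4/3 in the Sg1 frequency range
    return (4.0 / 3.0) * c_cm_per_s / (4.0 * length_cm)
```

For an acoustic length of 20 cm this gives rigid-wall resonances near 438, 1313, and 2188 Hz, with the wall effect raising Sg1 toward ~583 Hz.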

  19. [Influence of social support and coping style on chronic post-traumatic stress disorder after floods].

    PubMed

    Dai, W J; Chen, L; Tan, H Z; Lai, Z W; Hu, S M; Li, Y; Liu, A Z

    2016-02-01

    To explore the long-term prognosis of patients with post-traumatic stress disorder (PTSD) after floods and the influence of social support and coping style. Patients who developed PTSD due to the 1998 Dongting Lake flood were selected through cluster random sampling. The PTSD Checklist-Civilian version (PCL-C) was used to examine and diagnose the participants. Participants were then evaluated with the Social Support Rating Scale (SSRS) and the Simple Coping Style Questionnaire (SCSQ). Among all 120 subjects, 14 (11.67%) were diagnosed with PTSD. Compared with the rehabilitation group, scores on subjective support, objective support, total social support, positive coping, and total coping style in the non-rehabilitation group were all significantly lower (P<0.05). Multivariate logistic regression showed that social support (OR=0.281, 95% CI: 0.117-0.678) and coping style (OR=0.293, 95% CI: 0.128-0.672) were protective factors against chronic PTSD after the floods, while disaster experience (OR=1.626, 95% CI: 1.118-2.365) was a risk factor. Chronic PTSD that develops after floods calls for attention; better social support and a positive coping style could significantly improve the long-term prognosis of patients with PTSD after floods.

  20. Mean-state acceleration of cloud-resolving models and large eddy simulations

    DOE PAGES

    Jones, C. R.; Bretherton, C. S.; Pritchard, M. S.

    2015-10-29

    In this study, large eddy simulations and cloud-resolving models (CRMs) are routinely used to simulate boundary layer and deep convective cloud processes, aid in the development of moist physical parameterization for global models, study cloud-climate feedbacks and cloud-aerosol interaction, and as the heart of superparameterized climate models. These models are computationally demanding, placing practical constraints on their use in these applications, especially for long, climate-relevant simulations. In many situations, the horizontal-mean atmospheric structure evolves slowly compared to the turnover time of the most energetic turbulent eddies. We develop a simple scheme to reduce this time scale separation to accelerate the evolution of the mean state. Using this approach we are able to accelerate the model evolution by a factor of 2–16 or more in idealized stratocumulus, shallow and deep cumulus convection without substantial loss of accuracy in simulating mean cloud statistics and their sensitivity to climate change perturbations. As a culminating test, we apply this technique to accelerate the embedded CRMs in the Superparameterized Community Atmosphere Model by a factor of 2, thereby showing that the method is robust and stable to realistic perturbations across spatial and temporal scales typical in a GCM.
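
The idea of mean-state acceleration can be illustrated with a toy relaxation problem: if the mean-state tendency is multiplied by an acceleration factor a, the slow evolution is traversed in 1/a of the simulated steps. This sketch is not the paper's scheme (which separates eddy and horizontal-mean tendencies in a full CRM); all names and values are illustrative:

```python
def run(tau=100.0, accel=1.0, dt=0.1, steps=2000, x0=1.0):
    # Toy "mean state" relaxing as dx/dt = -x/tau; multiplying the mean
    # tendency by `accel` advances the slow evolution accel times faster
    x = x0
    for _ in range(steps):
        x += accel * (-x / tau) * dt
    return x
```

Here run(accel=4.0, steps=500) lands close to run(accel=1.0, steps=2000), i.e., a quarter of the steps for nearly the same mean-state evolution; the small residual difference is time-discretization error, which grows with the acceleration factor.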

  1. Simplified Large-Scale Refolding, Purification, and Characterization of Recombinant Human Granulocyte-Colony Stimulating Factor in Escherichia coli

    PubMed Central

    Kim, Chang Kyu; Lee, Chi Ho; Lee, Seung-Bae; Oh, Jae-Wook

    2013-01-01

    Granulocyte-colony stimulating factor (G-CSF) is a pleiotropic cytokine that stimulates the development of committed hematopoietic progenitor cells and enhances the functional activity of mature cells. Here, we report a simplified method for fed-batch culture as well as the purification of recombinant human (rh) G-CSF. The new system for rhG-CSF purification used not only a temperature-shift strategy without isopropyl-l-thio-β-d-galactoside (IPTG) induction but also purification by a single prep-HPLC step after pH precipitation of the refolded samples. Through these processes, the final cell density and overall yield of homogeneous rhG-CSF were 42.8 g dry cell weight and 1.75 g purified active protein per litre of culture broth, respectively. The final purity of rhG-CSF was 99%, since the isoforms of rhG-CSF could be separated in the prep-HPLC step. The biological activity results indicated that purified rhG-CSF has a profile similar to the World Health Organization (WHO) 2nd International Standard for G-CSF. Taken together, our results demonstrate that simple purification through a single prep-HPLC step may be valuable for the industrial-scale production of biologically active proteins. PMID:24224041

  2. Validation of the Malay version of the Amsterdam Preoperative Anxiety and Information Scale (APAIS).

    PubMed

    Mohd Fahmi, Z; Lai, L L; Loh, P S

    2015-08-01

    Preoperative anxiety is a significant problem worldwide that may affect patients' surgical outcome. By using a simple and reliable tool such as the Amsterdam Preoperative Anxiety and Information Scale (APAIS), anaesthesiologists would be able to assess preoperative anxiety adequately and accurately. The purpose of this study was to develop and validate the Malay version of APAIS (Malay-APAIS), and assess the factors associated with higher anxiety scores. The authors performed forward and backward translation of APAIS into Malay and then tested it on 200 patients in the anaesthetic clinic of University Malaya Medical Centre. Psychometric analysis was performed with factor analysis, internal consistency and correlation with Spielberger's State-Trait Anxiety Inventory (STAI-state). A good correlation was shown with STAI-state (r = 0.59). Anxiety and need for information both emerged with high internal consistency (Cronbach's alpha 0.93 and 0.90 respectively). Female gender, higher-risk surgery and need for information were found to be associated with higher anxiety scores. On the other hand, patients with previous experience of surgery had a lower need for information. The Malay-APAIS is a valid and reliable tool for the assessment of patients' preoperative anxiety and their need for information. By understanding and measuring patients' concerns objectively, perioperative management can be improved to a much higher standard of care.
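
The internal consistency figures quoted above (Cronbach's alpha of 0.93 and 0.90) come from the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch with illustrative data (population variances used throughout):

```python
def cronbach_alpha(items):
    # items: list of k item-score lists over the same n respondents
    k, n = len(items), len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[i] for col in items) for i in range(n)]
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    return k / (k - 1) * (1.0 - sum(var(col) for col in items) / var(totals))
```

Perfectly parallel items give alpha = 1, while items that covary negatively can push alpha below zero.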

  3. Ice shelf fracture parameterization in an ice sheet model

    NASA Astrophysics Data System (ADS)

    Sun, Sainan; Cornford, Stephen L.; Moore, John C.; Gladstone, Rupert; Zhao, Liyun

    2017-11-01

    Floating ice shelves exert a stabilizing force onto the inland ice sheet. However, this buttressing effect is diminished by the fracture process, which on large scales effectively softens the ice, accelerating its flow, increasing calving, and potentially leading to ice shelf breakup. We add a continuum damage model (CDM) to the BISICLES ice sheet model to represent the localized opening of crevasses under stress, the transport of those crevasses through the ice sheet, and the coupling between crevasse depth and the ice flow field, and we carry out idealized numerical experiments examining the broad impact on large-scale ice sheet and shelf dynamics. In each case we see a complex pattern of damage evolve over time, with an eventual loss of buttressing approximately equivalent to halving the thickness of the ice shelf. We find that it is possible to achieve a similar ice flow pattern using a simple rule of thumb: introducing an enhancement factor of ~10 everywhere in the model domain. However, spatially varying damage (or equivalently, enhancement factor) fields set at the start of prognostic calculations to match velocity observations, as is widely done in ice sheet simulations, ought to evolve in time, or grounding line retreat can be slowed by an order of magnitude.

  4. Combined Monte Carlo and path-integral method for simulated library of time-resolved reflectance curves from layered tissue models

    NASA Astrophysics Data System (ADS)

    Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann

    2009-02-01

    Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
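
The scaling step described above can be sketched as follows. Assuming the average photon path spends a fraction f_i(t) of its travel time in layer i, the zero-absorption reflectance curve is attenuated by a weighted Beer-Lambert factor. The function below is a minimal sketch, not the authors' code; the tissue speed of light (c/n with n ≈ 1.4, about 0.0214 cm/ps) is an assumed typical value:

```python
import math

def scale_reflectance(r0, times_ps, mu_a, fractions, v_cm_per_ps=0.0214):
    # R(t) = R0(t) * exp(-v * t * sum_i mu_a[i] * f_i(t)), where f_i(t)
    # is the fraction of travel time the average photon path spends in
    # layer i (mu_a in 1/cm, t in ps, v = c/n in cm/ps)
    out = []
    for r, t, f in zip(r0, times_ps, fractions):
        attenuation = sum(mu * fi for mu, fi in zip(mu_a, f))
        out.append(r * math.exp(-v_cm_per_ps * t * attenuation))
    return out
```

Because only the per-layer time fractions enter, nothing per-photon (path length or collision count) needs to be stored during the initial zero-absorption simulation, which is the key economy of the method.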

  5. Use of healthcare a long time after severe burn injury; relation to perceived health and personality characteristics.

    PubMed

    Wikehult, B; Willebrand, M; Kildal, M; Lannerstam, K; Fugl-Meyer, A R; Ekselius, L; Gerdin, B

    2005-08-05

    The aim of the study was to evaluate which factors are associated with the use of healthcare a long time after severe burn injury. After a review process based on clinical reasoning, 69 former burn patients out of a consecutive group treated at the Uppsala Burn Unit from 1980 to 1995 were visited in their homes, and their use of care and support was assessed in a semi-structured interview. Post-burn health was assessed with the Burn-Specific Health Scale-Brief (BSHS-B) and personality was assessed with the Swedish universities Scales of Personality (SSP). The participants had been injured on average eight years previously. Thirty-four had current contact with healthcare due to their burn injury; they had significantly lower scores on three BSHS-B domains (Simple Abilities, Work and Hand Function), significantly higher scores for the SSP domain Neuroticism and the SSP scales Stress Susceptibility and Lack of Assertiveness, and lower scores for Social Desirability. There was no relation to age, gender, time since injury, length of stay, or to the surface area burned. Routine screening of personality traits as a supplement to long-term follow-ups may help in identifying the patient's need for care.

  6. Estimation and Validation of δ18O Global Distribution with Rayleigh-type Two-Dimensional Isotope Circulation Model

    NASA Astrophysics Data System (ADS)

    Yoshimura, K.; Oki, T.; Ohte, N.; Kanae, S.; Ichiyanagi, K.

    2004-12-01

    A simple water isotope circulation model on a global scale that includes a Rayleigh equation and the use of "realistic" external meteorological forcings estimates short-term variability of precipitation δ18O. The results are validated against Global Network of Isotopes in Precipitation (GNIP) monthly observations and against daily observations at three sites in Thailand. The good agreement highlights the importance of large-scale transport and mixing of vapor masses, rather than in-cloud micro processes, as a control factor for the spatial and temporal variability of precipitation isotopes. It also indicates the usefulness of the model and the isotope observation databases for evaluation of two-dimensional atmospheric water circulation fields in forcing datasets. In this regard, two offline simulations for 1978-1993 with major reanalyses, i.e., NCEP and ERA15, were implemented. The results show that over Europe ERA15 better matched observations at both monthly and interannual time scales, mainly owing to better precipitation fields in ERA15, while in the tropics both produced similarly accurate isotopic fields. The isotope analyses diagnose the accuracy of two-dimensional water circulation fields in datasets with a particular focus on precipitation processes.

  7. Dimensions of osteoarthritis self-management.

    PubMed

    Prior, Kirsty N; Bond, Malcolm J

    2004-06-01

    Our aims were to determine whether a taxonomy of self-management strategies for osteoarthritis could be identified, and whether the resultant dimensions of such a taxonomy demonstrate predictable relationships with health status indices. Participants (n = 117) from community-based self-help groups and a general rheumatology outpatient clinic completed a self-management inventory consisting of 11 items, answered for both the past 7 days and a day on which symptoms were worse than usual. Duration of symptoms, level of pain, perceived functional ability and self-rated health were recorded as indicators of health status. Three essentially identical factors were obtained for both past 7 days and worse day items. Resultant scales were labeled passive, complementary and active, respectively. Correlations with health status measures provided modest evidence for the construct validity of these self-management scales. Compared with a simple aggregate score based on the total number of strategies used, the scales provided a clearer understanding of the relationship between self-management and health. The study provided a useful extension to existing research, addressing a number of shortcomings identified by previous researchers. The identified self-management dimensions offered a greater insight into the self-management choices of patients. Suggestions for further improvements to the measurement of self-management are outlined.

  8. Connectivity in Agricultural Landscapes; Do We Need More than a DEM?

    NASA Astrophysics Data System (ADS)

    Foster, I.; Boardman, J.; Favis-Mortlock, D.

    2017-12-01

    DEMs at a scale of metres to kilometres form the basis for many erosion models, in part because such data have long been available and published by national mapping agencies, such as the UK Ordnance Survey, and also because modelling gradient and flow pathways relative to topography is easily executed within a GIS. That most landscape connectivity is not driven by topography is a simple point that modellers appear reluctant to accept, or find too challenging to model, yet there is an urgent need to rethink how landscapes function and what drives connectivity laterally and longitudinally at different spatial and temporal scales within agricultural landscapes. Landscape connectivity is driven by a combination of natural and anthropogenic factors that can enhance, reduce or eliminate connectivity at different timescales. In this paper we explore a range of data sources that can be used to build a detailed picture of landscape connectivity at different scales. From a number of case studies we combine the use of maps, lidar data, field mapping, lake and floodplain coring, fingerprinting and process monitoring to identify lateral and longitudinal connectivity and the way in which these have changed through time.

  9. From single cilia to collective waves in human airway ciliated tissues

    NASA Astrophysics Data System (ADS)

    Cicuta, Pietro; Chioccioli, Maurizio; Feriani, Luigi; Pellicciotta, Nicola; Kotar, Jurij

    I will present experimental results on activity of motile cilia on various scales: from waveforms on individual cilia to the synchronised motion in cilia carpets of airway cells. Model synthetic experiments have given us an understanding of how cilia could couple with each other through forces transmitted by the fluid, and thus coordinate to beat into well organized waves (previous work is reviewed in Annu. Rev. Condens. Matter Phys. 7, 1-26 (2016)). Working with live imaging of airway human cells at the different scales, we can now test whether the biological system satisfies the "simple" behavior expected of the fluid flow coupling, or if other factors of mechanical forces transmission need to be accounted for. In general being able to link from the scale of molecular biological activity up to the phenomenology of collective dynamics requires to understand the relevant physical mechanism. This understanding then allows informed diagnostics (and perhaps therapeutic) approaches to a variety of diseases where mucociliary clearance in the airways is compromised. We have started exploring particularly cystic fibrosis, where the rheological properties of the mucus are affected and prevent efficient cilia synchronization. ERC Grant HydroSync.

  10. Use of satellite and modeled soil moisture data for predicting event soil loss at plot scale

    NASA Astrophysics Data System (ADS)

    Todisco, F.; Brocca, L.; Termite, L. F.; Wagner, W.

    2015-09-01

    The potential of coupling soil moisture and a Universal Soil Loss Equation-based (USLE-based) model for event soil loss estimation at plot scale is carefully investigated at the Masse area, in central Italy. The derived model, named Soil Moisture for Erosion (SM4E), is applied by considering the unavailability of in situ soil moisture measurements, by using the data predicted by a soil water balance model (SWBM) and derived from satellite sensors, i.e., the Advanced SCATterometer (ASCAT). The soil loss estimation accuracy is validated using in situ measurements in which event observations at plot scale are available for the period 2008-2013. The results showed that including soil moisture observations in the event rainfall-runoff erosivity factor of the USLE enhances the capability of the model to account for variations in event soil losses, the soil moisture being an effective alternative to the estimated runoff, in the prediction of the event soil loss at Masse. The agreement between observed and estimated soil losses (through SM4E) is fairly satisfactory with a determination coefficient (log-scale) equal to ~ 0.35 and a root mean square error (RMSE) of ~ 2.8 Mg ha-1. These results are particularly significant for the operational estimation of soil losses. Indeed, currently, soil moisture is a relatively simple measurement at the field scale and remote sensing data are also widely available on a global scale. Through satellite data, there is the potential of applying the SM4E model for large-scale monitoring and quantification of the soil erosion process.
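
The coupling described above can be caricatured as a USLE-type event estimate whose rainfall-runoff erosivity term is modulated by antecedent soil moisture. The exact SM4E formulation is not given in the abstract, so the weighting below (a simple power of relative saturation) and all parameter names are purely illustrative:

```python
def usle_event_soil_loss(erosivity, sat, k, ls, c, p, alpha=1.0):
    # A = (R_e * sat**alpha) * K * LS * C * P  [Mg/ha], with the event
    # erosivity R_e weighted by relative soil saturation sat in [0, 1]
    if not 0.0 <= sat <= 1.0:
        raise ValueError("sat must be a relative saturation in [0, 1]")
    return erosivity * sat ** alpha * k * ls * c * p
```

The design point is that the moisture term stands in for the event runoff factor: wetter antecedent conditions produce more runoff and hence larger predicted event soil loss for the same rainfall erosivity.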

  11. Use of satellite and modelled soil moisture data for predicting event soil loss at plot scale

    NASA Astrophysics Data System (ADS)

    Todisco, F.; Brocca, L.; Termite, L. F.; Wagner, W.

    2015-03-01

    The potential of coupling soil moisture and a USLE-based model for event soil loss estimation at plot scale is carefully investigated at the Masse area, in Central Italy. The derived model, named Soil Moisture for Erosion (SM4E), is applied by considering the unavailability of in situ soil moisture measurements, by using the data predicted by a soil water balance model (SWBM) and derived from satellite sensors, i.e. the Advanced SCATterometer (ASCAT). The soil loss estimation accuracy is validated using in situ measurements in which event observations at plot scale are available for the period 2008-2013. The results showed that including soil moisture observations in the event rainfall-runoff erosivity factor of the RUSLE/USLE enhances the capability of the model to account for variations in event soil losses, the soil moisture being an effective alternative to the estimated runoff in the prediction of the event soil loss at Masse. The agreement between observed and estimated soil losses (through SM4E) is fairly satisfactory, with a determination coefficient (log-scale) equal to ~ 0.35 and a root-mean-square error (RMSE) of ~ 2.8 Mg ha-1. These results are particularly significant for the operational estimation of soil losses. Indeed, currently, soil moisture is a relatively simple measurement at the field scale and remote sensing data are also widely available on a global scale. Through satellite data, there is the potential of applying the SM4E model for large-scale monitoring and quantification of the soil erosion process.

  12. Spectral scaling of the aftershocks of the Tocopilla 2007 earthquake in northern Chile

    NASA Astrophysics Data System (ADS)

    Lancieri, M.; Madariaga, R.; Bonilla, F.

    2012-04-01

    We study the scaling of spectral properties of a set of 68 aftershocks of the 2007 November 14 Tocopilla (M 7.8) earthquake in northern Chile. These are all subduction events with similar reverse faulting focal mechanisms that were recorded by a homogeneous network of continuously recording strong motion instruments. The seismic moment and the corner frequency are obtained assuming that the aftershocks satisfy an inverse omega-square spectral decay; radiated energy is computed by integrating the square velocity spectrum corrected for attenuation at high frequencies and for the finite bandwidth effect. Using a graphical approach, we test the scaling of the seismic spectrum and the scale invariance of the apparent stress drop with earthquake size. To test whether the Tocopilla aftershocks scale with a single parameter, we introduce a non-dimensional number, Cr, that should be constant if earthquakes are self-similar. For the Tocopilla aftershocks, Cr varies by a factor of 2. More interestingly, Cr for the aftershocks is close to 2, the value that is expected for events that are approximately modelled by a circular crack. Thus, in spite of obvious differences in waveforms, the aftershocks of the Tocopilla earthquake are self-similar. The main shock is different because its records contain large near-field waves. Finally, we investigate the scaling of energy release rate, Gc, with slip. We estimated Gc from our previous estimates of the source parameters, assuming a simple circular crack model. We find that Gc values scale with slip, and are in good agreement with those found by Abercrombie and Rice for the Northridge aftershocks.
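
The circular-crack scaling used in such studies relates seismic moment and corner frequency to a static stress drop via the source radius. A minimal sketch (k = 0.21 is Madariaga's S-wave constant for a circular crack; the shear-wave speed default is an assumed typical value, and none of the numbers come from this paper):

```python
def stress_drop(m0_nm, fc_hz, beta_m_per_s=3500.0, k=0.21):
    # Circular-crack source radius r = k * beta / fc, and static
    # stress drop = 7 * M0 / (16 * r**3), in Pa for M0 in N*m
    r = k * beta_m_per_s / fc_hz
    return 7.0 * m0_nm / (16.0 * r ** 3)
```

Self-similarity (constant stress drop) then corresponds to M0 ∝ fc⁻³: an event with one eighth the moment and twice the corner frequency has the same stress drop.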

  13. Construction and validation of a measure of integrative well-being in seven languages: The Pemberton Happiness Index

    PubMed Central

    2013-01-01

    Purpose We introduce the Pemberton Happiness Index (PHI), a new integrative measure of well-being in seven languages, detailing the validation process and presenting psychometric data. The scale includes eleven items related to different domains of remembered well-being (general, hedonic, eudaimonic, and social well-being) and ten items related to experienced well-being (i.e., positive and negative emotional events that possibly happened the day before); the sum of these items produces a combined well-being index. Methods A distinctive characteristic of this study is that to construct the scale, an initial pool of items, covering the remembered and experienced well-being domains, were subjected to a complete selection and validation process. These items were based on widely used scales (e.g., PANAS, Satisfaction With Life Scale, Subjective Happiness Scale, and Psychological Well-Being Scales). Both the initial items and reference scales were translated into seven languages and completed via Internet by participants (N = 4,052) aged 16 to 60 years from nine countries (Germany, India, Japan, Mexico, Russia, Spain, Sweden, Turkey, and USA). Results Results from this initial validation study provided very good support for the psychometric properties of the PHI (i.e., internal consistency, a single-factor structure, and convergent and incremental validity). Conclusions Given the PHI’s good psychometric properties, this simple and integrative index could be used as an instrument to monitor changes in well-being. We discuss the utility of this integrative index to explore well-being in individuals and communities. PMID:23607679

  14. On determinant representations of scalar products and form factors in the SoV approach: the XXX case

    NASA Astrophysics Data System (ADS)

    Kitanine, N.; Maillet, J. M.; Niccoli, G.; Terras, V.

    2016-03-01

    In the present article we study the form factors of quantum integrable lattice models solvable by the separation of variables (SoVs) method. It was recently shown that these models admit universal determinant representations for the scalar products of the so-called separate states (a class which includes in particular all the eigenstates of the transfer matrix). These results permit to obtain simple expressions for the matrix elements of local operators (form factors). However, these representations have been obtained up to now only for the completely inhomogeneous versions of the lattice models considered. In this article we give a simple algebraic procedure to rewrite the scalar products (and hence the form factors) for the SoV related models as Izergin or Slavnov type determinants. This new form leads to simple expressions for the form factors in the homogeneous and thermodynamic limits. To make the presentation of our method clear, we have chosen to explain it first for the simple case of the XXX Heisenberg chain with anti-periodic boundary conditions. We would nevertheless like to stress that the approach presented in this article applies as well to a wide range of models solved in the SoV framework.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kunal, K.; Aluru, N. R., E-mail: aluru@illinois.edu

    We investigate the effect of size on intrinsic dissipation in nano-structures. We use molecular dynamics simulation and study dissipation under two different modes of deformation: a stretching and a bending mode. In the case of stretching deformation (with a uniform strain field), dissipation takes place due to the Akhiezer mechanism. For bending deformation, in addition to the Akhiezer mechanism, the spatial temperature gradient also plays a role in the process of entropy generation. Interestingly, we find that the bending modes have a higher Q factor in comparison with the stretching deformation (under the same frequency of operation). Furthermore, with the decrease in size, the difference in Q factor between the bending and stretching deformation becomes more pronounced. The lower dissipation for the case of bending deformation is explained to be due to the surface scattering of phonons. A simple model, for phonon dynamics under an oscillating strain field, is considered to explain the observed variation in dissipation rate. We also studied the scaling of Q factor with initial tension in a beam under flexure. We develop a continuum theory to explain the observed results.

  16. Causation mechanism analysis for haze pollution related to vehicle emission in Guangzhou, China by employing the fault tree approach.

    PubMed

    Huang, Weiqing; Fan, Hongbo; Qiu, Yongfu; Cheng, Zhiyu; Xu, Pingru; Qian, Yu

    2016-05-01

    Recently, China has frequently experienced large-scale, severe and persistent haze pollution due to surging urbanization and industrialization and rapid growth in the number of motor vehicles and in energy consumption. Vehicle emissions, resulting from the consumption of large amounts of fossil fuel, are no doubt a critical factor in haze pollution. This work focuses on the causation mechanism of haze pollution related to vehicle emissions for Guangzhou city, employing the Fault Tree Analysis (FTA) method for the first time. With the establishment of the fault tree system "Haze weather-Vehicle exhausts explosive emission", all of the important risk factors are discussed and identified using this deductive FTA method. Qualitative and quantitative assessments of the fault tree system are carried out based on the structure, probability and critical importance degree analysis of the risk factors. The study may provide a new, simple and effective tool/strategy for causation mechanism analysis and risk management of haze pollution in China. Copyright © 2016 Elsevier Ltd. All rights reserved.
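
In FTA, once the fault tree's gates and basic-event probabilities are in hand, the top-event probability combines in the standard way (assuming independent basic events; the probabilities in the tests are illustrative, not values from the paper):

```python
def or_gate(probs):
    # OR gate: the event occurs if any independent basic event occurs,
    # P = 1 - product(1 - p_i)
    q = 1.0
    for p in probs:
        q *= 1.0 - p
    return 1.0 - q

def and_gate(probs):
    # AND gate: all independent basic events must occur, P = product(p_i)
    q = 1.0
    for p in probs:
        q *= p
    return q
```

Nesting these two gate functions over a tree of basic events reproduces the quantitative (probability) part of a fault tree assessment.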

  17. Impact of varying area of polluting surface materials on perceived air quality.

    PubMed

    Sakr, W; Knudsen, H N; Gunnarsen, L; Haghighat, F

    2003-06-01

    A laboratory study was performed to investigate the impact of the concentration of pollutants in the air on emissions from building materials. Building materials were placed in ventilated test chambers. The experimental set-up allowed the concentration of pollution in the exhaust air to be changed either by diluting exhaust air with clean air (changing the dilution factor) or by varying the area of the material inside the chamber when keeping the ventilation rate constant (changing the area factor). Four different building materials and three combinations of two or three building materials were studied in ventilated small-scale test chambers. Each individual material and three of their combinations were examined at four different dilution factors and four different area factors. An untrained panel of 23 subjects assessed the air quality from the chambers. The results show that a certain increase in dilution improves the perceived air quality more than a similar decrease in area. The reason for this may be that the emission rate of odorous pollutants increases when the concentration in the chamber decreases. The results demonstrate that, in some cases the effect of increased ventilation on the air quality may be less than expected from a simple dilution model.

  18. Challenges in converting among log scaling methods.

    Treesearch

    Henry Spelter

    2003-01-01

    The traditional method of measuring log volume in North America is the board foot log scale, which uses simple assumptions about how much of a log's volume is recoverable. This underestimates the true recovery potential and leads to difficulties in comparing volumes measured with the traditional board foot system and those measured with the cubic scaling systems...

  19. Electrochemistry at Nanometer-Scaled Electrodes

    ERIC Educational Resources Information Center

    Watkins, John J.; Bo Zhang; White, Henry S.

    2005-01-01

    Electrochemical studies using nanometer-scaled electrodes are leading to better insights into electrochemical kinetics, interfacial structure, and chemical analysis. Various methods of preparing electrodes of nanometer dimensions are discussed and a few examples of their behavior and applications in relatively simple electrochemical experiments…

  20. An extinction scale-expansion unit for the Beckman DK2 spectrophotometer

    PubMed Central

    Dixon, M.

    1967-01-01

    The paper describes a simple but accurate unit for the Beckman DK2 recording spectrophotometer, whereby any 0·1 section of the extinction ('absorbance') scale may be expanded tenfold, while preserving complete linearity in extinction. PMID:6048800

  1. A simple predictive model for the structure of the oceanic pycnocline

    PubMed

    Gnanadesikan

    1999-03-26

    A simple theory for the large-scale oceanic circulation is developed, relating pycnocline depth, Northern Hemisphere sinking, and low-latitude upwelling to pycnocline diffusivity and Southern Ocean winds and eddies. The results show that Southern Ocean processes help maintain the global ocean structure and that pycnocline diffusion controls low-latitude upwelling.

  2. Pasta production: complexity in defining processing conditions for reference trials and quality assessment models

    USDA-ARS?s Scientific Manuscript database

    Pasta is a simple food made from water and durum wheat (Triticum turgidum subsp. durum) semolina. As pasta increases in popularity, studies have endeavored to analyze the attributes that contribute to high quality pasta. Despite being a simple food, the laboratory scale analysis of pasta quality is ...

  3. A Simple, Small-Scale Lego Colorimeter with a Light-Emitting Diode (LED) Used as Detector

    ERIC Educational Resources Information Center

    Asheim, Jonas; Kvittingen, Eivind V.; Kvittingen, Lise; Verley, Richard

    2014-01-01

    This article describes how to construct a simple, inexpensive, and robust colorimeter from a few Lego bricks, in which one light-emitting diode (LED) is used as a light source and a second LED as a light detector. The colorimeter is suited to various grades and curricula.

  4. Constructing the tree-level Yang-Mills S-matrix using complex factorization

    NASA Astrophysics Data System (ADS)

    Schuster, Philip C.; Toro, Natalia

    2009-06-01

    A remarkable connection between BCFW recursion relations and constraints on the S-matrix was made by Benincasa and Cachazo in 0705.4305, who noted that mutual consistency of different BCFW constructions of four-particle amplitudes generates non-trivial (but familiar) constraints on three-particle coupling constants; these include gauge invariance, the equivalence principle, and the absence of non-trivial couplings for spins > 2. These constraints can also be derived with weaker assumptions, by demanding the existence of four-point amplitudes that factorize properly in all unitarity limits with complex momenta. From this starting point, we show that the BCFW prescription can be interpreted as an algorithm for fully constructing a tree-level S-matrix, and that complex factorization of general BCFW amplitudes follows from the factorization of four-particle amplitudes. The allowed set of BCFW deformations is identified, formulated entirely as a statement on the three-particle sector, and using only complex factorization as a guide. Consequently, our analysis based on the physical consistency of the S-matrix is entirely independent of field theory. We analyze the case of pure Yang-Mills, and outline a proof for gravity. For Yang-Mills, we also show that the well-known scaling behavior of BCFW-deformed amplitudes at large z is a simple consequence of factorization. For gravity, factorization in certain channels requires asymptotic behavior ~ 1/z^2.
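    For orientation, the BCFW deformation referred to above shifts two external momenta by a complex parameter z. In a standard textbook form (conventions vary between references; this is not quoted from the paper itself):

    ```latex
    % BCFW shift: two external momenta are deformed by a complex parameter z,
    % with q lightlike and orthogonal to both, so momentum conservation and
    % the on-shell conditions hold for all z:
    \hat{p}_i(z) = p_i + z\,q, \qquad \hat{p}_j(z) = p_j - z\,q,
    \qquad q^2 = q\cdot p_i = q\cdot p_j = 0.
    % If the deformed amplitude vanishes as z \to \infty, Cauchy's theorem gives
    % the undeformed amplitude as a sum over factorization channels I and
    % internal helicities h:
    A_n = \sum_{I,\,h} \hat{A}_L^{\,h}(z_I)\,\frac{1}{P_I^2}\,\hat{A}_R^{\,-h}(z_I).
    ```

    The large-z falloff condition is precisely the scaling behavior the abstract shows to follow from factorization.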

  5. Sensitivity of echo enabled harmonic generation to sinusoidal electron beam energy structure

    DOE PAGES

    Hemsing, E.; Garcia, B.; Huang, Z.; ...

    2017-06-19

    Here, we analytically examine the bunching factor spectrum of a relativistic electron beam with sinusoidal energy structure that then undergoes an echo-enabled harmonic generation (EEHG) transformation to produce high harmonics. The performance is found to be described primarily by a simple scaling parameter. The dependence of the bunching amplitude on fluctuations of critical parameters is derived analytically and compared with simulations. Where applicable, EEHG is also compared with high gain harmonic generation (HGHG), and we find that EEHG is generally less sensitive to several types of energy structure. In the presence of intermediate frequency modulations like those produced by the microbunching instability, EEHG has a substantially narrower intrinsic bunching pedestal.

  6. Power scaling limits in high power fiber amplifiers due to transverse mode instability, thermal lensing, and fiber mechanical reliability

    NASA Astrophysics Data System (ADS)

    Zervas, Michalis N.

    2018-02-01

    We introduce a simple formula for the mode-field diameter shrinkage due to heat load in fiber amplifiers, and use it to compare the traditional thermal-lensing power limit (PTL) with a newly developed transverse mode instability (TMI) power limit (PTMI), giving a fixed ratio PTMI/PTL ≈ 0.6, in very good agreement with experiment. Using a failure-in-time analysis, we also introduce a new power-limiting factor due to the mechanical reliability of bent fibers. For diode (tandem) pumping, power limits of 28 kW (52 kW) are predicted. Setting a practical limit on the maximum core diameter of 35 μm reduces these limits to 15 kW (25 kW).

  7. The internal dynamics of slowly rotating biological systems

    NASA Technical Reports Server (NTRS)

    Kessler, John O.

    1992-01-01

    The structure and dynamics of biological systems are complex. Steady gravitational forces acting on organisms cause hydrostatic pressure gradients, stress in solid components, and ordering of movable subsystems according to density. Rotation induces internal motion; it also stresses and/or deforms regions of attachment and containment. The disrupted gravitationally ordered layers of movable entities are replaced by their orbital movements. New ordering geometries may also arise, especially when fluids of various densities are present. One novel result concerns the application of scheduled variation of clinostat rotation rates to the management of intracellular particle trajectories. Rotation and its consequences are discussed in terms of scaling factors for parameters such as time, derived from mathematical models for simple rotating mechanical systems.

  8. Open Source GIS Connectors to NASA GES DISC Satellite Data

    NASA Technical Reports Server (NTRS)

    Kempler, Steve; Pham, Long; Yang, Wenli

    2014-01-01

    The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) houses a suite of high spatiotemporal resolution GIS data including satellite-derived and modeled precipitation, air quality, and land surface parameter data. The data are valuable to various GIS research and applications at regional, continental, and global scales. On the other hand, many GIS users, especially those from the ArcGIS community, have difficulties in obtaining, importing, and using our data due to factors such as the variety of data products, the complexity of satellite remote sensing data, and the data encoding formats. We introduce a simple open source ArcGIS data connector that significantly simplifies the access and use of GES DISC data in ArcGIS.

  9. Dynamical Aspects of Quasifission Process in Heavy-Ion Reactions

    NASA Astrophysics Data System (ADS)

    Knyazheva, G. N.; Itkis, I. M.; Kozulin, E. M.

    2015-06-01

    The study of mass-energy distributions of binary fragments obtained in the reactions of 36S, 48Ca, 58Fe and 64Ni ions with 232Th, 238U, 244Pu and 248Cm targets at energies below and above the Coulomb barrier is presented. For all the reactions, the main component of the distributions corresponds to the asymmetric mass division typical of the asymmetric quasifission process. To describe the quasifission mass distribution, a simple method based on the driving potential of the system and a time-dependent mass drift has been proposed. This procedure allows the QF time scale to be estimated from the measured mass distributions. It has been found that the QF time decreases exponentially as the reaction Coulomb factor Z1Z2 increases.

  10. Towards the computation of time-periodic inertial range dynamics

    NASA Astrophysics Data System (ADS)

    van Veen, L.; Vela-Martín, A.; Kawahara, G.

    2018-04-01

    We explore the possibility of computing simple invariant solutions, like travelling waves or periodic orbits, in Large Eddy Simulation (LES) on a periodic domain with constant external forcing. The absence of material boundaries and the simple forcing mechanism make this system a comparatively simple target for the study of turbulent dynamics through invariant solutions. We show that, in spite of the application of eddy viscosity, the computations are still rather challenging and must be performed on GPUs rather than conventional CPUs. We investigate the onset of turbulence in this system by means of bifurcation analysis, and present a long-period, large-amplitude unstable periodic orbit that is filtered from a turbulent time series. Although this orbit is computed on a coarse grid, with only a small separation between the integral scale and the LES filter length, the periodic dynamics seem to capture a regeneration process of the large-scale vortices.

  11. Optical identification using imperfections in 2D materials

    NASA Astrophysics Data System (ADS)

    Cao, Yameng; Robson, Alexander J.; Alharbi, Abdullah; Roberts, Jonathan; Woodhead, Christopher S.; Noori, Yasir J.; Bernardo-Gavito, Ramón; Shahrjerdi, Davood; Roedig, Utz; Fal'ko, Vladimir I.; Young, Robert J.

    2017-12-01

    The ability to uniquely identify an object or device is important for authentication. Imperfections, locked into structures during fabrication, can be used to provide a fingerprint that is challenging to reproduce. In this paper, we propose a simple optical technique to read unique information from nanometer-scale defects in 2D materials. Imperfections created during crystal growth or fabrication lead to spatial variations in the bandgap of 2D materials that can be characterized through photoluminescence measurements. We show that a simple setup involving an angle-adjustable transmission filter, simple optics, and a CCD camera can capture spatially dependent photoluminescence to produce complex maps of unique information from 2D monolayers. Atomic force microscopy is used to verify the origin of the optical signature, demonstrating that it results from nanometer-scale imperfections. This approach to optical identification with 2D materials could be employed as a robust security measure to prevent counterfeiting.

  12. Composite annotations: requirements for mapping multiscale data and models to biomedical ontologies

    PubMed Central

    Cook, Daniel L.; Mejino, Jose L. V.; Neal, Maxwell L.; Gennari, John H.

    2009-01-01

    Current methods for annotating biomedical data resources rely on simple mappings between data elements and the contents of a variety of biomedical ontologies and controlled vocabularies. Here we point out that such simple mappings are inadequate for large-scale multiscale, multidomain integrative “virtual human” projects. For such integrative challenges, we describe a “composite annotation” schema that is simple yet sufficiently extensible for mapping the biomedical content of a variety of data sources and biosimulation models to available biomedical ontologies. PMID:19964601

  13. A simple and low-cost platform technology for producing pexiganan antimicrobial peptide in E. coli.

    PubMed

    Zhao, Chun-Xia; Dwyer, Mirjana Dimitrijev; Yu, Alice Lei; Wu, Yang; Fang, Sheng; Middelberg, Anton P J

    2015-05-01

    Antimicrobial peptides, as a new class of antibiotics, have generated tremendous interest as potential alternatives to classical antibiotics. However, the large-scale production of antimicrobial peptides remains a significant challenge. This paper reports a simple and low-cost chromatography-free platform technology for producing antimicrobial peptides in Escherichia coli (E. coli). A fusion protein comprising a variant of the helical biosurfactant protein DAMP4 and the known antimicrobial peptide pexiganan is designed by joining the two polypeptides, at the DNA level, via an acid-sensitive cleavage site. The resulting DAMP4(var)-pexiganan fusion protein expresses at high level and solubility in recombinant E. coli, and a simple heat-purification method was applied to disrupt cells and deliver high-purity DAMP4(var)-pexiganan protein. Simple acid cleavage successfully separated the DAMP4 variant protein and the antimicrobial peptide. Antimicrobial activity tests confirmed that the bio-produced antimicrobial peptide has the same antimicrobial activity as the equivalent product made by conventional chemical peptide synthesis. This simple and low-cost platform technology can be easily adapted to produce other valuable peptide products, and opens a new manufacturing approach for producing antimicrobial peptides at large scale using the tools and approaches of biochemical engineering. © 2014 Wiley Periodicals, Inc.

  14. Statistical self-similarity of width function maxima with implications to floods

    USGS Publications Warehouse

    Veitzer, S.A.; Gupta, V.K.

    2001-01-01

    Recently a new theory of random self-similar river networks, called the RSN model, was introduced to explain empirical observations regarding the scaling properties of distributions of various topologic and geometric variables in natural basins. The RSN model predicts that such variables exhibit statistical simple scaling when indexed by Horton-Strahler order. The average side-tributary structure of RSN networks also exhibits Tokunaga-type self-similarity, which is widely observed in nature. We examine the scaling structure of distributions of the maximum of the width function for RSNs for nested, complete Strahler basins by performing ensemble simulations. The maximum of the width function exhibits distributional simple scaling, when indexed by Horton-Strahler order, for both RSNs and natural river networks extracted from digital elevation models (DEMs). We also test a power-law relationship between Horton ratios for the maximum of the width function and drainage areas. These results represent first steps in formulating a comprehensive physical-statistical theory of floods at multiple space-time scales for RSNs as discrete hierarchical branching structures. © 2001 Published by Elsevier Science Ltd.
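    Statistical simple scaling, as used above, means the distributions indexed by Horton-Strahler order collapse onto a single base distribution after rescaling by a fixed ratio raised to the order. A toy illustration with synthetic data (an invented ratio and base distribution, not the RSN model itself):

    ```python
    import random

    random.seed(0)

    R = 2.5   # hypothetical Horton-type ratio between successive orders
    N = 50000

    def sample_max(order):
        """Toy variable obeying simple scaling: X_w equals R**w * Z in
        distribution, with Z a fixed base distribution (exponential here)."""
        return R ** order * random.expovariate(1.0)

    # Empirical means grow geometrically with order...
    means = [sum(sample_max(w) for _ in range(N)) / N for w in (1, 2, 3)]
    assert abs(means[1] / means[0] - R) < 0.1 * R
    assert abs(means[2] / means[1] - R) < 0.1 * R

    # ...while the rescaled variables X_w / R**w share one distribution (mean ~ 1).
    rescaled = [m / R ** w for m, w in zip(means, (1, 2, 3))]
    assert all(abs(m - 1.0) < 0.1 for m in rescaled)
    ```

    Testing whether empirical quantiles collapse in this way, order by order, is the essence of the distributional simple-scaling check described in the abstract.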

  15. Acoustic Treatment Design Scaling Methods. Volume 2; Advanced Treatment Impedance Models for High Frequency Ranges

    NASA Technical Reports Server (NTRS)

    Kraft, R. E.; Yu, J.; Kwan, H. W.

    1999-01-01

    The primary purpose of this study is to develop improved models for the acoustic impedance of treatment panels at high frequencies, for application to subscale treatment designs. Effects that cause significant deviation of the impedance from simple geometric scaling are examined in detail, an improved high-frequency impedance model is developed, and the improved model is correlated with high-frequency impedance measurements. Only single-degree-of-freedom honeycomb sandwich resonator panels with either perforated sheet or "linear" wiremesh faceplates are considered. The objective is to understand those effects that cause the simple single-degree-of-freedom resonator panels to deviate at the higher-scaled frequency from the impedance that would be obtained at the corresponding full-scale frequency. This will allow the subscale panel to be designed to achieve a specified impedance spectrum over at least a limited range of frequencies. An advanced impedance prediction model has been developed that accounts for some of the known effects at high frequency that have previously been ignored as a small source of error for full-scale frequency ranges.

  16. Impact force as a scaling parameter

    NASA Technical Reports Server (NTRS)

    Poe, Clarence C., Jr.; Jackson, Wade C.

    1994-01-01

    The Federal Aviation Administration (FAR PART 25) requires that a structure carry ultimate load with nonvisible impact damage and carry 70 percent of limit flight loads with discrete damage. The Air Force has similar criteria (MIL-STD-1530A). Both civilian and military structures are designed by a building block approach. First, critical areas of the structure are determined, and potential failure modes are identified. Then, a series of representative specimens are tested that will fail in those modes. The series begins with tests of simple coupons, progresses through larger and more complex subcomponents, and ends with a test on a full-scale component, hence the term 'building block.' In order to minimize testing, analytical models are needed to scale impact damage and residual strength from the simple coupons to the full-scale component. Using experiments and analysis, the present paper illustrates that impact damage can be better understood and scaled using impact force than just kinetic energy. The plate parameters considered are size and thickness, boundary conditions, and material, and the impact parameters are mass, shape, and velocity.

  17. On identifying relationships between the flood scaling exponent and basin attributes.

    PubMed

    Medhi, Hemanta; Tripathi, Shivam

    2015-07-01

    Floods are known to exhibit self-similarity and follow scaling laws that form the basis of regional flood frequency analysis. However, the relationship between basin attributes and the scaling behavior of floods is still not fully understood. Identifying these relationships is essential for drawing connections between hydrological processes in a basin and the flood response of the basin. Existing studies mostly rely on simulation models to draw these connections. This paper proposes a new methodology that draws connections between basin attributes and flood scaling exponents by using observed data. In the proposed methodology, a region-of-influence approach is used to delineate homogeneous regions for each gaging station. Ordinary least squares regression is then applied to estimate the flood scaling exponent for each homogeneous region, and finally stepwise regression is used to identify basin attributes that affect flood scaling exponents. The effectiveness of the proposed methodology is tested by applying it to data from river basins in the United States. The results suggest that the flood scaling exponent is small for regions having (i) large abstractions from precipitation in the form of large soil moisture storage and high evapotranspiration losses, and (ii) large fractions of overland flow compared to base flow, i.e., regions with fast-responding basins. Analysis of simple scaling and multiscaling of floods showed evidence of simple scaling for regions in which snowfall dominates the total precipitation.
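    The middle step of the methodology, estimating a flood scaling exponent by ordinary least squares in log-log space, can be sketched as follows. The data and the exponent below are synthetic, invented purely to illustrate the fit:

    ```python
    import math
    import random

    random.seed(1)

    # Synthetic homogeneous region: flood quantile Q scales with drainage area A
    # as Q = c * A**theta (theta and c are hypothetical, for illustration only).
    theta_true, c_true = 0.6, 3.0
    areas = [random.uniform(10.0, 5000.0) for _ in range(200)]
    floods = [c_true * a ** theta_true * math.exp(random.gauss(0.0, 0.05))
              for a in areas]  # lognormal scatter around the power law

    # Ordinary least squares in log-log space recovers the scaling exponent.
    x = [math.log(a) for a in areas]
    y = [math.log(q) for q in floods]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    theta_hat = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
                 / sum((xi - xbar) ** 2 for xi in x))

    assert abs(theta_hat - theta_true) < 0.02
    ```

    In the paper, one such exponent is estimated per homogeneous region, and the exponents then become the response variable in the stepwise regression on basin attributes.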

  18. An avian model for the reversal of neurobehavioral teratogenicity with neural stem cells

    PubMed Central

    Dotan, Sharon; Pinkas, Adi; Slotkin, Theodore A.; Yanai, Joseph

    2010-01-01

    A fast and simple model which uses animals lower on the evolutionary scale is beneficial for developing procedures for the reversal of neurobehavioral teratogenicity with neural stem cells. Here, we established a procedure for the derivation of chick neural stem cells, establishing embryonic day (E) 10 as optimal for progression to neuronal phenotypes. Cells were obtained from the embryonic cerebral hemispheres and incubated for 5–7 days in enriched medium containing epidermal growth factor (EGF) and basic fibroblast growth factor (FGF2) according to a procedure originally developed for mice. A small percentage of the cells survived, proliferated and formed nestin-positive neurospheres. After removal of the growth factors to allow differentiation (5 days), 74% of the cells differentiated into all major lineages of the nervous system, including neurons (Beta III tubulin-positive, 54% of the total number of differentiated cells), astrocytes (GFAP-positive, 26%), and oligodendrocytes (O4-positive, 20%). These findings demonstrate that the cells were indeed neural stem cells. Next, the cells were transplanted in two allograft chick models: (1) direct cerebral transplantation to 24-hour-old chicks, followed by post-transplantation cell tracking at 24 hours, 6 days, and 14 days; and (2) intravenous transplantation to chick embryos on E13, followed by cell tracking on E19. With both methods, transplanted cells were found in the brain. The chick embryo provides a convenient, precisely timed and unlimited supply of neural progenitors for therapy by transplantation, as well as constituting a fast and simple model in which to evaluate the ability of neural stem cell transplantation to repair neural damage, steps that are critical for progress toward therapeutic applications. PMID:20211723

  19. Simple estimation of Förster Resonance Energy Transfer (FRET) orientation factor distribution in membranes.

    PubMed

    Loura, Luís M S

    2012-11-19

    Because of its acute sensitivity to distance on the nanometer scale, Förster resonance energy transfer (FRET) has found a large variety of applications in many fields of chemistry, physics, and biology. One important issue regarding the correct usage of FRET is its dependence on the donor-acceptor relative orientation, expressed as the orientation factor κ². Different donor/acceptor conformations can lead to values in the range 0 ≤ κ² ≤ 4. Because the characteristic distance for FRET, R0, is proportional to (κ²)^(1/6), uncertainties in the orientation factor are reflected in the quality of information that can be retrieved from a FRET experiment. In most cases, the average value of κ² corresponding to the dynamic isotropic limit (⟨κ²⟩ = 2/3) is used for computation of R0 and hence of donor-acceptor distances and acceptor concentrations. However, this can lead to significant error in unfavorable cases. This issue is more critical in membrane systems, because of their intrinsically anisotropic nature and their reduced fluidity in comparison with most common solvents. Here, a simple numerical simulation method for estimating the probability density function of κ² for membrane-embedded donor and acceptor fluorophores in the dynamic regime is presented. In its simplest form, the proposed procedure uses as input the most probable orientations of the donor and acceptor transition dipoles, obtained by experimental (including linear dichroism) or theoretical (such as molecular dynamics simulation) techniques. Optionally, information about the widths of the donor and/or acceptor angular distributions may be incorporated. The methodology is illustrated for special limiting cases and for common membrane FRET pairs.
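    The dynamic isotropic limit quoted above (⟨κ²⟩ = 2/3) is easy to verify by Monte Carlo. This sketch samples both dipoles isotropically, unlike the membrane-constrained orientation distributions the paper targets, using the standard expression κ = d·a − 3(d·r̂)(a·r̂) with r̂ the donor-acceptor unit vector:

    ```python
    import math
    import random

    random.seed(42)

    def random_unit_vector():
        """Uniform random direction on the unit sphere (normalized Gaussians)."""
        v = [random.gauss(0, 1) for _ in range(3)]
        n = math.sqrt(sum(c * c for c in v))
        return [c / n for c in v]

    def kappa_squared(d, a, r):
        """FRET orientation factor: kappa = d.a - 3 (d.r)(a.r), with r the
        unit vector joining donor and acceptor."""
        dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
        kappa = dot(d, a) - 3.0 * dot(d, r) * dot(a, r)
        return kappa * kappa

    # Dynamic isotropic limit: both dipoles fully disordered -> <kappa^2> = 2/3.
    r = [0.0, 0.0, 1.0]
    N = 100000
    mean_k2 = sum(kappa_squared(random_unit_vector(), random_unit_vector(), r)
                  for _ in range(N)) / N
    assert abs(mean_k2 - 2.0 / 3.0) < 0.02
    ```

    The method in the paper replaces the uniform sampling here with angular distributions centered on the most probable dipole orientations, which is what produces a membrane-specific κ² density rather than the isotropic average.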

  20. Single Image Super-Resolution Using Global Regression Based on Multiple Local Linear Mappings.

    PubMed

    Choi, Jae-Seok; Kim, Munchurl

    2017-03-01

    Super-resolution (SR) has become more vital because of its capability to generate high-quality ultra-high-definition (UHD) high-resolution (HR) images from low-resolution (LR) input images. Conventional SR methods entail high computational complexity, which makes them difficult to implement for up-scaling full-high-definition input images into UHD-resolution images. Nevertheless, our previous super-interpolation (SI) method showed a good compromise between peak signal-to-noise ratio (PSNR) performance and computational complexity. However, since SI only utilizes simple linear mappings, it may fail to precisely reconstruct HR patches with complex texture. In this paper, we present a novel SR method, which inherits the large-to-small patch conversion scheme from SI but uses global regression based on local linear mappings (GLM). Thus, our new SR method is called GLM-SI. In GLM-SI, each LR input patch is divided into 25 overlapped subpatches. Next, based on the local properties of these subpatches, 25 different local linear mappings are applied to the current LR input patch to generate 25 HR patch candidates, which are then regressed into one final HR patch using a global regressor. The local linear mappings are learned cluster-wise in our off-line training phase. The main contribution of this paper is as follows: previously, linear-mapping-based conventional SR methods, including SI, used only one simple yet coarse linear mapping per patch to reconstruct its HR version. On the contrary, for each LR input patch, our GLM-SI is the first to apply a combination of multiple local linear mappings, where each local linear mapping is found according to the local properties of the current LR patch. Therefore, it can better approximate nonlinear LR-to-HR mappings for HR patches with complex texture. Experimental results show that the proposed GLM-SI method outperforms most state-of-the-art methods, and shows comparable PSNR performance with much lower computational complexity when compared with a super-resolution method based on convolutional neural networks (SRCNN). Compared with the previous SI method, which is limited to a scale factor of 2, GLM-SI shows superior performance with an average PSNR gain of 0.79 dB, and can be used for scale factors of 3 or higher.
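    A minimal numerical sketch of the GLM-SI pipeline described above, with invented dimensions and random stand-ins for the learned local mappings and the global regressor (the paper uses 25 subpatches and cluster-wise offline training):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy dimensions (hypothetical; the paper uses 25 overlapped subpatches).
    n_local, d_lr, d_hr = 4, 16, 64

    # Stand-ins for mappings learned cluster-wise offline: one linear map
    # per subpatch type, each taking an LR patch to an HR patch candidate.
    local_maps = [rng.normal(size=(d_hr, d_lr)) for _ in range(n_local)]
    # Global regressor: weights combining the candidate HR patches into one output.
    global_w = np.full(n_local, 1.0 / n_local)

    def reconstruct(lr_patch):
        """Apply every local linear mapping to the LR patch, then regress the
        candidate HR patches into a single HR patch with the global weights."""
        candidates = np.stack([M @ lr_patch for M in local_maps])  # (n_local, d_hr)
        return global_w @ candidates

    hr = reconstruct(rng.normal(size=d_lr))
    assert hr.shape == (d_hr,)
    ```

    The combination of several cheap linear maps plus one global regression is what lets the method approximate a nonlinear LR-to-HR mapping while staying far cheaper than a convolutional network.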

  1. Job-Seeking Stress, Mental Health Problems, and the Role of Perceived Social Support in University Graduates in Korea.

    PubMed

    Lim, Ah Young; Lee, Seung-Hee; Jeon, Yeongju; Yoo, Rankyung; Jung, Hee-Yeon

    2018-05-07

    Increases in unemployment and suicide in the young Korean population have recently become major social concerns in the country. The purpose of this study was to examine mental health status in young job seekers and identify sociodemographic factors related to job-seeking stress, depression, and suicidal ideation. We also explored the mediating effect of depression on the relationship between job-seeking stress and suicidal ideation and examined whether social support moderated this effect. In total, 124 university graduates completed the Job-Seeking Stress Scale, Beck Depression Inventory-II, Beck Scale for Suicide Ideation, and Multidimensional Scale of Perceived Social Support. Descriptive statistics were calculated for participants' general characteristics, and t-tests or analyses of variance, correlation analysis, simple mediation analysis, and mediated moderation analysis were performed. Of the 124 participants, 39.5% and 15.3% exhibited clinical levels of depression and suicidal ideation, respectively. Sociodemographic factors (i.e., sex, academic major, educational expenses loan, and willingness to accept irregular employment) were associated with job-seeking stress, depression, and suicidal ideation. Women and graduates who were willing to accept irregular employment exhibited high levels of job-seeking stress, depression, and suicidal ideation. Job-seeking stress affected suicidal ideation via depression, and perceived social support moderated the effect of job-seeking stress on depression and the effect of depression on suicidal ideation. The results suggest that depression management and interventions are urgently required for young job seekers, and social support should be provided to assist them both emotionally and economically.
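    The simple mediation analysis reported above (job-seeking stress → depression → suicidal ideation) can be sketched on synthetic data with the product-of-coefficients approach. All path coefficients below are invented for illustration and bear no relation to the study's estimates:

    ```python
    import random

    random.seed(7)

    # Synthetic data in the spirit of a simple mediation model:
    # stress (X) raises depression (M), which in turn raises ideation (Y).
    n = 5000
    X = [random.gauss(0, 1) for _ in range(n)]
    M = [0.5 * x + random.gauss(0, 1) for x in X]                       # path a = 0.5
    Y = [0.4 * m + 0.1 * x + random.gauss(0, 1) for m, x in zip(M, X)]  # b = 0.4, c' = 0.1

    def ols_slope(x, y):
        """Simple-regression slope of y on x."""
        n = len(x)
        xb, yb = sum(x) / n, sum(y) / n
        return (sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
                / sum((xi - xb) ** 2 for xi in x))

    a = ols_slope(X, M)            # path a: X -> M
    # Path b is the coefficient of M in the regression of Y on M and X;
    # by Frisch-Waugh it equals the slope between the X-residualized variables.
    c_total = ols_slope(X, Y)
    resid_M = [m - a * x for m, x in zip(M, X)]
    resid_Y = [y - c_total * x for y, x in zip(Y, X)]
    b = ols_slope(resid_M, resid_Y)

    indirect = a * b   # mediated (indirect) effect of X on Y through M
    assert abs(a - 0.5) < 0.1
    assert abs(b - 0.4) < 0.1
    assert indirect > 0.1
    ```

    In practice the study's mediated-moderation analysis additionally lets social support modify the a and b paths; the product a·b above is only the unconditional indirect effect.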

  2. Grain Yield Observations Constrain Cropland CO2 Fluxes Over Europe

    NASA Astrophysics Data System (ADS)

    Combe, M.; de Wit, A. J. W.; Vilà-Guerau de Arellano, J.; van der Molen, M. K.; Magliulo, V.; Peters, W.

    2017-12-01

    Carbon exchange over croplands plays an important role in the European carbon cycle over daily to seasonal time scales. A better description of this exchange in terrestrial biosphere models—most of which currently treat crops as unmanaged grasslands—is needed to improve atmospheric CO2 simulations. In the framework we present here, we model gross European cropland CO2 fluxes with a crop growth model constrained by grain yield observations. Our approach follows a two-step procedure. In the first step, we calculate day-to-day crop carbon fluxes and pools with the WOrld FOod STudies (WOFOST) model. A scaling factor of crop growth is optimized regionally by minimizing the final grain carbon pool difference to crop yield observations from the Statistical Office of the European Union. In a second step, we re-run our WOFOST model for the full European 25 × 25 km gridded domain using the optimized scaling factors. We combine our optimized crop CO2 fluxes with a simple soil respiration model to obtain the net cropland CO2 exchange. We assess our model's ability to represent cropland CO2 exchange using 40 years of observations at seven European FluxNet sites and compare it with carbon fluxes produced by a typical terrestrial biosphere model. We conclude that our new model framework provides a more realistic and strongly observation-driven estimate of carbon exchange over European croplands. Its products will be made available to the scientific community through the ICOS Carbon Portal and serve as a new cropland component in the CarbonTracker Europe inverse model.
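    The first calibration step described above reduces, in its simplest least-squares form, to a closed-form scaling factor per region. A toy sketch with invented yield numbers (not WOFOST output or Eurostat data):

    ```python
    # Minimal sketch of the yield-calibration step: find the crop-growth scaling
    # factor alpha that minimizes the squared mismatch between scaled modeled
    # grain carbon and observed yields. All numbers are hypothetical.

    modeled = [4.2, 5.1, 3.8, 6.0]   # modeled grain C per region (arbitrary units)
    observed = [3.9, 4.6, 3.5, 5.7]  # observed yields (same units)

    # Least-squares optimum of sum((alpha*m - o)^2) has the closed form:
    alpha = (sum(m * o for m, o in zip(modeled, observed))
             / sum(m * m for m in modeled))

    scaled = [alpha * m for m in modeled]
    assert 0.8 < alpha < 1.0  # the toy model slightly overestimates yields here
    ```

    The optimized factor is then carried into the second step, the gridded re-run, exactly as the abstract describes for the regional scaling factors.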

  3. Job-Seeking Stress, Mental Health Problems, and the Role of Perceived Social Support in University Graduates in Korea

    PubMed Central

    2018-01-01

    Background Increases in unemployment and suicide in the young Korean population have recently become major social concerns in the country. The purpose of this study was to examine mental health status in young job seekers and identify sociodemographic factors related to job-seeking stress, depression, and suicidal ideation. We also explored the mediating effect of depression on the relationship between job-seeking stress and suicidal ideation and examined whether social support moderated this effect. Methods In total, 124 university graduates completed the Job-Seeking Stress Scale, Beck Depression Inventory-II, Beck Scale for Suicide Ideation, and Multidimensional Scale of Perceived Social Support. Descriptive statistics were calculated for participants' general characteristics, and t-tests or analyses of variance, correlation analysis, simple mediation analysis, and mediated moderation analysis were performed. Results Of the 124 participants, 39.5% and 15.3% exhibited clinical levels of depression and suicidal ideation, respectively. Sociodemographic factors (i.e., sex, academic major, educational expenses loan, and willingness to accept irregular employment) were associated with job-seeking stress, depression, and suicidal ideation. Women and graduates who were willing to accept irregular employment exhibited high levels of job-seeking stress, depression, and suicidal ideation. Job-seeking stress affected suicidal ideation via depression, and perceived social support moderated the effect of job-seeking stress on depression and the effect of depression on suicidal ideation. Conclusion The results suggest that depression management and interventions are urgently required for young job seekers, and social support should be provided to assist them both emotionally and economically. PMID:29736162

  4. Experimental Investigation of the Flow on a Simple Frigate Shape (SFS)

    PubMed Central

    Mora, Rafael Bardera

    2014-01-01

    Helicopter operations on board ships require special procedures that introduce additional limitations, known as ship-helicopter operational limitations (SHOLs), which are a priority for all navies. This paper presents the main results obtained from the experimental investigation of a simple frigate shape (SFS), a typical case of study in experimental and computational aerodynamics. The results of this investigation are used to assess the flow predicted by the SFS geometry against experimental data obtained by testing a reduced-scale ship model in the wind tunnel and full-scale measurements performed on board a real frigate-type ship. PMID:24523646

  5. Rapid and simple procedure for homogenizing leaf tissues suitable for mini-midi-scale DNA extraction in rice.

    PubMed

    Yi, Gihwan; Choi, Jun-Ho; Lee, Jong-Hee; Jeong, Unggi; Nam, Min-Hee; Yun, Doh-Won; Eun, Moo-Young

    2005-01-01

    We describe a rapid and simple procedure for homogenizing leaf samples suitable for mini/midi-scale DNA preparation in rice. The method uses tungsten carbide beads and a general-purpose vortexer to homogenize the leaf samples. In general, two samples can be ground completely within 11.3 ± 1.5 s at a time, and up to 20 samples can be ground simultaneously using a vortexer attachment. DNA yields ranged from 2.2 to 7.6 µg from 25-150 mg of young fresh leaf tissue. The quality and quantity of the DNA were suitable for most PCR work and RFLP analysis.

  6. Estimation of critical behavior from the density of states in classical statistical models

    NASA Astrophysics Data System (ADS)

    Malakis, A.; Peratzakis, A.; Fytas, N. G.

    2004-12-01

    We present a simple and efficient approximation scheme which greatly facilitates the extension of Wang-Landau sampling (or similar techniques) in large systems for the estimation of critical behavior. The method, presented in an algorithmic approach, is based on a very simple idea, familiar in statistical mechanics from the notion of thermodynamic equivalence of ensembles and the central limit theorem. It is illustrated that we can predict with high accuracy the critical part of the energy space and by using this restricted part we can extend our simulations to larger systems and improve the accuracy of critical parameters. It is proposed that the extensions of the finite-size critical part of the energy space, determining the specific heat, satisfy a scaling law involving the thermal critical exponent. The method is applied successfully for the estimation of the scaling behavior of specific heat of both square and simple cubic Ising lattices. The proposed scaling law is verified by estimating the thermal critical exponent from the finite-size behavior of the critical part of the energy space. The density of states of the zero-field Ising model on these lattices is obtained via a multirange Wang-Landau sampling.
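    The density-of-states route to thermodynamics that Wang-Landau sampling relies on can be made concrete on a lattice small enough for exact enumeration. The sketch below (a minimal illustration, not the authors' approximation scheme) brute-forces g(E) for a 3×3 periodic Ising lattice and computes the partition function from it; for large lattices, Wang-Landau sampling estimates exactly this g(E):

    ```python
    # Exact density of states g(E) for a tiny 2D Ising lattice (periodic
    # boundaries). Wang-Landau sampling estimates this same g(E) for lattices
    # far too large to enumerate; brute force illustrates the route from g(E)
    # to thermodynamics.
    from collections import Counter
    from itertools import product
    from math import exp

    L = 3  # 3x3 lattice, 2^9 = 512 configurations

    def energy(spins):
        # Nearest-neighbor Ising energy E = -sum_<ij> s_i s_j with periodic
        # boundaries; each bond is counted once (down and right neighbors).
        e = 0
        for i in range(L):
            for j in range(L):
                s = spins[i * L + j]
                e -= s * spins[((i + 1) % L) * L + j]  # down neighbor
                e -= s * spins[i * L + (j + 1) % L]    # right neighbor
        return e

    g = Counter()  # g[E] = number of configurations with energy E
    for conf in product((-1, 1), repeat=L * L):
        g[energy(conf)] += 1

    def partition_function(beta):
        # Z(beta) from g(E) alone -- no further configuration sums needed.
        return sum(n * exp(-beta * E) for E, n in g.items())

    print(sum(g.values()))          # 512 configurations in total
    print(partition_function(0.0))  # at beta = 0 every configuration counts once
    ```

    From the same g(E) one can evaluate the mean energy and specific heat at any temperature, which is how the critical part of the energy space enters the scaling analysis above.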

  7. Dynamic screening in a two-species asymmetric exclusion process

    NASA Astrophysics Data System (ADS)

    Kim, Kyung Hyuk; den Nijs, Marcel

    2007-08-01

    The dynamic scaling properties of the one-dimensional Burgers equation are expected to change with the inclusion of additional conserved degrees of freedom. We study this by means of one-dimensional (1D) driven lattice gas models that conserve both mass and momentum. The most elementary version of this is the Arndt-Heinzel-Rittenberg (AHR) process, which is usually presented as a two-species diffusion process, with particles of opposite charge hopping in opposite directions and with a variable passing probability. From the hydrodynamics perspective this can be viewed as two coupled Burgers equations, with the number of positive and negative momentum quanta individually conserved. We determine the dynamic scaling dimension of the AHR process from the time evolution of the two-point correlation functions, and find numerically that the dynamic critical exponent is consistent with simple Kardar-Parisi-Zhang (KPZ)-type scaling. We establish that this is the result of perfect screening of fluctuations in the stationary state. The two-point correlations decay exponentially in our simulations, and in such a manner that, in terms of quasiparticles, fluctuations fully screen each other at coarse-grained length scales. We prove this screening rigorously using the analytic matrix product structure of the stationary state. The proof suggests the existence of a topological invariant. The process remains in the KPZ universality class but only in the sense of a factorization, as (KPZ)^2. The two Burgers equations decouple at large length scales due to the perfect screening.

  8. The Impact of Nonequilibrium and Equilibrium Fractionation on Two Different Deuterium Excess Definitions

    NASA Astrophysics Data System (ADS)

    Dütsch, Marina; Pfahl, Stephan; Sodemann, Harald

    2017-12-01

    The deuterium excess (d) is a useful measure for nonequilibrium effects of isotopic fractionation and can therefore provide information about the meteorological conditions in evaporation regions or during ice cloud formation. In addition to nonequilibrium fractionation, two other effects can change d during phase transitions. The first is the dependence of the equilibrium fractionation factors on temperature, and the second is the nonlinearity of the δ scale on which d is defined. The second effect can be avoided by using an alternative definition that is based on the logarithmic scale. However, in this case d is not conserved when air parcels mix, which can lead to changes without phase transitions. Here we provide a systematic analysis of the benefits and limitations of both deuterium excess definitions by separately quantifying the impact of the nonequilibrium effect, the temperature effect, the δ-scale effect, and the mixing effect in a simple Rayleigh model simulating the isotopic composition of air parcels during moist adiabatic ascent. The δ-scale effect is important in depleted air parcels, for which it can change the sign of the traditional deuterium excess in the remaining vapor from negative to positive. The alternative definition mainly reflects the nonequilibrium and temperature effect, while the mixing effect is about 2 orders of magnitude smaller. Thus, the alternative deuterium excess definition appears to be a more accurate measure for nonequilibrium effects in situations where moisture is depleted and the δ-scale effect is large, for instance, at high latitudes or altitudes.
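    The traditional definition d = δ²H − 8·δ¹⁸O (Dansgaard's) and a logarithmic-scale alternative can be sketched in a few lines. Note the slope of 8 in the logarithmic variant below is kept only as a placeholder for illustration; the alternative definition discussed in the paper uses fitted coefficients rather than exactly 8:

    ```python
    from math import log

    def d_excess(d2h, d18o):
        # Traditional deuterium excess on the delta scale (per mil):
        # d = delta2H - 8 * delta18O.
        return d2h - 8.0 * d18o

    def d_excess_ln(d2h, d18o, slope=8.0):
        # Logarithmic-scale alternative; deltas converted from per mil to
        # fractions before taking logs. The slope of 8 is a placeholder --
        # the paper's alternative uses a fitted coefficient.
        return 1000.0 * (log(1.0 + d2h / 1000.0)
                         - slope * log(1.0 + d18o / 1000.0))

    # A vapor sample lying on the line d2H = 8 * d18O has zero traditional d,
    # but a nonzero value on the log scale -- the delta-scale nonlinearity:
    print(d_excess(-80.0, -10.0))     # 0.0
    print(d_excess_ln(-80.0, -10.0))  # small negative value
    ```

    The gap between the two values grows as the vapor becomes more depleted, which is the δ-scale effect the abstract describes.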

  9. A Note on the Fractal Behavior of Hydraulic Conductivity and Effective Porosity for Experimental Values in a Confined Aquifer

    PubMed Central

    De Bartolo, Samuele; Fallico, Carmine; Veltri, Massimo

    2013-01-01

    Hydraulic conductivity and effective porosity values for the confined sandy loam aquifer of the Montalto Uffugo (Italy) test field were obtained by laboratory and field measurements; the first ones were carried out on undisturbed soil samples and the others by slug and aquifer tests. A direct simple-scaling analysis was performed for the whole range of measurement and a comparison among the different types of fractal models describing the scale behavior was made. Some indications about the largest pore size to utilize in the fractal models were given. The results obtained for a sandy loam soil show that it is possible to obtain global indications on the behavior of the hydraulic conductivity versus the porosity utilizing a simple scaling relation and a fractal model in coupled manner. PMID:24385876

  10. Factors affecting residency rank-listing: a Maxdiff survey of graduating Canadian medical students.

    PubMed

    Wang, Tao; Wong, Benson; Huang, Alexander; Khatri, Prateek; Ng, Carly; Forgie, Melissa; Lanphear, Joel H; O'Neill, Peter J

    2011-08-25

    In Canada, graduating medical students consider many factors, including geographic, social, and academic, when ranking residency programs through the Canadian Residency Matching Service (CaRMS). The relative significance of these factors is poorly studied in Canada. It is also unknown how students differentiate between their top program choices. This survey study addresses the influence of various factors on applicant decision making. Graduating medical students from all six Ontario medical schools were invited to participate in an online survey available for three weeks prior to the CaRMS match day in 2010. Max-Diff discrete choice scaling, multiple choice, and drop-list style questions were employed. The Max-Diff data were analyzed using a scaled simple count method. Data on how students distinguish between top programs were analyzed as percentages. Comparisons were made between male and female applicants as well as between family medicine and specialist applicants; statistical significance was determined by the Mann-Whitney test. In total, 339 of 819 (41.4%) eligible students responded. The variety of clinical experiences and resident morale were weighed heavily in choosing a residency program, whereas financial incentives and parental leave attitudes had low influence. Major reasons that applicants selected their first-choice program over their second choice included the distance to relatives and the desirability of the city. Both genders had similar priorities when selecting programs. Family medicine applicants rated the variety of clinical experiences more highly, whereas specialty applicants emphasized academic factors more. Graduating medical students weigh program characteristics such as the variety of clinical experiences and resident morale heavily in terms of overall priority. However, differentiation between their top two choice programs often depends on social/geographic factors. The results of this survey will contribute to a better understanding of the CaRMS decision-making process for both junior medical students and residency program directors.
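    A "scaled simple count" for MaxDiff data can be read as a best-minus-worst tally normalized by how often each item was shown. The sketch below uses that interpretation with hypothetical counts; both the scoring rule and the numbers are assumptions for illustration, not taken from the study:

    ```python
    def maxdiff_score(best_count, worst_count, times_shown):
        # Scaled simple count: (times chosen as best - times chosen as worst),
        # scaled by the number of times the item appeared in a choice set.
        return (best_count - worst_count) / times_shown

    # Hypothetical tallies for two of the factors named in the abstract:
    scores = {
        "variety of clinical experiences": maxdiff_score(120, 10, 200),
        "financial incentives": maxdiff_score(15, 90, 200),
    }
    print(scores["variety of clinical experiences"])  # 0.55
    print(scores["financial incentives"])             # -0.375
    ```

    Positive scores mark factors chosen as "most important" more often than "least important"; negative scores mark the reverse.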

  11. Nonlinear power spectrum from resummed perturbation theory: a leap beyond the BAO scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anselmi, Stefano; Pietroni, Massimo, E-mail: anselmi@ieec.uab.es, E-mail: massimo.pietroni@pd.infn.it

    2012-12-01

    A new computational scheme for the nonlinear cosmological matter power spectrum (PS) is presented. Our method is based on evolution equations in time, which can be cast in a form extremely convenient for fast numerical evaluations. A nonlinear PS is obtained in a time comparable to that needed for a simple 1-loop computation, and the numerical implementation is very simple. Our results agree with N-body simulations at the percent level in the BAO range of scales, and at the few-percent level up to k ≅ 1 h/Mpc at z ≳ 0.5, thereby opening the possibility of applying this tool to scales interesting for weak lensing. We clarify the approximations inherent to this approach as well as its relations to previous ones, such as the Time Renormalization Group and the multi-point propagator expansion. We discuss possible lines of improvement of the method and its intrinsic limitations by multi-streaming at small scales and low redshifts.

  12. Self-organized criticality in asymmetric exclusion model with noise for freeway traffic

    NASA Astrophysics Data System (ADS)

    Nagatani, Takashi

    1995-02-01

    The one-dimensional asymmetric simple-exclusion model with open boundaries for parallel update is extended to take into account temporary stopping of particles. The model represents the traffic flow on a highway with temporary deceleration of cars. Introducing temporary stopping into the asymmetric simple-exclusion model drives the system asymptotically into a steady state exhibiting self-organized criticality. In the self-organized critical state, start-stop waves (traffic jams) appear with various sizes (lifetimes). The typical interval ⟨s⟩ between consecutive jams scales as ⟨s⟩ ≃ L^ν with ν = 0.51 ± 0.05, where L is the system size. It is shown that the cumulative jam-interval distribution N_s(L) satisfies the finite-size scaling form N_s(L) ≃ L^(-ν) f(s/L^ν). Also, the typical lifetime scales as ⟨m⟩ ≃ L^(ν′) with ν′ = 0.52 ± 0.05. The cumulative distribution N_m(L) of lifetimes satisfies the finite-size scaling form N_m(L) ≃ L^(-1) g(m/L^(ν′)).

  13. Use of a Cutaneous Body Image (CBI) scale to evaluate self perception of body image in acne vulgaris.

    PubMed

    Amr, Mostafa; Kaliyadan, Feroze; Shams, Tarek

    2014-01-01

    Skin disorders such as acne, which have significant cosmetic implications, can affect the self-perception of cutaneous body image. There are many scales which measure self-perception of cutaneous body image. We evaluated the use of a simple Cutaneous Body Image (CBI) scale to assess self-perception of body image in a sample of young Arab patients affected with acne. A total of 70 patients with acne answered the CBI questionnaire. The CBI score was correlated with the severity of acne and acne scarring, gender, and history of retinoids use. There was no statistically significant correlation between CBI and the other parameters - gender, acne/acne scarring severity, and use of retinoids. Our study suggests that cutaneous body image perception in Arab patients with acne was not dependent on variables like gender and severity of acne or acne scarring. A simple CBI scale alone is not a sufficiently reliable tool to assess self-perception of body image in patients with acne vulgaris.

  14. A simple scaling law for the equation of state and the radial distribution functions calculated by density-functional theory molecular dynamics

    NASA Astrophysics Data System (ADS)

    Danel, J.-F.; Kazandjian, L.

    2018-06-01

    It is shown that the equation of state (EOS) and the radial distribution functions obtained by density-functional theory molecular dynamics (DFT-MD) obey a simple scaling law. At given temperature, the thermodynamic properties and the radial distribution functions given by a DFT-MD simulation remain unchanged if the mole fractions of nuclei of given charge and the average volume per atom remain unchanged. A practical interest of this scaling law is to obtain an EOS table for a fluid from that already obtained for another fluid if it has the right characteristics. Another practical interest of this result is that an asymmetric mixture made up of light and heavy atoms requiring very different time steps can be replaced by a mixture of atoms of equal mass, which facilitates the exploration of the configuration space in a DFT-MD simulation. The scaling law is illustrated by numerical results.

  15. Effects of practice on the Wechsler Adult Intelligence Scale-IV across 3- and 6-month intervals.

    PubMed

    Estevis, Eduardo; Basso, Michael R; Combs, Dennis

    2012-01-01

    A total of 54 participants (age M = 20.9; education M = 14.9; initial Full Scale IQ M = 111.6) were administered the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) at baseline and again either 3 or 6 months later. Scores on the Full Scale IQ, Verbal Comprehension, Working Memory, Perceptual Reasoning, Processing Speed, and General Ability Indices improved approximately 7, 5, 4, 5, 9, and 6 points, respectively, and increases were similar regardless of whether the re-examination occurred over 3- or 6-month intervals. Reliable change indices (RCI) were computed using the simple difference and bivariate regression methods, providing estimated base rates of change across time. The regression method provided more accurate estimates of reliable change than did the simple difference between baseline and follow-up scores. These findings suggest that prior exposure to the WAIS-IV results in significant score increments. These gains reflect practice effects instead of genuine intellectual changes, which may lead to errors in clinical judgment.
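    The two reliable change index (RCI) methods named above can be sketched with the standard Jacobson-Truax-style formulas. These are the textbook forms, not necessarily the study's exact variants, and the reliability, SD, and regression coefficients below are illustrative assumptions:

    ```python
    from math import sqrt

    def rci_simple_difference(x1, x2, sd_baseline, r_xx, practice=0.0):
        # Simple-difference RCI: (retest - baseline - mean practice effect)
        # divided by the standard error of the difference,
        # SEdiff = SD * sqrt(2 * (1 - r_xx)).  Standard Jacobson-Truax form.
        se_diff = sd_baseline * sqrt(2.0 * (1.0 - r_xx))
        return (x2 - x1 - practice) / se_diff

    def rci_regression(x1, x2, intercept, slope, se_estimate):
        # Regression-based RCI: standardize the retest score against the value
        # predicted from baseline (intercept/slope illustrative, not fitted).
        predicted = intercept + slope * x1
        return (x2 - predicted) / se_estimate

    # A 7-point Full Scale IQ gain (the mean gain reported above), assuming
    # SD = 15 and a test-retest reliability of 0.95:
    z = rci_simple_difference(111.6, 118.6, 15.0, 0.95)
    print(round(z, 2))  # about 1.48 -- inside the usual +/-1.96 cutoff
    ```

    Subtracting the sample's mean practice effect via the `practice` argument (or using the regression method) is what separates genuine change from retest gains.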

  16. The brainstem reticular formation is a small-world, not scale-free, network

    PubMed Central

    Humphries, M.D; Gurney, K; Prescott, T.J

    2005-01-01

    Recently, it has been demonstrated that several complex systems may have simple graph-theoretic characterizations as so-called ‘small-world’ and ‘scale-free’ networks. These networks have also been applied to the gross neural connectivity between primate cortical areas and the nervous system of Caenorhabditis elegans. Here, we extend this work to a specific neural circuit of the vertebrate brain—the medial reticular formation (RF) of the brainstem—and, in doing so, we have made three key contributions. First, this work constitutes the first model (and quantitative review) of this important brain structure for over three decades. Second, we have developed the first graph-theoretic analysis of vertebrate brain connectivity at the neural network level. Third, we propose simple metrics to quantitatively assess the extent to which the networks studied are small-world or scale-free. We conclude that the medial RF is configured to create small-world (implying coherent rapid-processing capabilities), but not scale-free, type networks under assumptions which are amenable to quantitative measurement. PMID:16615219

  17. Projection of incidence rates to a larger population using ecologic variables.

    PubMed

    Frey, C M; Feuer, E J; Timmel, M J

    1994-09-15

    There is wide acceptance of direct standardization of vital rates to adjust for differing age distributions according to the representation within age categories of some referent population. One can use a similar process to standardize, and subsequently project, vital rates with respect to continuous, or ratio-scale, ecologic variables. We obtained from the National Cancer Institute's Surveillance, Epidemiology and End Results (SEER) programme, a 10 per cent subset of the total U.S. population, county-level breast cancer incidence during 1987-1989 for white women aged 50 and over. We applied regression coefficients that relate ecologic factors to SEER incidence to the full national complement of county-level information to produce an age- and ecologic-factor-adjusted rate that may be more representative of the U.S. than the simple age-adjusted SEER incidence. We conducted a validation study using breast cancer mortality data, which are available for the entire U.S. and which support the appropriateness of this method for projecting rates.
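    Direct standardization weights each stratum-specific rate by the referent population's share of that stratum. A minimal sketch with hypothetical age strata and rates (the ecologic-variable extension described above replaces the age strata with regression-adjusted terms):

    ```python
    def directly_standardized_rate(stratum_rates, standard_pop):
        # Direct standardization: weight each stratum-specific rate by the
        # referent (standard) population's count in that stratum.
        total = sum(standard_pop)
        return sum(r * n for r, n in zip(stratum_rates, standard_pop)) / total

    # Hypothetical age-specific incidence rates (per 100,000) for ages 50-59,
    # 60-69, and 70+, with an arbitrary standard population:
    rates = [250.0, 350.0, 400.0]
    standard = [30_000, 20_000, 10_000]
    print(directly_standardized_rate(rates, standard))  # 308.33... per 100,000
    ```

    Two populations standardized against the same referent can then be compared free of differences in age structure.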

  18. The effect of multiplicity of stellar encounters and the diffusion coefficients in a locally homogeneous three-dimensional stellar medium: Removing the classical divergence

    NASA Astrophysics Data System (ADS)

    Rastorguev, A. S.; Utkin, N. D.; Chumak, O. V.

    2017-08-01

    Agekyan's λ-factor, which allows for the effect of multiplicity of stellar encounters with large impact parameters, has been used for the first time to directly calculate the diffusion coefficients in the phase space of a stellar system. Simple estimates show that the cumulative effect, i.e., the total contribution of distant encounters to the change in the velocity of a test star, given the multiplicity of stellar encounters, is finite, and the logarithmic divergence inherent in the classical description of diffusion is removed, as was shown previously by Kandrup using a different, more complex approach. In this case, the expressions for the diffusion coefficients, as in the classical description, contain the logarithm of the ratio of two independent quantities: the mean interparticle distance and the impact parameter of a close encounter. However, the physical meaning of this logarithmic factor changes radically: it reflects not the divergence but the presence of two characteristic length scales inherent in the stellar medium.

  19. Emotional regulation of fertility decision making: what is the nature and structure of "baby fever"?

    PubMed

    Brase, Gary L; Brase, Sandra L

    2012-10-01

    Baby fever, a visceral physical and emotional desire to have a baby, is well known in popular culture but has not been empirically studied in psychology. Different theoretical perspectives suggest that desire for a baby is either superfluous to biological sex drives and maternal instincts, a sociocultural phenomenon unrelated to biological or evolutionary forces, or an evolved adaptation for regulating birth timing, proceptive behavior, and life history trajectories. A series of studies (involving 337 undergraduate participants and 853 participants from a general-population Internet sample) found that: (a) a simple scale measure could elicit ratings of desire frequency; (b) these ratings exhibited significant sex differences; (c) this sex difference was distinct from a general desire for sexual activity; and (d) these findings generalize to a more diverse online population. Factor analyses of ratings for desire elicitors/inhibitors identified three primary factors underlying baby fever. Baby fever appears to be a real phenomenon, with an underlying multifactorial structure.

  20. Internal (Annular) and Compressible External (Flat Plate) Turbulent Flow Heat Transfer Correlations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dechant, Lawrence; Smith, Justin

    Here we provide a discussion regarding the applicability of a family of traditional heat-transfer-correlation-based models for several (unit level) heat transfer problems associated with flight heat transfer estimates and internal flow heat transfer associated with an experimental simulation design (Dobranich 2014). Variability between semi-empirical free-flight models suggests relative differences for heat transfer coefficients on the order of 10%, while the internal annular flow behavior is larger, with differences on the order of 20%. We emphasize that these expressions are strictly valid only for the geometries they have been derived for, e.g., the fully developed annular flow or simple external flow problems. Though the application of a flat plate skin friction estimate to cylindrical bodies is a traditional procedure to estimate skin friction and heat transfer, an over-prediction bias is often observed using these approximations for missile-type bodies. As a correction for this over-estimate trend, we discuss a simple scaling reduction factor for flat plate turbulent skin friction and heat transfer solutions (correlations) applied to blunt bodies of revolution at zero angle of attack. The method estimates the ratio between axisymmetric and 2-d stagnation point heat transfer skin friction and Stanton number solution expressions for sub-turbulent Reynolds numbers < 1×10^4. This factor is assumed to also directly influence the flat plate results applied to the cylindrical portion of the flow, and the flat plate correlations are modified by this reduction factor.

  1. Near-Field Terahertz Transmission Imaging at 0.210 Terahertz Using a Simple Aperture Technique

    DTIC Science & Technology

    2015-10-01

    This report discusses a simple aperture useful for terahertz near-field imaging at 0.210 terahertz (λ = 1.43 millimeters). The aperture requires...achieve a spatial resolution of λ/7. The aperture can be scaled with the assistance of machinery found in conventional machine shops to achieve similar results using shorter terahertz wavelengths.

  2. Levelized Cost of Energy Calculator | Energy Analysis | NREL

    Science.gov Websites

    The levelized cost of energy (LCOE) calculator provides a simple calculator for both utility-scale and distributed generation technologies; additional cost factors would need to be included for a thorough analysis. To estimate a simple cost of energy, use the slider controls.
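    The textbook LCOE is total discounted costs divided by total discounted energy. The sketch below uses that standard form with hypothetical inputs; it is not necessarily the exact model behind NREL's online calculator, and the function name and parameters are illustrative:

    ```python
    def lcoe(capital, annual_om, annual_mwh, discount_rate, years):
        # Textbook levelized cost of energy ($/MWh): total discounted costs
        # divided by total discounted energy over the project lifetime.
        costs = capital + sum(annual_om / (1 + discount_rate) ** t
                              for t in range(1, years + 1))
        energy = sum(annual_mwh / (1 + discount_rate) ** t
                     for t in range(1, years + 1))
        return costs / energy

    # Sanity check: with zero discounting the result reduces to
    # total cost / total energy = (1e6 + 20 * 5e4) / (20 * 1e4) = 10 $/MWh.
    print(lcoe(1_000_000, 50_000, 10_000, 0.0, 20))  # 10.0
    ```

    Discounting energy as well as costs looks odd at first but is what makes LCOE the constant price per MWh whose discounted revenue stream exactly covers discounted costs.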

  3. 100-point scale evaluating job satisfaction and the results of the 12-item General Health Questionnaire in occupational workers.

    PubMed

    Kawada, Tomoyuki; Yamada, Natsuki

    2012-01-01

    Job satisfaction is an important factor in the occupational lives of workers. In this study, the relationship between a one-dimensional scale of job satisfaction and psychological wellbeing was evaluated. A total of 1,742 workers (1,191 men and 551 women) participated. A 100-point scale evaluating job satisfaction (from 0 [extremely dissatisfied] to 100 [extremely satisfied]) and the General Health Questionnaire, 12-item version (GHQ-12), evaluating psychological wellbeing, were used. A multiple regression analysis was then used, controlling for gender and age. The change in the GHQ-12 and job satisfaction scores after a two-year interval was also evaluated. The mean age of the subjects was 42.2 years for the men and 36.2 years for the women. The GHQ-12 and job satisfaction scores were significantly correlated in each age group. The partial correlation coefficients between the changes in the two variables, controlling for age, were -0.395 for men and -0.435 for women (p < 0.001). A multiple regression analysis revealed that the 100-point job satisfaction score was associated with the GHQ-12 results (p < 0.001). The adjusted multiple correlation coefficient was 0.275. The 100-point scale, which is a simple and easy tool for evaluating job satisfaction, was significantly associated with psychological wellbeing as judged using the GHQ-12.
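    A partial correlation controlling for age, as reported above, can be computed by correlating the residuals of each variable after regressing it on age. The sketch below uses toy numbers, not the study's data:

    ```python
    def pearson(xs, ys):
        # Plain Pearson correlation coefficient.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    def residuals(ys, xs):
        # Residuals from a simple least-squares regression of ys on xs.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        a = my - b * mx
        return [y - (a + b * x) for x, y in zip(xs, ys)]

    def partial_correlation(xs, ys, zs):
        # Correlate xs and ys after removing the linear effect of zs from both.
        return pearson(residuals(xs, zs), residuals(ys, zs))

    # Toy numbers (not the study's data): GHQ-12 scores, satisfaction, age.
    ghq = [2, 6, 4, 8]
    satisfaction = [80, 60, 70, 50]
    age = [30, 40, 50, 60]
    pc = partial_correlation(ghq, satisfaction, age)
    print(round(pc, 2))  # -1.0: satisfaction here is an exact linear function of GHQ
    ```

    With real data the partial correlation shrinks toward zero to the extent that age drives both variables.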

  4. Genome-environment association study suggests local adaptation to climate at the regional scale in Fagus sylvatica.

    PubMed

    Pluess, Andrea R; Frank, Aline; Heiri, Caroline; Lalagüe, Hadrien; Vendramin, Giovanni G; Oddou-Muratorio, Sylvie

    2016-04-01

    The evolutionary potential of long-lived species, such as forest trees, is fundamental for their local persistence under climate change (CC). Genome-environment association (GEA) analyses reveal if species in heterogeneous environments at the regional scale are under differential selection resulting in populations with potential preadaptation to CC within this area. In 79 natural Fagus sylvatica populations, neutral genetic patterns were characterized using 12 simple sequence repeat (SSR) markers, and genomic variation (144 single nucleotide polymorphisms (SNPs) out of 52 candidate genes) was related to 87 environmental predictors in the latent factor mixed model, logistic regressions and isolation by distance/environmental (IBD/IBE) tests. SSR diversity revealed relatedness at up to 150 m intertree distance but an absence of large-scale spatial genetic structure and IBE. In the GEA analyses, 16 SNPs in 10 genes responded to one or several environmental predictors and IBE, corrected for IBD, was confirmed. The GEA often reflected the proposed gene functions, including indications for adaptation to water availability and temperature. Genomic divergence and the lack of large-scale neutral genetic patterns suggest that gene flow allows the spread of advantageous alleles in adaptive genes. Thereby, adaptation processes are likely to take place in species occurring in heterogeneous environments, which might reduce their regional extinction risk under CC. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.

  5. A novel, simple scale for assessing the symptom severity of atrial fibrillation at the bedside: the CCS-SAF scale.

    PubMed

    Dorian, Paul; Cvitkovic, Suzan S; Kerr, Charles R; Crystal, Eugene; Gillis, Anne M; Guerra, Peter G; Mitchell, L Brent; Roy, Denis; Skanes, Allan C; Wyse, D George

    2006-04-01

    The severity of symptoms caused by atrial fibrillation (AF) is extremely variable. Quantifying the effect of AF on patient well-being is important but there is no simple, commonly accepted measure of the effect of AF on quality of life (QoL). Current QoL measures are cumbersome and impractical for clinical use. To create a simple, concise and readily usable AF severity score to facilitate treatment decisions and physician communication. The Canadian Cardiovascular Society (CCS) Severity of Atrial Fibrillation (SAF) Scale is analogous to the CCS Angina Functional Class. The CCS-SAF score is determined using three steps: documentation of possible AF-related symptoms (palpitations, dyspnea, dizziness/syncope, chest pain, weakness/fatigue); determination of symptom-rhythm correlation; and assessment of the effect of these symptoms on patient daily function and QoL. CCS-SAF scores range from 0 (asymptomatic) to 4 (severe impact of symptoms on QoL and activities of daily living). Patients are also categorized by type of AF (paroxysmal versus persistent/permanent). The CCS-SAF Scale will be validated using accepted measures of patient-perceived severity of symptoms and impairment of QoL and will require 'field testing' to ensure its applicability and reproducibility in the clinical setting. This type of symptom severity scale, like the New York Heart Association Functional Class for heart failure symptoms and the CCS Functional Class for angina symptoms, trades precision and comprehensiveness for simplicity and ease of use at the bedside. A common language to quantify AF severity may help to improve patient care.

  6. Computing local edge probability in natural scenes from a population of oriented simple cells

    PubMed Central

    Ramachandra, Chaithanya A.; Mel, Bartlett W.

    2013-01-01

    A key computation in visual cortex is the extraction of object contours, where the first stage of processing is commonly attributed to V1 simple cells. The standard model of a simple cell—an oriented linear filter followed by a divisive normalization—fits a wide variety of physiological data, but is a poor performing local edge detector when applied to natural images. The brain's ability to finely discriminate edges from nonedges therefore likely depends on information encoded by local simple cell populations. To gain insight into the corresponding decoding problem, we used Bayes's rule to calculate edge probability at a given location/orientation in an image based on a surrounding filter population. Beginning with a set of ∼ 100 filters, we culled out a subset that were maximally informative about edges, and minimally correlated to allow factorization of the joint on- and off-edge likelihood functions. Key features of our approach include a new, efficient method for ground-truth edge labeling, an emphasis on achieving filter independence, including a focus on filters in the region orthogonal rather than tangential to an edge, and the use of a customized parametric model to represent the individual filter likelihood functions. The resulting population-based edge detector has zero parameters, calculates edge probability based on a sum of surrounding filter influences, is much more sharply tuned than the underlying linear filters, and effectively captures fine-scale edge structure in natural scenes. Our findings predict nonmonotonic interactions between cells in visual cortex, wherein a cell may for certain stimuli excite and for other stimuli inhibit the same neighboring cell, depending on the two cells' relative offsets in position and orientation, and their relative activation levels. PMID:24381295

  7. Simple motor tasks independently predict extubation failure in critically ill neurological patients

    PubMed Central

    Kutchak, Fernanda Machado; Rieder, Marcelo de Mello; Victorino, Josué Almeida; Meneguzzi, Carla; Poersch, Karla; Forgiarini, Luiz Alberto; Bianchin, Marino Muxfeldt

    2017-01-01

    ABSTRACT Objective: To evaluate the usefulness of simple motor tasks such as hand grasping and tongue protrusion as predictors of extubation failure in critically ill neurological patients. Methods: This was a prospective cohort study conducted in the neurological ICU of a tertiary care hospital in the city of Porto Alegre, Brazil. Adult patients who had been intubated for neurological reasons and were eligible for weaning were included in the study. The ability of patients to perform simple motor tasks such as hand grasping and tongue protrusion was evaluated as a predictor of extubation failure. Data regarding duration of mechanical ventilation, length of ICU stay, length of hospital stay, mortality, and incidence of ventilator-associated pneumonia were collected. Results: A total of 132 intubated patients who had been receiving mechanical ventilation for at least 24 h and who passed a spontaneous breathing trial were included in the analysis. Logistic regression showed that patient inability to grasp the hand of the examiner (relative risk = 1.57; 95% CI: 1.01-2.44; p < 0.045) and protrude the tongue (relative risk = 6.84; 95% CI: 2.49-18.8; p < 0.001) were independent risk factors for extubation failure. Acute Physiology and Chronic Health Evaluation II scores (p = 0.02), Glasgow Coma Scale scores at extubation (p < 0.001), eye opening response (p = 0.001), MIP (p < 0.001), MEP (p = 0.006), and the rapid shallow breathing index (p = 0.03) were significantly different between the failed extubation and successful extubation groups. Conclusions: The inability to follow simple motor commands is predictive of extubation failure in critically ill neurological patients. Hand grasping and tongue protrusion on command might be quick and easy bedside tests to identify neurocritical care patients who are candidates for extubation. PMID:28746528

  8. Outbreak statistics and scaling laws for externally driven epidemics.

    PubMed

    Singh, Sarabjeet; Myers, Christopher R

    2014-04-01

    Power-law scalings are ubiquitous to physical phenomena undergoing a continuous phase transition. The classic susceptible-infectious-recovered (SIR) model of epidemics is one such example where the scaling behavior near a critical point has been studied extensively. In this system the distribution of outbreak sizes scales as P(n)∼n-3/2 at the critical point as the system size N becomes infinite. The finite-size scaling laws for the outbreak size and duration are also well understood and characterized. In this work, we report scaling laws for a model with SIR structure coupled with a constant force of infection per susceptible, akin to a "reservoir forcing". We find that the statistics of outbreaks in this system fundamentally differ from those in a simple SIR model. Instead of fixed exponents, all scaling laws exhibit tunable exponents parameterized by the dimensionless rate of external forcing. As the external driving rate approaches a critical value, the scale of the average outbreak size converges to that of the maximal size, and above the critical point, the scaling laws bifurcate into two regimes. Whereas a simple SIR process can only exhibit outbreaks of size O(N1/3) and O(N) depending on whether the system is at or above the epidemic threshold, a driven SIR process can exhibit a richer spectrum of outbreak sizes that scale as O(Nξ), where ξ∈(0,1]∖{2/3} and O((N/lnN)2/3) at the multicritical point.
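    The critical-point outbreak statistics described above can be explored with a toy chain-binomial (Reed-Frost) stochastic SIR simulation. This is a minimal sketch of the undriven SIR case only, not the authors' externally forced model; the population size and number of runs are illustrative.

    ```python
    import random

    def sir_outbreak_size(n, r0, rng):
        """Final size of one stochastic SIR outbreak (Reed-Frost chain binomial).
        Each generation, every susceptible escapes infection from all current
        infectious individuals with probability (1 - r0/n)**I."""
        p = r0 / n
        susceptible, infectious, total = n - 1, 1, 1
        while infectious > 0 and susceptible > 0:
            escape = (1.0 - p) ** infectious
            new_cases = sum(1 for _ in range(susceptible) if rng.random() > escape)
            susceptible -= new_cases
            total += new_cases
            infectious = new_cases
        return total

    # at the epidemic threshold (r0 = 1) most outbreaks die out quickly,
    # but the size distribution has a heavy n^(-3/2) tail
    rng = random.Random(42)
    sizes = [sir_outbreak_size(1000, 1.0, rng) for _ in range(500)]
    ```

    A histogram of `sizes` on log-log axes would show the approximate n^(-3/2) decay; the driven model of the paper would add a constant per-susceptible infection rate to each generation.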

  9. Advances in time-scale algorithms

    NASA Technical Reports Server (NTRS)

    Stein, S. R.

    1993-01-01

    The term clock is usually used to refer to a device that counts a nearly periodic signal. A group of clocks, called an ensemble, is often used for timekeeping in mission-critical applications that cannot tolerate loss of time due to the failure of a single clock. The time generated by the ensemble of clocks is called a time scale. The question arises of how to combine the times of the individual clocks to form the time scale. One might naively be tempted to suggest the expedient of averaging the times of the individual clocks, but a simple thought experiment demonstrates the inadequacy of this approach. Suppose a time scale is composed of two noiseless clocks having equal and opposite frequencies. The mean time scale has zero frequency. However, if either clock fails, the time-scale frequency immediately changes to the frequency of the remaining clock. This performance is generally unacceptable, and simple mean time scales are not used. First, previous time-scale developments are reviewed, and then some new methods that result in enhanced performance are presented. The historical perspective is based upon several time scales: the AT1 and TA time scales of the National Institute of Standards and Technology (NIST), the A.1(MEAN) time scale of the US Naval Observatory (USNO), the TAI time scale of the Bureau International des Poids et Mesures (BIPM), and the KAS-1 time scale of the Naval Research Laboratory (NRL). The new method was incorporated in the KAS-2 time scale recently developed by Timing Solutions Corporation. The goal is to present time-scale concepts in a nonmathematical form with as few equations as possible. Many other papers and texts discuss the details of the optimal estimation techniques that may be used to implement these concepts.
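    The two-clock thought experiment can be made concrete in a few lines; the frequency offset and elapsed time below are arbitrary illustrative values.

    ```python
    def simple_mean_timescale(clock_errors, alive):
        """Naive ensemble time: average the accumulated time errors of the
        clocks that are still operating."""
        readings = [t for t, ok in zip(clock_errors, alive) if ok]
        return sum(readings) / len(readings)

    # two noiseless clocks with equal and opposite frequency offsets (s/s)
    f = 1e-6
    t = 100.0                      # elapsed time, s
    clocks = [+f * t, -f * t]      # accumulated time errors after t seconds
    both_ok = simple_mean_timescale(clocks, [True, True])    # offsets cancel
    one_dead = simple_mean_timescale(clocks, [True, False])  # jumps to +f*t
    ```

    The mean is exactly zero while both clocks run, then steps to the surviving clock's full offset the instant one fails, which is the failure mode that motivates weighted, estimation-based time scales.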

  10. Simple scale for assessing level of dependency of patients in general practice.

    PubMed Central

    Willis, J

    1986-01-01

    A rating scale has been designed for assessing the degree of dependency of patients in general practice. An analysis of the elderly and disabled patients in a two doctor practice is given as an example of its use and simplicity. PMID:3087556

  11. Quantitative Evaluation of Musical Scale Tunings

    ERIC Educational Resources Information Center

    Hall, Donald E.

    1974-01-01

    The acoustical and mathematical basis of the problem of tuning the twelve-tone chromatic scale is reviewed. A quantitative measurement showing how well any tuning succeeds in providing just intonation for any specific piece of music is explained and applied to musical examples using a simple computer program. (DT)
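    The kind of quantitative tuning measurement described can be sketched by computing, in cents, how far 12-tone equal temperament deviates from just intonation. This is a generic illustration, not a reconstruction of Hall's program.

    ```python
    import math

    def cents(ratio):
        """Interval size in cents (1200 cents per octave)."""
        return 1200.0 * math.log2(ratio)

    # how far 12-tone equal temperament falls from just intonation
    just_ratios  = {"fifth": 3/2, "major third": 5/4, "minor third": 6/5}
    et_semitones = {"fifth": 7, "major third": 4, "minor third": 3}

    # positive = equal temperament is sharp of just, negative = flat
    deviation = {name: 100.0 * et_semitones[name] - cents(ratio)
                 for name, ratio in just_ratios.items()}
    ```

    The fifth comes out nearly pure (about 2 cents flat), while the thirds deviate by roughly 14-16 cents, which is why the choice of tuning matters more for some pieces of music than others.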

  12. Moving on up: Can Results from Simple Aquatic Mesocosm Experiments be Applied Across Broad Spatial Scales?

    EPA Science Inventory

    1. Aquatic ecologists use mesocosm experiments to understand mechanisms driving ecological processes. Comparisons across experiments, and extrapolations to larger scales, are complicated by the use of mesocosms with varying dimensions. We conducted a mesocosm experiment over a vo...

  13. Photo-Modeling and Cloud Computing. Applications in the Survey of Late Gothic Architectural Elements

    NASA Astrophysics Data System (ADS)

    Casu, P.; Pisu, C.

    2013-02-01

    This work applies the latest photo-modeling methods to the study of Gothic architecture in Sardinia. The aim is to assess the versatility and ease of use of such documentation tools for studying architecture and its ornamental details. The paper illustrates a procedure of integrated survey and restitution, with the purpose of obtaining an accurate 3D model of some Gothic portals. We combined the contact survey with a photographic survey oriented to photo-modeling. The software used is 123D Catch by Autodesk, a freely available Image Based Modelling (IBM) system. It is a web-based application that requires a few simple steps to produce a mesh from a set of unoriented photos. We tested the application on four portals, working at different scales of detail: first the whole portal, and then the different architectural elements that compose it. We were able to model all the elements and to quickly extrapolate simple sections, in order to compare the moldings and highlight similarities and differences. Working at different sites and at different scales of detail allowed us to test the procedure under different conditions of exposure, sunshine, accessibility, surface degradation and type of material, and with different equipment and operators, showing whether the final result could be affected by these factors. We tested a procedure, articulated in a few repeatable steps, that can be applied, with the right corrections and adaptations, to similar cases and/or to larger or smaller elements.

  14. Economic decision making and the application of nonparametric prediction models

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2008-01-01

    Sustained increases in energy prices have focused attention on gas resources in low-permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are often large. Planning and development decisions for extraction of such resources must be areawide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm, the decision to enter such plays depends on reconnaissance-level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional-scale cost functions. The context of the worked example is the Devonian Antrim-shale gas play in the Michigan basin. One finding relates to selection of the resource prediction model to be used with economic models. Models chosen because they can best predict aggregate volume over larger areas (many hundreds of sites) smooth out granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined arbitrarily by extraneous factors. The analysis shows a 15-20% gain in gas volume when these simple models are applied to order drilling prospects strategically rather than to choose drilling locations randomly. Copyright © 2008 Society of Petroleum Engineers.
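    The paper's specific local regression model is not reproduced here; as a stand-in, a minimal k-nearest-neighbour local-mean predictor illustrates the idea of predicting recoverable volume at an untested site from nearby drilled sites. All coordinates and volumes are made up.

    ```python
    def knn_predict(sites, volumes, query, k=3):
        """Predict recoverable volume at an untested site as the mean volume
        of its k nearest drilled neighbours (Euclidean distance)."""
        order = sorted(range(len(sites)),
                       key=lambda i: (sites[i][0] - query[0]) ** 2
                                     + (sites[i][1] - query[1]) ** 2)
        nearest = order[:k]
        return sum(volumes[i] for i in nearest) / k

    # made-up drilled locations (x, y) and recovered volumes
    drilled = [(0, 0), (1, 0), (0, 1), (5, 5)]
    vols    = [10.0, 12.0, 11.0, 40.0]
    est = knn_predict(drilled, vols, (0.4, 0.4), k=3)  # distant outlier excluded
    ```

    Ranking untested sites by such predictions, rather than drilling in arbitrary order, is the strategic-ordering idea behind the 15-20% volume gain reported in the abstract.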

  15. Assessing the impact of nutrient enrichment in estuaries: susceptibility to eutrophication.

    PubMed

    Painting, S J; Devlin, M J; Malcolm, S J; Parker, E R; Mills, D K; Mills, C; Tett, P; Wither, A; Burt, J; Jones, R; Winpenny, K

    2007-01-01

    The main aim of this study was to develop a generic tool for assessing risks and impacts of nutrient enrichment in estuaries. A simple model was developed to predict the magnitude of primary production by phytoplankton in different estuaries from nutrient input (total available nitrogen and/or phosphorus) and to determine likely trophic status. In the model, primary production is strongly influenced by water residence times and relative light regimes. The model indicates that estuaries with low and moderate light levels are the least likely to show a biological response to nutrient inputs. Estuaries with a good light regime are likely to be sensitive to nutrient enrichment, and to show similar responses, mediated only by site-specific geomorphological features. Nixon's scale was used to describe the relative trophic status of estuaries, and to set nutrient and chlorophyll thresholds for assessing trophic status. Estuaries identified as being eutrophic may not show any signs of eutrophication. Additional attributes need to be considered to assess negative impacts. Here, likely detriment to the oxygen regime was considered, but is most applicable to areas of restricted exchange. Factors which limit phytoplankton growth under high nutrient conditions (water residence times and/or light availability) may favour the growth of other primary producers, such as macrophytes, which may have a negative impact on other biological communities. The assessment tool was developed for estuaries in England and Wales, based on a simple 3-category typology determined by geomorphology and relative light levels. Nixon's scale needs to be validated for estuaries in England and Wales, once more data are available on light levels and primary production.

  16. Flow over Canopies with Complex Morphologies

    NASA Astrophysics Data System (ADS)

    Rubol, S.; Ling, B.; Battiato, I.

    2017-12-01

    Quantifying and predicting how submerged vegetation affects the velocity profile of riverine systems is crucial in ecohydraulics to properly assess the water quality and ecological functions of rivers. The state of the art includes a plethora of models to study flow and transport over submerged canopies. However, most of them are validated against data collected in flume experiments with rigid cylinders. With the objective of investigating the capability of a simple analytical solution for vegetated flow to reproduce and predict the velocity profile of complex-shaped flexible canopies, we use the flow model proposed by Battiato and Rubol [WRR 2013] as the analytical approximation of the mean velocity profile above and within the canopy layer. This model has the advantages of (i) treating the canopy layer as a porous medium, whose geometrical properties are associated with a macroscopic effective permeability, and (ii) using input parameters that can be estimated by remote sensing techniques, such as the heights of the water level and the canopy. The analytical expressions for the average velocity profile and the discharge are tested against data collected across a wide range of canopy morphologies commonly encountered in riverine systems, such as grasses, woody vegetation and bushes. Results indicate good agreement between the analytical expressions and the data for both simple and complex plant geometries. The rescaled low-submergence velocities in the canopy layer followed the same scaling found in arrays of rigid cylinders. In addition, for the dataset analyzed, the Darcy friction factor scaled with the inverse of the bulk Reynolds number multiplied by the ratio of the fluid to turbulent viscosity.

  17. Searching for the right scale in catchment hydrology: the effect of soil spatial variability in simulated states and fluxes

    NASA Astrophysics Data System (ADS)

    Baroni, Gabriele; Zink, Matthias; Kumar, Rohini; Samaniego, Luis; Attinger, Sabine

    2017-04-01

    Advances in computer science and the availability of new detailed datasets have led to a growing number of distributed hydrological models applied at finer and finer grid resolutions over larger and larger catchment areas. It has been argued, however, that this trend does not necessarily guarantee better understanding of the hydrological processes, nor is it necessary for specific modelling applications. In the present study, this topic is discussed further in relation to soil spatial heterogeneity and its effect on simulated hydrological states and fluxes. To this end, three methods are developed and used to characterize soil heterogeneity at different spatial scales. The methods are applied to the soil map of the upper Neckar catchment (Germany) as an example. The different soil realizations are assessed with regard to their impact on simulated states and fluxes using the distributed hydrological model mHM. The results are analysed by aggregating the model outputs at different spatial scales based on the Representative Elementary Scale (RES) concept proposed by Refsgaard et al. (2016). The analysis is further extended in the present study by aggregating the model output at different temporal scales as well. The results show that small-scale soil variabilities are not relevant when integrated hydrological responses are considered, e.g., simulated streamflow or average soil moisture over sub-catchments. On the contrary, these small-scale soil variabilities strongly affect locally simulated states and fluxes, i.e., soil moisture and evapotranspiration simulated at the grid resolution. A clear trade-off is also detected by aggregating the model output across spatial and temporal scales.
    Although the scale at which soil variabilities are (or are not) relevant is not universal, the RES concept provides a simple and effective framework to quantify the predictive capability of distributed models and to identify the need for further model improvements, e.g., finer-resolution input. For this reason, integrating all the relevant input factors (e.g., precipitation, vegetation, geology) into this analysis could provide strong support for defining the right scale for each specific model application. In this context, however, the main challenge for a proper model assessment will be the correct characterization of the spatio-temporal variability of each input factor. Refsgaard, J.C., Højberg, A.L., He, X., Hansen, A.L., Rasmussen, S.H., Stisen, S., 2016. Where are the limits of model predictive capabilities?: Representative Elementary Scale - RES. Hydrol. Process. doi:10.1002/hyp.11029

  18. [Analysing the defect of control design of acupuncture: taking RCTs of treating simple obesity with acupuncture for example].

    PubMed

    Zeng, Yi; Qi, Shulan; Meng, Xing; Chen, Yinyin

    2018-03-12

    To analyse defects in the control design of randomized controlled trials (RCTs) of acupuncture for simple obesity that used acupuncture as the control, and to present the essential factors that should be taken into account when designing the control arm of a clinical trial, so as to further improve clinical research. Taking RCTs of acupuncture for simple obesity as an example, we searched for RCTs of acupuncture treating simple obesity with an acupuncture control. According to the characteristics of acupuncture therapy, the control interventions were sorted and analysed in terms of acupoint selection, needle penetration, depth of insertion, etc.; the number of factors differing between the two groups was then counted and its rationality analysed. Of the 15 RCTs meeting the inclusion criteria, 7 were published in English and 8 in Chinese. Six trials (40%) had more than one differing factor between the two groups (4 published in English abroad, 2 in Chinese), while 9 (60%) had exactly one (3 published in English, 6 in Chinese). The control design of acupuncture in some clinical RCTs is unreasonable because the number of factors differing between the two groups is not taken into account.

  19. Validation of the UNESP-Botucatu unidimensional composite pain scale for assessing postoperative pain in cattle.

    PubMed

    de Oliveira, Flávia Augusta; Luna, Stelio Pacca Loureiro; do Amaral, Jackson Barros; Rodrigues, Karoline Alves; Sant'Anna, Aline Cristina; Daolio, Milena; Brondani, Juliana Tabarelli

    2014-09-06

    The recognition and measurement of pain in cattle are important in determining the necessity for and efficacy of analgesic intervention. The aim of this study was to record behaviour and determine the validity and reliability of an instrument to assess acute pain in 40 cattle subjected to orchiectomy after sedation with xylazine and local anaesthesia. The animals were filmed before and after orchiectomy to record behaviour. The pain scale was based on previous studies, on a pilot study and on analysis of the camera footage. Three blinded observers and a local observer assessed the edited films obtained during the preoperative and postoperative periods, before and after rescue analgesia and 24 hours after surgery. Re-evaluation was performed one month after the first analysis. Criterion validity (agreement) and item-total correlation using Spearman's coefficient were employed to refine the scale. Based on factor analysis, a unidimensional scale was adopted. The internal consistency of the data was excellent after refinement (Cronbach's α coefficient = 0.866). There was a high correlation (p < 0.001) between the proposed scale and the visual analogue, simple descriptive and numerical rating scales. The construct validity and responsiveness were confirmed by the increase and decrease in pain scores after surgery and rescue analgesia, respectively (p < 0.001). Inter- and intra-observer reliability ranged from moderate to very good. The optimal cut-off point for rescue analgesia was > 4, and analysis of the area under the curve (AUC = 0.963) showed excellent discriminatory ability. The UNESP-Botucatu unidimensional pain scale for assessing acute postoperative pain in cattle is a valid, reliable and responsive instrument with excellent internal consistency and discriminatory ability. The cut-off point for rescue analgesia provides an additional tool for guiding analgesic therapy.
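    The internal-consistency statistic reported above, Cronbach's alpha, can be computed directly from item-score columns. The sketch below uses hypothetical scores, not the study's data.

    ```python
    from statistics import variance

    def cronbach_alpha(items):
        """Cronbach's alpha from item-score columns (one list per item),
        using sample variance (n - 1 denominator)."""
        k = len(items)
        totals = [sum(scores) for scores in zip(*items)]
        item_var = sum(variance(col) for col in items)
        return k / (k - 1) * (1.0 - item_var / variance(totals))

    # hypothetical scores: 3 pain-scale items rated for 5 animals
    items = [[0, 1, 2, 3, 4],
             [0, 1, 2, 3, 3],
             [1, 1, 2, 4, 4]]
    alpha = cronbach_alpha(items)  # high: the items rank animals consistently
    ```

    Values near the study's 0.866 (or higher, as in this toy example) indicate that the scale's items vary together, which is what "excellent internal consistency" refers to.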

  20. Characterizing and modelling river channel migration rates at a regional scale: Case study of south-east France.

    PubMed

    Alber, Adrien; Piégay, Hervé

    2017-11-01

    An increased awareness by river managers of the importance of river channel migration to sediment dynamics, habitat complexity and other ecosystem functions has led to advances in the science and practice of identifying, protecting or restoring specific erodible corridors across which rivers are free to migrate. One current challenge is the application of these watershed-specific goals at regional planning scales (e.g., the European Water Framework Directive). This study provides a GIS-based spatial analysis of channel migration rates at the regional scale. As a case study, 99 reaches were sampled in the French part of the Rhône Basin and nearby tributaries of the Mediterranean Sea (111,300 km²). We explored the spatial correlation between the channel migration rate and a set of simple variables (e.g., watershed area, channel slope, stream power, active channel width). We found that the spatial variability of the channel migration rates was primarily explained by the gross stream power (R² = 0.48) and, more surprisingly, by the active channel width scaled by the watershed area. The relationship between the absolute migration rate and the gross stream power is generally consistent with published empirical models for freely meandering rivers, whereas it is less significant for multi-thread reaches. The discussion focuses on methodological constraints for regional-scale modelling of migration rates and on the interpretation of the empirical models. We hypothesize that the active channel width scaled by the watershed area is a surrogate for sediment supply, which may be a more critical factor than bank resistance in explaining the regional-scale variability of migration rates. Copyright © 2016 Elsevier Ltd. All rights reserved.
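    Empirical relations between migration rate and gross stream power of the kind fitted in such studies are typically power laws, which can be estimated by least squares in log-log space. The sketch below recovers known coefficients from synthetic data; all numbers are made up, not the study's.

    ```python
    import math

    def fit_power_law(x, y):
        """Least-squares fit of y = a * x**b, linearized in log-log space."""
        lx = [math.log(v) for v in x]
        ly = [math.log(v) for v in y]
        n = len(x)
        mx, my = sum(lx) / n, sum(ly) / n
        b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
             / sum((u - mx) ** 2 for u in lx))
        a = math.exp(my - b * mx)
        return a, b

    # synthetic check: an exact power law is recovered
    power = [10.0, 50.0, 100.0, 500.0, 1000.0]   # gross stream power, made up
    rate  = [0.5 * p ** 0.8 for p in power]      # migration rate, made up
    a, b = fit_power_law(power, rate)
    ```

    With real reach data the scatter around the fitted line is what the reported R² = 0.48 summarizes.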

  1. Facilitating the Interpretation of English Language Proficiency Scores: Combining Scale Anchoring and Test Score Mapping Methodologies

    ERIC Educational Resources Information Center

    Powers, Donald; Schedl, Mary; Papageorgiou, Spiros

    2017-01-01

    The aim of this study was to develop, for the benefit of both test takers and test score users, enhanced "TOEFL ITP"® test score reports that go beyond the simple numerical scores that are currently reported. To do so, we applied traditional scale anchoring (proficiency scaling) to item difficulty data in order to develop performance…

  2. Flow Chemistry on Multigram Scale: Continuous Synthesis of Boronic Acids within 1 s.

    PubMed

    Hafner, Andreas; Meisenbach, Mark; Sedelmeier, Joerg

    2016-08-05

    The benefits and limitations of a simple continuous flow setup for handling and performing organolithium chemistry on the multigram scale are described. The developed metalation platform is a valuable complement to existing methodologies, as it combines the benefits of Flash Chemistry (chemical synthesis on a time scale of <1 s) with remarkable throughput (g/min) while mitigating the risk of blockages.

  3. Pre-emptive ice cube cryotherapy for reducing pain from local anaesthetic injections for simple lacerations: a randomised controlled trial.

    PubMed

    Song, JaeWoo; Kim, HyukHoon; Park, EunJung; Ahn, Jung Hwan; Yoon, Eunhui; Lampotang, Samsun; Gravenstein, Nikolaus; Choi, SangChun

    2018-02-01

    Subcutaneous local anaesthetic injection can be painful to patients in the ED. We evaluated the effect of cryotherapy by application of an ice cube to the injection site prior to injection in patients with simple lacerations. We conducted a prospective, randomised, controlled trial in consented patients with simple lacerations needing primary repair at a single emergency centre from April to July 2016. We randomly assigned patients undergoing repair for simple lacerations to either the cryotherapy group or the control group (standard care; no cryotherapy or other pretreatment of the injection site). In cryotherapy group subjects, we applied an ice cube (size: 1.5×1.5×1.5 cm) placed inside a sterile glove on the wound at the anticipated subcutaneous lidocaine injection site for 2 min prior to injection. The primary outcome was a subjective numeric rating (0-10 scale) of the perceived pain from the subcutaneous local anaesthetic injections. Secondary outcomes were (a) perceived pain on a numeric scale for cryotherapy itself, that is, pain from contact of the ice cube/glove with the skin and (b) the rate of complications after primary laceration repair. Fifty patients were enrolled, consented and randomised, with 25 in the cryotherapy group and 25 in the control group. The numeric rating scale score for the subcutaneous anaesthetic injections was median 2.0 (IQR 1 to 3.5; 95% CI 1.81 to 3.47) in the cryotherapy group and median 5.0 (IQR 3 to 7; 95% CI 3.91 to 6.05) in the control group (Mann-Whitney U=147.50, p=0.001). No wound complications occurred in either group. The numeric rating scale score for cryotherapy itself was median 2.0 (IQR 1 to 3.5; 95% CI 1.90 to 3.70). Pre-emptive topical injection site cryotherapy lasting 2 min before subcutaneous local anaesthetic injections can significantly reduce perceived pain from subcutaneous local anaesthetic injections in patients presenting for simple laceration repair. KCT0001990.
© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
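    The Mann-Whitney U statistic used for the between-group comparison can be computed by direct pair counting; the scores below are illustrative, not the trial data.

    ```python
    def mann_whitney_u(a, b):
        """Mann-Whitney U statistic for sample `a` against sample `b`:
        the number of pairs (x, y) with x < y, counting ties as 0.5."""
        u = 0.0
        for x in a:
            for y in b:
                if x < y:
                    u += 1.0
                elif x == y:
                    u += 0.5
        return u

    # illustrative 0-10 pain scores (not the trial data)
    cryo    = [1, 2, 2, 3, 3]
    control = [4, 5, 5, 6, 7]
    u = mann_whitney_u(cryo, control)  # complete separation: U = 5 * 5 = 25
    ```

    With 25 patients per arm the maximum possible U is 625; the trial's U = 147.50 sits well below the null expectation of 312.5, consistent with lower pain scores in the cryotherapy group.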

  4. Uncertainties in scaling factors for ab initio vibrational zero-point energies

    NASA Astrophysics Data System (ADS)

    Irikura, Karl K.; Johnson, Russell D.; Kacker, Raghu N.; Kessel, Rüdiger

    2009-03-01

    Vibrational zero-point energies (ZPEs) determined from ab initio calculations are often scaled by empirical factors. An empirical scaling factor partially compensates for the effects arising from vibrational anharmonicity and incomplete treatment of electron correlation. These effects are not random but are systematic. We report scaling factors for 32 combinations of theory and basis set, intended for predicting ZPEs from computed harmonic frequencies. An empirical scaling factor carries uncertainty. We quantify and report, for the first time, the uncertainties associated with scaling factors for ZPE. The uncertainties are larger than generally acknowledged; the scaling factors have only two significant digits. For example, the scaling factor for B3LYP/6-31G(d) is 0.9757±0.0224 (standard uncertainty). The uncertainties in the scaling factors lead to corresponding uncertainties in predicted ZPEs. The proposed method for quantifying the uncertainties associated with scaling factors is based upon the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. We also present a new reference set of 60 diatomic and 15 polyatomic "experimental" ZPEs that includes estimated uncertainties.
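    Propagating the scaling-factor uncertainty into a predicted ZPE is a one-line calculation when the computed harmonic ZPE is treated as exact. The factor and uncertainty below are those quoted in the abstract for B3LYP/6-31G(d); the 50 kJ/mol harmonic ZPE is a hypothetical example value.

    ```python
    def scaled_zpe(zpe_harmonic, c, u_c):
        """Scaled ZPE and its standard uncertainty for a scaling factor c ± u_c,
        treating the computed harmonic ZPE as exact: u(c * ZPE) = |ZPE| * u_c."""
        return c * zpe_harmonic, abs(zpe_harmonic) * u_c

    # factor and standard uncertainty for B3LYP/6-31G(d), from the abstract;
    # the harmonic ZPE of 50 kJ/mol is hypothetical
    zpe, u = scaled_zpe(50.0, 0.9757, 0.0224)
    ```

    The resulting standard uncertainty (about 2.3% of the scaled value) illustrates the paper's point that scaling factors effectively carry only two significant digits.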

  5. More Thoughts on AG-SG Comparisons and SG Scale Factor Determinations

    NASA Astrophysics Data System (ADS)

    Crossley, David; Calvo, Marta; Rosat, Severine; Hinderer, Jacques

    2018-05-01

    We revisit a number of details that arise when doing joint AG-SG (absolute gravimeter-superconducting gravimeter) calibrations, focusing on the scale factor determination and the AG mean value that derives from the offset. When fitting SG data to AG data, the choice of which time span to use for the SG data can make a difference, as can the inclusion of a trend that might be present in the fitting. The SG time delay has only a small effect. We review a number of options discussed recently in the literature on whether drops or sets provide the most accurate scale factor, and how to reject drops and sets to get the most consistent result. Two effects are clearly indicated by our tests: one is to smooth the raw SG 1 s (or similar sampling interval) data for times that coincide with AG drops; the other is a second pass in processing to reject residual outliers after the initial fit. Although drops can usefully provide smaller SG calibration errors compared to using set data, set values are more robust to data problems, but one has to use the standard error to avoid large uncertainties. When combining scale factor determinations for the same SG at the same station, the expected gradual reduction of the error with each new experiment is consistent with the method of conflation. This is valid even when the SG data acquisition system is changed, or different AGs are used. We also find a relationship between the AG mean values obtained from SG-to-AG fits and the traditional short-term AG ('site') measurements usually done with shorter datasets. This involves different zero levels and corrections in the AG versus SG processing. Without using the Micro-g FG5 software it is possible to use the SG-derived corrections for tides, barometric pressure, and polar motion to convert an AG-SG calibration experiment into a site measurement (and vice versa).
Finally, we provide a simple method for AG users who do not have the FG5-software to find an internal FG5 parameter that allows us to convert AG values between different transfer heights when there is a change in gradient.
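    The gradual reduction of the combined error with each new calibration experiment, consistent with conflation, can be illustrated with an inverse-variance weighted combination. The scale-factor values and uncertainties below are hypothetical.

    ```python
    def combine_calibrations(values, sigmas):
        """Inverse-variance weighted combination of independent scale-factor
        determinations; the combined uncertainty shrinks with each experiment."""
        weights = [1.0 / s ** 2 for s in sigmas]
        wsum = sum(weights)
        mean = sum(w * v for w, v in zip(weights, values)) / wsum
        return mean, wsum ** -0.5

    # hypothetical SG scale factors (nm/s^2 per volt) from three AG campaigns
    vals   = [-732.1, -731.8, -732.4]
    sigmas = [0.6, 0.5, 0.8]
    mean, sigma = combine_calibrations(vals, sigmas)  # sigma < best single campaign
    ```

    Each additional campaign adds its weight to the denominator, so the combined uncertainty is always smaller than the best individual one, matching the behaviour described in the abstract.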

  7. Public-private delivery of insecticide-treated nets: a voucher scheme in Volta Region, Ghana

    PubMed Central

    Kweku, Margaret; Webster, Jayne; Taylor, Ian; Burns, Susan; Dedzo, McDamien

    2007-01-01

    Background Coverage of vulnerable groups with insecticide-treated nets (ITNs) in Ghana, as in the majority of countries of sub-Saharan Africa is currently low. A voucher scheme was introduced in Volta Region as a possible sustainable delivery system for increasing this coverage through scale-up to other regions. Successful scale-up of public health interventions depends upon optimal delivery processes but operational research for delivery processes in large-scale implementation has been inadequate. Methods A simple tool was developed to monitor numbers of vouchers given to each health facility, numbers issued to pregnant women by the health staff, and numbers redeemed by the distributors back to the management agent. Three rounds of interviews were undertaken with health facility staff, retailers and pregnant women who had attended antenatal clinic (ANC). Results During the one year pilot 25,926 vouchers were issued to eligible women from clinics, which equates to 50.7% of the 51,658 ANC registrants during this time period. Of the vouchers issued 66.7% were redeemed by distributors back to the management agent. Initially, non-issuing of vouchers to pregnant women was mainly due to eligibility criteria imposed by the midwives; later in the year it was due to decisions of the pregnant women, and supply constraints. These in turn were heavily influenced by factors external to the programme: current household ownership of nets, competing ITN delivery strategies, and competition for the limited number of ITNs available in the country from major urban areas of other regions. Conclusion Both issuing and redemption of vouchers should be monitored as factors assumed to influence voucher redemption had an influence on issuing, and vice versa. More evidence is needed on how specific contextual factors influence the success of voucher schemes and other models of delivery of ITNs. 
Such an evidence base will facilitate optimal strategic decision making so that the delivery model with the best probability of success within a given context is implemented. Rigorous monitoring has an important role to play in the successful scaling-up of delivery of effective public health interventions. PMID:17274810

  8. Evaluating Water Budget Closure Across Spatial Scales: An Observational Approach through Texas Water Observatory

    NASA Astrophysics Data System (ADS)

    Gaur, N.; Jaimes, A.; Vaughan, S.; Morgan, C.; Moore, G. W.; Miller, G. R.; Everett, M. E.; Lawing, M.; Mohanty, B.

    2017-12-01

Applications varying from improving water conservation practices at the field scale to predicting global hydrology under a changing climate depend upon our ability to achieve water budget closure. 1) Prevalent heterogeneity in soils, geology and land-cover, 2) uncertainties in observations and 3) space-time scales of our control volume and available data are the main factors affecting the percentage of water budget closure that we can achieve. The Texas Water Observatory presents a unique opportunity to observe the major components of the water cycle (namely precipitation, evapotranspiration, root zone soil moisture, streamflow and groundwater) in varying eco-hydrological regions representative of the lower Brazos River basin at multiple scales. The soils in these regions comprise heavy clays that swell and shrink, creating complex preferential pathways in the sub-surface and thus making the hydrology of this region difficult to quantify. This work evaluates the water budget of the region by varying the control volume in terms of three temporal scales (weekly, monthly and seasonal) and three spatial scales. The spatial scales are 1) the point scale, typical for process understanding of water dynamics; 2) the eddy covariance footprint scale, typical of most eco-hydrological applications at the field scale; and 3) the satellite footprint scale, typically used in regional and global hydrological analysis. We employed a simple water balance model to evaluate the water budget at all scales. The point scale water budget was assessed using direct observations from hydro-geo-thematically selected observation points within different eddy covariance footprints. At the eddy covariance footprint scale, the sub-surface of each footprint was intensively characterized using electromagnetic induction (EM 38), and the resultant data were used to calculate the inter-point variability to upscale the sub-surface storage, while the satellite-scale water budget was evaluated using SMAP satellite observations supplemented with reanalysis products. At the point scale, we found differences in sub-surface storage within the same land-cover depending on the landscape position of the observation point, while land-cover significantly affected the water budget at the larger scales.
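
A simple water balance of the kind used across these scales can be sketched as a closure residual. The function and values below are a hypothetical illustration, not the observatory's model or data:

```python
def water_budget_residual(P, ET, Q, dS):
    """Closure residual of a simple water balance (all terms in mm per period).

    P: precipitation, ET: evapotranspiration, Q: runoff/streamflow,
    dS: observed change in storage (soil moisture + groundwater).
    A perfectly closed budget gives zero: P - ET - Q - dS = 0.
    """
    return P - ET - Q - dS

def percent_closure(P, ET, Q, dS):
    """Residual expressed as percent closure relative to precipitation input."""
    return 100.0 * (1.0 - abs(water_budget_residual(P, ET, Q, dS)) / P)

# Weekly example (hypothetical): 30 mm rain, 12 mm ET, 8 mm runoff,
# 9 mm storage increase -> 1 mm of the input is unexplained.
res = water_budget_residual(30.0, 12.0, 8.0, 9.0)
```

The same residual can be evaluated at any of the three spatial scales; only the provenance of each term changes (point sensors, footprint upscaling, or satellite products).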

  9. Assignment of boundary conditions in embedded ground water flow models

    USGS Publications Warehouse

    Leake, S.A.

    1998-01-01

Many small-scale ground water models are too small to incorporate distant aquifer boundaries. If a larger-scale model exists for the area of interest, flow and head values can be specified for boundaries in the smaller-scale model using values from the larger-scale model. Flow components along rows and columns of a large-scale block-centered finite-difference model can be interpolated to compute horizontal flow across any segment of a perimeter of a small-scale model. Head at cell centers of the larger-scale model can be interpolated to compute head at points on a model perimeter. Simple linear interpolation is proposed for horizontal interpolation of horizontal-flow components. Bilinear interpolation is proposed for horizontal interpolation of head values. The methods of interpolation provided satisfactory boundary conditions in tests using models of hypothetical aquifers.
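
The bilinear interpolation of head proposed here can be sketched as follows; the cell-center coordinates and head values are hypothetical:

```python
def bilinear_head(x, y, x0, x1, y0, y1, h00, h10, h01, h11):
    """Bilinearly interpolate head at (x, y) from the four surrounding
    large-scale cell centers (x0, y0)..(x1, y1), whose heads are
    h00 (at x0,y0), h10 (x1,y0), h01 (x0,y1), h11 (x1,y1)."""
    tx = (x - x0) / (x1 - x0)   # fractional position between cell centers in x
    ty = (y - y0) / (y1 - y0)   # fractional position in y
    return (h00 * (1 - tx) * (1 - ty) + h10 * tx * (1 - ty)
            + h01 * (1 - tx) * ty + h11 * tx * ty)

# Head at the midpoint of four cell centers is the average of the four values.
h = bilinear_head(50.0, 50.0, 0.0, 100.0, 0.0, 100.0, 10.0, 12.0, 14.0, 16.0)
```

Evaluating this at each perimeter point of the embedded model yields the specified-head boundary values; the flow components along rows and columns use the simpler one-dimensional linear analogue.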

  10. 5D Modelling: An Efficient Approach for Creating Spatiotemporal Predictive 3D Maps of Large-Scale Cultural Resources

    NASA Astrophysics Data System (ADS)

    Doulamis, A.; Doulamis, N.; Ioannidis, C.; Chrysouli, C.; Grammalidis, N.; Dimitropoulos, K.; Potsiou, C.; Stathopoulou, E.-K.; Ioannides, M.

    2015-08-01

Outdoor large-scale cultural sites are sensitive mainly to environmental, natural and human-made factors, implying an imminent need for spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). On the other hand, quite different actors are involved in Cultural Heritage research (archaeologists, curators, conservators, simple users), each with diverse needs. All these statements advocate that 5D modelling (3D geometry plus time plus levels of detail) is ideally required for preservation and assessment of outdoor large-scale cultural sites; it is currently implemented as a simple aggregation of 3D digital models at different times and levels of detail. The main bottleneck of such an approach is its complexity, making 5D modelling impossible to validate in real-life conditions. In this paper, a cost-effective and affordable framework for 5D modelling is proposed, based on a spatial-temporal dependent aggregation of 3D digital models, incorporating a predictive assessment procedure to indicate which regions (surfaces) of an object should be reconstructed at higher levels of detail at the next time instances and which at lower ones. In this way, dynamic change history maps are created, indicating the spatial probability that a region will need further 3D modelling at forthcoming instances. Using these maps, a predictive assessment can be made; that is, surfaces within the objects can be localized where a high-accuracy reconstruction process needs to be activated at the forthcoming time instances. The proposed 5D Digital Cultural Heritage Model (5D-DCHM) is implemented using open interoperable standards based on the CityGML framework, which also allows the description of additional semantic metadata information. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 5D-DCHM geometry and the respective semantic information. The open-source 3DCityDB, incorporating a PostgreSQL geo-database, is used to manage and manipulate the 3D data and their semantics.

  11. Towards a simple representation of chalk hydrology in land surface modelling

    NASA Astrophysics Data System (ADS)

    Rahman, Mostaquimur; Rosolem, Rafael

    2017-01-01

    Modelling and monitoring of hydrological processes in the unsaturated zone of chalk, a porous medium with fractures, is important to optimize water resource assessment and management practices in the United Kingdom (UK). However, incorporating the processes governing water movement through a chalk unsaturated zone in a numerical model is complicated mainly due to the fractured nature of chalk that creates high-velocity preferential flow paths in the subsurface. In general, flow through a chalk unsaturated zone is simulated using the dual-porosity concept, which often involves calibration of a relatively large number of model parameters, potentially undermining applications to large regions. In this study, a simplified parameterization, namely the Bulk Conductivity (BC) model, is proposed for simulating hydrology in a chalk unsaturated zone. This new parameterization introduces only two additional parameters (namely the macroporosity factor and the soil wetness threshold parameter for fracture flow activation) and uses the saturated hydraulic conductivity from the chalk matrix. The BC model is implemented in the Joint UK Land Environment Simulator (JULES) and applied to a study area encompassing the Kennet catchment in the southern UK. This parameterization is further calibrated at the point scale using soil moisture profile observations. The performance of the calibrated BC model in JULES is assessed and compared against the performance of both the default JULES parameterization and the uncalibrated version of the BC model implemented in JULES. Finally, the model performance at the catchment scale is evaluated against independent data sets (e.g. runoff and latent heat flux). The results demonstrate that the inclusion of the BC model in JULES improves simulated land surface mass and energy fluxes over the chalk-dominated Kennet catchment. 
Therefore, the simple approach described in this study may be used to incorporate the flow processes through a chalk unsaturated zone in large-scale land surface modelling applications.
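
A minimal sketch of the two-parameter idea behind such a bulk-conductivity scheme: matrix flow only below a soil-wetness threshold, with a macroporosity term switching on above it. The functional forms and parameter values below are illustrative assumptions, not the actual JULES/BC formulation:

```python
def bulk_conductivity(theta, theta_sat, k_sat_matrix, macro_factor, wet_thresh):
    """Toy bulk-conductivity curve: below the wetness threshold only matrix
    flow is active; above it a macropore (fracture) contribution switches on
    and grows linearly with the wetness excess. Illustrative form only."""
    s = theta / theta_sat                     # relative wetness, 0..1
    k = k_sat_matrix * s ** 3                 # toy matrix conductivity curve
    if s > wet_thresh:
        k += macro_factor * k_sat_matrix * (s - wet_thresh) / (1.0 - wet_thresh)
    return k

# Hypothetical parameters: matrix K_sat = 1e-6 m/s, macroporosity factor 100,
# fracture flow activating at 70% relative wetness.
k_dry = bulk_conductivity(0.20, 0.40, 1e-6, 100.0, 0.7)   # matrix flow only
k_wet = bulk_conductivity(0.40, 0.40, 1e-6, 100.0, 0.7)   # fractures active
```

The appeal of such a parameterization is that only the macroporosity factor and the wetness threshold need calibrating; the matrix conductivity is taken from chalk measurements.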

  12. Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals

    NASA Astrophysics Data System (ADS)

    Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.

    2017-10-01

We present a code implementing the linearized quasiparticle self-consistent GW method (LQSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which distinguishes it from the existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains from switching to the imaginary time representation, in the same way as in the space-time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N³ scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to the previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method. Program Files doi:http://dx.doi.org/10.17632/cpchkfty4w.1 Licensing provisions: GNU General Public License Programming language: Fortran 90 External routines/libraries: BLAS, LAPACK, MPI (optional) Nature of problem: Direct implementation of the GW method scales as N⁴ with the system size, which quickly becomes prohibitively time-consuming even on modern computers. Solution method: We implemented the GW approach using a method that switches between real-space and momentum-space representations. Some operations are faster in real space, whereas others are more computationally efficient in reciprocal space. This makes our approach scale as N³. Restrictions: The limiting factor is usually the memory available in a computer. Using 10 GB/core of memory allows us to study systems of up to 15 atoms per unit cell.

  13. Improving Estimation of Ground Casualty Risk From Reentering Space Objects

    NASA Technical Reports Server (NTRS)

    Ostrom, Chris L.

    2017-01-01

A recent improvement to the long-term estimation of ground casualties from reentering space debris is the further refinement and update to the human population distribution. Previous human population distributions were based on global totals with simple scaling factors for future years, or a coarse grid of population counts in a subset of the world's countries, each cell having its own projected growth rate. The newest population model includes a 5-fold refinement in both latitude and longitude resolution. All areas along a single latitude are combined to form a global population distribution as a function of latitude, creating a more accurate population estimation based on non-uniform growth at the country and area levels. Previous risk probability calculations used simplifying assumptions that did not account for the ellipsoidal nature of the Earth. The new method uses, first, a simple analytical method to estimate the amount of time spent above each latitude band for a debris object with a given orbit inclination and, second, a more complex numerical method that incorporates the effects of a non-spherical Earth. These new results are compared with the prior models to assess the magnitude of the effects on reentry casualty risk.
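
The analytical step can be illustrated for a spherical Earth: for a circular orbit of inclination i, the sub-satellite latitude obeys sin φ = sin i · sin u with the argument of latitude u uniform in time, which gives a closed form for the fraction of a period spent below any latitude. This is a generic sketch under those assumptions, not the paper's exact method:

```python
import math

def time_fraction_below(lat_deg, incl_deg):
    """Fraction of one circular-orbit period spent at latitudes <= lat_deg,
    for inclination incl_deg (spherical Earth, 0 < incl <= 90)."""
    phi, inc = math.radians(lat_deg), math.radians(incl_deg)
    if phi <= -inc:
        return 0.0
    if phi >= inc:
        return 1.0
    return 0.5 + math.asin(math.sin(phi) / math.sin(inc)) / math.pi

def time_fraction_in_band(lat_lo, lat_hi, incl_deg):
    """Fraction of time spent in the latitude band [lat_lo, lat_hi] degrees."""
    return (time_fraction_below(lat_hi, incl_deg)
            - time_fraction_below(lat_lo, incl_deg))

# For a 51.6-degree inclination, dwell time concentrates near the extreme
# latitudes, which is why debris risk weighting by latitude matters.
near_peak = time_fraction_in_band(50.0, 51.6, 51.6)
near_equator = time_fraction_in_band(0.0, 1.6, 51.6)
```

Multiplying these band fractions by the population per latitude band yields the kind of latitude-weighted exposure the refined population model supports.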

  14. Extending the range of real time density matrix renormalization group simulations

    NASA Astrophysics Data System (ADS)

    Kennes, D. M.; Karrasch, C.

    2016-03-01

We discuss a few simple modifications to time-dependent density matrix renormalization group (DMRG) algorithms which allow access to larger time scales. We specifically aim at beginners and present practical aspects of how to implement these modifications within any standard matrix product state (MPS) based formulation of the method. Most importantly, we show how to 'combine' the Schrödinger and Heisenberg time evolutions of arbitrary pure states | ψ 〉 and operators A in the evaluation of 〈A〉ψ(t) = 〈 ψ | A(t) | ψ 〉 . This includes quantum quenches. The generalization to (non-)thermal mixed state dynamics 〈A〉ρ(t) =Tr [ ρA(t) ] induced by an initial density matrix ρ is straightforward. In the context of linear response (ground state or finite temperature T > 0) correlation functions, one can extend the simulation time by a factor of two by 'exploiting time translation invariance', which is efficiently implementable within MPS DMRG. We present a simple analytic argument for why a recently introduced disentangler succeeds in reducing the effort of time-dependent simulations at T > 0. Finally, we advocate the python programming language as an elegant option for beginners to set up a DMRG code.
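
The 'combine' trick rests on the identity ⟨ψ|e^{iHt}Ae^{-iHt}|ψ⟩ = ⟨ψ(t/2)|A(t/2)|ψ(t/2)⟩, so the state and the operator each only need to be evolved to t/2. A minimal exact-diagonalization check of the identity, using dense matrices as a toy stand-in for the MPS machinery:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (M + M.conj().T) / 2                  # random Hermitian "Hamiltonian"
A = np.diag(np.arange(n, dtype=float))    # a simple Hermitian observable
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)                # normalized initial state

w, V = np.linalg.eigh(H)

def U(t):
    """Time-evolution operator U(t) = exp(-i H t) via the eigenbasis of H."""
    return (V * np.exp(-1j * w * t)) @ V.conj().T

t = 1.7
# Schroedinger picture only: evolve the state for the full time t.
direct = np.vdot(U(t) @ psi, A @ (U(t) @ psi)).real
# Split picture: state evolved to t/2, operator evolved (Heisenberg) to t/2.
phi = U(t / 2) @ psi
A_half = U(t / 2).conj().T @ A @ U(t / 2)
split = np.vdot(phi, A_half @ phi).real
```

In an MPS simulation the payoff is that entanglement (and hence bond dimension) grows with the evolved time of each object, so halving that time per object extends the reachable time scale.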

  15. The importance of planetary rotation period for ocean heat transport.

    PubMed

    Cullum, J; Stevens, D; Joshi, M

    2014-08-01

The climate and, hence, potential habitability of a planet crucially depends on how its atmospheric and ocean circulation transports heat from warmer to cooler regions. However, previous studies of planetary climate have concentrated on modeling the dynamics of atmospheres, while dramatically simplifying the treatment of oceans, which neglects or misrepresents the effect of the ocean on the total heat transport. Even the majority of studies with a dynamic ocean have used a simple so-called aquaplanet that has no continental barriers, a configuration that dramatically changes the ocean dynamics. Here, the significance of the response of poleward ocean heat transport to planetary rotation period is shown with a simple meridional barrier, the simplest representation of any continental configuration. The poleward ocean heat transport increases significantly as the planetary rotation period is increased. The peak heat transport more than doubles when the rotation period is increased by a factor of ten. There are also significant changes to ocean temperature at depth, with implications for the carbon cycle. There is strong agreement between the model results and a scale analysis of the governing equations. This result highlights the importance of both planetary rotation period and the ocean circulation when considering planetary habitability.

  16. On a Possible Unified Scaling Law for Volcanic Eruption Durations

    PubMed Central

    Cannavò, Flavio; Nunnari, Giuseppe

    2016-01-01

Volcanoes constitute dissipative systems with many degrees of freedom. Their eruptions are the result of complex processes that involve interacting chemical-physical systems. At present, due to the complexity of the involved phenomena and to the lack of precise measurements, both analytical and numerical models are unable to simultaneously include the main processes involved in eruptions, thus making forecasts of volcanic dynamics rather unreliable. On the other hand, accurate forecasts of some eruption parameters, such as the duration, could be a key factor in natural hazard estimation and mitigation. Analyzing a large database containing most of the known volcanic eruptions, we have determined that the duration of eruptions seems to be described by a universal distribution which characterizes eruption duration dynamics. In particular, this paper presents a plausible global power-law distribution of durations of volcanic eruptions that holds worldwide for different volcanic environments. We also introduce a new, simple and realistic pipe model that can follow the same empirical distribution. Since the proposed model belongs to the family of self-organized systems, it may support the hypothesis that simple mechanisms can lead naturally to the emergent complexity in volcanic behaviour. PMID:26926425
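
For illustration, a power-law exponent can be estimated from a set of durations with the standard continuous maximum-likelihood estimator. This is a generic sketch, not the authors' fitting procedure, and the synthetic durations are hypothetical:

```python
import math

def powerlaw_mle_alpha(durations, xmin):
    """Continuous power-law MLE (the 'Hill' estimator):
    alpha = 1 + n / sum(ln(x / xmin)), over observations x >= xmin."""
    tail = [x for x in durations if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic durations generated deterministically from p(x) ~ x**-2 above
# xmin, via the inverse CDF evaluated at quantile midpoints.
xmin, alpha_true, n = 1.0, 2.0, 10000
sample = [xmin * (1.0 - (k + 0.5) / n) ** (-1.0 / (alpha_true - 1.0))
          for k in range(n)]
alpha_hat = powerlaw_mle_alpha(sample, xmin)   # recovers approximately 2.0
```

Comparing such a fitted exponent across volcanic environments is one way to test whether a single distribution really holds worldwide.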

  17. On a Possible Unified Scaling Law for Volcanic Eruption Durations.

    PubMed

    Cannavò, Flavio; Nunnari, Giuseppe

    2016-03-01

Volcanoes constitute dissipative systems with many degrees of freedom. Their eruptions are the result of complex processes that involve interacting chemical-physical systems. At present, due to the complexity of the involved phenomena and to the lack of precise measurements, both analytical and numerical models are unable to simultaneously include the main processes involved in eruptions, thus making forecasts of volcanic dynamics rather unreliable. On the other hand, accurate forecasts of some eruption parameters, such as the duration, could be a key factor in natural hazard estimation and mitigation. Analyzing a large database containing most of the known volcanic eruptions, we have determined that the duration of eruptions seems to be described by a universal distribution which characterizes eruption duration dynamics. In particular, this paper presents a plausible global power-law distribution of durations of volcanic eruptions that holds worldwide for different volcanic environments. We also introduce a new, simple and realistic pipe model that can follow the same empirical distribution. Since the proposed model belongs to the family of self-organized systems, it may support the hypothesis that simple mechanisms can lead naturally to the emergent complexity in volcanic behaviour.

  18. Improving Estimation of Ground Casualty Risk from Reentering Space Objects

    NASA Technical Reports Server (NTRS)

    Ostrom, C.

    2017-01-01

A recent improvement to the long-term estimation of ground casualties from reentering space debris is the further refinement and update to the human population distribution. Previous human population distributions were based on global totals with simple scaling factors for future years, or a coarse grid of population counts in a subset of the world's countries, each cell having its own projected growth rate. The newest population model includes a 5-fold refinement in both latitude and longitude resolution. All areas along a single latitude are combined to form a global population distribution as a function of latitude, creating a more accurate population estimation based on non-uniform growth at the country and area levels. Previous risk probability calculations used simplifying assumptions that did not account for the ellipsoidal nature of the Earth. The new method uses, first, a simple analytical method to estimate the amount of time spent above each latitude band for a debris object with a given orbit inclination and, second, a more complex numerical method that incorporates the effects of a non-spherical Earth. These new results are compared with the prior models to assess the magnitude of the effects on reentry casualty risk.

  19. Leakage and spillover effects of forest management on carbon storage: theoretical insights from a simple model

    NASA Astrophysics Data System (ADS)

    Magnani, Federico; Dewar, Roderick C.; Borghetti, Marco

    2009-04-01

    Leakage (spillover) refers to the unintended negative (positive) consequences of forest carbon (C) management in one area on C storage elsewhere. For example, the local C storage benefit of less intensive harvesting in one area may be offset, partly or completely, by intensified harvesting elsewhere in order to meet global timber demand. We present the results of a theoretical study aimed at identifying the key factors determining leakage and spillover, as a prerequisite for more realistic numerical studies. We use a simple model of C storage in managed forest ecosystems and their wood products to derive approximate analytical expressions for the leakage induced by decreasing the harvesting frequency of existing forest, and the spillover induced by establishing new plantations, assuming a fixed total wood production from local and remote (non-local) forests combined. We find that leakage and spillover depend crucially on the growth rates, wood product lifetimes and woody litter decomposition rates of local and remote forests. In particular, our results reveal critical thresholds for leakage and spillover, beyond which effects of forest management on remote C storage exceed local effects. Order of magnitude estimates of leakage indicate its potential importance at global scales.

  20. Development and validation of a questionnaire evaluating patient anxiety during Magnetic Resonance Imaging: the Magnetic Resonance Imaging-Anxiety Questionnaire (MRI-AQ).

    PubMed

    Ahlander, Britt-Marie; Årestedt, Kristofer; Engvall, Jan; Maret, Eva; Ericsson, Elisabeth

    2016-06-01

To develop and validate a new instrument measuring patient anxiety during Magnetic Resonance Imaging examinations, the Magnetic Resonance Imaging-Anxiety Questionnaire. Questionnaires measuring patients' anxiety during Magnetic Resonance Imaging examinations have been the same as those used in a wide range of conditions. To learn about patients' experience during examination and to evaluate interventions, a specific questionnaire measuring patient anxiety during Magnetic Resonance Imaging is needed. Psychometric cross-sectional study with test-retest design. A new questionnaire, the Magnetic Resonance Imaging-Anxiety Questionnaire, was designed from patient expressions of anxiety in Magnetic Resonance Imaging scanners. The sample was recruited between October 2012-October 2014. Factor structure was evaluated with exploratory factor analysis and internal consistency with Cronbach's alpha. Criterion-related validity, known-group validity and test-retest reliability were calculated. Patients referred for Magnetic Resonance Imaging of either the spine or the heart were invited to participate. The development and validation of the Magnetic Resonance Imaging-Anxiety Questionnaire resulted in 15 items consisting of two factors. Cronbach's alpha was found to be high. The Magnetic Resonance Imaging-Anxiety Questionnaire correlated more highly with instruments measuring anxiety than with depression scales. Known-group validity demonstrated a higher level of anxiety for patients undergoing Magnetic Resonance Imaging scans of the heart than for those having the spine examined. Test-retest reliability demonstrated an acceptable level for the scale. The Magnetic Resonance Imaging-Anxiety Questionnaire bridges a gap among existing questionnaires, making it a simple and useful tool for measuring patient anxiety during Magnetic Resonance Imaging examinations. © 2016 The Authors. Journal of Advanced Nursing Published by John Wiley & Sons Ltd.
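
Cronbach's alpha, used here for internal consistency, can be computed directly from per-item scores; the scores below are hypothetical:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    items: one list of scores per item, all over the same respondents.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    k, n = len(items), len(items[0])

    def var(xs):                      # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1.0 - sum(var(i) for i in items) / var(totals))

# Hypothetical data: 3 items rated by 4 respondents on a 1-5 scale.
alpha = cronbach_alpha([[3, 4, 2, 5], [2, 4, 3, 5], [3, 5, 2, 4]])
```

Items that all track the same underlying construct push alpha toward 1; perfectly parallel items give exactly 1.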

  1. The School Anxiety Scale-Teacher Report (SAS-TR): translation and psychometric properties of the Iranian version.

    PubMed

    Hajiamini, Zahra; Mohamadi, Ashraf; Ebadi, Abbas; Fathi- Ashtiani, Ali; Tavousi, Mahmoud; Montazeri, Ali

    2012-07-18

The School Anxiety Scale-Teacher Report (SAS-TR) was designed to assess anxiety in children at school. The SAS-TR is a proxy-rated measure that assesses social anxiety and generalized anxiety and also gives a total anxiety score. This study aimed to translate and validate the SAS-TR in Iran. The translation and cultural adaptation of the original questionnaire were carried out in accordance with the published guidelines. A sample of students participated in the study. Reliability was estimated using internal consistency and test-retest analysis. Validity was assessed using content validity. The factor structure of the questionnaire was extracted by performing both exploratory and confirmatory factor analyses. In all, 200 elementary students aged 6 to 10 years were studied. Considering the recommended cut-off values, the overall prevalence of high anxiety in elementary students was found to be 21%. Cronbach's alpha coefficient for the Iranian SAS-TR was 0.92 and the intraclass correlation coefficient (ICC) was found to be 0.81. The principal component analysis indicated a two-factor structure for the questionnaire (generalized and social anxiety) that jointly accounted for 55.3% of the variance observed. The confirmatory factor analysis also indicated a good fit to the data for the two-factor latent structure of the questionnaire. In general, the findings suggest that the Iranian version of the SAS-TR has satisfactory reliability and validity for measuring anxiety in 6- to 10-year-old children in Iran. It is simple and easy to use and can now be applied in future studies.

  2. Efficient production of human acidic fibroblast growth factor in pea (Pisum sativum L.) plants by agroinfection of germinated seeds

    PubMed Central

    2011-01-01

Background For efficient and large scale production of recombinant proteins in plants, transient expression by agroinfection has a number of advantages over stable transformation. Simple manipulation, rapid analysis and high expression efficiency are possible. In pea, Pisum sativum, a Virus Induced Gene Silencing System using the pea early browning virus has been converted into an efficient agroinfection system by converting the two RNA genomes of the virus into binary expression vectors for Agrobacterium transformation. Results By vacuum infiltration (0.08 MPa, 1 min) of germinating pea seeds bearing 2-3 cm roots with Agrobacteria carrying the binary vectors, expression of the gene for Green Fluorescent Protein as a marker and the gene for the human acidic fibroblast growth factor (aFGF) was obtained in 80% of the infiltrated developing seedlings. Maximal production of the recombinant proteins was achieved 12-15 days after infiltration. Conclusions Compared to the leaf injection method, vacuum infiltration of germinated seeds is highly efficient, allowing large scale production of plants transiently expressing recombinant proteins. The production cycle of plants for harvesting the recombinant protein was shortened from 30 days for leaf injection to 15 days by applying vacuum infiltration. The synthesized aFGF was purified by heparin-affinity chromatography and its mitogenic activity on NIH 3T3 cells was confirmed to be similar to that of a commercial product. PMID:21548923

  3. Assessment of tinnitus-related impairments and disabilities using the German THI-12: sensitivity and stability of the scale over time.

    PubMed

    Görtelmeyer, Roman; Schmidt, Jürgen; Suckfüll, Markus; Jastreboff, Pawel; Gebauer, Alexander; Krüger, Hagen; Wittmann, Werner

    2011-08-01

To evaluate the reliability, dimensionality, predictive validity, construct validity, and sensitivity to change of the THI-12 total and sub-scales as diagnostic aids to describe and quantify tinnitus-evoked reactions and evaluate treatment efficacy. Explorative analysis of the German tinnitus handicap inventory (THI-12) to assess potential sensitivity to tinnitus therapy in placebo-controlled randomized studies. Correlation analysis, including Cronbach's coefficient α and explorative common factor analysis (EFA), was conducted within and between assessments to demonstrate the construct validity, dimensionality, and factorial structure of the THI-12. N = 618 patients suffering from subjective tinnitus who were to be screened to participate in a randomized, placebo-controlled, 16-week, longitudinal study. The THI-12 can reliably diagnose tinnitus-related impairments and disabilities and assess changes over time. The test-retest coefficient for neighbouring visits was r > 0.69, the internal consistency of the THI-12 total score was α ≤ 0.79 and α ≤ 0.89 at subsequent visits. Predictability of THI-12 total score and overall variance increased with successive measurements. The three-factorial structure allowed for evaluation of factors that affect aspects of patients' health-related quality of life. The THI-12, with its three-factorial structure, is a simple, reliable, and valid instrument for the diagnosis and assessment of tinnitus and associated impairment over time.

  4. A simple model of hohlraum power balance and mitigation of SRS

    DOE PAGES

    Albright, Brian J.; Montgomery, David S.; Yin, Lin; ...

    2016-04-01

    A simple energy balance model has been obtained for laser-plasma heating in indirect drive hohlraum plasma that allows rapid temperature scaling and evolution with parameters such as plasma density and composition. Furthermore, this model enables assessment of the effects on plasma temperature of, e.g., adding high-Z dopant to the gas fill or magnetic fields.

  5. DESIGN OF A SMALL-SCALE SOLAR CHIMNEY FOR SUSTAINABLE POWER

    EPA Science Inventory

    After several months of design and testing it has been determined that a small-scale solar chimney can be built using nearly any local materials and simple hand tools without needing superior construction knowledge. The biggest obstacle to overcome was the weather conditions.

  6. Some Results on Proper Eigenvalues and Eigenvectors with Applications to Scaling.

    ERIC Educational Resources Information Center

    McDonald, Roderick P.; And Others

    1979-01-01

    The problem of avoiding singularity when analyzing matrices for optimal scaling is addressed. Conditions are given under which the stationary points and values of a ratio of quadratic forms in two singular matrices can be obtained by a series of simple matrix operations. (Author/JKS)

  7. Assessing the Cognitive Regulation of Emotion in Depressed Stroke Patients

    ERIC Educational Resources Information Center

    Turner, Margaret A.; Andrewes, David G.

    2010-01-01

    This study evaluated the psychometric properties of a simple scale for measuring positive interpersonal attitudes of depressed stroke patients, with regard to their cognitive limitations. Two versions of the Attitudes Towards Relationships Scale were developed and administered to depressed stroke (n = 48) and control rheumatic/orthopaedic (n = 45)…

  8. Computational Thermochemistry: Scale Factor Databases and Scale Factors for Vibrational Frequencies Obtained from Electronic Model Chemistries.

    PubMed

    Alecu, I M; Zheng, Jingjing; Zhao, Yan; Truhlar, Donald G

    2010-09-14

Optimized scale factors for calculating vibrational harmonic and fundamental frequencies and zero-point energies have been determined for 145 electronic model chemistries, including 119 based on approximate functionals depending on occupied orbitals, 19 based on single-level wave function theory, three based on the neglect-of-diatomic-differential-overlap approximation, two based on doubly hybrid density functional theory, and two based on multicoefficient correlation methods. Forty of the scale factors are obtained from large databases, which are also used to derive two universal scale factor ratios that can be used to interconvert between scale factors optimized for various properties, enabling the derivation of three key scale factors for the effort of optimizing only one of them. A reduced scale factor optimization model is formulated in order to further reduce the cost of optimizing scale factors, and the reduced model is illustrated by using it to obtain 105 additional scale factors. Using root-mean-square errors from the values in the large databases, we find that scaling reduces errors in zero-point energies by a factor of 2.3 and errors in fundamental vibrational frequencies by a factor of 3.0, but it reduces errors in harmonic vibrational frequencies by only a factor of 1.3. It is shown that, upon scaling, the balanced multicoefficient correlation method based on coupled cluster theory with single and double excitations (BMC-CCSD) can lead to very accurate predictions of vibrational frequencies. With a polarized, minimally augmented basis set, the density functionals with zero-point energy scale factors closest to unity are MPWLYP1M (1.009), τHCTHhyb (0.989), BB95 (1.012), BLYP (1.013), BP86 (1.014), B3LYP (0.986), MPW3LYP (0.986), and VSXC (0.986).
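
The least-squares-optimal scale factor for a frequency database follows from minimizing Σ(λω_calc − ω_ref)², which gives the standard closed form λ = Σω_calc·ω_ref / Σω_calc². A sketch with hypothetical frequencies, not the paper's databases or model chemistries:

```python
def optimal_scale_factor(calc, ref):
    """Least-squares scale factor lam minimizing sum((lam*c - r)**2):
    lam = sum(c*r) / sum(c*c)."""
    return sum(c * r for c, r in zip(calc, ref)) / sum(c * c for c in calc)

def rms_error(calc, ref, lam=1.0):
    """Root-mean-square error of the scaled calculated frequencies."""
    n = len(calc)
    return (sum((lam * c - r) ** 2 for c, r in zip(calc, ref)) / n) ** 0.5

# Hypothetical harmonic frequencies (cm^-1) that are uniformly 3% too high,
# so the optimal scale factor should come out as 0.97.
calc = [3100.0, 1650.0, 1200.0, 980.0]
ref = [c * 0.97 for c in calc]
lam = optimal_scale_factor(calc, ref)
```

Separate scale factors optimized this way against harmonic, fundamental, or zero-point-energy references are what the universal ratios in the abstract interconvert between.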

  9. Discrete angle radiative transfer. 3. Numerical results and meteorological applications

    NASA Astrophysics Data System (ADS)

    Davis, Anthony; Gabriel, Philip; Lovejoy, Shuan; Schertzer, Daniel; Austin, Geoffrey L.

    1990-07-01

    In the first two installments of this series, various cloud models were studied with angularly discretized versions of radiative transfer. This simplification allows the effects of cloud inhomogeneity to be studied in some detail. The families of scattering media investigated were those whose members are related to each other by scale changing operations that involve only ratios of their sizes (``scaling'' geometries). In part 1 it was argued that, in the case of conservative scattering, the reflection and transmission coefficients of these families should vary algebraically with cloud size in the asymptotically thick regime, thus allowing us to define scaling exponents and corresponding ``universality'' classes. In part 2 this was further justified (by using analytical renormalization methods) for homogeneous clouds in one, two, and three spatial dimensions (i.e., slabs, squares, or triangles and cubes, respectively) as well as for a simple deterministic fractal cloud. Here the same systems are studied numerically. The results confirm (1) that renormalization is qualitatively correct (while quantitatively poor), and (2) more importantly, they support the conjecture that the universality classes of discrete and continuous angle radiative transfer are generally identical. Additional numerical results are obtained for a simple class of scale invariant (fractal) clouds that arises when modeling the concentration of cloud liquid water into ever smaller regions by advection in turbulent cascades. These so-called random ``β models'' are (also) characterized by a single fractal dimension. Both open and cyclical horizontal boundary conditions are considered. These and previous results are contrasted with plane-parallel predictions, and measures of systematic error are defined as ``packing factors'' which are found to diverge algebraically with average optical thickness and are significant even when the scaling behavior is very limited in range.
Several meteorological consequences, especially concerning the ``albedo paradox'' and global climate models, are discussed, and future directions of investigation are outlined. Throughout this series it is shown that spatial variability of the optical density field (i.e., cloud geometry) determines the exponent of optical thickness (hence universality class), whereas changes in phase function can only affect the multiplicative prefactors. It is therefore argued that much more emphasis should be placed on modeling spatial inhomogeneity and investigating its radiative signature, even if this implies crude treatment of the angular aspect of the radiative transfer problem.

  10. Simple framework for understanding the universality of the maximum drag reduction asymptote in turbulent flow of polymer solutions

    NASA Astrophysics Data System (ADS)

    Li, Chang-Feng; Sureshkumar, Radhakrishna; Khomami, Bamin

    2015-10-01

    Self-consistent direct numerical simulations of turbulent channel flows of dilute polymer solutions exhibiting friction drag reduction (DR) show that an effective Deborah number defined as the ratio of polymer relaxation time to the time scale of fluctuations in the vorticity in the mean flow direction remains O (1) from the onset of DR to the maximum drag reduction (MDR) asymptote. However, the ratio of the convective time scale associated with streamwise vorticity fluctuations to the vortex rotation time decreases with increasing DR, and the maximum drag reduction asymptote is achieved when these two time scales become nearly equal. Based on these observations, a simple framework is proposed that adequately describes the influence of polymer additives on the extent of DR from the onset of DR to MDR as well as the universality of the MDR in wall-bounded turbulent flows with polymer additives.

  11. Simple framework for understanding the universality of the maximum drag reduction asymptote in turbulent flow of polymer solutions.

    PubMed

    Li, Chang-Feng; Sureshkumar, Radhakrishna; Khomami, Bamin

    2015-10-01

    Self-consistent direct numerical simulations of turbulent channel flows of dilute polymer solutions exhibiting friction drag reduction (DR) show that an effective Deborah number defined as the ratio of polymer relaxation time to the time scale of fluctuations in the vorticity in the mean flow direction remains O(1) from the onset of DR to the maximum drag reduction (MDR) asymptote. However, the ratio of the convective time scale associated with streamwise vorticity fluctuations to the vortex rotation time decreases with increasing DR, and the maximum drag reduction asymptote is achieved when these two time scales become nearly equal. Based on these observations, a simple framework is proposed that adequately describes the influence of polymer additives on the extent of DR from the onset of DR to MDR as well as the universality of the MDR in wall-bounded turbulent flows with polymer additives.

  12. Simple spatial scaling rules behind complex cities.

    PubMed

    Li, Ruiqi; Dong, Lei; Zhang, Jiang; Wang, Xinran; Wang, Wen-Xu; Di, Zengru; Stanley, H Eugene

    2017-11-28

    Although most wealth and innovation have been the result of human interaction and cooperation, we are not yet able to quantitatively predict the spatial distributions of three main elements of cities: population, roads, and socioeconomic interactions. Using a simple model based mainly on spatial attraction and matching growth mechanisms, we reveal that the spatial scaling rules of these three elements fit in a consistent framework, which allows us to use any single observation to infer the others. All numerical and theoretical results are consistent with empirical data from ten representative cities. In addition, our model can also provide a general explanation of the origins of the universal super- and sub-linear aggregate scaling laws and accurately predict kilometre-level socioeconomic activity. Our work opens a new avenue for uncovering the evolution of cities in terms of the interplay among urban elements, and it has a broad range of applications.
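
    The super- and sub-linear aggregate scaling laws mentioned above have the power-law form Y = Y0 * N^β, with β > 1 for socioeconomic outputs and β < 1 for infrastructure. As a hedged illustration (not the authors' model), such an exponent can be estimated by ordinary least squares in log-log space:

```python
# Fit Y = Y0 * N**beta by linear regression of log(Y) on log(N).
import math

def fit_power_law(populations, outputs):
    """Least-squares fit of log(Y) = log(Y0) + beta*log(N); returns (Y0, beta)."""
    xs = [math.log(n) for n in populations]
    ys = [math.log(y) for y in outputs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    y0 = math.exp(my - beta * mx)
    return y0, beta

# Synthetic (not empirical) data with a super-linear exponent beta = 1.15:
pops = [1e5, 3e5, 1e6, 3e6, 1e7]
gdp = [2.0 * n ** 1.15 for n in pops]
y0, beta = fit_power_law(pops, gdp)
```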

  13. A multiple scales approach to maximal superintegrability

    NASA Astrophysics Data System (ADS)

    Gubbiotti, G.; Latini, D.

    2018-07-01

    In this paper we present a simple, algorithmic test to establish if a Hamiltonian system is maximally superintegrable or not. This test is based on a very simple corollary of a theorem due to Nekhoroshev and on a perturbative technique called the multiple scales method. If the outcome is positive, this test can be used to suggest maximal superintegrability, whereas when the outcome is negative it can be used to disprove it. This method can be regarded as a finite dimensional analog of the multiple scales method as a way to produce soliton equations. We use this technique to show that the real counterpart of a mechanical system found by Jules Drach in 1935 is, in general, not maximally superintegrable. We give some hints on how this approach could be applied to classify maximally superintegrable systems by presenting a direct proof of the well-known Bertrand’s theorem.

  14. Role of large-scale velocity fluctuations in a two-vortex kinematic dynamo.

    PubMed

    Kaplan, E J; Brown, B P; Rahbarnia, K; Forest, C B

    2012-06-01

    This paper presents an analysis of the Dudley-James two-vortex flow, which inspired several laboratory-scale liquid-metal experiments, in order to better demonstrate its relation to astrophysical dynamos. A coordinate transformation splits the flow into components that are axisymmetric and nonaxisymmetric relative to the induced magnetic dipole moment. The reformulation gives the flow the same dynamo ingredients as are present in more complicated convection-driven dynamo simulations. These ingredients are currents driven by the mean flow and currents driven by correlations between fluctuations in the flow and fluctuations in the magnetic field. The simple model allows us to isolate the dynamics of the growing eigenvector and trace them back to individual three-wave couplings between the magnetic field and the flow. This simple model demonstrates the necessity of poloidal advection in sustaining the dynamo and points to the effect of large-scale flow fluctuations in exciting a dynamo magnetic field.

  15. A New Dual-Pore Formation Factor Model: A Percolation Network Study and Comparison to Experimental Data

    NASA Astrophysics Data System (ADS)

    Tang, Y. B.; Li, M.; Bernabe, Y.

    2014-12-01

    We modeled the electrical transport behavior of dual-pore carbonate rocks in this paper. Based on experimental data from a carbonate reservoir in China, we treated the low-porosity samples as equivalent to the matrix (micro-pore system) of the high-porosity samples. To model the bimodal porous media, we considered the matrix to be homogeneous and interconnected, while the connectivity and pore size distribution of the macro-pore system are varied randomly. Both pore systems are assumed to act electrically in parallel, connected at the nodes where fluid exchange takes place, an approach previously used by Bauer et al. (2012). The effects of the matrix properties and of the pore size distribution and connectivity of the macro-pore system on the petrophysical properties of carbonates can then be investigated. We simulated electrical current through networks on three-dimensional simple cubic (SC) and body-centered cubic (BCC) lattices with different coordination numbers and different pipe radius distributions for the macro-pore system. Based on the simulation results, we found that the formation factor obeys a "universal" scaling relationship (i.e., independent of lattice type), 1/F ∝ e^(γz), where γ is a function of the normalized standard deviation of the pore radius distribution of the macro-pore system and z is the coordination number of the macro-pore system. This relationship differs from the classic "universal power law" of percolation theory. A formation factor model was inferred from this scaling relationship and several scale-invariant quantities (such as the hydraulic radius r_H and throat length l of the macro-pores). Several methods were developed to estimate the corresponding parameters of the new model from conventional core analyses. It was satisfactorily tested against experimental data, including some published experimental data. Furthermore, the relationship between water saturation and resistivity in dual-pore carbonates was discussed on the basis of the new model.
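
    The exponential dependence of the inverse formation factor on coordination number reported above can be made concrete with a short sketch; the prefactor and the γ value below are hypothetical placeholders, as only the functional form 1/F ∝ exp(γz) comes from the abstract.

```python
# Illustrative evaluation of the reported scaling 1/F ∝ exp(gamma * z),
# where z is the macro-pore coordination number and gamma depends on the
# spread of the pore-radius distribution. Values are placeholders.
import math

def inverse_formation_factor(z, gamma, prefactor=1.0):
    """1/F = prefactor * exp(gamma * z)."""
    return prefactor * math.exp(gamma * z)

# Doubling connectivity from z = 3 to z = 6 multiplies 1/F by exp(gamma * 3):
ratio = inverse_formation_factor(6.0, 0.3) / inverse_formation_factor(3.0, 0.3)
```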

  16. Ground-Support Algorithms for Simulation, Processing, and Calibration of Barnes Static Earth Sensor Measurements: Applications to Tropical Rainfall Measuring Mission Observatory

    NASA Technical Reports Server (NTRS)

    Natanson, G. A.

    1997-01-01

    New algorithms are described covering the simulation, processing, and calibration of penetration angles of the Barnes static Earth sensor assembly (SESA) as implemented in the Goddard Space Flight Center Flight Dynamics Division ground support system for the Tropical Rainfall Measuring Mission (TRMM) Observatory. The new treatment involves a detailed analysis of the measurements by individual quadrants. It is shown that, to a good approximation, individual quadrant misalignments can be treated simply as penetration angle biases. Simple formulas suitable for real-time applications are introduced for computing quadrant-dependent effects. The simulator generates penetration angles by solving a quadratic equation with coefficients uniquely determined by the spacecraft's position and the quadrant's orientation in GeoCentric Inertial (GCI) coordinates. Measurement processing for attitude determination is based on linearized equations obtained by expanding the coefficients of the aforementioned quadratic equation as a Taylor series in both the Earth oblateness coefficient (alpha approx. 1/150) and the angle between the pointing axis and the geodetic nadir vector. A simple formula relating a measured value of the penetration angle to the deviation of the Earth-pointed axis from the geodetic nadir vector is derived. It is shown that even near the very edge of the quadrant's Field Of View (FOV), attitude errors resulting from quadratic effects are a few hundredths of a degree, which is small compared to the attitude determination accuracy requirement (0.18 degree, 3 sigma) of TRMM. Calibration of SESA measurements is complicated by a first-order filtering used in the TRMM onboard algorithm to compute penetration angles from raw voltages. A simple calibration scheme is introduced where these complications are avoided by treating penetration angles as the primary raw measurements, which are adjusted using biases and scale factors. 
In addition to three misalignment parameters, the calibration state vector contains only two average penetration angle biases (one per each pair of opposite quadrants) since, because of the very narrow sensor FOV (+/- 2.6 degrees), differences between biases of the penetration angles measured by opposite quadrants cannot be distinguished from roll and pitch sensor misalignments. After calibration, the estimated misalignments and average penetration angle biases are converted to the four penetration angle biases and to the yaw misalignment angle. The resultant biases and the estimated scale factors are finally used to update the coefficients necessary for onboard computations of penetration angles from measured voltages.
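
    The calibration scheme described above boils down to a linear adjustment of each measured penetration angle by a per-quadrant bias and scale factor. The sketch below is our illustration, not the TRMM ground-system code; the adjustment form and the numerical values are assumptions.

```python
# Hedged sketch: adjusting a raw penetration angle with a bias and a scale
# factor, as in the calibration scheme described in the abstract.
def adjust_penetration_angle(measured_deg, bias_deg, scale_factor):
    """Calibrated angle = scale_factor * measured + bias (illustrative form)."""
    return scale_factor * measured_deg + bias_deg

# Hypothetical quadrant calibration: small negative bias, near-unity scale.
adjusted = adjust_penetration_angle(1.25, bias_deg=-0.02, scale_factor=1.003)
```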

  17. Low energy probes of PeV scale sfermions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altmannshofer, Wolfgang; Harnik, Roni; Zupan, Jure

    2013-11-27

    We derive bounds on squark and slepton masses in the mini-split supersymmetry scenario using low energy experiments. In this setup gauginos are at the TeV scale, while sfermions are heavier by a loop factor. We cover the most sensitive low energy probes including electric dipole moments (EDMs), meson oscillations and charged lepton flavor violation (LFV) transitions. A leading log resummation of the large logs of the gluino to sfermion mass ratio is performed. A sensitivity to PeV squark masses is obtained at present from kaon mixing measurements. A number of observables, including neutron EDMs, mu->e transitions and charmed meson mixing, will start probing sfermion masses in the 100 TeV-1000 TeV range with the projected improvements in the experimental sensitivities. We also discuss the implications of our results for a variety of models that address the flavor hierarchy of quarks and leptons. We find that EDM searches will be a robust probe of models in which fermion masses are generated radiatively, while LFV searches remain sensitive to simple-texture based flavor models.

  18. Resource allocation for epidemic control in metapopulations.

    PubMed

    Ndeffo Mbah, Martial L; Gilligan, Christopher A

    2011-01-01

    Deployment of limited resources is an issue of major importance for decision-making in crisis events. This is especially true for large-scale outbreaks of infectious diseases. Little is known when it comes to identifying the most efficient way of deploying scarce resources for control when disease outbreaks occur in different but interconnected regions. The policy maker is frequently faced with the challenge of optimizing efficiency (e.g. minimizing the burden of infection) while accounting for social equity (e.g. equal opportunity for infected individuals to access treatment). For a large range of diseases described by a simple SIRS model, we consider strategies that should be used to minimize the discounted number of infected individuals during the course of an epidemic. We show that when faced with the dilemma of choosing between socially equitable and purely efficient strategies, the choice of the control strategy should be informed by key measurable epidemiological factors such as the basic reproductive number and the efficiency of the treatment measure. Our model provides new insights for policy makers in the optimal deployment of limited resources for control in the event of epidemic outbreaks at the landscape scale.
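
    The model class named above, the SIRS compartment model, can be sketched in a few lines. This is a generic forward-Euler SIRS integration under illustrative parameters, not the authors' metapopulation model; the basic reproductive number they highlight is R0 = β/γ.

```python
# Minimal SIRS sketch (fractions of a population of size 1):
#   dS/dt = -beta*S*I + xi*R,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I - xi*R
def simulate_sirs(beta, gamma, xi, s0, i0, r0, dt=0.01, steps=10000):
    """Integrate the SIRS equations with forward Euler; returns final (S, I, R)."""
    s, i, r = s0, i0, r0
    for _ in range(steps):
        new_inf = beta * s * i          # new infections per unit time
        ds = -new_inf + xi * r          # susceptibles lost, plus waning immunity
        di = new_inf - gamma * i        # infections gained, recoveries lost
        dr = gamma * i - xi * r         # recoveries gained, immunity waning
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
    return s, i, r

# R0 = beta/gamma = 2.5 > 1, so the infection settles at an endemic level:
s, i, r = simulate_sirs(beta=0.5, gamma=0.2, xi=0.05, s0=0.99, i0=0.01, r0=0.0)
```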

  19. Temperature dependence of ion transport: the compensated Arrhenius equation.

    PubMed

    Petrowsky, Matt; Frech, Roger

    2009-04-30

    The temperature-dependent conductivity originating in a thermally activated process is often described by a simple Arrhenius expression. However, this expression provides a poor description of the data for organic liquid electrolytes and amorphous polymer electrolytes. Here, we write the temperature dependence of the conductivity as an Arrhenius expression and show that the experimentally observed non-Arrhenius behavior is due to the temperature dependence of the dielectric constant contained in the exponential prefactor. Scaling the experimentally measured conductivities to conductivities at a chosen reference temperature leads to a "compensated" Arrhenius equation that provides an excellent description of temperature-dependent conductivities. A plot of the prefactors as a function of the solvent dielectric constant results in a single master curve for each family of solvents. These data suggest that ion transport in these and related systems is governed by a single activated process differing only in the activation energy for each family of solvents. Connection is made to the shift factor used to describe electrical and mechanical relaxation in a wide range of phenomena, suggesting that this scaling procedure might have broad applications.
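
    The compensation step described above is easiest to see numerically: dividing σ(T) by σ(T_ref) cancels the dielectric-constant prefactor when the dielectric constants are matched, leaving a pure Arrhenius ratio. The sketch below is our illustration with an assumed activation energy, not data from the paper.

```python
# Ratio of conductivities for a purely activated process after the
# prefactor has been compensated away: sigma(T)/sigma(T_ref) = exp(-Ea/RT) / exp(-Ea/RT_ref).
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_ratio(T, T_ref, Ea):
    """Compensated conductivity ratio sigma(T)/sigma(T_ref)."""
    return math.exp(-Ea / (R * T)) / math.exp(-Ea / (R * T_ref))

# Hypothetical Ea = 30 kJ/mol: a 20 K rise roughly doubles the conductivity.
ratio = arrhenius_ratio(T=318.15, T_ref=298.15, Ea=30e3)
```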

  20. Predictive model for convective flows induced by surface reactivity contrast

    NASA Astrophysics Data System (ADS)

    Davidson, Scott M.; Lammertink, Rob G. H.; Mani, Ali

    2018-05-01

    Concentration gradients in a fluid adjacent to a reactive surface due to contrast in surface reactivity generate convective flows. These flows result from contributions by electro- and diffusio-osmotic phenomena. In this study, we have analyzed reactive patterns that release and consume protons, analogous to bimetallic catalytic conversion of peroxide. Similar systems have typically been studied using either scaling analysis to predict trends or costly numerical simulation. Here, we present a simple analytical model, bridging the gap in quantitative understanding between scaling relations and simulations, to predict the induced potentials and consequent velocities in such systems without the use of any fitting parameters. Our model is tested against direct numerical solutions to the coupled Poisson, Nernst-Planck, and Stokes equations. Predicted slip velocities from the model and simulations agree to within a factor of ≈2 over a multiple order-of-magnitude change in the input parameters. Our analysis can be used to predict enhancement of mass transport and the resulting impact on overall catalytic conversion, and is also applicable to predicting the speed of catalytic nanomotors.

  1. Bas-relief generation using adaptive histogram equalization.

    PubMed

    Sun, Xianfang; Rosin, Paul L; Martin, Ralph R; Langbein, Frank C

    2009-01-01

    An algorithm is presented to automatically generate bas-reliefs based on adaptive histogram equalization (AHE), starting from an input height field. A mesh model may alternatively be provided, in which case a height field is first created via orthogonal or perspective projection. The height field is regularly gridded and treated as an image, enabling a modified AHE method to be used to generate a bas-relief with a user-chosen height range. We modify the original image-contrast-enhancement AHE method to use gradient weights also to enhance the shape features of the bas-relief. To effectively compress the height field, we limit the height-dependent scaling factors used to compute relative height variations in the output from height variations in the input; this prevents any height differences from having too great effect. Results of AHE over different neighborhood sizes are averaged to preserve information at different scales in the resulting bas-relief. Compared to previous approaches, the proposed algorithm is simple and yet largely preserves original shape features. Experiments show that our results are, in general, comparable to and in some cases better than the best previously published methods.
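
    The core histogram-equalization step can be sketched compactly. This is a deliberately simplified global (not adaptive, not gradient-weighted) version of the idea, assuming a flat list of heights; the full method in the paper operates over local neighborhoods and averages across neighborhood sizes.

```python
# Global histogram equalization of a height field, then rescaling to a
# user-chosen output height range -- one building block of bas-relief compression.
def equalize_heights(heights, n_bins=256, out_range=(0.0, 1.0)):
    """Map each height through the empirical CDF, then scale to out_range."""
    lo, hi = min(heights), max(heights)
    width = (hi - lo) / n_bins or 1.0
    counts = [0] * n_bins
    for h in heights:
        counts[min(int((h - lo) / width), n_bins - 1)] += 1
    cdf, total = [], 0
    for c in counts:
        total += c
        cdf.append(total / len(heights))
    a, b = out_range
    return [a + (b - a) * cdf[min(int((h - lo) / width), n_bins - 1)] for h in heights]

# A height field dominated by two clusters gets spread across the full range:
flattened = equalize_heights([0.0, 0.1, 0.2, 5.0, 9.9, 10.0], out_range=(0.0, 1.0))
```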

  2. Exact axially symmetric galactic dynamos

    NASA Astrophysics Data System (ADS)

    Henriksen, R. N.; Woodfinden, A.; Irwin, J. A.

    2018-05-01

    We give a selection of exact dynamos in axial symmetry on a galactic scale. These include some steady examples, at least one of which is wholly analytic in terms of simple functions and has been discussed elsewhere. Most solutions are found in terms of special functions, such as associated Lagrange or hypergeometric functions. They may be considered exact in the sense that they are known to any desired accuracy in principle. The new aspect developed here is to present scale-invariant solutions with zero resistivity that are self-similar in time. The time dependence is either a power law or an exponential factor, but since the geometry of the solution is self-similar in time we do not need to fix a time to study it. Several examples are discussed. Our results demonstrate (without the need to invoke any other mechanisms) X-shaped magnetic fields and (axially symmetric) magnetic spiral arms (both of which are well observed and documented) and predict reversing rotation measures in galaxy haloes (now observed in the CHANG-ES sample) as well as the fact that planar magnetic spirals are lifted into the galactic halo.

  3. One-step fabrication of robust superhydrophobic and superoleophilic surfaces with self-cleaning and oil/water separation function.

    PubMed

    Zhang, Zhi-Hui; Wang, Hu-Jun; Liang, Yun-Hong; Li, Xiu-Juan; Ren, Lu-Quan; Cui, Zhen-Quan; Luo, Cheng

    2018-03-01

    Superhydrophobic surfaces have great potential for application in self-cleaning and oil/water separation. However, the large-scale practical applications of superhydrophobic coating surfaces are impeded by many factors, such as complicated fabrication processes, the use of fluorinated reagents and noxious organic solvents and poor mechanical stability. Herein, we describe the successful preparation of a fluorine-free multifunctional coating without noxious organic solvents that was brushed, dipped or sprayed onto glass slides and stainless-steel meshes as substrates. The obtained multifunctional superhydrophobic and superoleophilic surfaces (MSHOs) demonstrated self-cleaning abilities even when contaminated with or immersed in oil. The superhydrophobic surfaces were robust and maintained their water repellency after being scratched with a knife or abraded with sandpaper for 50 cycles. In addition, stainless-steel meshes sprayed with the coating quickly separated various oil/water mixtures with a high separation efficiency (>93%). Furthermore, the coated mesh maintained a high separation efficiency above 95% over 20 cycles of separation. This simple and effective strategy will inspire the large-scale fabrication of multifunctional surfaces for practical applications in self-cleaning and oil/water separation.

  4. On Flood Frequency in Urban Areas under Changing Conditions and Implications on Stormwater Infrastructure Planning and Design

    NASA Astrophysics Data System (ADS)

    Norouzi, A.; Habibi, H.; Nazari, B.; Noh, S.; Seo, D. J.; Zhang, Y.

    2016-12-01

    With urbanization and climate change, many areas in the US and abroad face increasing threats of flash flooding. Due to nonstationarities arising from changes in land cover and climate, however, it is not readily possible to project how such changes may modify flood frequency. In this work, we describe a simple spatial stochastic model for rainfall-to-areal runoff in urban areas, evaluate climatological mean and variance of mean areal runoff (MAR) over a range of catchment scale, translate them into runoff frequency, which is used as a proxy for flood frequency, and assess its sensitivity to precipitation, imperviousness and soil, and their changes as a function of catchment scale and magnitude of precipitation. The findings indicate that, due to large sensitivity of frequency of MAR to multiple hydrometeorological and physiographic factors, estimation of flood frequency for urban catchments is inherently more uncertain. The approach used in this work is useful in developing bounds for flood frequencies in urban areas under nonstationary conditions arising from urbanization and climate change.

  5. Analysis of transport in gyrokinetic tokamaks

    NASA Astrophysics Data System (ADS)

    Mynick, H. E.; Parker, S. E.

    1995-06-01

    Progress toward a detailed understanding of the transport in full-volume gyrokinetic simulations of tokamaks is described. The transition between the two asymptotic regimes (large and small) of scaling of the heat flux with system size a/ρ_g reported earlier is explained, along with the approximate size at which the transition occurs. The larger systems have transport close to that predicted by the simple standard estimates for transport by drift-wave turbulence (viz., Bohm or gyro-Bohm) in scaling with a/ρ_g, temperature, magnetic field, ion mass, safety factor, and minor radius, but lying much closer to Bohm, which seems the result better supported theoretically. The characteristic downshift in the spectrum observed previously in going from the linear to the turbulent phase is consistent with the numerically inferred coupling coefficients M_kpq of a reduced description of the system. An explanation of the downshift is given from the resemblance of the reduced system to the Hasegawa-Mima or Terry-Horton systems. These manifest an analogous downshift in slab geometry, and have M_kpq resembling those inferred from the gyrokinetic (GK) data.
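
    The two standard estimates referenced above differ by a factor of the normalized gyroradius: the textbook Bohm diffusivity is χ_B ~ T/(16eB), and gyro-Bohm is χ_gB ~ (ρ_i/a)·χ_B. The sketch below states these widely used estimates, not the paper's simulation results; the parameter values are illustrative.

```python
# Bohm vs gyro-Bohm diffusivity estimates. With T in eV and B in tesla,
# chi_B = T/(16*B) in m^2/s; gyro-Bohm is Bohm reduced by rho_i/a.
def bohm(T_eV, B):
    """Bohm diffusivity estimate, chi_B = T/(16*B)."""
    return T_eV / (16.0 * B)

def gyro_bohm(T_eV, B, rho_over_a):
    """Gyro-Bohm estimate: Bohm scaled down by the normalized gyroradius."""
    return bohm(T_eV, B) * rho_over_a

# Illustrative values: T = 1 keV, B = 2 T, rho_i/a = 0.005.
ratio = gyro_bohm(1000.0, 2.0, 0.005) / bohm(1000.0, 2.0)  # = rho_i/a
```

    The ratio shrinks as the machine grows, which is why distinguishing Bohm from gyro-Bohm scaling requires scanning system size a/ρ_g.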

  6. Time-resolved spectroscopy using a chopper wheel as a fast shutter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Shicong; Wendt, Amy E.; Boffard, John B.

    Widely available, small form-factor, fiber-coupled spectrometers typically have a minimum exposure time measured in milliseconds, and thus cannot be used directly for time-resolved measurements at the microsecond level. Spectroscopy at these faster time scales is typically done with an intensified charge coupled device (CCD) system where the image intensifier acts as a “fast” electronic shutter for the slower CCD array. In this paper, we describe simple modifications to a commercially available chopper wheel system to allow it to be used as a “fast” mechanical shutter for gating a fiber-coupled spectrometer to achieve microsecond-scale time-resolved optical measurements of a periodically pulsed light source. With the chopper wheel synchronized to the pulsing of the light source, the time resolution can be set to a small fraction of the pulse period by using a chopper wheel with narrow slots separated by wide spokes. Different methods of synchronizing the chopper wheel and pulsing of the light sources are explored. The capability of the chopper wheel system is illustrated with time-resolved measurements of pulsed plasmas.

  7. Length-scale crossover of the hydrophobic interaction in a coarse-grained water model

    NASA Astrophysics Data System (ADS)

    Chaimovich, Aviel; Shell, M. Scott

    2013-11-01

    It has been difficult to establish a clear connection between the hydrophobic interaction among small molecules typically studied in molecular simulations (a weak, oscillatory force) and that found between large, macroscopic surfaces in experiments (a strong, monotonic force). Here, we show that both types of interaction can emerge with a simple, core-softened water model that captures water's unique pairwise structure. As in hydrophobic hydration, we find that the hydrophobic interaction manifests a length-scale dependence, exhibiting distinct driving forces in the molecular and macroscopic regimes. Moreover, the ability of this simple model to capture both regimes suggests that several features of the hydrophobic force can be understood merely through water's pair correlations.

  8. Length-scale crossover of the hydrophobic interaction in a coarse-grained water model.

    PubMed

    Chaimovich, Aviel; Shell, M Scott

    2013-11-01

    It has been difficult to establish a clear connection between the hydrophobic interaction among small molecules typically studied in molecular simulations (a weak, oscillatory force) and that found between large, macroscopic surfaces in experiments (a strong, monotonic force). Here, we show that both types of interaction can emerge with a simple, core-softened water model that captures water's unique pairwise structure. As in hydrophobic hydration, we find that the hydrophobic interaction manifests a length-scale dependence, exhibiting distinct driving forces in the molecular and macroscopic regimes. Moreover, the ability of this simple model to capture both regimes suggests that several features of the hydrophobic force can be understood merely through water's pair correlations.

  9. Visualization of atomic-scale phenomena in superconductors: application to FeSe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choubey, Peayush; Berlijn, Tom; Kreisel, Andreas

    Here we propose a simple method of calculating inhomogeneous, atomic-scale phenomena in superconductors which makes use of the wave function information traditionally discarded in the construction of tight-binding models used in the Bogoliubov-de Gennes equations. The method uses symmetry-based first principles Wannier functions to visualize the effects of superconducting pairing on the distribution of electronic states over atoms within a crystal unit cell. Local symmetries lower than the global lattice symmetry can thus be exhibited as well, rendering theoretical comparisons with scanning tunneling spectroscopy data much more useful. As a simple example, we discuss the geometric dimer states observed near defects in superconducting FeSe.

  10. Visualization of atomic-scale phenomena in superconductors: application to FeSe

    DOE PAGES

    Choubey, Peayush; Berlijn, Tom; Kreisel, Andreas; ...

    2014-10-31

    Here we propose a simple method of calculating inhomogeneous, atomic-scale phenomena in superconductors which makes use of the wave function information traditionally discarded in the construction of tight-binding models used in the Bogoliubov-de Gennes equations. The method uses symmetry-based first principles Wannier functions to visualize the effects of superconducting pairing on the distribution of electronic states over atoms within a crystal unit cell. Local symmetries lower than the global lattice symmetry can thus be exhibited as well, rendering theoretical comparisons with scanning tunneling spectroscopy data much more useful. As a simple example, we discuss the geometric dimer states observed near defects in superconducting FeSe.

  11. The future of primordial features with large-scale structure surveys

    NASA Astrophysics Data System (ADS)

    Chen, Xingang; Dvorkin, Cora; Huang, Zhiqi; Namjoo, Mohammad Hossein; Verde, Licia

    2016-11-01

    Primordial features are one of the most important extensions of the Standard Model of cosmology, providing a wealth of information on the primordial Universe, ranging from discrimination between inflation and alternative scenarios, new particle detection, to fine structures in the inflationary potential. We study the prospects of future large-scale structure (LSS) surveys on the detection and constraints of these features. We classify primordial feature models into several classes, and for each class we present a simple template of power spectrum that encodes the essential physics. We study how well the most ambitious LSS surveys proposed to date, including both spectroscopic and photometric surveys, will be able to improve the constraints with respect to the current Planck data. We find that these LSS surveys will significantly improve the experimental sensitivity on feature signals that are oscillatory in scale, due to the 3D information. For a broad range of models, these surveys will be able to reduce the errors of the amplitudes of the features by a factor of 5 or more, including several interesting candidates identified in the recent Planck data. Therefore, LSS surveys offer an impressive opportunity for primordial feature discovery in the next decade or two. We also compare the advantages of both types of surveys.
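
    A generic oscillatory-feature template of the kind the abstract describes modulates a smooth power spectrum with a sinusoid. The specific form and parameter values below are our illustrative assumptions, not the paper's templates.

```python
# Hedged sketch: a feature-modulated power spectrum,
# P(k) = P0(k) * (1 + A * sin(omega*k + phi)).
import math

def featured_power_spectrum(k, p0, amp, omega, phi):
    """Smooth spectrum p0 modulated by an oscillation of amplitude amp."""
    return p0 * (1.0 + amp * math.sin(omega * k + phi))

# Illustrative parameters: 5% oscillation with frequency omega = 100 (in k).
p = featured_power_spectrum(k=0.1, p0=1.0, amp=0.05, omega=100.0, phi=0.0)
```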

  12. Geoscience Meets Social Science: A Flexible Data Driven Approach for Developing High Resolution Population Datasets at Global Scale

    NASA Astrophysics Data System (ADS)

    Rose, A.; McKee, J.; Weber, E.; Bhaduri, B. L.

    2017-12-01

    Leveraging decades of expertise in population modeling, and in response to growing demand for higher resolution population data, Oak Ridge National Laboratory is now generating LandScan HD at global scale. LandScan HD is conceived as a 90m resolution population distribution where modeling is tailored to the unique geography and data conditions of individual countries or regions by combining social, cultural, physiographic, and other information with novel geocomputation methods. Similarities among these areas are exploited in order to leverage existing training data and machine learning algorithms to rapidly scale development. Drawing on ORNL's unique set of capabilities, LandScan HD adapts highly mature population modeling methods developed for LandScan Global and LandScan USA, settlement mapping research and production in high-performance computing (HPC) environments, land use and neighborhood mapping through image segmentation, and facility-specific population density models. Adopting a flexible methodology to accommodate different geographic areas, LandScan HD accounts for the availability, completeness, and level of detail of relevant ancillary data. Beyond core population and mapped settlement inputs, these factors determine the model complexity for an area, requiring that for any given area, a data-driven model could support either a simple top-down approach, a more detailed bottom-up approach, or a hybrid approach.

  13. Upstream Density for Plasma Detachment with Conventional and Lithium Vapor-Box Divertors

    NASA Astrophysics Data System (ADS)

    Goldston, R. J.; Schwartz, J. A.

    2016-10-01

    Fusion power plants are likely to require detachment of the divertor plasma from material targets. The lithium vapor box divertor is designed to achieve this, while limiting the flux of lithium vapor to the main plasma. We develop a simple model of near-detachment to evaluate the required upstream plasma density, for both conventional and lithium vapor-box divertors, based on particle and dynamic pressure balance between up- and down-stream, at near-detachment conditions. A remarkable general result is found, not just for lithium-induced detachment, that the upstream density divided by the Greenwald-limit density scales as (P^(5/8)/B^(3/8)) T_det^(1/2)/(ɛ_cool + γ T_det), with no explicit size scaling. T_det is the temperature just before strong pressure loss, approximately 1/2 of the ionization potential of the dominant recycling species; ɛ_cool is the average plasma energy lost per injected hydrogenic and impurity atom; and γ is the sheath heat transmission factor. A recent 1-D calculation agrees well with this scaling. The implication is that the plasma exhaust problem cannot be solved by increasing R. Instead, significant innovation, such as the lithium vapor box divertor, will be required. This work supported by DOE Contract No. DE-AC02-09CH11466.
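
    The scaling quoted above can be evaluated numerically. The sketch below is illustrative only: the proportionality constant is set to 1 and all input values (P, B, T_det, ɛ_cool, γ) are hypothetical, chosen simply to show that doubling the exhaust power P at fixed field and detachment temperature raises the required density ratio by 2^(5/8).

```python
# Illustrative evaluation of the near-detachment upstream density scaling
# n_up / n_GW ∝ (P^(5/8) / B^(3/8)) * sqrt(T_det) / (eps_cool + gamma * T_det).
# All numbers below are hypothetical; the proportionality constant is 1.

def density_ratio_scaling(P, B, T_det, eps_cool, gamma=7.0):
    """Relative upstream density over Greenwald density (arbitrary units)."""
    return (P**0.625 / B**0.375) * T_det**0.5 / (eps_cool + gamma * T_det)

# Doubling P at fixed B, T_det, eps_cool raises the ratio by 2^(5/8) ≈ 1.543.
base = density_ratio_scaling(P=100.0, B=5.0, T_det=6.8, eps_cool=25.0)
doubled = density_ratio_scaling(P=200.0, B=5.0, T_det=6.8, eps_cool=25.0)
print(doubled / base)
```

    Note that the ratio is independent of T_det, ɛ_cool, and γ, since those factors cancel when only P changes.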

  14. Climatological temperature sensitivity of soil carbon turnover: Observations, simple scaling models, and ESMs

    NASA Astrophysics Data System (ADS)

    Koven, C. D.; Hugelius, G.; Lawrence, D. M.; Wieder, W. R.

    2016-12-01

    The projected loss of soil carbon to the atmosphere resulting from climate change is a potentially large but highly uncertain feedback to warming. The magnitude of this feedback is poorly constrained by observations and theory, and is disparately represented in Earth system models. To assess the likely long-term response of soils to climate change, spatial gradients in soil carbon turnover times can identify broad-scale and long-term controls on the rate of carbon cycling as a function of climate and other factors. Here we show that the climatological temperature control on carbon turnover in the top meter of global soils is more sensitive in cold climates than in warm ones. We present a simplified model that explains the high cold-climate sensitivity using only the physical scaling of soil freeze-thaw state across climate gradients. Critically, current Earth system models (ESMs) fail to capture this pattern; however, it emerges from an ESM that explicitly resolves vertical gradients in soil climate and turnover. The weak tropical temperature sensitivity emerges from a different model that explicitly resolves mineralogical control on decomposition. These results support projections of strong future carbon-climate feedbacks from northern soils and demonstrate a method for ESMs to capture this emergent behavior.

  15. The future of primordial features with large-scale structure surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xingang; Namjoo, Mohammad Hossein; Dvorkin, Cora

    2016-11-01

    Primordial features are one of the most important extensions of the Standard Model of cosmology, providing a wealth of information on the primordial Universe, ranging from discrimination between inflation and alternative scenarios, new particle detection, to fine structures in the inflationary potential. We study the prospects of future large-scale structure (LSS) surveys for detecting and constraining these features. We classify primordial feature models into several classes, and for each class we present a simple template power spectrum that encodes the essential physics. We study how well the most ambitious LSS surveys proposed to date, including both spectroscopic and photometric surveys, will be able to improve the constraints with respect to the current Planck data. We find that these LSS surveys will significantly improve the experimental sensitivity to feature signals that are oscillatory in scale, owing to the 3D information. For a broad range of models, these surveys will be able to reduce the errors on the amplitudes of the features by a factor of 5 or more, including several interesting candidates identified in the recent Planck data. LSS surveys therefore offer an impressive opportunity for primordial feature discovery in the next decade or two. We also compare the advantages of the two types of surveys.

  16. PIV Measurements of the Near-Wake behind a Fractal Tree

    NASA Astrophysics Data System (ADS)

    Bai, Kunlun; Meneveau, Charles; Katz, Joseph

    2010-11-01

    An experimental study of turbulent flow in the wake of a fractal-like tree has been carried out. Fractals provide the opportunity to study the interactions of flow with complicated, multiple-scale objects whose geometric construction rules are nevertheless simple. We consider a pre-fractal tree with five generations, with three branches and a scale-reduction factor of 1/2 at each generation. Its similarity fractal dimension is Ds ≈ 1.585. Experiments are carried out in a water tunnel with index-matching capability, although the current measurements do not yet utilize it. The incoming velocity profile is designed to mimic the velocity profile in a forest canopy. PIV measurements are carried out on 14 horizontal planes parallel to the bottom surface. Drag forces are measured using a load cell. Mean velocity and turbulence quantities are reported at various heights in the wake. Mean vorticity contours on the upper planes show signatures of the smaller branches, although the wakes from the smallest two branches are not visible in the data, possibly due to rapid mixing. Interestingly, their signatures can be observed in the elevated spectra at small scales. Momentum deficit in the wake profiles and drag forces are compared. The results from this experiment also serve as a database against which to compare computer simulations and models.
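
    The quoted similarity dimension follows directly from the construction rule: each generation replaces a branch with N = 3 branches scaled by r = 1/2, giving Ds = log N / log(1/r) = log 3 / log 2 ≈ 1.585. A minimal check:

```python
import math

# Similarity dimension of a self-similar object built from n_branches
# copies, each shrunk by scale_factor: Ds = log(N) / log(1/r).
def similarity_dimension(n_branches, scale_factor):
    return math.log(n_branches) / math.log(1.0 / scale_factor)

# The pre-fractal tree above: 3 branches, scale-reduction factor 1/2.
ds = similarity_dimension(3, 0.5)
print(round(ds, 3))  # 1.585
```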

  17. Distinguishing between depression and anxiety: a proposal for an extension of the tripartite model.

    PubMed

    den Hollander-Gijsman, M E; de Beurs, E; van der Wee, N J A; van Rood, Y R; Zitman, F G

    2010-05-01

    The aim of the current study was to develop scales that assess symptoms of depression and anxiety and can adequately differentiate between depression and anxiety disorders, and also can distinguish within anxiety disorders. As a point of departure, we used the tripartite model of Clark and Watson, which discerns three dimensions: negative affect, positive affect and physiological hyperarousal. Analyses were performed on the data of 1449 patients who completed the Mood and Anxiety Symptoms Questionnaire (MASQ) and the Brief Symptom Inventory (BSI). Of these, 1434 patients were assessed with a standardized diagnostic interview. A model with five dimensions was found: depressed mood, lack of positive affect, somatic arousal, phobic fear and hostility. The scales appear capable of differentiating between patients with a mood disorder and those with an anxiety disorder. Within the anxiety disorders, somatic arousal was specific for patients with panic disorder. Phobic fear was associated with panic disorder, simple phobia and social anxiety disorder, but not with generalized anxiety disorder. We present a five-factor model as an extension of the tripartite model. Through the addition of phobic fear, anxiety is better represented than in the tripartite model. The new scales are capable of accurately differentiating between depression and anxiety disorders, as well as between several anxiety disorders. (c) 2009 Elsevier Masson SAS. All rights reserved.

  18. Single pass tangential flow filtration to debottleneck downstream processing for therapeutic antibody production.

    PubMed

    Dizon-Maspat, Jemelle; Bourret, Justin; D'Agostini, Anna; Li, Feng

    2012-04-01

    As the therapeutic monoclonal antibody (mAb) market continues to grow, optimizing production processes is becoming more critical in improving efficiencies and reducing cost-of-goods in large-scale production. With the recent trends of increasing cell culture titers from upstream process improvements, downstream capacity has become the bottleneck in many existing manufacturing facilities. Single Pass Tangential Flow Filtration (SPTFF) is an emerging technology, which is potentially useful in debottlenecking downstream capacity, especially when the pool tank size is a limiting factor. It can be integrated as part of an existing purification process, after a column chromatography step or a filtration step, without introducing a new unit operation. In this study, SPTFF technology was systematically evaluated for reducing process intermediate volumes from 2× to 10× with multiple mAbs and the impact of SPTFF on product quality, and process yield was analyzed. Finally, the potential fit into the typical 3-column industry platform antibody purification process and its implementation in a commercial scale manufacturing facility were also evaluated. Our data indicate that using SPTFF to concentrate protein pools is a simple, flexible, and robust operation, which can be implemented at various scales to improve antibody purification process capacity. Copyright © 2011 Wiley Periodicals, Inc.

  19. The "drinking-buddy" scale as a measure of para-social behavior.

    PubMed

    Powell, Larry; Richmond, Virginia P; Cantrell-Williams, Glenda

    2012-06-01

    Para-social behavior is a form of quasi-interpersonal behavior that results when audience members develop bonds with media personalities that can resemble interpersonal social interaction, but is not usually applied to political communication. This study tested whether the "Drinking-Buddy" Scale, a simple question frequently used in political communication, could be interpreted as a single-item measure of para-social behavior with respect to political candidates in terms of image judgments related to interpersonal attraction and perceived similarity to self. The participants were college students who had voted in the 2008 election. They rated the candidates, Obama or McCain, as drinking buddies and then rated the candidates' perceived similarity to themselves in attitude and background, and also the social and task attraction to the candidate. If the drinking-buddy rating serves as a proxy measure for para-social behavior, then it was expected that participants' ratings for all four kinds of similarity to and attraction toward a candidate would be higher for the candidate they chose as a drinking buddy. The directional hypotheses were supported for interpersonal attraction, but not for perceived similarity. These results indicate that the drinking-buddy scale predicts ratings of interpersonal attraction, while voters may view perceived similarity as an important but not essential factor in their candidate preference.

  20. Transverse-velocity scaling of femtoscopy in √s = 7 TeV proton–proton collisions

    NASA Astrophysics Data System (ADS)

    Humanic, T. J.

    2018-05-01

    Although transverse-mass scaling of femtoscopic radii is found to hold to a good approximation in heavy-ion collision experiments, it is seen to fail for high-energy proton–proton collisions. It is shown that if invariant radius parameters are plotted versus the transverse velocity instead, scaling with the transverse velocity is seen in √s = 7 TeV proton–proton experiments. A simple semi-classical model is shown to qualitatively reproduce this transverse velocity scaling.

  1. A universal preconditioner for simulating condensed phase materials.

    PubMed

    Packwood, David; Kermode, James; Mones, Letif; Bernstein, Noam; Woolley, John; Gould, Nicholas; Ortner, Christoph; Csányi, Gábor

    2016-04-28

    We introduce a universal sparse preconditioner that accelerates geometry optimisation and saddle point search tasks that are common in the atomic scale simulation of materials. Our preconditioner is based on the neighbourhood structure and we demonstrate the gain in computational efficiency in a wide range of materials that include metals, insulators, and molecular solids. The simple structure of the preconditioner means that the gains can be realised in practice not only when using expensive electronic structure models but also for fast empirical potentials. Even for relatively small systems of a few hundred atoms, we observe speedups of a factor of two or more, and the gain grows with system size. An open source Python implementation within the Atomic Simulation Environment is available, offering interfaces to a wide range of atomistic codes.

  2. A universal preconditioner for simulating condensed phase materials

    NASA Astrophysics Data System (ADS)

    Packwood, David; Kermode, James; Mones, Letif; Bernstein, Noam; Woolley, John; Gould, Nicholas; Ortner, Christoph; Csányi, Gábor

    2016-04-01

    We introduce a universal sparse preconditioner that accelerates geometry optimisation and saddle point search tasks that are common in the atomic scale simulation of materials. Our preconditioner is based on the neighbourhood structure and we demonstrate the gain in computational efficiency in a wide range of materials that include metals, insulators, and molecular solids. The simple structure of the preconditioner means that the gains can be realised in practice not only when using expensive electronic structure models but also for fast empirical potentials. Even for relatively small systems of a few hundred atoms, we observe speedups of a factor of two or more, and the gain grows with system size. An open source Python implementation within the Atomic Simulation Environment is available, offering interfaces to a wide range of atomistic codes.

  3. Automatic recognition of light source from color negative films using sorting classification techniques

    NASA Astrophysics Data System (ADS)

    Sanger, Demas S.; Haneishi, Hideaki; Miyake, Yoichi

    1995-08-01

    This paper proposed a simple and automatic method for recognizing the light sources from various color negative film brands by means of digital image processing. First, we stretched the image obtained from a negative based on the standardized scaling factors, then extracted the dominant color component among the red, green, and blue components of the stretched image. The dominant color component became the discriminator for the recognition. The experimental results verified that any one of the three techniques could recognize the light source from negatives, with greater than 93.2% and 96.6% correct recognition for any single film brand and across all brands, respectively. This method is significant for automating color quality control in color reproduction from color negative film in mass processing and printing machines.
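
    The stretch-then-pick-dominant-component idea can be sketched in a few lines of NumPy. The scaling factors and the decision rule below are illustrative assumptions, not the paper's calibrated values; the point is only the structure of the discriminator.

```python
import numpy as np

# Minimal sketch: per-channel linear stretch by assumed standardized
# scaling factors, then selection of the dominant (highest-mean) colour
# component as the discriminator for light-source recognition.

def dominant_component(image, scale_factors):
    """image: (H, W, 3) float array decoded from a scanned negative."""
    stretched = image * np.asarray(scale_factors)   # per-channel stretch
    means = stretched.reshape(-1, 3).mean(axis=0)   # mean R, G, B levels
    return ["red", "green", "blue"][int(np.argmax(means))]

# Toy frame that keeps a strong red cast after stretching:
frame = np.ones((4, 4, 3)) * np.array([0.8, 0.5, 0.4])
print(dominant_component(frame, scale_factors=(1.0, 1.2, 1.1)))  # "red"
```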

  4. Small Scale Mass Flow Plug Calibration

    NASA Technical Reports Server (NTRS)

    Sasson, Jonathan

    2015-01-01

    A simple control volume model has been developed to calculate the discharge coefficient through a mass flow plug (MFP) and validated with a calibration experiment. The maximum error of the model in the operating region of the MFP is 0.54%. The model uses the MFP geometry and operating pressure and temperature to couple continuity, momentum, energy, an equation of state, and wall shear. Effects of boundary layer growth and the reduction in cross-sectional flow area are calculated using an integral method. A CFD calibration is shown to be of lower accuracy, with a maximum error of 1.35%, and slower by a factor of 100. Effects of total pressure distortion are taken into account in the experiment. Distortion creates a loss in flow rate and can be characterized by two different distortion descriptors.

  5. Mean-field crack networks on desiccated films and their applications: Girl with a Pearl Earring.

    PubMed

    Flores, J C

    2017-02-15

    Usual requirements for bulk and fissure energies are considered in obtaining the interdependence among external stress, thickness and area of crack polygons in desiccated films. The average area of crack polygons increases with thickness as a power-law of 4/3. The sequential fragmentation process is characterized by a topological factor related to a scaling finite procedure. Non-sequential overly tensioned (prompt) fragmentation is briefly discussed. Vermeer's painting, Girl with a Pearl Earring, is considered explicitly by using computational image tools and simple experiments and applying the proposed theoretical analysis. In particular, concerning the source of lightened effects on the girl's face, the left/right thickness layer ratio (≈1.34) and the stress ratio (≈1.102) are evaluated. Other master paintings are briefly considered.
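
    The stated power law, average polygon area A ∝ h^(4/3) in film thickness h, implies that a thickness ratio between two regions maps to an area ratio of (h1/h2)^(4/3). The snippet below applies this to the paper's reported left/right thickness ratio of 1.34; the resulting area ratio is our own illustration of the scaling, not a value quoted in the abstract.

```python
# Implied ratio of average crack-polygon areas under A ∝ h^(4/3).

def area_ratio(thickness_ratio, exponent=4.0 / 3.0):
    """Ratio of mean polygon areas for two film regions with given
    thickness ratio, assuming the 4/3 power-law scaling above."""
    return thickness_ratio ** exponent

# Left/right thickness ratio reported for the painting: ≈ 1.34.
print(area_ratio(1.34))  # ≈ 1.48
```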

  6. A quantitative theory of the Hounsfield unit and its application to dual energy scanning.

    PubMed

    Brooks, R A

    1977-10-01

    A standard definition is proposed for the Hounsfield number. Any number in computed tomography can be converted to the Hounsfield scale after performing a simple calibration using air and water. The energy dependence of the Hounsfield number, H, is given by the expression H = (Hc + Hp Q)/(1 + Q), where Hc and Hp are the Compton and photoelectric coefficients of the material being measured, expressed in Hounsfield units, and Q is the "quality factor" of the scanner. Q can be measured by performing a scan of a single calibrating material, such as a potassium iodide solution. By applying this analysis to dual energy scans, the Compton and photoelectric coefficients of an unknown substance may easily be obtained. This can lead to a limited degree of chemical identification.
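
    The dual-energy inversion follows directly from the stated relation: two scans at quality factors Q1 and Q2 give two linear equations in Hc and Hp. A minimal round-trip sketch (all numbers illustrative):

```python
# Invert H = (Hc + Hp*Q) / (1 + Q) from two scans at quality factors
# Q1, Q2: H1*(1+Q1) = Hc + Hp*Q1 and H2*(1+Q2) = Hc + Hp*Q2.

def compton_photoelectric(h1, q1, h2, q2):
    a1, a2 = h1 * (1 + q1), h2 * (1 + q2)
    hp = (a1 - a2) / (q1 - q2)      # photoelectric coefficient
    hc = a1 - hp * q1               # Compton coefficient
    return hc, hp

def hounsfield(hc, hp, q):
    return (hc + hp * q) / (1 + q)

# Round trip: a material with Hc = 40, Hp = 200, scanned at Q = 0.1 and 0.5.
hc, hp = compton_photoelectric(hounsfield(40, 200, 0.1), 0.1,
                               hounsfield(40, 200, 0.5), 0.5)
print(hc, hp)  # 40.0 200.0 (up to rounding)
```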

  7. [Empathy-related factors in Nursing students of the Cartagena University].

    PubMed

    Madera-Anaya, Meisser; Tirado-Amador, Lesbia; González-Martínez, Farith

    2016-01-01

    To determine empathy levels and their relationship with sociodemographic, academic and family factors in nursing students. Cross-sectional study: 196 nursing students were randomly selected at the University of Cartagena, Colombia. A questionnaire asking about sociodemographic, family and academic factors was applied, together with the Jefferson Scale of Physician Empathy (version S). The Shapiro-Wilk test was used to assess the normality assumption. Student's t test, ANOVA, the Pearson test and simple linear regression were used to establish relationships (p<0.05). The global empathy score was 108.6±14.6; statistically significant associations were found between global empathy and the training year (p=0.004) and grade point average (R(2)=0.058; p=0.001; r=0.240). Moreover, the "perspective taking" dimension was associated with provenance (rural/urban) (p=0.010) and family functioning (p=0.003); the "compassionate care" dimension with the training year (p=0.002); and the "putting themselves in the place of the patient" dimension with academic performance (p=0.034). Empathy levels in nursing students may vary depending on various personal and academic factors; these characteristics should be taken into account when implementing teaching strategies to promote higher empathy levels from the early training years. Copyright © 2016 Elsevier España, S.L.U. All rights reserved.

  8. Time rescaling reproduces EEG behavior during transition from propofol anesthesia-induced unconsciousness to consciousness.

    PubMed

    Boussen, S; Spiegler, A; Benar, C; Carrère, M; Bartolomei, F; Metellus, P; Voituriez, R; Velly, L; Bruder, N; Trébuchon, A

    2018-04-16

    General anesthesia (GA) is a reversible manipulation of consciousness whose mechanism is mysterious at the level of neural networks, leaving space for several competing hypotheses. We recorded electrocorticography (ECoG) signals in patients who underwent intracranial monitoring during awake surgery for the treatment of cerebral tumors in functional areas of the brain. We therefore recorded the transition from unconsciousness to consciousness directly on the brain surface. Using frequency-resolved interferometry, we studied the intermediate ECoG frequencies (4-40 Hz). In the theoretical study, we used a computational Jansen and Rit neuron model to simulate recovery of consciousness (ROC). During ROC, we found that f increased by a factor equal to 1.62 ± 0.09, and δf varied by the same factor (1.61 ± 0.09), suggesting the existence of a scaling factor. We accelerated the time course of an unconscious EEG trace by an approximate factor of 1.6 and showed that the resulting EEG trace matches the conscious state. Using the theoretical model, we successfully reproduced this behavior. We show that the recovery of consciousness corresponds to a transition in the frequency (f, δf) space, which is exactly reproduced by a simple time rescaling. These findings may perhaps be applied to other altered consciousness states.
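
    The time-rescaling idea can be illustrated with a toy signal: compressing the time axis by a factor k multiplies every frequency by k. Below, a 10 Hz "unconscious" oscillation stands in for an ECoG trace and becomes a ~16.2 Hz "conscious" one under k = 1.62; the sinusoid and sampling parameters are illustrative assumptions, not the study's data.

```python
import numpy as np

# Toy demonstration: rescaling time t -> k*t shifts a 10 Hz oscillation
# to roughly 10*k Hz, mimicking the unconscious-to-conscious transition.

fs = 1000.0                       # sampling rate, Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)     # 2 s of signal
unconscious = np.sin(2 * np.pi * 10.0 * t)

k = 1.62                          # rescaling factor reported in the study
rescaled = np.sin(2 * np.pi * 10.0 * (k * t))   # same trace on time k*t

def peak_freq(x):
    """Frequency of the largest FFT magnitude bin."""
    spec = np.abs(np.fft.rfft(x))
    return np.fft.rfftfreq(len(x), 1 / fs)[np.argmax(spec)]

print(peak_freq(unconscious), peak_freq(rescaled))  # ≈10 Hz vs ≈16 Hz
```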

  9. A Simple Model for Fine Structure Transitions in Alkali-Metal Noble-Gas Collisions

    DTIC Science & Technology

    2015-03-01

    [List-of-figures excerpt from the report] Effect of Scaling the VRG(R) Radial Coupling Fit Parameter, V0, for KHe, KNe, and KAr; for RbHe, RbNe, and RbAr; and for CsHe, CsNe, and CsAr.

  10. Sodium-cutting: a new top-down approach to cut open nanostructures on nonplanar surfaces on a large scale.

    PubMed

    Chen, Wei; Deng, Da

    2014-11-11

    We report a new, low-cost and simple top-down approach, "sodium-cutting", to cut and open nanostructures deposited on a nonplanar surface on a large scale. The feasibility of sodium-cutting was demonstrated by successfully cutting open ∼100% of the carbon nanospheres in Sn@C nanospheres into nanobowls on a large scale, for the first time.

  11. Impact of the time scale of model sensitivity response on coupled model parameter estimation

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu

    2017-11-01

    That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of associated physics and characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can be different, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slow-varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observation. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.

  12. Scaling laws and fluctuations in the statistics of word frequencies

    NASA Astrophysics Data System (ADS)

    Gerlach, Martin; Altmann, Eduardo G.

    2014-11-01

    In this paper, we combine statistical analysis of written texts and simple stochastic models to explain the appearance of scaling laws in the statistics of word frequencies. The average vocabulary of an ensemble of fixed-length texts is known to scale sublinearly with the total number of words (Heaps' law). Analyzing the fluctuations around this average in three large databases (Google-ngram, English Wikipedia, and a collection of scientific articles), we find that the standard deviation scales linearly with the average (Taylor's law), in contrast to the prediction of decaying fluctuations obtained using simple sampling arguments. We explain both scaling laws (Heaps' and Taylor's) by modeling the usage of words as a Poisson process with a fat-tailed distribution of word frequencies (Zipf's law) and topic-dependent frequencies of individual words (as in topic models). Considering topical variations leads to quenched averages, turns the vocabulary size into a non-self-averaging quantity, and explains the empirical observations. For the numerous practical applications relying on estimations of vocabulary size, our results show that uncertainties remain large even for long texts. We show how to account for these uncertainties in measurements of the lexical richness of texts with different lengths.
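
    The Poisson-sampling mechanism behind Heaps' law is easy to simulate: draw word counts as independent Poissons with Zipf-distributed rates and count how many distinct words appear. The vocabulary size, parameters, and seed below are illustrative; the sketch only shows the sublinear growth, not the topic-model extension the paper uses to explain Taylor's law.

```python
import numpy as np

# Sketch of Heaps' law from a Poisson process with Zipf frequencies:
# a 10x longer text yields well under 10x more distinct words.

rng = np.random.default_rng(0)
V = 50_000                        # potential vocabulary size (assumed)
ranks = np.arange(1, V + 1)
zipf = 1.0 / ranks                # Zipf's law with exponent 1
zipf /= zipf.sum()                # normalize to word probabilities

def observed_vocab(n_words):
    """Distinct words in one Poisson-sampled text of length ~n_words."""
    counts = rng.poisson(n_words * zipf)
    return int((counts > 0).sum())

v1, v2 = observed_vocab(10_000), observed_vocab(100_000)
print(v1, v2, v2 / v1)            # growth factor is well below 10
```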

  13. Structural Solutions for Low-Cost Bamboo Frames: Experimental Tests and Constructive Assessments

    PubMed Central

    Sassu, Mauro; De Falco, Anna; Giresini, Linda; Puppio, Mario Lucio

    2016-01-01

    Experimental tests and constructive assessments are presented for a simple bamboo framed structure with innovative low-cost and low technology joints, specifically conceived for small buildings in developing countries. Two full scale one-storey bamboo frames have been designed by using the simplest joints solution among three different tested typologies. The entire building process is based on low-technology and natural materials: bamboo canes, wooden cylinders, plywood plates and canapé rods. The first full scale specimen (Unit A) is a one-storey single deck truss structure subjected to monotonic collapse test; the second full scale specimen (Unit B) is a one-storey double deck truss structure used to evaluate the construction time throughout assembling tests. The first full scale specimen showed ductility in collapse and ease in strengthening; the second one showed remarkable ease and speed in assembling structural elements. Finally several constructive solutions are suggested for the design of simple one-storey buildings; they are addressed to four purposes (housing, school, chapel, health center) by the composition of the proposed full scale bamboo frames. Ease of use and maintenance with a low level of technology contribute to application in developing countries although not exclusively. PMID:28773472

  14. Wettability and Contact Time on a Biomimetic Superhydrophobic Surface.

    PubMed

    Liang, Yunhong; Peng, Jian; Li, Xiujuan; Huang, Jubin; Qiu, Rongxian; Zhang, Zhihui; Ren, Luquan

    2017-03-02

    Inspired by the array microstructure of natural superhydrophobic surfaces (lotus leaf and cicada wing), an array microstructure was successfully constructed by high speed wire electrical discharge machining (HS-WEDM) on the surfaces of a 7075 aluminum alloy without any chemical treatment. The artificial surfaces had a high apparent contact angle of 153° ± 1° with a contact angle hysteresis less than 5° and showed a good superhydrophobic property. Wettability, contact time, and the corresponding superhydrophobic mechanism of artificial superhydrophobic surface were investigated. The results indicated that the micro-scale array microstructure was an important factor for the superhydrophobic surface, while different array microstructures exhibited different effects on the wettability and contact time of the artificial superhydrophobic surface. The length ( L ), interval ( S ), and height ( H ) of the array microstructure are the main influential factors on the wettability and contact time. The order of importance of these factors is H > S > L for increasing the apparent contact angle and reducing the contact time. The method, using HS-WEDM to fabricate superhydrophobic surface, is simple, low-cost, and environmentally friendly and can easily control the wettability and contact time on the artificial surfaces by changing the array microstructure.

  15. Wettability and Contact Time on a Biomimetic Superhydrophobic Surface

    PubMed Central

    Liang, Yunhong; Peng, Jian; Li, Xiujuan; Huang, Jubin; Qiu, Rongxian; Zhang, Zhihui; Ren, Luquan

    2017-01-01

    Inspired by the array microstructure of natural superhydrophobic surfaces (lotus leaf and cicada wing), an array microstructure was successfully constructed by high speed wire electrical discharge machining (HS-WEDM) on the surfaces of a 7075 aluminum alloy without any chemical treatment. The artificial surfaces had a high apparent contact angle of 153° ± 1° with a contact angle hysteresis less than 5° and showed a good superhydrophobic property. Wettability, contact time, and the corresponding superhydrophobic mechanism of artificial superhydrophobic surface were investigated. The results indicated that the micro-scale array microstructure was an important factor for the superhydrophobic surface, while different array microstructures exhibited different effects on the wettability and contact time of the artificial superhydrophobic surface. The length (L), interval (S), and height (H) of the array microstructure are the main influential factors on the wettability and contact time. The order of importance of these factors is H > S > L for increasing the apparent contact angle and reducing the contact time. The method, using HS-WEDM to fabricate superhydrophobic surface, is simple, low-cost, and environmentally friendly and can easily control the wettability and contact time on the artificial surfaces by changing the array microstructure. PMID:28772613

  16. Winter wheat quality monitoring and forecasting system based on remote sensing and environmental factors

    NASA Astrophysics Data System (ADS)

    Haiyang, Yu; Yanmei, Liu; Guijun, Yang; Xiaodong, Yang; Dong, Ren; Chenwei, Nie

    2014-03-01

    To achieve dynamic winter wheat quality monitoring and forecasting over large regions, the objective of this study was to design and develop a winter wheat quality monitoring and forecasting system using a remote sensing index and environmental factors. The winter wheat quality trend was forecasted before the harvest and quality was monitored after the harvest, respectively. Traditional remote sensing monitoring and forecasting models based on a quality-vegetation index were improved. Combined with latitude information, the vegetation index was used to estimate agronomy parameters related to winter wheat quality in the early stages, in order to forecast the quality trend. A combination of rainfall in May, temperature in May, illumination in late May, the soil available nitrogen content and other environmental factors established the quality monitoring model. Compared with a simple quality-vegetation index, the remote sensing monitoring and forecasting models used in this system achieved greatly improved accuracy. Winter wheat quality was monitored and forecasted based on the above models, and the system was completed based on WebGIS technology. Finally, the operation of the winter wheat quality monitoring system was demonstrated in Beijing in 2010, and the monitoring and forecasting results were output as thematic maps.

  17. Association between cumulative social risk and ideal cardiovascular health in US adults: NHANES 1999-2006.

    PubMed

    Caleyachetty, Rishi; Echouffo-Tcheugui, Justin B; Muennig, Peter; Zhu, Wenyi; Muntner, Paul; Shimbo, Daichi

    2015-07-15

    The American Heart Association developed the Life's Simple 7 metric for defining cardiovascular health. Little is known about the association of co-occurring social risk factors with ideal cardiovascular health. Using data on 11,467 adults aged ≥25 years from the National Health and Nutrition Examination Survey 1999-2006, we examined the association between cumulative social risk and ideal cardiovascular health in US adults. A cumulative risk score (range 0 to 3 or 4) was created by summing four social risk factors (low family income, low education level, minority race, and single-living status). Ideal levels for each component of Life's Simple 7 (blood pressure, cholesterol, glucose, BMI, smoking, physical activity, and diet) were used to create an ideal Life's Simple 7 score [0-1 (low), 2, 3, 4, and 5-7 (high)]. Adults with low income (odds ratio [OR] = 0.30 [95% CI 0.23-0.39]), low education [0.22 (0.16-0.28)], non-white race [0.44 (0.36-0.54)], and single-living status [0.79 (0.67-0.95)] were less likely to have 5-7 versus 0 ideal Life's Simple 7 scores after adjustment for age and sex. Adults were less likely to attain 5-7 versus 0 ideal Life's Simple 7 scores as the number of social risk factors increased [OR (95% CI) of 0.58 (0.49-0.68), 0.27 (0.21-0.35), and 0.19 (0.14-0.27) for cumulative social risk scores of 1, 2, and 3 or 4, respectively, each versus 0]. US adults with an increasing number of social risk factors were progressively less likely to attain ideal levels of cardiovascular health factors. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
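    The scoring scheme described above can be sketched in a few lines. This is a minimal illustration only; the function and variable names are hypothetical and simplified relative to the paper's NHANES coding:

```python
# Minimal sketch of the two scores described in the abstract (hypothetical
# variable coding; the paper's exact NHANES definitions are not reproduced).

def cumulative_social_risk(low_income, low_education, minority_race, single_living):
    """Sum four binary social risk factors; scores of 3 and 4 are pooled."""
    score = low_income + low_education + minority_race + single_living
    return min(score, 3)  # the paper reports "3 or 4" as one category

def simple7_category(n_ideal):
    """Bin the count of ideal Life's Simple 7 components as in the paper."""
    if n_ideal <= 1:
        return "0-1 (low)"
    if n_ideal >= 5:
        return "5-7 (high)"
    return str(n_ideal)

print(cumulative_social_risk(1, 1, 0, 1))  # -> 3
print(simple7_category(6))                 # -> 5-7 (high)
```

The pooling of scores 3 and 4 mirrors the paper's "3 or 4" category, which keeps the smallest stratum large enough for stable odds ratios.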

  18. HIV Treatment and Prevention: A Simple Model to Determine Optimal Investment.

    PubMed

    Juusola, Jessie L; Brandeau, Margaret L

    2016-04-01

    To create a simple model to help public health decision makers determine how to best invest limited resources in HIV treatment scale-up and prevention. A linear model was developed for determining the optimal mix of investment in HIV treatment and prevention, given a fixed budget. The model incorporates estimates of secondary health benefits accruing from HIV treatment and prevention and allows for diseconomies of scale in program costs and subadditive benefits from concurrent program implementation. Data sources were published literature. The target population was individuals infected with HIV or at risk of acquiring it. Illustrative examples of interventions include preexposure prophylaxis (PrEP), community-based education (CBE), and antiretroviral therapy (ART) for men who have sex with men (MSM) in the US. Outcome measures were incremental cost, quality-adjusted life-years gained, and HIV infections averted. Base case analysis indicated that it is optimal to invest in ART before PrEP and to invest in CBE before scaling up ART. Diseconomies of scale reduced the optimal investment level. Subadditivity of benefits did not affect the optimal allocation for relatively low implementation levels. The sensitivity analysis indicated that investment in ART before PrEP was optimal in all scenarios tested. Investment in ART before CBE became optimal when CBE reduced risky behavior by 4% or less. Limitations of the study are that dynamic effects are approximated with a static model. Our model provides a simple yet accurate means of determining optimal investment in HIV prevention and treatment. For MSM in the US, HIV control funds should be prioritized on inexpensive, effective programs like CBE, then on ART scale-up, with only minimal investment in PrEP. © The Author(s) 2015.
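    With a purely linear objective and a single budget constraint, the optimal policy reduces to funding programs in decreasing order of benefit per dollar. The sketch below illustrates only that core idea; the costs and QALY figures are invented, and the paper's actual model additionally handles diseconomies of scale and subadditive benefits:

```python
# Illustrative greedy allocation for a linear objective under one budget
# constraint (not the paper's model; all numbers are invented).

def allocate(budget, programs):
    """programs: list of (name, cost_to_fully_fund, qalys_gained).
    Returns (name, amount_spent) pairs, funding by QALYs per dollar."""
    plan = []
    for name, cost, qalys in sorted(programs, key=lambda p: p[2] / p[1], reverse=True):
        spend = min(budget, cost)
        plan.append((name, spend))
        budget -= spend
        if budget == 0:
            break
    return plan

# Invented figures chosen to echo the paper's qualitative ordering
# (CBE before ART, with PrEP funded last).
programs = [("CBE", 10.0, 50.0), ("ART", 60.0, 120.0), ("PrEP", 40.0, 20.0)]
print(allocate(100.0, programs))  # CBE fully funded first, then ART, remainder to PrEP
```

In a strictly linear model this greedy rule is optimal; once costs exhibit diseconomies of scale, as the paper allows, the marginal benefit per dollar changes with investment level and the cutoffs shift.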

  19. Evaporation estimation of rift valley lakes: comparison of models.

    PubMed

    Melesse, Assefa M; Abtew, Wossenu; Dessalegne, Tibebe

    2009-01-01

    Evapotranspiration (ET) accounts for a substantial amount of the water flux in the arid and semi-arid regions of the world. Accurate estimation of ET has been a challenge for hydrologists, mainly because of the spatiotemporal variability of the environmental and physical parameters governing the latent heat flux. In addition, most available ET models depend on intensive meteorological information for ET estimation. Such data are not available at the desired spatial and temporal scales in less developed and remote parts of the world. This limitation has necessitated the development of simple models that are less data intensive and provide ET estimates with an acceptable level of accuracy. Remote sensing approaches can also be applied to large areas where meteorological data are not available and field-scale data collection is costly, time consuming, and difficult. For areas like the Rift Valley regions of Ethiopia, the applicability of the Simple Method (Abtew Method) of lake evaporation estimation and of a surface energy balance approach using remote sensing was studied. The Simple Method and remote sensing-based lake evaporation estimates were compared to the Penman, energy balance, pan, radiation, and Complementary Relationship Lake Evaporation (CRLE) methods applied in the region. Results indicate a good correspondence of the model outputs to those of the above methods. Comparison of the 1986 and 2000 monthly lake ET from Landsat images to the Simple and Penman Methods shows that the remote sensing and surface energy balance approach is promising for large-scale applications to understand the spatial variation of the latent heat flux.
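    The Simple (Abtew) Method referenced above estimates open-water evaporation from solar radiation alone; it is commonly written as E = K·Rs/λ with K ≈ 0.53. A minimal sketch assuming that common form (verify the coefficient against the original formulation before use):

```python
# Sketch of the Simple (Abtew) Method: daily evaporation from solar
# radiation only. K = 0.53 and λ = 2.45 MJ/kg are the commonly cited
# values (an assumption here, not taken from this abstract).

LAMBDA = 2.45  # latent heat of vaporization, MJ/kg

def abtew_evaporation(rs_mj_m2_day, k=0.53):
    """Daily evaporation (mm/day) from incoming solar radiation (MJ/m^2/day)."""
    return k * rs_mj_m2_day / LAMBDA

print(round(abtew_evaporation(20.0), 2))  # -> 4.33 mm/day for Rs = 20 MJ/m^2/day
```

The appeal for data-sparse regions is clear from the signature: solar radiation is the only required input, and it can be estimated from satellite data where ground stations are absent.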

  20. Refining and validating the Social Interaction Anxiety Scale and the Social Phobia Scale.

    PubMed

    Carleton, R Nicholas; Collimore, Kelsey C; Asmundson, Gordon J G; McCabe, Randi E; Rowa, Karen; Antony, Martin M

    2009-01-01

    The Social Interaction Anxiety Scale and Social Phobia Scale are companion measures for assessing symptoms of social anxiety and social phobia. The scales have good reliability and validity across several samples; however, exploratory and confirmatory factor analyses have yielded solutions comprising substantially different item content and factor structures. These discrepancies are likely the result of analyzing items from each scale either separately or simultaneously. The current investigation set out to assess items from those scales, both simultaneously and separately, using exploratory and confirmatory factor analyses in an effort to resolve the factor structure. Participants consisted of a clinical sample (n = 353; 54% women) and an undergraduate sample (n = 317; 75% women) who completed the Social Interaction Anxiety Scale and Social Phobia Scale, along with additional fear-related measures to assess convergent and discriminant validity. A three-factor solution with a reduced set of items was found to be most stable, irrespective of whether the items from each scale were assessed together or separately. Items from the Social Interaction Anxiety Scale represented one factor, whereas items from the Social Phobia Scale represented two other factors. Initial support for scale and factor validity, along with implications and recommendations for future research, is provided. (c) 2009 Wiley-Liss, Inc.

  1. A simple algorithm for large-scale mapping of evergreen forests in tropical America, Africa and Asia

    Treesearch

    Xiangming Xiao; Chandrashekhar M. Biradar; Christina Czarnecki; Tunrayo Alabi; Michael Keller

    2009-01-01

    The areal extent and spatial distribution of evergreen forests in the tropical zones are important for the study of climate, carbon cycle and biodiversity. However, frequent cloud cover in the tropical regions makes mapping evergreen forests a challenging task. In this study we developed a simple and novel mapping algorithm that is based on the temporal profile...

  2. Testing scale-dependent effects of seminatural habitats on farmland biodiversity.

    PubMed

    Dainese, Matteo; Luna, Diego Inclán; Sitzia, Tommaso; Marini, Lorenzo

    2015-09-01

    The effectiveness of conservation interventions for maximizing biodiversity benefits from agri-environment schemes (AESs) is expected to depend on the quantity of seminatural habitats in the surrounding landscape. To verify this hypothesis, we developed a hierarchical sampling design to assess the effects of field boundary type and cover of seminatural habitats in the landscape at two nested spatial scales. We sampled three types of field boundaries with increasing structural complexity (grass margin, simple hedgerow, complex hedgerow) in paired landscapes with the presence or absence of seminatural habitats (radius 0.5 km), which, in turn, were nested within 15 areas with different proportions of seminatural habitats at a larger spatial scale (10 × 10 km). Overall, 90 field boundaries were sampled across a Mediterranean region (northeastern Italy). We considered species richness response across three different taxonomic groups: vascular plants, butterflies, and tachinid flies. No interactions between type of field boundary and surrounding landscape were found at either 0.5 or 10 km, indicating that the quality of the field boundary had the same effect irrespective of the cover of seminatural habitats. At the local scale, extended-width grass margins yielded higher plant species richness, while hedgerows yielded higher species richness of butterflies and tachinids. At the 0.5-km landscape scale, the effect of the proportion of seminatural habitats was neutral for plants and tachinids, while butterflies were positively related to the proportion of forest. At the 10-km landscape scale, only butterflies responded positively to the proportion of seminatural habitats. Our study confirmed the importance of testing multiple scales when considering species from different taxa and with different mobility. We showed that the quality of field boundaries at the local scale was an important factor in enhancing farmland biodiversity. For butterflies, AESs should pay particular attention to the preservation of forest patches in agricultural landscapes within 0.5 km, as well as the conservation of seminatural habitats at a wider landscape scale.

  3. SU-F-R-33: Can CT and CBCT Be Used Simultaneously for Radiomics Analysis?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, R; Wang, J; Zhong, H

    2016-06-15

    Purpose: To investigate whether CBCT and CT can be used in radiomics analysis simultaneously, and to establish a batch correction method for radiomics across two similar image modalities. Methods: Four sites, including rectum, bladder, femoral head, and lung, were considered as regions of interest (ROIs) in this study. For each site, 10 treatment planning CT images were collected, and 10 CBCT images of the same site of the same patients were acquired at the first radiotherapy fraction. 253 radiomics features, selected by our test-retest study on rectum cancer CT (ICC > 0.8), were calculated for both CBCT and CT images in MATLAB. Simple scaling (z-score) and nonlinear correction methods were applied to the CBCT radiomics features. The Pearson correlation coefficient was calculated to analyze the correlation between radiomics features of CT and CBCT images before and after correction. Cluster analysis of mixed data (for each site, 5 CT and 5 CBCT datasets randomly selected) was implemented to validate the feasibility of merging radiomics data from CBCT and CT. The consistency between the clustering results and the site grouping was verified by a chi-square test for each dataset. Results: For simple scaling, 234 of the 253 features had correlation coefficient ρ > 0.8, among which 154 features had ρ > 0.9. For radiomics data after nonlinear correction, 240 of the 253 features had ρ > 0.8, among which 220 features had ρ > 0.9. Cluster analysis of the mixed data showed that the data from the four sites were almost precisely separated for simple scaling (p = 1.29 × 10⁻⁷, χ² test) and nonlinear correction (p = 5.98 × 10⁻⁷, χ² test), similar to the clustering result for the CT data alone (p = 4.52 × 10⁻⁸, χ² test). Conclusion: Radiomics data from CBCT can be merged with those from CT by simple scaling or nonlinear correction for radiomics analysis.
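    The simple-scaling step and the agreement check described above can be sketched as follows. The feature values are invented; note that Pearson's r is itself invariant under z-scoring, which here serves only to put CBCT and CT features on a common scale before pooling:

```python
# Sketch of z-score scaling plus a Pearson agreement check between paired
# CT and CBCT feature values (invented numbers, pure Python).

import math

def z_score(values):
    """Center to zero mean and scale to unit (population) standard deviation."""
    mean = sum(values) / len(values)
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return [(v - mean) / sd for v in values]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

ct   = [1.0, 2.0, 3.0, 4.0, 5.0]   # one feature across 5 patients (invented)
cbct = [1.1, 2.3, 2.9, 4.2, 4.8]   # the same feature from CBCT (invented)
print(round(pearson(z_score(ct), z_score(cbct)), 3))  # -> 0.993
```

A high r after scaling is the abstract's criterion for treating a feature as interchangeable between the two modalities; features below the ρ > 0.8 cutoff would be excluded from the merged analysis.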

  4. A fusion of top-down and bottom-up modeling techniques to constrain regional scale carbon budgets

    NASA Astrophysics Data System (ADS)

    Goeckede, M.; Turner, D. P.; Michalak, A. M.; Vickers, D.; Law, B. E.

    2009-12-01

    The effort to constrain regional scale carbon budgets benefits from assimilating as many high-quality data sources as possible in order to reduce uncertainties. The two most common approaches in this field, bottom-up and top-down techniques, both have their strengths and weaknesses, and partly build on very different sources of information to train, drive, and validate the models. Within the context of the ORCA2 project, we follow both bottom-up and top-down modeling strategies with the ultimate objective of reconciling their surface flux estimates. The ORCA2 top-down component builds on a coupled WRF-STILT transport module that resolves the footprint function of a CO2 concentration measurement at high temporal and spatial resolution. Datasets involved in the current setup comprise GDAS meteorology, remote sensing products, VULCAN fossil fuel inventories, boundary conditions from CarbonTracker, and high-accuracy time series of atmospheric CO2 concentrations. Surface fluxes of CO2 are normally provided through a simple diagnostic model which is optimized against atmospheric observations. For the present study, we replaced the simple model with fluxes generated by an advanced bottom-up process model, Biome-BGC, which uses state-of-the-art algorithms to resolve plant-physiological processes and 'grow' a biosphere based on biogeochemical conditions and climate history. This approach provides a more realistic description of biomass and nutrient pools than the simple model. The process model ingests various remote sensing data sources as well as high-resolution reanalysis meteorology, and can be trained against biometric inventories and eddy-covariance data. Linking the bottom-up flux fields to the atmospheric CO2 concentrations through the transport module allows evaluating the spatial representativeness of the BGC flux fields, and in that way assimilates more of the available information than either of the individual modeling techniques alone. Bayesian inversion is then applied to assign scaling factors that align the surface fluxes with the CO2 time series. Our project demonstrates how bottom-up and top-down techniques can be reconciled to arrive at a more robust and balanced spatial carbon budget. We will show how to evaluate existing flux products through regionally representative atmospheric observations, i.e., how well the underlying model assumptions represent processes at the regional scale. Adapting process model parameterization sets for, e.g., sub-regions, disturbance regimes, or land cover classes, in order to optimize the agreement between surface fluxes and atmospheric observations, can lead to improved understanding of the underlying flux mechanisms and reduces uncertainties in the regional carbon budgets.

  5. A Lattice Boltzmann Method for Turbomachinery Simulations

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.; Lopez, I.

    2003-01-01

    The lattice Boltzmann (LB) method is a relatively new method for flow simulations. Its starting point is statistical mechanics and the Boltzmann equation. The LB method sets up its model at the molecular scale and simulates the flow at the macroscopic scale. LBM has so far been applied mostly to incompressible flows and simple geometries.

  6. Revisiting the Scale-Invariant, Two-Dimensional Linear Regression Method

    ERIC Educational Resources Information Center

    Patzer, A. Beate C.; Bauer, Hans; Chang, Christian; Bolte, Jan; Sülzle, Detlev

    2018-01-01

    The scale-invariant way to analyze two-dimensional experimental and theoretical data with statistical errors in both the independent and dependent variables is revisited by using what we call the triangular linear regression method. This is compared to the standard least-squares fit approach by applying it to typical simple sets of example data…

  7. An analytical approach to separate climate and human contributions to basin streamflow variability

    NASA Astrophysics Data System (ADS)

    Li, Changbin; Wang, Liuming; Wanrui, Wang; Qi, Jiaguo; Linshan, Yang; Zhang, Yuan; Lei, Wu; Cui, Xia; Wang, Peng

    2018-04-01

    Climate variability and anthropogenic regulation are two interwoven factors in the ecohydrologic system across large basins. Understanding the roles that these two factors play under various hydrologic conditions is of great significance for basin hydrology and sustainable water utilization. In this study, we present an analytical approach, based on coupling the water balance method with the Budyko hypothesis, to derive effectiveness coefficients (ECs) of climate change as a way to disentangle the contributions of climate change and human activities to the variability of river discharge under different hydro-transitional situations. The climate-dominated streamflow change (ΔQc) from the EC approach was compared with those deduced by the elasticity method and a sensitivity index. The results suggest that the EC approach is valid and applicable for hydrologic studies at the large basin scale. Analyses of various scenarios revealed that the contributions of climate change and human activities to river discharge variation differed among the regions of the study area. Over the past several decades, climate change dominated hydro-transitions from dry to wet, while human activities played the key role in the reduction of streamflow during wet-to-dry periods. The remarkable decline of discharge upstream was mainly due to human interventions, although climate contributed more to runoff increases during dry periods in the semi-arid downstream. The induced effectiveness on streamflow changes indicated a contribution ratio of 49% for climate and 51% for human activities at the basin scale from 1956 to 2015. This simple, mathematically derived approach, together with the case example of temporal segmentation and spatial zoning, could help people understand the variation of river discharge in more detail at a large basin scale against the background of climate change and human regulation.

  8. The development and validation of the Physical Appearance Comparison Scale-3 (PACS-3).

    PubMed

    Schaefer, Lauren M; Thompson, J Kevin

    2018-05-21

    Appearance comparison processes are implicated in the development of body-image disturbance and disordered eating. The Physical Appearance Comparison Scale-Revised (PACS-R) assesses the simple frequency of appearance comparisons; however, research has suggested that other aspects of appearance comparisons (e.g., comparison direction) may moderate the association between comparisons and their negative outcomes. In the current study, the PACS-R was revised to examine aspects of comparisons with relevance to body-image and eating outcomes. Specifically, the measure was modified to examine (a) dimensions of physical appearance relevant to men and women (i.e., weight-shape, muscularity, and overall physical appearance), (b) comparisons with proximal and distal targets, (c) upward versus downward comparisons, and (d) the acute emotional impact of comparisons. The newly revised measure, labeled the PACS-3, along with existing measures of appearance comparison, body satisfaction, eating pathology, and self-esteem, was completed by 1,533 college men and women. Exploratory and confirmatory factor analyses were conducted to examine the factor structure of the PACS-3. In addition, the reliability, convergent validity, and incremental validity of the PACS-3 scores were examined. The final PACS-3 comprises 27 items and 9 subscales: Proximal: Frequency, Distal: Frequency, Muscular: Frequency, Proximal: Direction, Distal: Direction, Muscular: Direction, Proximal: Effect, Distal: Effect, and Muscular: Effect. The PACS-3 subscale scores demonstrated good reliability and convergent validity. Moreover, the PACS-3 subscales greatly improved the prediction of body satisfaction and disordered eating relative to existing measures of appearance comparison. Overall, the PACS-3 improves upon existing scales and offers a comprehensive assessment of appearance-comparison processes. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  9. A new prognostic model identifies patients aged 80 years and older with diffuse large B-cell lymphoma who may benefit from curative treatment: A multicenter, retrospective analysis by the Spanish GELTAMO group.

    PubMed

    Pardal, Emilia; Díez Baeza, Eva; Salas, Queralt; García, Tomás; Sancho, Juan M; Monzón, Encarna; Moraleda, José M; Córdoba, Raúl; de la Cruz, Fátima; Queizán, José A; Rodríguez, María J; Navarro, Belén; Hernández, José A; Díez, Rosana; Vahi, María; Viguria, María C; Canales, Miguel; Peñarrubia, María J; González-López, Tomás J; Montes-Moreno, Santiago; González-Barca, Eva; Caballero, Dolores; Martín, Alejandro

    2018-04-15

    The means of optimally managing very elderly patients with diffuse large B-cell lymphoma (DLBCL) has not been established. We retrospectively analyzed 252 patients aged 80-100 years, diagnosed with DLBCL or grade 3B follicular lymphoma, treated in 19 hospitals from the GELTAMO group. The primary objective was to analyze the influence of the type of treatment and of comorbidity scales on progression-free survival (PFS) and overall survival (OS). One hundred sixty-three patients (63%) were treated with chemotherapy that included anthracyclines and/or rituximab, whereas 15% received no chemotherapeutic treatment. With a median follow-up of 44 months, median PFS and OS were 9.5 and 12.5 months, respectively. In an analysis restricted to the 205 patients treated with any kind of chemotherapy, comorbidity scales did not significantly influence the choice of treatment type. Independent factors associated with better PFS and OS were: age < 86 years, cumulative illness rating scale (CIRS) score < 6, intermediate-risk (1-2) R-IPI, and treatment with R-CHOP at full or reduced doses. We developed a prognostic model based on the multivariate analysis of the 108 patients treated with R-CHOP-like regimens: median OS was 45 vs. 12 months (P = .001) for patients with 0-1 vs. 2-3 risk factors (age > 85 years, R-IPI 3-5, or CIRS > 5). In conclusion, treatment with R-CHOP-like regimens is associated with good survival in a significant proportion of patients. We have developed a simple prognostic model that may aid the selection of patients who could benefit from curative treatment, although it needs to be validated in larger series. © 2018 Wiley Periodicals, Inc.

  10. A robust data scaling algorithm to improve classification accuracies in biomedical data.

    PubMed

    Cao, Xi Hang; Stojkovic, Ivan; Obradovic, Zoran

    2016-09-09

    Machine learning models have been adopted in biomedical research and practice for knowledge discovery and decision support. While mainstream biomedical informatics research focuses on developing more accurate models, the importance of data preprocessing draws less attention. We propose the Generalized Logistic (GL) algorithm, which scales data uniformly to an appropriate interval by learning a generalized logistic function to fit the empirical cumulative distribution function of the data. The GL algorithm is simple yet effective; it is intrinsically robust to outliers, so it is particularly suitable for diagnostic/classification models in clinical/medical applications where the number of samples is usually small, and it scales the data in a nonlinear fashion, which leads to potential improvement in accuracy. To evaluate the effectiveness of the proposed algorithm, we conducted experiments on 16 binary classification tasks with different variable types, covering a wide range of applications. The resulting performance in terms of area under the receiver operating characteristic curve (AUROC) and percentage of correct classification showed that models learned using data scaled by the GL algorithm outperform those using data scaled by the Min-max and Z-score algorithms, which are the most commonly used data scaling algorithms. The proposed GL algorithm is simple and effective. It is robust to outliers, so no additional denoising or outlier detection step is needed in data preprocessing. Empirical results also show that models learned from data scaled by the GL algorithm have higher accuracy than those learned from data scaled by the commonly used algorithms.
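    A loose sketch of the idea behind such logistic scaling (not the authors' exact GL algorithm, which fits a generalized logistic function to the empirical CDF) is to squash each value through a logistic CDF whose location and scale are estimated robustly, so outputs land in (0, 1) and extreme outliers are compressed rather than dominating the scaled range:

```python
# Illustrative logistic scaling with robust location/scale estimates
# (median and IQR). This is a simplified stand-in for the GL algorithm.

import math

def logistic_scale(values):
    ordered = sorted(values)
    n = len(ordered)
    median = ordered[n // 2]                       # crude median for a sketch
    iqr = ordered[(3 * n) // 4] - ordered[n // 4]  # crude interquartile range
    s = iqr / 2.0 or 1.0                           # scale; guard against zero IQR
    return [1.0 / (1.0 + math.exp(-(v - median) / s)) for v in values]

data = [0.2, 0.5, 0.9, 1.1, 1.4, 10.0]  # note the outlier
scaled = logistic_scale(data)
assert all(0.0 < v < 1.0 for v in scaled)  # bounded despite the outlier
```

Compare this with min-max scaling, where the single outlier would crush the other five values into a tiny sub-interval near zero; the logistic squashing keeps them well spread.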

  11. Spatiotemporal variations in litter mass and their relationships with climate in temperate grassland: A case study from Xilingol grassland, Inner Mongolia (China)

    NASA Astrophysics Data System (ADS)

    Ren, Hongrui; Zhang, Bei

    2018-02-01

    Clarifying spatiotemporal variations of litter mass and their relationships with climate factors will advance our understanding of ecosystem structure and functioning in grasslands. Our objective is to investigate the spatiotemporal variations of litter mass in the growing season and their relationships with precipitation and temperature in the Xilingol grassland using MOD09A1 data. Using the widely used simple tillage index (STI), we first estimated the litter mass of the Xilingol grassland in the growing season from 2000 to 2014. We then investigated the variations of litter mass in the growing season at regional and site scales, and further explored the spatiotemporal relationships between litter mass and precipitation and temperature at both scales. The litter mass increased with increasing mean annual precipitation and decreasing mean annual temperature at the regional scale. The variations of litter mass at given sites followed quadratic function curves in the growing season, and litter mass generally attained its maximum between August 1 and September 1. A positive spatial relationship was observed between litter mass variations and precipitation, and a negative spatial relationship was found between litter mass variations and temperature in the growing season. There was no significant relationship between inter-annual variations of litter mass and precipitation and temperature at given sites. The results illustrate that precipitation and temperature are important drivers in shaping ecosystem functioning, as reflected in litter mass, at the regional scale in the Xilingol grassland. Our findings also suggest that distinct mechanisms control litter mass variations at regional and site scales.
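    The simple tillage index (STI) used above is commonly defined as the ratio of the two shortwave-infrared reflectance bands. That band-ratio form is an assumption here; check the paper for the exact MOD09A1 bands used:

```python
# Sketch of the simple tillage index as a SWIR band ratio (assumed form;
# the paper's exact band choice is not reproduced here).

def sti(swir1, swir2):
    """Simple tillage index from two shortwave-infrared reflectances."""
    return swir1 / swir2

print(round(sti(0.30, 0.24), 2))  # -> 1.25
```

Because dry plant litter and bare soil differ in SWIR absorption, a band ratio like this tracks residue cover without requiring green-vegetation signal, which is why it suits litter mass estimation.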

  12. Facilitators and barriers to effective scale-up of an evidence-based multilevel HIV prevention intervention.

    PubMed

    Kegeles, Susan M; Rebchook, Gregory; Tebbetts, Scott; Arnold, Emily

    2015-04-17

    Since the scale-up of HIV/AIDS prevention evidence-based interventions (EBIs) has not been simple, it is important to examine processes that occur in the translation of EBIs into practice and affect successful implementation. The goal of this paper is to examine facilitators and barriers to effective implementation that arose among 72 community-based organizations (CBOs) as they moved into practice a multilevel HIV prevention EBI, the Mpowerment Project, for young gay and bisexual men. CBOs implementing the Mpowerment Project participated in this study and were assessed at baseline and at 6 months, 1 year, and 2 years post-baseline. Semi-structured telephone interviews were conducted separately with individuals at each CBO. Study data came from 647 semi-structured interviews and extensive notes and commentaries from technical assistance providers. Framework Analysis guided the analytic process, with barriers and facilitators to implementation as the overarching thematic framework used across all cases. Thirteen themes emerged regarding factors that influence the successful implementation of the Mpowerment Project. These were organized into three overarching themes: HIV Prevention System Factors, Community Factors, and Intervention Factors. The entire HIV prevention system, including coordinators, supervisors, executive directors, funders, and national HIV prevention policies, influenced implementation success. Other Prevention System Factors that affected the effective translation of the EBI into practice include Knowledge About Intervention, Belief in the Efficacy of the Intervention, Desire to Change Existing Prevention Approach, Planning for Intervention Before Implementation, Accountability, Appropriateness of Individuals for Coordinator Positions, Evaluation of Intervention, and Organizational Stability. Community Factors included Geography and Sociopolitical Climate. Intervention Factors included Intervention Characteristics and Adaptation Issues. The entire ecological system in which an EBI occurs affects implementation. It is imperative to focus capacity-building efforts on getting individuals at different levels of the HIV prevention system into alignment regarding understanding and believing in the program's goals and methods. For a Prevention Support System to be maximally useful, it must address facilitators or barriers to implementation, address the right people, and use modalities to convey information that are acceptable to users of the system.

  13. Clock Drawing Test and the diagnosis of amnestic mild cognitive impairment: can more detailed scoring systems do the work?

    PubMed

    Rubínová, Eva; Nikolai, Tomáš; Marková, Hana; Siffelová, Kamila; Laczó, Jan; Hort, Jakub; Vyhnálek, Martin

    2014-01-01

    The Clock Drawing Test is a cognitive screening test frequently used in elderly populations, with several scoring systems. We compare simple and complex scoring systems and evaluate the usefulness of combining the Clock Drawing Test with the Mini-Mental State Examination to detect patients with mild cognitive impairment. Patients with amnestic mild cognitive impairment (n = 48) and age- and education-matched controls (n = 48) underwent neuropsychological examinations, including the Clock Drawing Test and the Mini-Mental State Examination. Clock drawings were scored by three blinded raters using one simple (6-point scale) and two complex (17- and 18-point scales) systems. The sensitivity and specificity of these scoring systems, used alone and in combination with the Mini-Mental State Examination, were determined. Complex scoring systems, but not the simple scoring system, were significant predictors of the amnestic mild cognitive impairment diagnosis in logistic regression analysis. At equal levels of sensitivity (87.5%), the Mini-Mental State Examination showed higher specificity (31.3%, compared with 12.5% for the 17-point Clock Drawing Test scoring scale). The combination of Clock Drawing Test and Mini-Mental State Examination scores increased the area under the curve (0.72; p < .001) and increased specificity (43.8%), but did not increase sensitivity, which remained high (85.4%). The simple 6-point scoring system for the Clock Drawing Test did not differentiate between healthy elderly and patients with amnestic mild cognitive impairment in our sample. Complex scoring systems were slightly more efficient, yet were still characterized by high rates of false-positive results. We found psychometric improvement using combined scores from the Mini-Mental State Examination and the Clock Drawing Test when complex scoring systems were used. The results of this study support the benefit of using combined scores from simple methods.

  14. Coherent Transition Radiation Generated from Transverse Electron Density Modulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halavanau, A.; Piot, P.; Tyukhtin, A. V.

    Coherent transition radiation (CTR) of a given frequency is commonly generated with longitudinal electron bunch trains. In this paper, we present a study of CTR properties produced from simultaneous electron transverse and longitudinal density modulation. We demonstrate via numerical simulations a simple technique to generate THz-scale frequencies from mm-scale transversely separated electron beamlets formed into a ps-scale bunch train. The results and a potential experimental setup are discussed.

  15. Toward Active Control of Noise from Hot Supersonic Jets

    DTIC Science & Technology

    2013-11-15

    several laboratory- and full-scale data sets. Two different scaling scenarios are presented for the practising scientist to choose from. The first... As will be detailed below, this simple proof-of-concept experiment yielded good-quality data that reveals details about the large-scale 3D structure... the light-field. Co-PI Thurow has recently designed and assembled a plenoptic camera in his laboratory, with its key attributes being its compact

  16. Heisenberg scaling with weak measurement: a quantum state discrimination point of view

    DTIC Science & Technology

    2015-03-18

    a quantum state discrimination point of view. The Heisenberg scaling of the photon number for the precision of the interaction parameter between... coherent light and a spin one-half particle (or pseudo-spin) has a simple interpretation in terms of the interaction rotating the quantum state to an...

  17. The Field Assessment Stroke Triage for Emergency Destination (FAST-ED): a Simple and Accurate Pre-Hospital Scale to Detect Large Vessel Occlusion Strokes

    PubMed Central

    Lima, Fabricio O.; Silva, Gisele S.; Furie, Karen L.; Frankel, Michael R.; Lev, Michael H.; Camargo, Érica CS; Haussen, Diogo C.; Singhal, Aneesh B.; Koroshetz, Walter J.; Smith, Wade S.; Nogueira, Raul G.

    2016-01-01

    Background and Purpose Patients with large vessel occlusion strokes (LVOS) may be better served by direct transfer to endovascular capable centers, avoiding hazardous delays between primary and comprehensive stroke centers. However, accurate stroke field triage remains challenging. We aimed to develop a simple field scale to identify LVOS. Methods The FAST-ED scale was based on items of the NIHSS with higher predictive value for LVOS and tested in the STOPStroke cohort, in which patients underwent CT angiography within the first 24 hours of stroke onset. LVOS were defined by total occlusions involving the intracranial-ICA, MCA-M1, MCA-M2, or basilar arteries. Patients with partial, bi-hemispheric, and/or anterior + posterior circulation occlusions were excluded. Receiver operating characteristic (ROC) curve, sensitivity, specificity, positive (PPV) and negative predictive values (NPV) of FAST-ED were compared with the NIHSS, Rapid Arterial oCclusion Evaluation (RACE) scale, and Cincinnati Prehospital Stroke Severity Scale (CPSSS). Results LVO was detected in 240 of the 727 qualifying patients (33%). FAST-ED had comparable accuracy to predict LVO to the NIHSS and higher accuracy than RACE and CPSSS (area under the ROC curve: FAST-ED=0.81 as reference; NIHSS=0.80, p=0.28; RACE=0.77, p=0.02; and CPSSS=0.75, p=0.002). A FAST-ED ≥4 had sensitivity of 0.60, specificity 0.89, PPV 0.72, and NPV 0.82 versus RACE ≥5 of 0.55, 0.87, 0.68, 0.79 and CPSSS ≥2 of 0.56, 0.85, 0.65, 0.78, respectively. Conclusions FAST-ED is a simple scale that, if successfully validated in the field, may be used by medical emergency professionals to identify LVOS in the pre-hospital setting, enabling rapid triage of patients. PMID:27364531
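    The four screening metrics reported for each cut-off follow directly from a 2x2 confusion table. A minimal sketch is below; the counts are hypothetical illustrations, not the STOPStroke data.

```python
# Sensitivity, specificity, PPV, and NPV from confusion-table counts at a
# chosen scale cut-off. The example counts are synthetic.

def screening_metrics(tp, fp, fn, tn):
    """Return (sensitivity, specificity, PPV, NPV) from confusion counts."""
    sens = tp / (tp + fn)   # true-positive rate among LVO cases
    spec = tn / (tn + fp)   # true-negative rate among non-LVO patients
    ppv  = tp / (tp + fp)   # precision of a positive screen
    npv  = tn / (tn + fn)   # reassurance value of a negative screen
    return sens, spec, ppv, npv

# Hypothetical cohort: 100 LVO cases, 200 non-LVO; the cut-off flags 82.
sens, spec, ppv, npv = screening_metrics(tp=60, fp=22, fn=40, tn=178)
print(f"sens={sens:.2f} spec={spec:.2f} ppv={ppv:.2f} npv={npv:.2f}")
```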

  18. Graphene oxide as a p-dopant and an anti-reflection coating layer, in graphene/silicon solar cells

    NASA Astrophysics Data System (ADS)

    Yavuz, S.; Kuru, C.; Choi, D.; Kargar, A.; Jin, S.; Bandaru, P. R.

    2016-03-01

    It is shown that coating graphene-silicon (Gr/Si) Schottky junction based solar cells with graphene oxide (GO) improves the power conversion efficiency (PCE) of the cells, while demonstrating unprecedented device stability. The PCE has been shown to be increased to 10.6% (at incident radiation of 100 mW cm-2) for the Gr/Si solar cell with an optimal GO coating thickness, compared to 3.6% for a bare/uncoated Gr/Si solar cell. The p-doping of graphene by the GO, which also serves as an anti-reflection coating (ARC), has been shown to be a main contributing factor to the enhanced PCE. A simple spin coating process has been used to apply GO with thickness commensurate with an ARC and indicates the suitability of the developed methodology for large-scale solar cell assembly. Electronic supplementary information (ESI) available: (i) experimental methods, (ii) optical images of devices with and without graphene oxide (GO), (iii) comparison of the power conversion efficiency (PCE) due to the GO coating and nitric acid doping, (iv) specular and diffuse reflectance measurements, (v) stability data of pristine graphene/silicon (Gr/Si) solar cells. See DOI: 10.1039/c5nr09143h
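    The PCE figures quoted above come from the standard relation PCE = Voc x Jsc x FF / Pin. A minimal sketch follows; the J-V parameter values are hypothetical, not taken from the paper.

```python
# Power conversion efficiency from open-circuit voltage (V), short-circuit
# current density (mA/cm^2), fill factor, and incident power (mW/cm^2).
# Example parameter values are hypothetical.

def pce(voc_V, jsc_mA_cm2, ff, pin_mW_cm2=100.0):
    """Power conversion efficiency in percent at incident power Pin."""
    return 100.0 * voc_V * jsc_mA_cm2 * ff / pin_mW_cm2

# Hypothetical cell at 100 mW/cm^2 illumination:
print(round(pce(voc_V=0.52, jsc_mA_cm2=30.0, ff=0.68), 2))
```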

  19. Using Four Downscaling Techniques to Characterize Uncertainty in Updating Intensity-Duration-Frequency Curves Under Climate Change

    NASA Astrophysics Data System (ADS)

    Cook, L. M.; Samaras, C.; McGinnis, S. A.

    2017-12-01

    Intensity-duration-frequency (IDF) curves are a common input to urban drainage design, and are used to represent extreme rainfall in a region. As rainfall patterns shift into a non-stationary regime as a result of climate change, these curves will need to be updated with future projections of extreme precipitation. Many regions have begun to update these curves to reflect the trends from downscaled climate models; however, few studies have compared the methods for doing so, as well as the uncertainty that results from the selection of the native grid scale and temporal resolution of the climate model. This study examines the variability in updated IDF curves for Pittsburgh using four different methods for adjusting gridded regional climate model (RCM) outputs into station scale precipitation extremes: (1) a simple change factor applied to observed return levels, (2) a naïve adjustment of stationary and non-stationary Generalized Extreme Value (GEV) distribution parameters, (3) a transfer function of the GEV parameters from the annual maximum series, and (4) kernel density distribution mapping bias correction of the RCM time series. Return level estimates (rainfall intensities) and confidence intervals from these methods for the 1-hour to 48-hour duration are tested for sensitivity to the underlying spatial and temporal resolution of the climate ensemble from the NA-CORDEX project, as well as the future time period for updating. The first goal is to determine if uncertainty is highest for: (i) the downscaling method, (ii) the climate model resolution, (iii) the climate model simulation, (iv) the GEV parameters, or (v) the future time period examined. Initial results of the 6-hour, 10-year return level adjusted with the simple change factor method using four climate model simulations of two different spatial resolutions show that uncertainty is highest in the estimation of the GEV parameters. The second goal is to determine if complex downscaling methods and high-resolution climate models are necessary for updating, or if simpler methods and lower resolution climate models will suffice. The final results can be used to inform the most appropriate method and climate model resolutions to use for updating IDF curves for urban drainage design.
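    Method (1), the simple change factor, can be sketched as scaling an observed return level by the ratio of the model's future to historical return levels at the same return period, using the GEV return-level formula z_T = mu + (sigma/xi)((-ln(1 - 1/T))^(-xi) - 1) for xi != 0. All parameter values below are hypothetical, not fitted values from the study.

```python
# Simple change-factor update of a T-year return level (xi != 0 case).
# GEV parameter tuples (mu, sigma, xi) below are hypothetical examples.
import math

def gev_return_level(mu, sigma, xi, T):
    """T-year return level of a GEV(mu, sigma, xi) annual-maximum fit."""
    y = -math.log(1.0 - 1.0 / T)            # reduced-variate argument
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

def change_factor_update(observed_level, hist_params, fut_params, T):
    """Scale the observed level by the modeled future/historical ratio."""
    cf = gev_return_level(*fut_params, T) / gev_return_level(*hist_params, T)
    return observed_level * cf

# Hypothetical 6-hour annual-maximum GEV fits (mm) at the RCM grid cell:
hist = (40.0, 10.0, 0.10)   # historical simulation
fut  = (44.0, 11.0, 0.10)   # future simulation
print(round(change_factor_update(50.0, hist, fut, T=10), 1))
```

    Because the ratio is taken within the model, systematic model bias largely cancels, which is the usual rationale for change-factor methods.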

  20. Vortex breakdown in simple pipe bends

    NASA Astrophysics Data System (ADS)

    Ault, Jesse; Shin, Sangwoo; Stone, Howard

    2016-11-01

    Pipe bends and elbows are among the most common fluid mechanics elements. However, despite their ubiquity and the extensive amount of research related to these common, simple geometries, unexpected complexities still remain. We show that for a range of geometries and flow conditions, these simple flows experience unexpected fluid dynamical bifurcations resembling the bubble-type vortex breakdown phenomenon. Specifically, we show with simulations and experiments that recirculation zones develop within the bends under certain conditions. As a consequence, fluid and particles can remain trapped within these structures for unexpectedly long time scales. We also present simple techniques to mitigate this recirculation effect, which can potentially have impact across industries ranging from biomedical and chemical processing to food and health sciences.
