Science.gov

Sample records for 4-point likert-type scale

  1. Response-Order Effects in Likert-Type Scales.

    ERIC Educational Resources Information Center

    Chan, Jason C.

    1991-01-01

    A study involving 102 high school students (49 males and 53 females) from Taiwan revealed that the order of response scale labels had a primacy effect on subjects' choices of the alternatives in Likert-type attitude scales. Practical implications of the response-order effects for measurement are discussed. (SLD)

  2. Response-Order Effect in Likert-Type Scales.

    ERIC Educational Resources Information Center

    Chan, Jason C.

    The importance of the presentation order of items on Likert-type scales was studied. It was proposed that subjects tend to choose the first alternative acceptable to them from among the response categories, so that a primacy effect can be predicted. The effects of reversing the order of the response scale on the latent factor structure underlying…

  3. Dependability of Anchoring Labels of Likert-Type Scales.

    ERIC Educational Resources Information Center

    Chang, Lei

    This study uses generalizability theory to examine the dependability of anchoring labels of Likert-type scales. Variance components associated with labeling were estimated in two samples using a two-facet random effect generalizability-study design. In one sample, 173 graduate students in education were administered 7 items measuring attitudes…

  4. Using Likert-Type Scales in the Social Sciences

    ERIC Educational Resources Information Center

    Croasmun, James T.; Ostrom, Lee

    2011-01-01

    Likert scales are useful in social science and attitude research projects. The General Self-Efficacy Exam is a test used to determine whether factors in educational settings affect participants' learning self-efficacy. The original instrument had 10 efficacy items and used a 4-point Likert scale. The Cronbach's alphas for the original test ranged…

  5. How Differences among Data Collectors Are Reflected in the Reliability and Validity of Data Collected by Likert-Type Scales?

    ERIC Educational Resources Information Center

    Köksal, Mustafa Serdar; Ertekin, Pelin; Çolakoglu, Özgür Murat

    2014-01-01

    The purpose of this study is to investigate association of data collectors' differences with the differences in reliability and validity of scores regarding affective variables (motivation toward science learning and science attitude) that are measured by Likert-type scales. Four researchers trained in data collection and seven science…

  6. A Two-Decision Model for Responses to Likert-Type Items

    ERIC Educational Resources Information Center

    Thissen-Roe, Anne; Thissen, David

    2013-01-01

    Extreme response set, the tendency to prefer the lowest or highest response option when confronted with a Likert-type response scale, can lead to misfit of item response models such as the generalized partial credit model. Recently, a series of intrinsically multidimensional item response models have been hypothesized, wherein tendency toward…

  7. Estimating Ordinal Reliability for Likert-Type and Ordinal Item Response Data: A Conceptual, Empirical, and Practical Guide

    ERIC Educational Resources Information Center

    Gadermann, Anne M.; Guhn, Martin; Zumbo, Bruno D.

    2012-01-01

    This paper provides a conceptual, empirical, and practical guide for estimating ordinal reliability coefficients for ordinal item response data (also referred to as Likert, Likert-type, ordered categorical, or rating scale item responses). Conventionally, reliability coefficients, such as Cronbach's alpha, are calculated using a Pearson…
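
    For orientation, the conventional coefficient alpha referred to here can be written in its standardized form (standard notation, not quoted from the article):

        \alpha = \frac{k\,\bar{r}}{1 + (k-1)\,\bar{r}}

    where k is the number of items and r-bar is the mean inter-item correlation. The ordinal variants discussed in the guide use the same machinery but substitute a polychoric correlation matrix for the Pearson one, which is better suited to ordered categorical (Likert-type) responses.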

  8. A Comparison of Anchor-Item Designs for the Concurrent Calibration of Large Banks of Likert-Type Items

    ERIC Educational Resources Information Center

    Garcia-Perez, Miguel A.; Alcala-Quintana, Rocio; Garcia-Cueto, Eduardo

    2010-01-01

    Current interest in measuring quality of life is generating interest in the construction of computerized adaptive tests (CATs) with Likert-type items. Calibration of an item bank for use in CAT requires collecting responses to a large number of candidate items. However, the number is usually too large to administer to each subject in the…

  9. A mixed-binomial model for Likert-type personality measures

    PubMed Central

    Allik, Jüri

    2014-01-01

    Personality measurement is based on the idea that values on an unobservable latent variable determine the distribution of answers on a manifest response scale. Typically, it is assumed in the Item Response Theory (IRT) that latent variables are related to the observed responses through continuous normal or logistic functions, determining the probability with which one of the ordered response alternatives on a Likert-scale item is chosen. Based on an analysis of 1731 self- and other-rated responses on the 240 NEO PI-3 questionnaire items, it was proposed that a viable alternative is a finite number of latent events which are related to manifest responses through a binomial function which has only one parameter—the probability with which a given statement is approved. For the majority of items, the best fit was obtained with a mixed-binomial distribution, which assumes two different subpopulations who endorse items with two different probabilities. It was shown that the fit of the binomial IRT model can be improved by assuming that about 10% of random noise is contained in the answers and by taking into account response biases toward one of the response categories. It was concluded that the binomial response model for the measurement of personality traits may be a workable alternative to the more habitual normal and logistic IRT models. PMID:24847291
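
    The mixture structure described above is compact enough to sketch. The following is a minimal, hypothetical illustration in Python (simulated data and starting values are invented here; this is not the authors' code) of fitting a two-subpopulation binomial mixture to a single Likert item by maximum likelihood:

        # Mixed-binomial response model for one Likert item with m+1 ordered
        # categories (0..m): a mixture of two binomials with endorsement
        # probabilities p1 and p2 and mixing weight w.
        import numpy as np
        from scipy.stats import binom
        from scipy.optimize import minimize

        def mixed_binomial_pmf(x, m, p1, p2, w):
            """P(X = x) for x in 0..m under the two-subpopulation mixture."""
            return w * binom.pmf(x, m, p1) + (1.0 - w) * binom.pmf(x, m, p2)

        def fit_item(responses, m):
            """Fit (p1, p2, w) to one item's responses (integers 0..m) by maximum likelihood."""
            def nll(theta):
                p1, p2, w = theta
                return -np.sum(np.log(mixed_binomial_pmf(responses, m, p1, p2, w) + 1e-12))
            res = minimize(nll, x0=[0.3, 0.7, 0.5],
                           bounds=[(1e-3, 1 - 1e-3)] * 3, method="L-BFGS-B")
            return res.x

        # Example with simulated responses to a 5-category item (m = 4)
        rng = np.random.default_rng(0)
        sim = np.where(rng.random(500) < 0.6,
                       rng.binomial(4, 0.2, 500), rng.binomial(4, 0.8, 500))
        print(fit_item(sim, m=4))   # recovers two endorsement probabilities and a mixing weight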

  10. Longitudinal Measurement Invariance of Likert-Type Learning Strategy Scales: Are We Using the Same Ruler at Each Wave?

    ERIC Educational Resources Information Center

    Coertjens, Liesje; Donche, Vincent; De Maeyer, Sven; Vanthournout, Gert; Van Petegem, Peter

    2012-01-01

    Whether or not learning strategies change during the course of higher education is an important topic in the Student Approaches to Learning field. However, there is a dearth of any empirical evaluations in the literature as to whether or not the instruments in this research domain measure equivalently over time. Therefore, this study details the…

  11. Further Empirical Results on Parametric Versus Non-Parametric IRT Modeling of Likert-Type Personality Data

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Albert

    2005-01-01

    Chernyshenko, Stark, Chan, Drasgow, and Williams (2001) investigated the fit of Samejima's logistic graded model and Levine's non-parametric MFS model to the scales of two personality questionnaires and found that the graded model did not fit well. We attribute the poor fit of the graded model to small amounts of multidimensionality present in…

  12. A Mathematical Approach in Evaluating Biotechnology Attitude Scale: Rough Set Data Analysis

    ERIC Educational Resources Information Center

    Narli, Serkan; Sinan, Olcay

    2011-01-01

    Individuals' thoughts and attitudes towards biotechnology have been investigated in many countries. A Likert-type scale is the most commonly used scale to measure attitude. However, a weakness of a Likert-type scale is that different responses may produce the same score. The rough set method has been suggested as a way to address this shortcoming. A…

  13. Development of the perceptions of racism scale.

    PubMed

    Green, N L

    1995-01-01

    Racism may be a factor in low-birth-weight (LBW) and preterm delivery in African American childbearing women. Because no satisfactory measure of racism existed, the Perception of Racism Scale (PRS) was developed. The PRS was pilot tested on 109 participants from churches and community organizations. The scale was then used in a study of 136 childbearing women to investigate LBW and preterm delivery. Twenty items rated on a 4-point Likert-type scale were scored with 1 as the lowest and 4 as the highest perception of racism. Alpha reliabilities were .88 for the pilot and .91 for the study. Content validity was strengthened by expert panel critique. Reliability, content validity, and construct validity were demonstrated and no undue participant burden was observed. The scale is an effective instrument to measure perceptions of racism by African American women.

  14. Designing the Nuclear Energy Attitude Scale.

    ERIC Educational Resources Information Center

    Calhoun, Lawrence; And Others

    1988-01-01

    Presents a refined method for designing a valid and reliable Likert-type scale to test attitudes toward the generation of electricity from nuclear energy. Discusses various tests of validity that were used on the nuclear energy scale. Reports results of administration and concludes that the test is both reliable and valid. (CW)

  15. Reliability and Validity of the Transracial Adoption Parenting Scale

    ERIC Educational Resources Information Center

    Massatti, Richard R.; Vonk, M. Elizabeth; Gregoire, Thomas K.

    2004-01-01

    The present study provides information on the reliability and validity of the Transracial Adoption Parenting Scale (TAPS), a multidimensional 36-item Likert-type scale that measures cultural competence among transracial adoptive (TRA) parents. The TAPS was theoretically developed and refined through feedback from experts in TRA adoption. A…

  16. Factor Analysis of the Omega Scale: A Scale Designed To Measure the Attitudes of College Students toward Their Own Deaths and the Disposition of Their Bodies.

    ERIC Educational Resources Information Center

    Staik, Irene M.

    A study was undertaken to provide a factor analysis of the Omega Scale, a 25-item, Likert-type scale developed in 1984 to assess attitudes toward death and funerals and other body disposition practices. The Omega Scale was administered to 250 students enrolled in introductory psychology classes at two higher education institutions in Alabama.…

  17. Value-Eroding Teacher Behaviors Scale: A Validity and Reliability Study

    ERIC Educational Resources Information Center

    Arseven, Zeynep; Kiliç, Abdurrahman; Sahin, Seyma

    2016-01-01

    In the present study, the aim is to develop a valid and reliable scale for determining the value-eroding behaviors of teachers, and hence their values of judgment. The items of the "Value-eroding Teacher Behaviors Scale" were designed in the form of a 5-point Likert-type rating scale. The exploratory factor analysis (EFA) was conducted to…

  18. Preparing Attitude Scale to Define Students' Attitudes about Environment, Recycling, Plastic and Plastic Waste

    ERIC Educational Resources Information Center

    Avan, Cagri; Aydinli, Bahattin; Bakar, Fatma; Alboga, Yunus

    2011-01-01

    The aim of this study is to introduce an attitude scale in order to define students' attitudes about the environment, recycling, plastics, and plastic waste. In this study, 80 attitude sentences on a 5-point Likert-type scale were prepared and applied to 492 sixth-grade students in the Kastamonu city center of Turkey. The scale consists of…

  19. The Intuitive Eating Scale: Development and Preliminary Validation

    ERIC Educational Resources Information Center

    Hawks, Steven; Merrill, Ray M.; Madanat, Hala N.

    2004-01-01

    This article describes the development and validation of an instrument designed to measure the concept of intuitive eating. To ensure face and content validity for items used in the Likert-type Intuitive Eating Scale (IES), content domain was clearly specified and a panel of experts assessed the validity of each item. Based on responses from 391…

  20. Construct Validation and a More Parsimonious Mathematics Beliefs Scales.

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret

    Teacher beliefs are instrumental in defining teacher pedagogical and content tasks and for processing information relevant to those tasks. In this study, a Likert-type instrument, Mathematics Beliefs Scales (E. Fennema, T. Carpenter, and M. Loef, 1990), was used to measure the mathematical beliefs of teachers. This instrument was designed with…

  1. Determination of Reliability and Validity for Myself as a Teacher Scale.

    ERIC Educational Resources Information Center

    Handley, Herbert M.; Thomson, James R., Jr.

    The reliability and validity of the Myself as a Teacher Scale (MTS), developed to assess the self-concept of teachers, were studied. Materials developed by David P. Butts and Robert Howe were used to construct a 62-item Likert-type scale asking individuals to rate themselves on certain criteria. After a pilot study with 92 preservice teachers and…

  2. Women's Liberation Scale (WLS): A Measure of Attitudes Toward Positions Advocated by Women's Groups.

    ERIC Educational Resources Information Center

    Goldberg, Carlos

    The Women's Liberation Scale (WLS) is a 14-item, Likert-type scale designed to measure attitudes toward positions advocated by women's groups. The WLS and its four-alternative response schema is presented, along with descriptive statistics of scores based on male and female college samples. Reliability and validity measures are reported, and the…

  3. Biomechanics of 4-point seat belt systems in frontal impacts.

    PubMed

    Rouhana, Stephen W; Bedewi, Paul G; Kankanala, Sundeep V; Prasad, Priya; Zwolinski, Joseph J; Meduvsky, Alex G; Rupp, Jonathan D; Jeffreys, Thomas A; Schneider, Lawrence W

    2003-01-01

    The biomechanical behavior of 4-point seat belt systems was investigated through MADYMO modeling, dummy tests and post mortem human subject tests. This study was conducted to assess the effect of 4-point seat belts on the risk of thoracic injury in frontal impacts, to evaluate the ability to prevent submarining under the lap belt using 4-point seat belts, and to examine whether 4-point belts may induce injuries not typically observed with 3-point seat belts. The performance of two types of 4-point seat belts was compared with that of a pretensioned, load-limited, 3-point seat belt. A 3-point belt with an extra shoulder belt that "crisscrossed" the chest (X4) appeared to add constraint to the torso and increased chest deflection and injury risk. Harness style shoulder belts (V4) loaded the body in a different biomechanical manner than 3-point and X4 belts. The V4 belt appeared to shift load to the clavicles and pelvis and to reduce traction of the shoulder belt across the chest, resulting in a reduction in chest deflection by a factor of two. This is associated with a 5 to 500-fold reduction in thoracic injury risk, depending on whether one assumes 4-point belts apply concentrated or distributed load. In four of six post mortem human subjects restrained by V4 belts during 40 km/h sled tests, chest compression was zero or negative and rib fractures were nearly eliminated. Submarining was not observed in any test with post mortem human subjects. Though lumbar, sacral and pelvic injuries were noted, they are believed to be due to the artificial restraint environment (no knee bolsters, instrument panels, steering systems or airbags). While they show significant potential to reduce thoracic injury risk, there are still many issues to be resolved before 4-point belts can be considered for production vehicles. These issues include, among others, potential effects on hard and soft neck tissues, of interaction with inboard shoulder belts in farside impacts and potential

  4. The Effects of Changes in the Order of Verbal Labels and Numerical Values on Children's Scores on Attitude and Rating Scales

    ERIC Educational Resources Information Center

    Betts, Lucy; Hartley, James

    2012-01-01

    Research with adults has shown that variations in verbal labels and numerical scale values on rating scales can affect the responses given. However, few studies have been conducted with children. The study aimed to examine potential differences in children's responses to Likert-type rating scales according to their anchor points and scale…

  5. A Scale for E-Content Preparation Skills: Development, Validity and Reliability

    ERIC Educational Resources Information Center

    Tekin, Ahmet; Polat, Ebru

    2016-01-01

    Problem Statement: For an effective teaching and learning process it is critical to provide support for teachers in the development of e-content, and teachers should play an active role in this development. Purpose of the Study: The purpose of this study is to develop a valid and reliable Likert-type scale that will determine pre-service teachers'…

  6. In Search of the Optimal Number of Response Categories in a Rating Scale

    ERIC Educational Resources Information Center

    Lee, Jihyun; Paek, Insu

    2014-01-01

    Likert-type rating scales are still the most widely used method when measuring psychoeducational constructs. The present study investigates a long-standing issue of identifying the optimal number of response categories. A special emphasis is given to categorical data, which were generated by the Item Response Theory (IRT) Graded-Response Modeling…

  7. The Development of a Competence Scale for Learning Science: Inquiry and Communication

    ERIC Educational Resources Information Center

    Chang, Huey-Por; Chen, Chin-Chang; Guo, Gwo-Jen; Cheng, Yeong-Jin; Lin, Chen-Yung; Jen, Tsung-Hau

    2011-01-01

    The objective of this study was to develop an instrument to measure school students' competence in learning science as part of a large research project in Taiwan. The instrument consisted of 29 self-report, Likert-type items divided into 2 scales: Competence in Scientific Inquiry and Competence in Communication. The Competence in Scientific…

  8. Measuring Pretest-Posttest Change with a Rasch Rating Scale Model.

    ERIC Educational Resources Information Center

    Wolfe, Edward W.; Chiu, Chris W. T.

    1999-01-01

    Describes a method for disentangling changes in persons from changes in the interpretation of Likert-type questionnaire items and the use of rating scales. The procedure relies on anchoring strategies to create a common frame of reference for interpreting measures taken at different times. Illustrates the use of these procedures using the FACETS…

  9. Exploratory Factor Analysis Study for the Scale of High School Students' Attitudes towards Chemistry

    ERIC Educational Resources Information Center

    Demircioglu, Gökhan; Aslan, Aysegül; Yadigaroglu, Mustafa

    2014-01-01

    It is important to develop students' positive attitudes toward chemistry lessons in school because research has suggested that attitudes are linked with academic achievement. Therefore, how to evaluate these attitudes is an important topic in education. The purpose of this study was to develop a Likert-type scale that could measure high school students'…

  10. Evaluation of Social Cognitive Scaling Response Options in the Physical Activity Domain

    ERIC Educational Resources Information Center

    Rhodes, Ryan E.; Matheson, Deborah Hunt; Mark, Rachel

    2010-01-01

    The purpose of this study was to compare the reliability, variability, and predictive validity of two common scaling response formats (semantic differential, Likert-type) and two numbers of response options (5-point, 7-point) in the physical activity domain. Constructs of the theory of planned behavior were chosen in this analysis based on its…

  11. A 4×4 point to point router based on microring resonators

    NASA Astrophysics Data System (ADS)

    Lu, Huanyu; Yang, Junbo; Zhang, Jingjing; Wu, Wenjun; Huang, Jie; Yang, Yuanjie

    2015-10-01

    A new 4×4 point-to-point router is investigated with the transfer matrix method. Its routing paths and low power loss are successfully demonstrated. The proposed design is easily integrated into a larger-scale system with fewer microring resonators, and the power loss from the input port to the output port is demonstrated to be lower than 10%. All of the microrings designed here have identical radii of 6.98 μm, and they are all in resonance at a wavelength of 1550 nm. Both the gap between the microring and the bus waveguide and the gap between two neighbouring rings are 100 nm. The width of the bus waveguide as well as the microrings is designed to be 200 nm. The free spectral range (FSR) is expected to be around 17 nm based on the parameters above. A large extinction ratio (ER) is also achieved, which indicates high coupling efficiency. Thermal tuning is employed to switch the microrings into or out of resonance, except for the two microring resonators in the middle. In other words, those two microrings are always in resonance and transport signals when input signals pass by them. Hence, only two microrings need to be addressed to route a signal. Although this architecture is blocking and does not support multicasting or multiplexing, it is a valuable effort that could be applicable to some on-chip optical experiments, such as optical interconnection and optical routing.
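
    The quoted free spectral range follows from the standard ring-resonator relation (a textbook formula, not taken from this record):

        \mathrm{FSR} \approx \frac{\lambda^{2}}{n_{g}\,2\pi R}

    With the stated λ = 1550 nm and R = 6.98 μm, an FSR of about 17 nm corresponds to a group index n_g of roughly 3.2.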

  12. Appropriate Statistical Analysis for Two Independent Groups of Likert-Type Data

    ERIC Educational Resources Information Center

    Warachan, Boonyasit

    2011-01-01

    The objective of this research was to determine the robustness and statistical power of three different methods for testing the hypothesis that ordinal samples of five and seven Likert categories come from equal populations. The three methods are the two sample t-test with equal variances, the Mann-Whitney test, and the Kolmogorov-Smirnov test. In…
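
    All three procedures named above are available from a single library; the snippet below is only an illustrative sketch on simulated 5-category data (not the author's analysis or data):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        group_a = rng.integers(1, 6, size=100)                            # Likert responses 1..5
        group_b = np.clip(group_a + rng.integers(0, 2, size=100), 1, 5)   # slightly shifted group

        t_stat, t_p = stats.ttest_ind(group_a, group_b, equal_var=True)   # two-sample t-test, equal variances
        u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
        ks_stat, ks_p = stats.ks_2samp(group_a, group_b)

        print(f"t: p={t_p:.3f}  Mann-Whitney: p={u_p:.3f}  Kolmogorov-Smirnov: p={ks_p:.3f}")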

  13. Report on 3- and 4-Point Correlation Statistics in COBE DMR Anisotropy Maps

    NASA Technical Reports Server (NTRS)

    Hinshaw, Gary; Gorski, Krzystof M.; Bennett, Charles L.; Banday, Anthony J.

    1996-01-01

    As part of the work performed under this contract, we have computed the 3- and 4-point correlation functions of the COBE-DMR 2-year and 4-year anisotropy maps. The results of our work showed that the 3-point correlation function is consistent with zero and that the 4-point function is not a very sensitive probe of non-Gaussian behavior in the COBE-DMR data.

  14. Report on 3- and 4-point correlation statistics in the COBE DMR anisotropy maps

    NASA Technical Reports Server (NTRS)

    Hinshaw, Gary (Principal Investigator); Gorski, Krzystof M.; Banday, Anthony J.; Bennett, Charles L.

    1996-01-01

    As part of the work performed under NASA contract # NAS5-32648, we have computed the 3-point and 4-point correlation functions of the COBE-DMR 2-year and 4-year anisotropy maps. The motivation for this study was to search for evidence of non-Gaussian statistical fluctuations in the temperature maps: skewness or asymmetry in the case of the 3-point function, kurtosis in the case of the 4-point function. Such behavior would have very significant implications for our understanding of the processes of galaxy formation, because our current models of galaxy formation predict that non-Gaussian features should not be present in the DMR maps. The results of our work showed that the 3-point correlation function is consistent with zero and that the 4-point function is not a very sensitive probe of non-Gaussian behavior in the COBE-DMR data. Our computation and analysis of 3-point correlations in the 2-year DMR maps was published in the Astrophysical Journal Letters, volume 446, page L67, 1995. Our computation and analysis of 3-point correlations in the 4-year DMR maps will be published, together with some additional tests, in the June 10, 1996 issue of the Astrophysical Journal Letters. Copies of both of these papers are attached as an appendix to this report.
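
    In standard notation (not quoted from the report), the statistics in question are the angular averages

        C_3 = \left\langle \Delta T(\hat{n}_1)\,\Delta T(\hat{n}_2)\,\Delta T(\hat{n}_3) \right\rangle , \qquad C_4 = \left\langle \Delta T(\hat{n}_1)\,\Delta T(\hat{n}_2)\,\Delta T(\hat{n}_3)\,\Delta T(\hat{n}_4) \right\rangle

    taken over all pixel triples and quadruples at fixed angular separations. For a Gaussian temperature field the 3-point function vanishes and the 4-point function reduces to sums of products of 2-point functions, so residual skewness or excess kurtosis in these estimators is precisely the non-Gaussian signature being sought.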

  15. Measuring pretest-posttest change with a Rasch Rating Scale Model.

    PubMed

    Wolfe, E W; Chiu, C W

    1999-01-01

    When measures are taken on the same individual over time, it is difficult to determine whether observed differences are the result of changes in the person or changes in other facets of the measurement situation (e.g., interpretation of items or use of rating scale). This paper describes a method for disentangling changes in persons from changes in the interpretation of Likert-type questionnaire items and the use of rating scales (Wright, 1996a). The procedure relies on anchoring strategies to create a common frame of reference for interpreting measures that are taken at different times and provides a detailed illustration of how to implement these procedures using FACETS.
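
    For reference, the rating scale model underlying this approach can be written in its usual Andrich form (standard notation, not quoted from the paper):

        P(X_{ni}=x) = \frac{\exp\!\left(\sum_{k=1}^{x}(\theta_n - \delta_i - \tau_k)\right)}{\sum_{j=0}^{m}\exp\!\left(\sum_{k=1}^{j}(\theta_n - \delta_i - \tau_k)\right)}

    where θ_n is the person measure, δ_i the item difficulty, τ_k the rating scale thresholds shared by all items, and the empty sum for j = 0 is defined as zero. The anchoring strategy described above can be understood as fixing a subset of the δ_i and τ_k at their pretest values so that posttest person measures are expressed in the same frame of reference.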

  16. Using Visual Analogue Scales in eHealth: Non-Response Effects in a Lifestyle Intervention

    PubMed Central

    Reips, Ulf-Dietrich; Wienert, Julian; Lippke, Sonia

    2016-01-01

    Background Visual analogue scales (VASs) have been shown to be valid measurement instruments and a better alternative to Likert-type scales in Internet-based research, both empirically and theoretically [1,2]. Upsides include more differentiated responses, better measurement level, and less error. Their feasibility and properties in the context of eHealth, however, have not been examined so far. Objective The present study examined VASs in the context of a lifestyle study conducted online, measuring the impact of VASs on distributional properties and non-response. Method A sample of 446 participants with a mean age of 52.4 years (standard deviation (SD) = 12.1) took part in the study. The study was carried out as a randomized controlled trial, aimed at supporting participants over 8 weeks with an additional follow-up measurement. In addition to the randomized questionnaire, participants were further randomly assigned to either a Likert-type or VAS response scale version of the measures. Results Results showed that SDs were lower for items answered via VASs, 2P (Y ≥ 47 | n=55, P=.5) < .001. Means did not differ across versions. Participants in the VAS version showed lower dropout rates than participants in the Likert version, odds ratio = 0.75, 90% CI (0.58-0.98), P=.04. Number of missing values did not differ between questionnaire versions. Conclusions The VAS is shown to be a valid instrument in the eHealth context, offering advantages over Likert-type scales. The results of the study provide further support for the use of VASs in Internet-based research, extending the scope to senior samples in the health context. Trial Registration Clinicaltrials.gov NCT01909349; https://clinicaltrials.gov/ct2/show/NCT01909349 (Archived by WebCite at http://www.webcitation.org/6h88sLw2Y) PMID:27334562
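
    The probability statement quoted in the Results reads as a two-sided binomial sign test, with 47 of the 55 items showing a lower SD under the VAS version. A quick check (illustrative only, not the authors' code) reproduces the inequality:

        from scipy.stats import binomtest

        result = binomtest(47, n=55, p=0.5, alternative="greater")
        print(2 * result.pvalue)   # doubling the one-tailed probability gives a value well below .001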

  17. Measures to Combat Research Phobia among Undergraduates for Knowledge Creation in Imo State

    ERIC Educational Resources Information Center

    Ihebereme, Chioma I.

    2012-01-01

    The study examined the measures to combat research phobia among undergraduates in order to achieve knowledge creation. The study used Alvan Ikoku Federal College of Education Owerri in Imo State as case study. An 11-item four point Likert-type scale of Agreed (A) = 4 points, Strongly Agreed (SA) = 3 points, Disagreed (D) = 2 points and Strongly…

  18. Arthroscopic 4-Point Suture Fixation of Anterior Cruciate Ligament Tibial Avulsion Fractures

    PubMed Central

    Boutsiadis, Achilleas; Karataglis, Dimitrios; Agathangelidis, Filon; Ditsios, Konstantinos; Papadopoulos, Pericles

    2014-01-01

    Tibial eminence avulsion fractures are rare injuries occurring mainly in adolescents and young adults. When necessary, regardless of patient age, anatomic reduction and stable internal fixation are mandatory for fracture healing and accurate restoration of normal knee biomechanics. Various arthroscopically assisted fixation methods with sutures, anchors, wires, or screws have been described but can be technically demanding, thus elongating operative times. The purpose of this article is to present a technical variation of arthroscopic suture fixation of anterior cruciate ligament avulsion fractures. Using thoracic drain needles over 2.4-mm anterior cruciate ligament tibial guidewires, we recommend the safe and easy creation of four 2.9-mm tibial tunnels at different angles and at specific points. This technique uses thoracic drain needles as suture passage cannulas and offers 4-point fixation stability, avoiding potential complications of bony bridge fracture and tunnel connection. PMID:25685674

  19. Scale Development for Measuring and Predicting Adolescents’ Leisure Time Physical Activity Behavior

    PubMed Central

    Ries, Francis; Romero Granados, Santiago; Arribas Galarraga, Silvia

    2009-01-01

    The aim of this study was to develop a scale for assessing and predicting adolescents' physical activity behavior in Spain and Luxembourg using the Theory of Planned Behavior as a framework. The sample comprised 613 Spanish (boys = 309, girls = 304; M age = 15.28, SD = 1.127) and 752 Luxembourgish adolescents (boys = 343, girls = 409; M age = 14.92, SD = 1.198), selected from students of two secondary schools in both countries, with a similar socio-economic status. The initial 43 items were all scored on a 4-point response format using the structured alternative format and translated into Spanish, French and German. In order to ensure the accuracy of the translation, standardized parallel back-translation techniques were employed. Following two pilot tests and subsequent revisions, a second-order exploratory factor analysis with oblimin direct rotation was used for factor extraction. Internal consistency and test-retest reliabilities were also tested. The 4-week test-retest correlations confirmed the items' time stability. The same five factors were obtained, explaining 63.76% and 63.64% of the total variance in both samples. Internal consistency for the five factors ranged from α = 0.759 to α = 0.949 in the Spanish sample and from α = 0.735 to α = 0.952 in the Luxembourgish sample. For both samples, inter-factor correlations were all reported significant and positive, except for Factor 5 where they were significant but negative. The high internal consistency of the subscales, the reported item test-retest reliabilities and the identical factor structure confirm the adequacy of the elaborated questionnaire for assessing the TPB-based constructs when used with a population of adolescents in Spain and Luxembourg. The results give some indication that they may have value in measuring the hypothesized TPB constructs for PA behavior in a cross-cultural context. Key points: When using the structured alternative format, weak internal consistency was obtained

  20. Nonlinear electric reaction arising in dry bone subjected to 4-point bending

    NASA Astrophysics Data System (ADS)

    Murasawa, Go; Cho, Hideo; Ogawa, Kazuma

    2007-04-01

    Bone is a smart, self-adaptive and partly self-repairing tissue. In recent years, many researchers have sought to determine how to apply effective mechanical stimulation to bone, because loading is the predominant factor that determines bone shape and macroscopic structure. However, efforts to regenerate bone are still under way. It has long been known that an electrical potential is generated in bone by mechanical stimulation (Yasuda, 1977; Williams, 1982; Starkebaum, 1979; Cochran, 1968; Lanyon, 1977; Salzstein, 1987a,b; Friedenberg, 1966); this is called the "stress-generated potential" (SGP). The process of information transfer between strain and cells is still not clear, but SGP may play a part in it. If the electrical potential under various mechanical loadings were better understood, it might become possible to regenerate bone artificially; it is therefore important to investigate SGP in detail. The aim of the present study is to investigate the electric reaction arising in dry bone subjected to mechanical loading at high-amplitude, low-frequency strain. First, a specimen is fabricated from a cow femur. Next, the speeds of wave propagation in the bone are measured by a laser ultrasonic technique and wavelet transform, because these are related to bone density. Then a 4-point bending test is conducted up to fracture, and the electric reaction arising in the bone is measured during loading. Finally, cyclic 4-point bending tests are conducted to investigate the electric reaction arising in bone at low-frequency strain.
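
    For context, in the standard 4-point bend configuration the segment between the two inner loading points carries a constant bending moment, which is why the geometry is favoured for probing material response; in textbook form (not specific to this study, and the bone specimens here need not be prismatic):

        M = \frac{F a}{2}, \qquad \sigma_{\max} = \frac{M c}{I} = \frac{3 F a}{b d^{2}} \ \text{(rectangular cross-section)}

    where F is the total applied load, a the distance from each outer support to the nearest inner loading point, and b and d the specimen width and depth.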

  1. Moderated regression analysis and Likert scales: too coarse for comfort.

    PubMed

    Russell, C J; Bobko, P

    1992-06-01

    One of the most commonly accepted models of relationships among three variables in applied industrial and organizational psychology is the simple moderator effect. However, many authors have expressed concern over the general lack of empirical support for interaction effects reported in the literature. We demonstrate in the current sample that use of a continuous dependent-response scale instead of a discrete, Likert-type scale causes moderated regression analysis effect sizes to increase by an average of 93%. We suggest that use of relatively coarse Likert scales to measure fine dependent responses causes information loss that, although varying widely across subjects, greatly reduces the probability of detecting true interaction effects. Specific recommendations for alternate research strategies are made. PMID:1601825
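
    The design being discussed is an interaction (moderated) regression of the form y = b0 + b1 x + b2 z + b3 xz. The sketch below uses made-up data and hypothetical variable names (not the authors' data or code) to show the interaction term estimated once on a continuous response and once on a coarse 5-point version of the same response; the paper's argument is that the coarse version loses information and hence power:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        n = 200
        x = rng.normal(size=n)                      # predictor
        z = rng.normal(size=n)                      # moderator
        y_cont = 0.4 * x + 0.3 * z + 0.25 * x * z + rng.normal(size=n)
        y_likert = np.clip(np.round(y_cont) + 3, 1, 5)   # coarse 5-point version of the same response

        df = pd.DataFrame({"x": x, "z": z, "y_cont": y_cont, "y_likert": y_likert})
        fit_cont = smf.ols("y_cont ~ x * z", data=df).fit()      # 'x * z' expands to x + z + x:z
        fit_likert = smf.ols("y_likert ~ x * z", data=df).fit()
        # Compare the interaction estimate and its standard error under the two response scalings.
        print(fit_cont.params["x:z"], fit_cont.bse["x:z"])
        print(fit_likert.params["x:z"], fit_likert.bse["x:z"])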

  2. SiC-CMC-Zircaloy-4 Nuclear Fuel Cladding Performance during 4-Point Tubular Bend Testing

    SciTech Connect

    IJ van Rooyen; WR Lloyd; TL Trowbridge; SR Novascone; KM Wendt; SM Bragg-Sitton

    2013-09-01

    The U.S. Department of Energy Office of Nuclear Energy (DOE NE) established the Light Water Reactor Sustainability (LWRS) program to develop technologies and other solutions to improve the reliability, sustain the safety, and extend the life of current reactors. The Advanced LWR Nuclear Fuel Development Pathway in the LWRS program encompasses strategic research focused on improving reactor core economics and safety margins through the development of an advanced fuel cladding system. Recent investigations of potential options for “accident tolerant” nuclear fuel systems point to the potential benefits of silicon carbide (SiC) cladding. One of the proposed SiC-based fuel cladding designs being investigated incorporates a SiC ceramic matrix composite (CMC) as a structural material supplementing an internal Zircaloy-4 (Zr-4) liner tube, referred to as the hybrid clad design. Characterization of the advanced cladding designs will include a number of out-of-pile (nonnuclear) tests, followed by in-pile irradiation testing of the most promising designs. One of the out-of-pile characterization tests provides measurement of the mechanical properties of the cladding tube using four point bend testing. Although the material properties of the different subsystems (materials) will be determined separately, in this paper we present results of 4-point bending tests performed on fully assembled hybrid cladding tube mock-ups, an assembled Zr-4 cladding tube mock-up as a standard, and initial testing results on bare SiC-CMC sleeves to assist in defining design parameters. The hybrid mock-up samples incorporated SiC-CMC sleeves fabricated with 7 polymer impregnation and pyrolysis (PIP) cycles. To provide comparative information, both 1- and 2-ply braided SiC-CMC sleeves were used in this development study. Preliminary stress simulations were performed using the BISON nuclear fuel performance code to show the stress distribution differences for varying lengths between loading points

  3. Scales

    MedlinePlus

    Scales are a visible peeling or flaking of outer skin layers. These layers are called the stratum ... Scales may be caused by dry skin, certain inflammatory skin conditions, or infections. Eczema, ringworm, and psoriasis ...

  4. Internal consistency of a five-item form of the Francis Scale of Attitude Toward Christianity among adolescent students.

    PubMed

    Campo-Arias, Adalberto; Oviedo, Heidi Celina; Cogollo, Zuleima

    2009-04-01

    The short form of the Francis Scale of Attitude Toward Christianity (L. J. Francis, 1992) is a 7-item Likert-type scale that shows high homogeneity among adolescents. The psychometric performance of a shorter version of this scale has not been explored. The authors aimed to determine the internal consistency of a 5-item form of the Francis Scale of Attitude Toward Christianity among 405 students from a school in Cartagena, Colombia. The authors computed Cronbach's alpha coefficient for the 5 items with the highest corrected item-total correlations. The version without Items 2 and 7 showed internal consistency of .87. The 5-item version of the Francis Scale of Attitude Toward Christianity exhibited higher internal consistency than did the 7-item version. Future researchers should corroborate this finding. PMID:19425361
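
    A minimal sketch of the two quantities used in this study, corrected item-total correlations and Cronbach's alpha, computed here on a simulated matrix of 7 Likert-type items (illustrative only, not the authors' data):

        import numpy as np

        def cronbach_alpha(items):
            """items: array of shape (n_respondents, k_items)."""
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1)
            total_var = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

        def corrected_item_total(items):
            """Correlation of each item with the summed score of the remaining items."""
            k = items.shape[1]
            return np.array([
                np.corrcoef(items[:, j], np.delete(items, j, axis=1).sum(axis=1))[0, 1]
                for j in range(k)
            ])

        rng = np.random.default_rng(3)
        latent = rng.normal(size=(300, 1))
        data = np.clip(np.round(latent + rng.normal(scale=0.8, size=(300, 7))) + 3, 1, 5)

        print(cronbach_alpha(data))
        print(corrected_item_total(data))   # retaining the items with the highest values mirrors the short-form strategy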

  5. Scale

    ERIC Educational Resources Information Center

    Schaffhauser, Dian

    2009-01-01

    The common approach to scaling, according to Christopher Dede, a professor of learning technologies at the Harvard Graduate School of Education, is to jump in and say, "Let's go out and find more money, recruit more participants, hire more people. Let's just keep doing the same thing, bigger and bigger." That, he observes, "tends to fail, and fail…

  6. Scales

    SciTech Connect

    Murray Gibson

    2007-04-27

    Musical scales involve notes that, sounded simultaneously (chords), sound good together. The result is the left brain meeting the right brain — a Pythagorean interval of overlapping notes. This synergy would suggest less difference between the working of the right brain and the left brain than common wisdom would dictate. The pleasing sound of harmony comes when two notes share a common harmonic, meaning that their frequencies are in simple integer ratios, such as 3/2 (G/C) or 5/4 (E/C).
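
    As a worked example of the integer ratios mentioned here (taking the conventional reference value C4 ≈ 261.6 Hz, which is not given in the record itself): G4 = (3/2) × 261.6 ≈ 392.4 Hz and E4 = (5/4) × 261.6 ≈ 327.0 Hz. The third harmonic of C4 (≈ 784.8 Hz) then coincides with the second harmonic of G4, and the fifth harmonic of C4 (≈ 1308 Hz) with the fourth harmonic of E4, which is the shared-harmonic condition described above.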

  7. Scales

    ScienceCinema

    Murray Gibson

    2016-07-12

    Musical scales involve notes that, sounded simultaneously (chords), sound good together. The result is the left brain meeting the right brain — a Pythagorean interval of overlapping notes. This synergy would suggest less difference between the working of the right brain and the left brain than common wisdom would dictate. The pleasing sound of harmony comes when two notes share a common harmonic, meaning that their frequencies are in simple integer ratios, such as 3/2 (G/C) or 5/4 (E/C).

  8. Selection, Optimization, and Compensation: The Structure, Reliability, and Validity of Forced-Choice versus Likert-Type Measures in a Sample of Late Adolescents

    ERIC Educational Resources Information Center

    Geldhof, G. John; Gestsdottir, Steinunn; Stefansson, Kristjan; Johnson, Sara K.; Bowers, Edmond P.; Lerner, Richard M.

    2015-01-01

    Intentional self-regulation (ISR) undergoes significant development across the life span. However, our understanding of ISR's development and function remains incomplete, in part because the field's conceptualization and measurement of ISR vary greatly. A key sample case involves how Baltes and colleagues' Selection, Optimization,…

  9. Development and Validation of a Scale to Assess Students' Attitude towards Animal Welfare

    NASA Astrophysics Data System (ADS)

    Mazas, Beatriz; Rosario Fernández Manzanal, Mª; Zarza, Francisco Javier; Adolfo María, Gustavo

    2013-07-01

    This work presents the development of a scale of attitudes of secondary-school and university students towards animal welfare. A questionnaire was drawn up following a Likert-type scale attitude assessment model. Four components or factors, which globally measure animal welfare, are proposed to define the object of the attitude. The components are animal abuse for pleasure or due to ignorance (C1), leisure with animals (C2), farm animals (C3) and animal abandonment (C4). The final version of the questionnaire contains 29 items that are evenly distributed among the four components indicated, guaranteeing that each component is one-dimensional. A sample of 329 students was used to validate the scale. These students were aged between 11 and 25, and were from secondary schools in Aragon and the University in Zaragoza (Aragon's main and largest city, located in NE Spain). The scale shows good internal reliability, with a Cronbach's alpha value of 0.74. The questionnaire was later given to 1,007 students of similar levels and ages to the sample used in the validation, the results of which are presented in this study. The most relevant results show significant differences in gender and level of education in some of the components of the scale, observing that women and university students rate animal welfare more highly.

  10. Factors influencing superimposition error of 3D cephalometric landmarks by plane orientation method using 4 reference points: 4 point superimposition error regression model.

    PubMed

    Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul

    2014-01-01

    Superimposition has been used as a method to evaluate the changes of orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating 3 dimensional changes after treatment became possible by superimposition. 4 point plane orientation is one of the simplest ways to achieve superimposition of 3 dimensional images. To find factors influencing superimposition error of cephalometric landmarks by 4 point plane orientation method and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had normal skeletal and occlusal relationship and took CBCT for diagnosis of temporomandibular disorder. The nasion, sella turcica, basion and midpoint between the left and the right most posterior point of the lesser wing of sphenoidal bone were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Another 15 reference cephalometric points were also determined three times in the same image. Reorientation error of each landmark could be explained substantially (23%) by linear regression model, which consists of 3 factors describing position of each landmark towards reference axes and locating error. 4 point plane orientation system may produce an amount of reorientation error that may vary according to the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and shift of reference axes viewed from each landmark increases. Therefore, in order to reduce the reorientation error, accuracy of all landmarks including the reference points is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model. PMID:25372707

  11. Reliability and validity of the Student Perceptions of School Cohesion Scale in a sample of Salvadoran secondary school students

    PubMed Central

    2009-01-01

    Background Despite a growing body of research from the United States and other industrialized countries on the inverse association between supportive social relationships in the school and youth risk behavior engagement, research on the measurement of supportive school social relationships in Central America is limited. We examined the psychometric properties of the Student Perceptions of School Cohesion (SPSC) scale, a 10-item scale that asks students to rate with a 5-point Likert-type response scale their perceptions of the school social environment, in a sample of public secondary school students (mean age = 15 years) living in central El Salvador. Methods Students (n = 982) completed a self-administered questionnaire that included the SPSC scale along with measures of youth health risk behaviors based on the Center for Disease Control and Prevention's Youth Risk Behavior Survey. Exploratory factor analysis was used to assess the factor structure of the scale, and two internal consistency estimates of reliability were computed. Construct validity was assessed by examining whether students who reported low school cohesion were significantly more likely to report physical fighting and illicit drug use. Results Results indicated that the SPSC scale has three latent factors, which explained 61.6% of the variance: supportive school relationships, student-school connectedness, and student-teacher connectedness. The full scale and three subscales had good internal consistency (rs = .87 and α = .84 for the full scale; rs and α between .71 and .75 for the three subscales). Significant associations were found between the full scale and all three subscales with physical fighting (p ≤ .001) and illicit drug use (p < .05). Conclusion Findings provide evidence of reliability and validity of the SPSC for the measurement of supportive school relationships in Latino adolescents living in El Salvador. These findings provide a foundation for further research on school cohesion
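
    As a generic sketch of the factor-extraction step (simulated data, not the authors' analysis; published EFAs of this kind typically also apply a rotation, which is omitted here):

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        rng = np.random.default_rng(4)
        n, k = 500, 10                                           # 10 items, as in the SPSC scale
        latent = rng.normal(size=(n, 3)) @ rng.normal(scale=0.7, size=(3, k))   # 3 latent factors
        items = np.clip(np.round(latent + rng.normal(scale=0.6, size=(n, k))) + 3, 1, 5)

        fa = FactorAnalysis(n_components=3).fit(items)
        loadings = fa.components_.T                              # rows = items, columns = factors
        print(np.round(loadings, 2))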

  12. Scale development to measure attitudes toward unauthorized migration into a foreign country.

    PubMed

    VAN DER Veer, Kees; Ommundsen, Reidar; Krumov, Krum; VAN LE, Hao; Larsen, Knud S

    2008-08-01

    This study reports on the development and cross-national utility of a Likert-type scale measuring attitudes toward unauthorized migration into a foreign country in two samples from "migrant-sending" nations. In the first phase, a pool of 86 attitude statements was administered to a sample of 505 undergraduate students in Bulgaria (22.5% male; M age = 23, SD = 4.8). Exploratory factor analysis resulted in six factors and a reduction to 34 items. The results yielded an overall alpha of 0.92 and alphas for subscales ranging from 0.70 to 0.89. In the second phase, the 34-item scale was administered in a survey of 180 undergraduates from Sofia University in Bulgaria (16.7% male, M age = 23, SD = 4.8), plus 150 undergraduates from Hanoi State University in Vietnam (14.7% male, M age = 19, SD = 1.8). Results yielded a 19-item scale with no gender differences and satisfactory alpha coefficients of 0.87 and 0.89 for the Vietnamese and Bulgarian samples respectively. This scale, equally applicable in both samples, includes items that reflect salient aspects of attitudes towards unauthorized migration. An exploratory principal component analysis of the Bulgarian and Vietnamese versions of the 19-item scale yielded three factors accounting for 54% and 47% of the variance respectively. A procrustes analysis indicates high conceptual equivalence between the two samples for factors 1 and 2, and moderate equivalence for factor 3. This study lends support to the idea that despite different cultural meanings there is a common meaning space in culturally diverse societies.

  13. A preliminary study to measure and develop job satisfaction scale for medical teachers

    PubMed Central

    Bhatnagar, Kavita; Srivastava, Kalpana; Singh, Amarjit; Jadav, S.L.

    2011-01-01

    Background: Job satisfaction of medical teachers has an impact on the quality of medical education and patient care. Against this background, the study was planned to develop a scale and measure the job satisfaction status of medical teachers. Materials and Methods: To generate items pertaining to the scale of job satisfaction, closed-ended and open-ended questionnaires were administered to medical professionals. The job satisfaction questionnaire was developed and rated on a Likert-type rating scale. Both quantitative and qualitative methods were used to ascertain job satisfaction among 245 health science faculty of an autonomous educational institution. Factor loadings were calculated and final items with strong factor loadings were selected. Data were statistically evaluated. Results: The average job satisfaction score was 53.97 on a scale of 1–100. The Cronbach's alpha reliability coefficient was 0.918 for the entire set of items. There was a statistically significant difference in job satisfaction level across age groups (P = 0.0358), showing a U-shaped pattern, and between fresh entrants and reemployed faculty (P = 0.0188), with the former showing lower satisfaction. Opportunity for self-development was the biggest satisfier, followed by work, opportunity for promotion, and job security. Factors contributing toward job dissatisfaction were poor utilization of skills, poor promotional prospects, inadequate pay and allowances, work conditions, and work atmosphere. Conclusion: Tertiary care teaching hospitals in autonomous educational institutions need to build infrastructure and create opportunities for their medical professionals. The job satisfaction of young entrants needs to be raised further by improving their work environment. This will pave the way for effective delivery of health care. PMID:23271862

  14. Validation of Scale of Commitment to Democratic Values among Secondary Students

    ERIC Educational Resources Information Center

    Gafoor, K. Abdul

    2015-01-01

    This study reports the development of a reliable and valid instrument for assessing commitment to democratic values among secondary school students in Kerala from 57 Likert-type statements originally developed in 2007 by Gafoor and Thushara to assess commitment to nine values avowed in the Indian Constitution. Nine separate maximum likelihood…

  15. The Impact of Outliers on Cronbach's Coefficient Alpha Estimate of Reliability: Ordinal/Rating Scale Item Responses

    ERIC Educational Resources Information Center

    Liu, Yan; Wu, Amery D.; Zumbo, Bruno D.

    2010-01-01

    In a recent Monte Carlo simulation study, Liu and Zumbo showed that outliers can severely inflate the estimates of Cronbach's coefficient alpha for continuous item response data--visual analogue response format. Little, however, is known about the effect of outliers for ordinal item response data--also commonly referred to as Likert, Likert-type,…

  16. [Standardization of the Greek version of Zung's Self-rating Anxiety Scale (SAS)].

    PubMed

    Samakouri, M; Bouhos, G; Kadoglou, M; Giantzelidou, A; Tsolaki, K; Livaditis, M

    2012-01-01

    The Self-rating Anxiety Scale (SAS), introduced by Zung, has been widely used in research and in clinical practice for the detection of anxiety. The present study aims at standardizing the Greek version of the SAS. The SAS consists of 20 items rated on a 1-4 Likert-type scale. The total SAS score may vary from 20 (no anxiety at all) to 80 (severe anxiety). Two hundred and fifty-four participants (114 male and 140 female), psychiatric patients, physically ill and general population individuals, aged 45.40±11.35 years, completed the following: (a) a demographic characteristics questionnaire, (b) the SAS Greek version, (c) Spielberger's Modified Greek State-Trait Anxiety Scale (STAI-Gr.-X) and (d) the Zung Depression Rating Scale (ZDRS). Seventy-six participants answered the SAS twice within a median period of 12 days. The following parameters were calculated: (a) internal consistency of the SAS in terms of Cronbach's α coefficient, (b) its test-retest reliability in terms of the Intraclass Correlation Coefficient (ICC) and (c) its concurrent and convergent validities through Spearman's rho correlations of its score with both the state and trait subscales of the STAI-Gr.-X and with the ZDRS. In addition, in order to evaluate the discriminant validity of the SAS, the scale scores of the three groups of participants (psychiatric patients, physically ill and general population individuals) were compared with each other using Kruskal-Wallis and Mann-Whitney U tests. The SAS Cronbach's alpha equals 0.897, while the ICC for its test-retest reliability equals 0.913. Spearman's rho concerning validity: (a) when the SAS is compared to the STAI-Gr.-X (state), it equals 0.767, (b) when the SAS is compared to the STAI-Gr.-X (trait), it equals 0.802 and (c) when the SAS is compared to the ZDRS, it equals 0.835. The mentally ill scored significantly higher on the SAS compared to both the healthy and the general population. In conclusion, the Greek version of the SAS presents very satisfactory psychometric properties regarding

  17. Physician leadership styles and effectiveness: an empirical study.

    PubMed

    Xirasagar, Sudha; Samuels, Michael E; Stoskopf, Carleen H

    2005-12-01

    The authors study the association between physician leadership styles and leadership effectiveness. Executive directors of community health centers were surveyed (269 respondents; response rate = 40.9 percent) for their perceptions of the medical director's leadership behaviors and effectiveness, using an adapted Multifactor Leadership Questionnaire (43 items on a 0-4 point Likert-type scale), with additional questions on demographics and the center's clinical goals and achievements. The authors hypothesize that transformational leadership would be more positively associated with executive directors' ratings of effectiveness, satisfaction with the leader, and subordinate extra effort, as well as the center's clinical goal achievement, than transactional or laissez-faire leadership. Separate ordinary least squares regressions were used to model each of the effectiveness measures, and general linear model regression was used to model clinical goal achievement. Results support the hypothesis and suggest that physician leadership development using the transformational leadership model may result in improved health care quality and cost control. PMID:16330822

  18. Validity of the Children's Orientation to Book Reading Rating Scale

    ERIC Educational Resources Information Center

    Kaderavek, Joan N.; Guo, Ying; Justice, Laura M.

    2014-01-01

    The present study investigates the validity of a 4-point rating scale used to measure the level of preschool children's orientation to literacy during shared book reading. Validity was explored by (a) comparing the children's level of literacy orientation as measured with the "Children's Orientation to Book Reading Rating…

  19. Scale and scaling in soils

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Scale is recognized as a central concept in the description of the hierarchical organization of our world. Pressing environmental and societal problems require an understanding of how processes operate at different scales, and how they can be linked across scales. Soil science as many other dis...

  20. Millimeter Scale.

    ERIC Educational Resources Information Center

    Harvill, Leo M.

    This absolute scale contains nine items, each of which consists of a 100-millimeter vertical line with small division marks every 25 millimeters and the words "high" at the top and "low" at the bottom of the line. Above each of the vertical lines is a word or phrase. For the second-grade scale these words are: arithmetic, counting, adding,…

  1. Activity Scale.

    ERIC Educational Resources Information Center

    Kerpelman, Larry C.; Weiner, Michael J.

    This twenty-four item scale assesses students' actual and desired political-social activism in terms of physical participation, communication activities, and information-gathering activities. About ten minutes are required to complete the instrument. The scale is divided into two subscales. The first twelve items (ACT-A) question respondents on…

  2. Scale interactions

    NASA Astrophysics Data System (ADS)

    Snow, John T.

    Since the time of the first world war, investigation of synoptic processes has been a major focus of atmospheric research. These are the physical processes that drive the continuously evolving pattern of high and low pressure centers and attendant frontal boundaries that are to be seen on continental-scale weather maps. This effort has been motivated both by a spirit of scientific inquiry and by a desire to improve operational weather forecasting by national meteorological services. These national services in turn have supported the development of a global observational network that provides the data required for both operational and research purposes. As a consequence of this research, there now exists a reasonable physical understanding of many of the phenomena found at this synoptic scale. This understanding is reflected in the numerical weather forecast models used by the national services. These have shown significant skill in predicting the evolution of synoptic-scale features for periods extending out to five days.

  3. Scaling Rules!

    NASA Astrophysics Data System (ADS)

    Malkinson, Dan; Wittenberg, Lea

    2015-04-01

    Scaling is a fundamental issue in any spatially or temporally hierarchical system. Defining domains and identifying the boundaries of the hierarchical levels may be a challenging task. Hierarchical systems may be broadly classified into two categories: compartmental and continuous ones. Examples of compartmental systems include governments, companies, computerized networks, biological taxonomy and others. In such systems the compartments, and hence the various levels and their constituents, are easily delineated. In contrast, in continuous systems, such as geomorphological, ecological or climatological ones, detecting the boundaries of the various levels may be difficult. We propose that in continuous hierarchical systems a transition from one functional scale to another is associated with increased system variance. Crossing from the domain of one scale to the domain of another is associated with a transition or substitution of the dominant drivers operating in the system. Accordingly, we suggest that crossing this boundary is characterized by increased variance, or a "variance leap", which stabilizes until crossing to the next domain or hierarchy level. To assess this we compiled sediment yield data from studies conducted at various spatial scales and in different environments. The studies were partitioned into ones conducted in undisturbed environments and those conducted in disturbed environments, specifically by wildfires. The studies were conducted in plots as small as 1 m2 and in watersheds larger than 555,000 ha. Regressing sediment yield against plot size, and incrementally calculating the variance in the systems, enabled us to detect domains where variance values were exceedingly high. We propose that at these domains scale-crossing occurs, and the systems transition from one hierarchical level to another. Moreover, the degree of the "variance leaps" characterizes the degree of connectivity among the scales.
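
    One way to read the procedure sketched above (an interpretation for illustration only, not the authors' code; the function, window size, and threshold are hypothetical) is: regress yield on plot size, then scan the size-ordered residuals with a moving window and flag windows whose variance jumps well above the background level as candidate scale boundaries:

        import numpy as np

        def variance_leaps(area_ha, sediment_yield, window=20, factor=3.0):
            order = np.argsort(area_ha)
            x = np.log10(area_ha[order])
            y = np.log10(sediment_yield[order])
            slope, intercept = np.polyfit(x, y, 1)               # simple log-log regression
            resid = y - (slope * x + intercept)
            win_var = np.array([resid[i:i + window].var() for i in range(len(resid) - window)])
            leaps = np.where(win_var > factor * np.median(win_var))[0]
            return x[leaps]                                      # log10(area) at which variance jumps

        # Hypothetical usage on compiled data arrays `areas` (ha) and `yields`:
        # print(variance_leaps(areas, yields))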

  4. Scaling satan.

    PubMed

    Wilson, K M; Huff, J L

    2001-05-01

    The influence on social behavior of beliefs in Satan and the nature of evil has received little empirical study. Elaine Pagels (1995) in her book, The Origin of Satan, argued that Christians' intolerance toward others is due to their belief in an active Satan. In this study, more than 200 college undergraduates completed the Manitoba Prejudice Scale and the Attitudes Toward Homosexuals Scale (B. Altemeyer, 1988), as well as the Belief in an Active Satan Scale, developed by the authors. The Belief in an Active Satan Scale demonstrated good internal consistency and temporal stability. Correlational analyses revealed that for the female participants, belief in an active Satan was directly related to intolerance toward lesbians and gay men and intolerance toward ethnic minorities. For the male participants, belief in an active Satan was directly related to intolerance toward lesbians and gay men but was not significantly related to intolerance toward ethnic minorities. Results of this research showed that it is possible to meaningfully measure belief in an active Satan and that such beliefs may encourage intolerance toward others. PMID:11577971

  6. Nuclear scales

    SciTech Connect

    Friar, J.L.

    1998-12-01

    Nuclear scales are discussed from the nuclear physics viewpoint. The conventional nuclear potential is characterized as a black box that interpolates nucleon-nucleon (NN) data, while being constrained by the best possible theoretical input. The latter consists of the longer-range parts of the NN force (e.g., OPEP, TPEP, the π-γ force), which can be calculated using chiral perturbation theory and gauged using modern phase-shift analyses. The shorter-range parts of the force are effectively parameterized by moments of the interaction that are independent of the details of the force model, in analogy to chiral perturbation theory. Results of GFMC calculations in light nuclei are interpreted in terms of fundamental scales, which are in good agreement with expectations from chiral effective field theories. Problems with spin-orbit-type observables are noted.

  7. Validating the Rett Syndrome Gross Motor Scale.

    PubMed

    Downs, Jenny; Stahlhut, Michelle; Wong, Kingsley; Syhler, Birgit; Bisgaard, Anne-Marie; Jacoby, Peter; Leonard, Helen

    2016-01-01

    Rett syndrome is a pervasive neurodevelopmental disorder associated with a pathogenic mutation on the MECP2 gene. Impaired movement is a fundamental component and the Rett Syndrome Gross Motor Scale was developed to measure gross motor abilities in this population. The current study investigated the validity and reliability of the Rett Syndrome Gross Motor Scale. Video data showing gross motor abilities supplemented with parent report data was collected for 255 girls and women registered with the Australian Rett Syndrome Database, and the factor structure and relationships between motor scores, age and genotype were investigated. Clinical assessment scores for 38 girls and women with Rett syndrome who attended the Danish Center for Rett Syndrome were used to assess consistency of measurement. Principal components analysis enabled the calculation of three factor scores: Sitting, Standing and Walking, and Challenge. Motor scores were poorer with increasing age and those with the p.Arg133Cys, p.Arg294* or p.Arg306Cys mutation achieved higher scores than those with a large deletion. The repeatability of clinical assessment was excellent (intraclass correlation coefficient for total score 0.99, 95% CI 0.93-0.98). The standard error of measurement for the total score was 2 points and we would be 95% confident that a change of 4 points in the 45-point scale would be greater than within-subject measurement error. The Rett Syndrome Gross Motor Scale could be an appropriate measure of gross motor skills in clinical practice and clinical trials.
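
    For readers checking the arithmetic, one common convention (an assumption here, since the abstract does not state which formula was used) connects the reported standard error of measurement of 2 points to the 4-point threshold as follows:

        % 95% error band around a single observed score
        1.96 \times \mathrm{SEM} = 1.96 \times 2 \approx 4 \ \text{points}
        % the minimal detectable change for a difference between two scores
        % would be larger by a factor of sqrt(2):
        \mathrm{MDC}_{95} = 1.96 \times \sqrt{2} \times \mathrm{SEM} \approx 5.5 \ \text{points}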

  8. Development and Validation of a Falls Grading Scale

    PubMed Central

    Davalos-Bichara, Marcela; Lin, Frank R.; Carey, John P.; Walston, Jeremy D.; Fairman, Jennifer E.; Schubert, Michael C.; Barron, Jeremy S.; Hughes, Jennifer; Millar, Jennifer; Spar, Anne; Weber, Kristy L.; Ying, Howard S.; Zackowski, Kathleen M.; Zee, David

    2013-01-01

    Background and Purpose The recording of fall events is usually subjective and imprecise, which limits clinical practice and falls-related research. We sought to develop and validate a scale to grade near-fall and fall events based on their severity represented by the use of healthcare resources, with the goal of standardizing fall reporting in the clinical and research settings. Methods Qualitative instrument development was based on a literature review and semi-structured interviews to assess face and content validity. We queried older individuals and healthcare professionals with expertise in the care of patients at risk of falling about clinically important differences to detect and how to optimize the scale's ease of use. To assess the scale's inter-rater reliability, we created 30 video-vignettes of falls and compared how healthcare professionals and volunteers rated each of the falls according to our grading scale. Results We developed the illustrated 4-point Hopkins Falls Grading Scale (HFGS). The grades distinguish a near-fall (Grade 1) from a fall for which an individual did not receive medical attention (Grade 2), a fall associated with medical attention but not hospital admission (Grade 3), and a fall associated with hospital admission (Grade 4). Overall, the HFGS exhibited good face and content validity, and had an intraclass correlation coefficient of 0.998. Conclusion The 4-point HFGS demonstrates good face and content validity and high inter-rater reliability. We predict this tool will facilitate the standardization of falls reporting in both the clinical and research settings. PMID:22810170
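
    The four HFGS grades described above lend themselves to a simple decision rule; the sketch below is an illustrative encoding, and the input attribute names are assumptions rather than the instrument's own wording:

        # Hypothetical encoding of the 4-point Hopkins Falls Grading Scale (HFGS).
        from enum import IntEnum

        class HFGS(IntEnum):
            NEAR_FALL = 1           # Grade 1: near-fall
            FALL_NO_CARE = 2        # Grade 2: fall, no medical attention received
            FALL_MEDICAL_CARE = 3   # Grade 3: medical attention, no hospital admission
            FALL_ADMISSION = 4      # Grade 4: fall leading to hospital admission

        def grade_event(fell: bool, received_medical_care: bool, admitted: bool) -> HFGS:
            if not fell:
                return HFGS.NEAR_FALL
            if admitted:
                return HFGS.FALL_ADMISSION
            if received_medical_care:
                return HFGS.FALL_MEDICAL_CARE
            return HFGS.FALL_NO_CARE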

  9. Designing, Testing, and Validating an Attitudinal Survey on an Environmental Topic: A Groundwater Pollution Survey Instrument for Secondary School Students

    ERIC Educational Resources Information Center

    Lacosta-Gabari, Idoya; Fernandez-Manzanal, Rosario; Sanchez-Gonzalez, Dolores

    2009-01-01

    Research in environmental attitudes' assessment has significantly increased in recent years. The development of specific attitude scales for specific environmental problems has often been proposed. This paper describes the Groundwater Pollution Test (GPT), a 19-item survey instrument using a Likert-type scale. The survey has been used with…

  10. Attitude, Gender and Achievement in Computer Programming

    ERIC Educational Resources Information Center

    Baser, Mustafa

    2013-01-01

    The aim of this research was to explore the relationship among students' attitudes toward programming, gender and academic achievement in programming. The scale used for measuring students' attitudes toward programming was developed by the researcher and consisted of 35 five-point Likert type items in four subscales. The scale was administered to…

  11. Rural Principal Attitudes toward Poverty and the Poor

    ERIC Educational Resources Information Center

    Gholson, Melissa L.

    2015-01-01

    This study used Yun and Weaver's (2010) Attitudes toward Poverty Short Form (ATP-SF) of twenty-one items on a Likert-type scale to determine the poverty attitudes of 309 principals in a rural Appalachian state in the United States. The study compared the poverty attitudes from the ATP-SF scaled score as a dependent variable to the following…

  12. An Evaluation of the Effectiveness of the New Primary School Mathematics Curriculum in Practice

    ERIC Educational Resources Information Center

    Gomleksiz, Mehmet Nuri; Bulut, Ilhami

    2007-01-01

    The aim of this study is to determine and compare the views of primary school teachers on the implementation and effectiveness of the new primary school mathematics curriculum. For that aim, a 32-item Likert-type Mathematics Curriculum Scale was developed. The reliability of the scale was tested through Cronbach Alpha (0.98), Spearman-Brown (0.93)…

  13. The Effect on Prospective Teachers of the Learning Environment Supported by Dynamic Statistics Software

    ERIC Educational Resources Information Center

    Koparan, Timur

    2016-01-01

    In this study, the effect of a learning environment supported by dynamic statistics software on the achievement and attitudes of prospective teachers is examined. With this aim, an achievement test, an attitude scale for statistics, and interviews were used as data collection tools. The achievement test comprises 8 problems based on statistical data, and the attitude scale comprises 13 Likert-type items. The study…

  14. Measuring Group Dynamics: An Exploratory Trial

    ERIC Educational Resources Information Center

    Phan, Loan T.; Rivera, Edil Torres; Volker, Martin A.; Garrett, Michael T.

    2004-01-01

    This article reports on the development of a scale used to assess and measure group dynamics during group supervision counselling courses (practicum and internship). A 20-item Likert-type scale was administered to 200 counsellors-in-training master's students. Reliability and validity data are described. An exploratory factor analysis yielded…

  15. The Content of a College-Level Outdoor Leadership Course.

    ERIC Educational Resources Information Center

    Green, Paul

    This research study used the Delphi technique to determine the ideal content of a college-level outdoor leadership course for land-based outdoor pursuits in the Pacific Northwest. Topics were generated and value-rated by 61 Pacific Northwest outdoor leaders using a Likert-type scale in three separate questionnaires. Thirty-five topics were…

  16. P-Type Factor Analyses of Individuals' Thought Sampling Data.

    ERIC Educational Resources Information Center

    Hurlburt, Russell T.; Melancon, Susan M.

    Recently, interest in research measuring stream of consciousness or thought has increased. A study was conducted, based on a previous study by Hurlburt, Lech, and Saltman, in which subjects were randomly interrupted to rate their thoughts and moods on a Likert-type scale. Thought samples were collected from 27 subjects who carried random-tone…

  17. Second Field Test of the AEL Measure of School Capacity for Improvement

    ERIC Educational Resources Information Center

    Copley, Lisa D.; Meehan, Merrill L.; Howley, Caitlin W.; Hughes, Georgia K.

    2005-01-01

    The major purpose of the second field test of the AEL MSCI instrument was to assess the psychometric properties of the refined version with a larger, more diverse group of respondents. The first objective of this field test was to expand the four-point Likert-type response scale to six points in order to yield more variance in responses. The…

  18. Communication and Culture: Interpersonal Attraction.

    ERIC Educational Resources Information Center

    Brown, Lydia Ledesma; Emry, Robert A.

    Cultural differences in interpersonal attraction were studied using 93 black, 112 Chicano, and 112 white college students who completed 40 Likert-type rating scales for each of four concepts of attraction (intimate, friendship, acquaintance, and stranger attraction). When a factor solution was generated, differences were noted in the amount of…

  19. Listener's Preference for Music of Other Cultures: Comparing Response Modes.

    ERIC Educational Resources Information Center

    Brittin, Ruth V.

    1996-01-01

    Compares the preferences of university and middle school students for non-western music as communicated through a Likert-type scale, manipulation of one dial on a Continuous Response Digital Interface (CRDI) during music listening, and manipulation of two dials on a CRDI during music listening. (MJP)

  20. Teacher Leader Human Relations Skills: A Comparative Study

    ERIC Educational Resources Information Center

    Roby, Douglas E.

    2012-01-01

    In this study, 142 graduate school teachers working in schools throughout southwestern Ohio assessed their human relations skills. A human relations survey was used for the study, and the results were compared with colleagues' assessments of the same teachers. The survey was developed using a Likert-type scale and was based on key elements affecting…

  1. The Perceptions of Students, Teachers, and Educational Officers in Ghana on the Role of Computer and the Teacher in Promoting the First Five Principles of Instruction

    ERIC Educational Resources Information Center

    Sarfo, Frederick Kwaku; Ansong-Gyimah, Kwame

    2010-01-01

    This study explored the perceptions of 395 participants (students, teachers, and education officers) in Ghana on the role of the computer and the teacher in promoting the first five principles of instruction for quality teaching and learning. To achieve the aim of the study, five-point Likert-type scales based on the first five principles of…

  2. Evaluating a Geology Curriculum for Non-Majors.

    ERIC Educational Resources Information Center

    Boone, William J.

    Two key factors affecting the success of non-major science courses are students' perceptions of topic difficulty and interest. An attitudinal survey administered to 300 college students, after completion of a college science course, evaluated their attitudes toward a geology curriculum. Using a Likert-type scale, students rated their level of…

  3. Jordanian Social Studies Teachers' Perceptions of Competency Needed for Implementing Technology in the Classroom

    ERIC Educational Resources Information Center

    Al Bataineh, Mohammad; Anderson, Sharon

    2015-01-01

    This study used a cross-sectional, ten-point Likert-type scale survey design, to examine the perception of Jordanian seventh to twelfth-grade social studies teachers of the competency needed for technology implementation in their classrooms. The instrument for this study was a modified version of a survey developed by Kelly (2003) called the…

  4. The Use of Technology by Nonformal Environmental Educators

    ERIC Educational Resources Information Center

    Peffer, Tamara Elizabeth; Bodzin, Alec M.; Smith, Judith Duffield

    2013-01-01

    This study examined the use of instructional and learning technologies by nonformal environmental educators. A 40-question survey was developed to inquire about practitioner demographics, technology use in practice, and beliefs about technology. The survey consisted of multiple choice, open-ended questions, and a Likert-type scale component--the…

  5. Stress in the Lives of College Women: "Lots to Do and Not Much Time"

    ERIC Educational Resources Information Center

    Larson, Elizabeth A.

    2006-01-01

    This study examined how activity and engagement qualities were related to stress. Experience sampling using e-mail pagers collected simultaneous ratings of stress and qualities of activity for 30 college women during 14 days. Surveys included narrative questions about activity types, feelings, and experience and Likert-type scales rating activity…

  6. Psychological Correlates in Dialectolalia.

    ERIC Educational Resources Information Center

    Hurst, Charles G.; And Others

    A total of 1,209 Howard University freshmen representing 42 states were administered (1) a standard audiometric examination, (2) a speech battery, (3) a battery of psychological tests, (4) a speech and language attitude inventory in the form of a 25-item, Likert-type scale, and (5) a socioeconomic assessment inventory containing 42 open-ended and…

  7. Investigation of Primary Students' Motivation Levels towards Science Learning

    ERIC Educational Resources Information Center

    Sevinc, Betul; Ozmen, Haluk; Yigit, Nevzat

    2011-01-01

    The present research was conducted with 518 students enrolled in the 6th, 7th and 8th grades of primary schools. A Likert-type scale developed by Tuan, Chin and Shieh (2005) and translated into Turkish by Yilmaz and Cavas (2007) was used to examine the motivation levels of students towards science learning. Research findings revealed that gender,…

  8. Teacher Satisfaction and Teacher Retention in the State of Hawaii: A Mixed Method Study Using a Modified Delphi Design

    ERIC Educational Resources Information Center

    Pasalo, Ervin Castro

    2012-01-01

    The purpose of this study was to explore the aspects of professional experience that directly impact teacher satisfaction, dissatisfaction, and morale in the State of Hawaii Leeward District Campbell Complex. A Teacher Satisfaction and Teacher Retention Questionnaire combined with a five-point Likert-type scale survey was used to…

  9. College Female Perceptions of Career Directions.

    ERIC Educational Resources Information Center

    Weber, Joseph A.; Miller, Mary G.

    College home economics students are entering a greater variety of home economics and related professions upon graduation than ever before. Freshmen (N=542) and graduating seniors (N=463) enrolled in the College of Home Economics at Oklahoma State University completed a questionnaire that included multiple choice and Likert-type scale items as well…

  10. Adult Perceptions of Pain and Hunger Cries: A Synchrony of Arousal.

    ERIC Educational Resources Information Center

    Zeskind, Philip Sanford; And Others

    1985-01-01

    Male and female nonparent adults rated tape-recordings of initial, middle, and final 10-second segments of pain and hunger cries on four 7-point Likert-type scale items describing how urgent, arousing, aversive, and sick cry segments sounded. Results suggest that different segments of cries resulting from the same stimulus provide different…

  11. Within-Subject Comparison of Changes in a Pretest-Posttest Design

    ERIC Educational Resources Information Center

    Hennig, Christian; Mullensiefen, Daniel; Bargmann, Jens

    2010-01-01

    The authors propose a method to compare the influence of a treatment on different properties within subjects. The properties are measured by several Likert-type-scaled items. The results show that many existing approaches, such as repeated measurement analysis of variance on sum and mean scores, a linear partial credit model, and a graded response…

  12. Pro-Recreational Sex Morality, Religiosity, and Causal Attribution of Homosexual Attitudes.

    ERIC Educational Resources Information Center

    Embree, Robert A.

    Homosexual cognitive victimization is a term which emphasizes social evaluation of sexual behaviors judged in terms of sexual preference. Individual differences in cognitive victimization of homosexuals were examined in two studies. In the first study, undergraduate students (N=78) completed Likert-type rating scales measuring homosexual cognitive…

  13. Teachers' Perceptions of the Teaching of Acids and Bases in Swedish Upper Secondary Schools

    ERIC Educational Resources Information Center

    Drechsler, Michal; Van Driel, Jan

    2009-01-01

    We report in this paper on a study of chemistry teachers' perceptions of their teaching in upper secondary schools in Sweden, regarding models of acids and bases, especially the Bronsted and the Arrhenius model. A questionnaire consisting of a Likert-type scale was developed, which focused on teachers' knowledge of different models, knowledge of…

  14. Teacher Participation in Decision Making and Its Impact on School and Teachers

    ERIC Educational Resources Information Center

    Sarafidou, Jasmin-Olga; Chatziioannidis, Georgios

    2013-01-01

    Purpose: The purpose of this paper is to examine teacher involvement in different domains of decision making in Greek primary schools and explore associations with school and teacher variables. Design/methodology/approach: A survey employing self-administered questionnaires, with a Likert-type scale assessing teachers' actual and desired…

  15. Testing Parameter Invariance for Questionnaire Indices Using Confirmatory Factor Analysis and Item Response Theory

    ERIC Educational Resources Information Center

    Schulz, Wolfram

    2006-01-01

    International studies like PISA use student or school context questionnaires to collect data on student family background, attitudes and learning context. Questionnaire constructs are typically measured using dichotomous or Likert-type items. Scaling of questionnaire items in order to obtain measures of family background, student attitudes or…

  16. Generalized IRT Models for Extreme Response Style

    ERIC Educational Resources Information Center

    Jin, Kuan-Yu; Wang, Wen-Chung

    2014-01-01

    Extreme response style (ERS) is a systematic tendency for a person to endorse extreme options (e.g., strongly disagree, strongly agree) on Likert-type or rating-scale items. In this study, we develop a new class of item response theory (IRT) models to account for ERS so that the target latent trait is free from the response style and the tendency…

  17. An Empirical Study of Pupils' Attitudes to Computers and Robots.

    ERIC Educational Resources Information Center

    Moore, J. L.

    1985-01-01

    Describes a study which utilized a Likert-type questionnaire to assess seven scales of secondary pupils' attitudes toward computers and robotics (school, leisure, career, employment, social, threat, future) and investigated pupils' scores as functions of their sex, general academic ability, course of study, and microcomputer experience. (MBR)

  18. College of the Canyons Faculty and Staff Survey, Fall 2000.

    ERIC Educational Resources Information Center

    Gribbons, Barry C.; Dixon, P. Scott

    This survey was designed to acquire information on the opinions of college employees regarding various institutional departments. The questionnaire used both Likert-type and open-ended questions, with six response choices: a scale from 1 (very dissatisfied) to 5 (very satisfied) plus a no-opinion option. Of the 640 questionnaires distributed…

  19. Estonian Science and Non-Science Students' Attitudes towards Mathematics at University Level

    ERIC Educational Resources Information Center

    Kaldo, Indrek; Reiska, Priit

    2012-01-01

    This article investigates the attitudes and beliefs towards studying mathematics held by university-level students. A total of 970 randomly chosen, first-year, Estonian bachelor students participated in the study (of which 498 were science students). Data were collected using a Likert-type scale questionnaire and analysed with respect to field of…

  20. Training Evaluation as an Integral Component of Training for Performance.

    ERIC Educational Resources Information Center

    Lapp, H. J., Jr.

    A training evaluation system should address four major areas: reaction, learning, behavior, and results. The training evaluation system at GPU Nuclear Corporation addresses each of these areas through practical approaches such as course and program evaluation. GPU's program evaluation instrument uses a Likert-type scale to assess task development,…

  1. Students and Their Teachers of Arabic: Beliefs about Language Learning.

    ERIC Educational Resources Information Center

    Kuntz, Patricia S.

    A study investigated beliefs about second language learning held by 27 adult students and 10 teachers of Arabic at the Yemen Language Center. The survey instrument consisted of 5 demographic statements and 47 statements concerning language learning in a Likert-type scaled response format. Results indicate students and teachers generally agreed…

  2. Beliefs about Language Learning Held by Students and Their Teacher (A Pilot Study).

    ERIC Educational Resources Information Center

    Kuntz, Patricia S.

    A study investigated the beliefs about second language learning among nine students of English as a Second Language (all female), and their teacher at Queen Arwa University (Yemen). The survey instrument consisted of five demographic statements and 47 statements concerning language learning in a Likert-type scaled response format. Results indicate…

  3. Attitudes of the Public and Citizen Advisory Committee Members Toward Land and Water Resources in the Maumee River Basin.

    ERIC Educational Resources Information Center

    Taylor, Calvin Lee

    The reported study was conducted to determine the extent to which active participants of a Citizen's Advisory Committee (CAC) were representative of the general public in land and water resource attitudes. All 39 members of the Maumee River Basin Level B CAC and a random sample of 400 Basin residents were given a Likert-type scale to measure their…

  4. Competencies of Vocational Teachers. A Factor Analysis of the Training Needs of Teachers of Occupational Education.

    ERIC Educational Resources Information Center

    Courtney, E. Wayne; Halfin, Harold H.

    To determine common training requirements of secondary-level vocational teachers, a factor analysis was made of responses by 40 randomly selected vocational teachers representing four states: Pennsylvania, Iowa, North Carolina, and New Jersey. Teacher responses consisted of the assignment of ratings to 40 items on a Likert-type scale. Ten teachers…

  5. Development and Initial Validation of the Medical Fear Survey-Short Version

    ERIC Educational Resources Information Center

    Olatunji, Bunmi O.; Ebesutani, Chad; Sawchuk, Craig N.; McKay, Dean; Lohr, Jeffrey M.; Kleinknecht, Ronald A.

    2012-01-01

    The present investigation employs item response theory (IRT) to develop an abbreviated Medical Fear Survey (MFS). Application of IRT analyses in Study 1 (n = 931) to the original 50-item MFS resulted in a 25-item shortened version. Examination of the location parameters also resulted in a reduction of the Likert-type scaling of the MFS by removing…

  6. Measuring Telephone Apprehension.

    ERIC Educational Resources Information Center

    Steele, Cam Monroe; Reinsch, N. L., Jr.

    An instrument for measuring telephone apprehension was developed to facilitate research into hypothesized relationships between communication apprehension and telephone apprehension. A set of 92 Likert-type items was adapted from previous communication apprehension scales and administered to 81 undergraduate students in a speech communication…

  7. Assessing Parent Satisfaction.

    ERIC Educational Resources Information Center

    Cleminshaw, Helen; Guidubaldi, John

    Although actual or projected satisfaction with parenting is important in determining whether a couple will become parents and how large their family will be, only minimal research has assessed parental satisfaction. The Cleminshaw-Guidubaldi Parent Satisfaction Scale, a 50-item Likert-type instrument designed to measure components of satisfaction…

  8. Reading Attitudes and Interests of Gifted and Talented Children in the Middle Grades.

    ERIC Educational Resources Information Center

    Link, Barbara R.

    The study investigated reading attitudes and interests of 30 fourth- through ninth-grade gifted and talented students. The data were gathered through the use of a questionnaire, which included statements with Likert-type scale responses, multiple-choice items, and short answers. Methods used to analyze the data were individual item analysis, analysis of…

  9. Student Teachers' Attitudes Concerning Understanding the Nature of Science in Turkey

    ERIC Educational Resources Information Center

    Sahin, Nurettin; Deniz, Sabahattin; Gorgen, Izzet

    2006-01-01

    Nature of science is defined as one of the directions of scientific literacy. The main aim of this study was to investigate both secondary school social and science branch post-graduate (non-thesis master's) teacher candidates' attitudes about the Nature of Science (NOS) and to compare their attitudes towards NOS. A 12-item Likert-type scale for teacher…

  10. Influence of Item Direction on Student Responses in Attitude Assessment.

    ERIC Educational Resources Information Center

    Campbell, Noma Jo; Grissom, Stephen

    To investigate the effects of wording in attitude test items, a five-point Likert-type rating scale was administered to 173 undergraduate education majors. The test measured attitudes toward college and self, and contained 38 positively-worded items. Thirty-eight negatively-worded items were also written to parallel the positive statements.…

  11. Operationalizing Contact Theory: Measuring Student Attitudes toward Desegregation.

    ERIC Educational Resources Information Center

    Green, Charles W.

    In order to develop a scale to measure student attitudes toward desegregation in their own schools, 61 Likert-type statements were developed and administered to over 3000 black and white middle school students in five schools in a southwestern community. Gordon Allport's criteria, particularly his contact theory of desegregation, provided the…

  12. The Art Appreciation Component of Visual Literacy: Examples of Guided Approaches to Viewing Art.

    ERIC Educational Resources Information Center

    Demery, Marie

    Likert-type rating scales were designed and used to help college students perceive, understand, and value the beauty and content of a piece of art. The subjects for the project were 100 college students enrolled in two art appreciation courses at Texas College. Their classification ranged from freshman to senior, with majors mainly in business,…

  13. Music Practices and Teachers' Needs for Teaching Music in Public Preschools of South Korea

    ERIC Educational Resources Information Center

    Lee, Youngae

    2009-01-01

    The present study aimed to investigate the current music practices and teachers' needs for teaching music in public preschools of South Korea. The data were obtained from the public preschools in South Korea, where 66.7 percent (n = 606) of the total sample (N = 908) responded. The online survey consisted of 42 questions: Likert-type scales,…

  14. Parental Recall of Pre-School Behavior Related to ADHD and Disruptive Behavior Disorder

    ERIC Educational Resources Information Center

    Ercan, Eyup Sabri; Somer, Oya; Amado, Sonia; Thompson, Dennis

    2005-01-01

    The aim of this study was to examine the contribution of Age of Onset Criterion (AOC) to the diagnosis of Attention Deficit Hyperactivity Disorder (ADHD) and disruptive behavior disorder. For this purpose, a 10-item Likert-type Parent Assessment of Pre-school Behavior Scale (PARPS), developed by the experimenters, was used to examine the presence…

  15. Curriculum Orientations of Home Economics Teachers.

    ERIC Educational Resources Information Center

    Cunningham, Rebecca; And Others

    A study collected descriptive information about 152 Nebraska home economics teachers and their curriculum orientation(s). The questionnaire was adapted from the Curriculum Orientation Profile designed by Babin (1979) and revised by Carlson (1991). Teachers responded to 45 statements on a Likert-type scale. Nine statements reflected each of five…

  16. Attitudes about Environmental Issues among Secondary Agriscience Students in Texas.

    ERIC Educational Resources Information Center

    Parker, Tina Farris; Herring, Don R.

    A Texas study examined the attitudes of 379 secondary agriscience students about environmental issues (76% response rate). The Spring 1993 survey questionnaire was developed from a literature review. A Likert-type scale was used for response measurement in sections 1-4; Section 5 consisted of statements related to personal and demographic…

  17. Evaluating the Trustworthiness of Self-Assessments.

    ERIC Educational Resources Information Center

    Long, James S.; Fransen, Steven C.

    A retrospective self-assessment used with 22 county Extension agents from western Washington who had participated in a three-day inservice education program in agronomy was evaluated. Each participant was asked to draw an S on a Likert-type scale to indicate where each person started at the beginning of the workshop and an N where they perceived…

  18. Pre-Service Mathematics Teachers' Belief Systems

    ERIC Educational Resources Information Center

    Haser, Cigdem; Dogan, Oguzhan

    2012-01-01

    The influence of mathematics teacher education programme courses on pre-service teachers' mathematics teaching belief systems before their field experience was initially investigated through a Likert-type scale. The impact of a third year general teaching methodologies course was then investigated through the responses pre-service teachers…

  19. Educational Scale-Making

    ERIC Educational Resources Information Center

    Nespor, Jan

    2004-01-01

    The article explores the complexities of educational scale-making. "Educational scales" are defined as the spatial and temporal orders generated as pupils and teachers move and are moved through educational systems; scales are "envelopes of spacetime" into which certain school-based identities (and not others) can be folded. Scale is thus both an…

  20. Raters & Rating Scales.

    ERIC Educational Resources Information Center

    Lopez, Winifred A.; Stone, Mark H.

    1998-01-01

    The first article in this section, "Rating Scales and Shared Meaning," by Winifred A. Lopez, discusses the analysis of rating scale data. The second article, "Rating Scale Categories: Dichotomy, Double Dichotomy, and the Number Two," by Mark H. Stone, argues that dichotomies in rating scales are more useful than multiple ratings. (SLD)

  1. Roles and responsibilities of nurse preceptors: Perception of preceptors and preceptees.

    PubMed

    Omer, Tagwa A; Suliman, Wafika A; Moola, Shehnaaz

    2016-01-01

    In this study setting, preceptors, who were clinical teaching assistants and hospital-employed nurses, assist preceptees, who were nursing students, through an interactive process in developing clinical skills and integrating into the culture of the clinical area. Therefore, the roles and responsibilities of preceptors should be clear and should meet the expectations of both preceptors and preceptees. This study aimed at comparing the similarities and differences in how nurse preceptors and their preceptees perceive these roles and responsibilities, in terms of how important they are and how frequently preceptors attend to them. A self-administered questionnaire based on Boyer's (2008) roles and responsibilities was completed by a convenience sample of 87 preceptees and 62 preceptors, corresponding to response rates of 66.9% and 77.5% respectively. The questionnaire included 43 items and two 4-point Likert-type scales: "importance of" and "frequency of attendance to" roles. Two versions were developed: one for preceptors and the other for preceptees. Reliability (alpha values) was .944 for the importance scale and .973 for the frequency-of-attendance scale. Mean scores indicated agreement between the two groups on the importance of, but disagreement on the frequency of attendance to, certain roles and responsibilities. Both groups perceived the roles and responsibilities as important, but differed significantly in rating preceptors' frequency of attendance to their roles as educators and facilitators.
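
    The reliability figures quoted above are Cronbach's alpha coefficients; a minimal sketch of how such a coefficient is computed from a respondents-by-items matrix of Likert-type scores follows (simulated data, not the study's):

        # Cronbach's alpha for a respondents x items score matrix.
        import numpy as np

        def cronbach_alpha(scores):
            """scores: array of shape (n_respondents, n_items)."""
            k = scores.shape[1]
            item_variances = scores.var(axis=0, ddof=1).sum()
            total_variance = scores.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - item_variances / total_variance)

        rng = np.random.default_rng(0)
        demo = rng.integers(1, 5, size=(60, 10))   # 60 respondents, ten 4-point items
        print(round(cronbach_alpha(demo), 3))      # random data yields a low alpha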

  2. Occupational Cohort Time Scales

    PubMed Central

    Roth, H. Daniel

    2015-01-01

    Purpose: This study explores how highly correlated time variables (occupational cohort time scales) contribute to confounding and ambiguity of interpretation. Methods: Occupational cohort time scales were identified and organized through simple equations of three time scales (relational triads) and the connections between these triads (time scale web). The behavior of the time scales was examined when constraints were imposed on variable ranges and interrelationships. Results: Constraints on a time scale in a triad create high correlations between the other two time scales. These correlations combine with the connections between relational triads to produce association paths. High correlation between time scales leads to ambiguity of interpretation. Conclusions: Understanding the properties of occupational cohort time scales, their relational triads, and the time scale web is helpful in understanding the origins of otherwise obscure confounding bias and ambiguity of interpretation. PMID:25647318
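
    The "relational triad" idea above can be illustrated with the familiar identity age at exit = age at hire + duration of follow-up; the simulation below (made-up numbers, not the paper's data) shows how constraining one member of a triad induces a strong correlation between the other two:

        # Constraining one time scale in a relational triad induces correlation
        # between the other two members of the triad.
        import numpy as np

        rng = np.random.default_rng(1)
        age_at_hire = rng.uniform(20, 40, 10_000)
        follow_up = rng.uniform(1, 40, 10_000)
        age_at_exit = age_at_hire + follow_up

        r_all = np.corrcoef(age_at_hire, follow_up)[0, 1]

        keep = (age_at_exit > 60) & (age_at_exit < 65)   # restrict the third scale
        r_constrained = np.corrcoef(age_at_hire[keep], follow_up[keep])[0, 1]

        print(f"unconstrained r = {r_all:+.2f}, constrained r = {r_constrained:+.2f}")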

  3. Small Scale Organic Techniques

    ERIC Educational Resources Information Center

    Horak, V.; Crist, DeLanson R.

    1975-01-01

    Discusses the advantages of using small scale experimentation in the undergraduate organic chemistry laboratory. Describes small scale filtration techniques as an example of a semi-micro method applied to small quantities of material. (MLH)

  4. Weak scale supersymmetry

    SciTech Connect

    Hall, L.J. (California Univ., Berkeley, CA, Dept. of Physics)

    1990-11-12

    An introduction to the ideas and current state of weak scale supersymmetry is given. It is shown that LEP data on Z decays has already excluded two of the most elegant models of weak scale supersymmetry. 14 refs.

  5. On Quantitative Rorschach Scales.

    ERIC Educational Resources Information Center

    Haggard, Ernest A.

    1978-01-01

    Two types of quantitative Rorschach scales are discussed: first, those based on the response categories of content, location, and the determinants, and second, global scales based on the subject's responses to all ten stimulus cards. (Author/JKS)

  6. Cross-scale morphology

    USGS Publications Warehouse

    Allen, Craig R.; Holling, Crawford S.; Garmestani, Ahjond S.; El-Shaarawi, Abdel H.; Piegorsch, Walter W.

    2013-01-01

    The scaling of physical, biological, ecological and social phenomena is a major focus of efforts to develop simple representations of complex systems. Much of the attention has been on discovering universal scaling laws that emerge from simple physical and geometric processes. However, there are regular patterns of departures both from those scaling laws and from continuous distributions of attributes of systems. Those departures often demonstrate the development of self-organized interactions between living systems and physical processes over narrower ranges of scale.

  7. Reading Graduated Scales.

    ERIC Educational Resources Information Center

    Hall, Lucien T., Jr.

    1982-01-01

    Ways of teaching students to read scales are presented as process instructions that are probably overlooked or taken for granted by most instructors. Scales on such devices as thermometers, rulers, spring scales, speedometers, and thirty-meter tape are discussed. (MP)

  8. The Positivity Scale

    ERIC Educational Resources Information Center

    Caprara, Gian Vittorio; Alessandri, Guido; Eisenberg, Nancy; Kupfer, A.; Steca, Patrizia; Caprara, Maria Giovanna; Yamaguchi, Susumu; Fukuzawa, Ai; Abela, John

    2012-01-01

    Five studies document the validity of a new 8-item scale designed to measure "positivity," defined as the tendency to view life and experiences with a positive outlook. In the first study (N = 372), the psychometric properties of Positivity Scale (P Scale) were examined in accordance with classical test theory using a large number of college…

  9. Extreme Scale Visual Analytics

    SciTech Connect

    Wong, Pak C.; Shen, Han-Wei; Pascucci, Valerio

    2012-05-08

    Extreme-scale visual analytics (VA) is about applying VA to extreme-scale data. The articles in this special issue examine advances related to extreme-scale VA problems, their analytical and computational challenges, and their real-world applications.

  10. Belt scales user's guide

    SciTech Connect

    Rosenberg, N.I.

    1993-02-01

    A conveyor-belt scale provides a means of obtaining accurate weights of dry bulk materials without delaying other plant operations. In addition, for many applications a belt scale is the most cost-effective alternative among many choices for a weighing system. But a number of users are not comfortable with the accuracy of their belt scales. In cases of unsatisfactory scale performance, it is often possible to correct problems and achieve the accuracy that was expected. To have a belt scale system that is accurate, precise, and cost effective, practical experience has shown that certain basic requisites must be satisfied. These requisites include matching the scale capability to the needs of the application, selecting durable scale equipment and conveyor idlers, adopting improved conveyor support methods, employing superior scale installation and alignment techniques, and establishing and practicing an effective scale testing and performance monitoring program. The goal of the Belt Scale Users' Guide is to enable utilities to reap the benefits of consistently accurate output from their new or upgraded belt scale installations. Such benefits include eliminating incorrect payments for coal receipts, improving coal pile inventory data, providing better heat rate results to enhance plant efficiency and yield more economical power dispatch, and satisfying regulatory agencies. All these benefits can reduce the cost of power generation.

  11. Manual of Scaling Methods

    NASA Technical Reports Server (NTRS)

    Bond, Thomas H. (Technical Monitor); Anderson, David N.

    2004-01-01

    This manual reviews the derivation of the similitude relationships believed to be important to ice accretion and examines ice-accretion data to evaluate their importance. Both size scaling and test-condition scaling methods employing the resulting similarity parameters are described, and experimental icing tests performed to evaluate scaling methods are reviewed with results. The material included applies primarily to unprotected, unswept geometries, but some discussion of how to approach other situations is included as well. The studies given here and scaling methods considered are applicable only to Appendix-C icing conditions. Nearly all of the experimental results presented have been obtained in sea-level tunnels. Recommendations are given regarding which scaling methods to use for both size scaling and test-condition scaling, and icing test results are described to support those recommendations. Facility limitations and size-scaling restrictions are discussed. Finally, appendices summarize the air, water and ice properties used in NASA scaling studies, give expressions for each of the similarity parameters used and provide sample calculations for the size-scaling and test-condition scaling methods advocated.

  12. Scaled models, scaled frequencies, and model fitting

    NASA Astrophysics Data System (ADS)

    Roxburgh, Ian W.

    2015-12-01

    I show that given a model star of mass M, radius R, and density profile ρ(x) [x = r/R], there exists a two-parameter family of models with masses Mk, radii Rk, density profile ρk(x) = λρ(x), and frequencies νk,nℓ = λ^(1/2) νnℓ, where λ and Rk/R are scaling factors. These models have different internal structures, but all have the same values of the separation ratios calculated at given radial orders n, and all exactly satisfy a frequency matching algorithm with an offset function determined as part of the fitting procedure. But they do not satisfy ratio matching at given frequencies nor phase-shift matching. This illustrates that erroneous results may be obtained when model fitting with ratios at given n values or frequency matching. I give examples from scaled models and from non-scaled evolutionary models.
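
    The quoted frequency scaling follows from standard homology reasoning, sketched here because the abstract states only the result:

        \rho_k(x) = \lambda\,\rho(x), \quad x = r/R_k
        \;\Rightarrow\;
        M_k = \lambda \left(\tfrac{R_k}{R}\right)^{3} M,
        \qquad
        \bar{\rho}_k \propto \frac{M_k}{R_k^{3}} = \lambda\,\frac{M}{R^{3}} \propto \lambda\,\bar{\rho},
        \qquad
        \nu \propto \sqrt{G\bar{\rho}}
        \;\Rightarrow\;
        \nu_{k,n\ell} = \lambda^{1/2}\,\nu_{n\ell},

    independent of the second scaling factor Rk/R.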

  13. Salzburger State Reactance Scale (SSR Scale)

    PubMed Central

    2015-01-01

    This paper describes the construction and empirical evaluation of an instrument for measuring state reactance, the Salzburger State Reactance (SSR) Scale. The results of a confirmatory factor analysis supported a hypothesized three-factor structure: experience of reactance, aggressive behavioral intentions, and negative attitudes. Correlations with divergent and convergent measures support the validity of this structure. The SSR Subscales were strongly related to the other state reactance measures. Moreover, the SSR Subscales showed modest positive correlations with trait measures of reactance. The SSR Subscales correlated only slightly or not at all with neighboring constructs (e.g., autonomy, experience of control). The only exception was fairness scales, which showed moderate correlations with the SSR Subscales. Furthermore, a retest analysis confirmed the temporal stability of the scale. Suggestions for further validation of this questionnaire are discussed. PMID:27453806

  14. Scale and scaling in agronomy and environmental sciences

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Scale is of paramount importance in environmental studies, engineering, and design. The unique course covers the following topics: scale and scaling, methods and theories, scaling in soils and other porous media, scaling in plants and crops, scaling in landscapes and watersheds, and scaling in agro...

  15. Parabolic scaling beams.

    PubMed

    Gao, Nan; Xie, Changqing

    2014-06-15

    We generalize the concept of diffraction free beams to parabolic scaling beams (PSBs), whose normalized intensity scales parabolically during propagation. These beams are nondiffracting in the circular parabolic coordinate systems, and all the diffraction free beams of Durnin's type have counterparts as PSBs. Parabolic scaling Bessel beams with Gaussian apodization are investigated in detail, their nonparaxial extrapolations are derived, and experimental results agree well with theoretical predictions.

  16. Multi-scale renormalization

    NASA Astrophysics Data System (ADS)

    Ford, C.; Wiesendanger, C.

    1997-02-01

    The standard MS renormalization prescription is inadequate for dealing with multi-scale problems. To illustrate this, we consider the computation of the effective potential in the Higgs-Yukawa model. It is argued that it is natural to employ a two-scale renormalization group. We give a modified version of a two-scale scheme introduced by Einhorn and Jones. In such schemes the beta functions necessarily contain potentially large logarithms of the RG scale ratios. For credible perturbation theory, one must resum these large logarithms in the beta functions themselves. We show how the integrability condition for the two RG equations allows one to perform this resummation.

  17. The Family Constellation Scale.

    ERIC Educational Resources Information Center

    Lemire, David

    The Family Constellation Scale (FC Scale) is an instrument that assesses perceived birth order in families. It can be used in counseling to help initiate conversations about various traits and assumptions that tend to characterize first-born, middle-born children, youngest-born, and only children. It provides both counselors and clients insights…

  18. INL Laboratory Scale Atomizer

    SciTech Connect

    C.R. Clark; G.C. Knighton; R.S. Fielding; N.P. Hallinan

    2010-01-01

    A laboratory-scale atomizer has been built at the Idaho National Laboratory. This has proven useful for laboratory-scale tests and has been used to fabricate fuel used in the RERTR miniplate experiments. This instrument evolved over time with various improvements being made 'on the fly' in a trial-and-error process.

  19. Scaling up as Catachresis

    ERIC Educational Resources Information Center

    Tobin, Joseph

    2005-01-01

    The metaphor of scaling up is the wrong one to use for describing and prescribing educational change. Many of the strategies being employed to achieve scaling up are counter-productive: they conceive of practitioners as delivery agents or consumers, rather than as co-constructors of change. An approach to educational innovation based on the…

  20. Thoughts on Scale

    ERIC Educational Resources Information Center

    Schoenfeld, Alan H.

    2015-01-01

    This essay reflects on the challenges of thinking about scale--of making sense of phenomena such as continuous professional development (CPD) at the system level, while holding on to detail at the finer grain size(s) of implementation. The stimuli for my reflections are three diverse studies of attempts at scale--an attempt to use ideas related to…

  1. Premarital Attitude Scale.

    ERIC Educational Resources Information Center

    Vancouver Board of School Trustees (British Columbia). Dept. of Planning and Evaluation.

    The thirty-one-item questionnaire was developed to measure how prepared high school students are for marriage. The students are directed to read each statement and to select a response on a five-point scale ranging from agreeing strongly to disagreeing strongly. The scale is scored to indicate three factors which are considered important for a…

  2. Everyday Scale Errors

    ERIC Educational Resources Information Center

    Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.

    2010-01-01

    Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…

  3. Teaching Satisfaction Scale

    ERIC Educational Resources Information Center

    Ho, Chung-Lim; Au, Wing-Tung

    2006-01-01

    The present study proposes a teaching satisfaction measure and examines the validity of its scores. The measure is based on the Life Satisfaction Scale (LSS). Scores on the five-item Teaching Satisfaction Scale (TSS) were validated on a sample of 202 primary and secondary school teachers and favorable psychometric properties were found. As…

  4. Teacher Observation Scales.

    ERIC Educational Resources Information Center

    Purdue Univ., Lafayette, IN. Educational Research Center.

    The Teacher Observation Scales include four instruments: Observer Rating Scale (ORS), Reading Strategies Check List, Arithmetic Strategies Check List, and Classroom Description. These instruments utilize trained observers to describe the teaching behavior, instructional strategies and physical characteristics in each classroom. On the ORS, teacher…

  5. New scale factor measure

    NASA Astrophysics Data System (ADS)

    Bousso, Raphael

    2012-07-01

    The computation of probabilities in an eternally inflating universe requires a regulator or “measure.” The scale factor time measure truncates the Universe when a congruence of timelike geodesics has expanded by a fixed volume factor. This definition breaks down if the generating congruence is contracting—a serious limitation that excludes from consideration gravitationally bound regions such as our own. Here we propose a closely related regulator which is well defined in the entire spacetime. The new scale factor cutoff restricts to events with a scale factor below a given value. Since the scale factor vanishes at caustics and crunches, this cutoff always includes an infinite number of disconnected future regions. We show that this does not lead to divergences. The resulting measure combines desirable features of the old scale factor cutoff and of the light-cone time cutoff, while eliminating some of the disadvantages of each.

  6. The inflationary energy scale

    NASA Astrophysics Data System (ADS)

    Liddle, Andrew R.

    1994-01-01

    The energy scale of inflation is of much interest, as it suggests the scale of grand unified physics, governs whether cosmological events such as topological defect formation can occur after inflation, and also determines the amplitude of gravitational waves which may be detectable using interferometers. The COBE results are used to limit the energy scale of inflation at the time large scale perturbations were imprinted. An exact dynamical treatment based on the Hamilton-Jacobi equations is then used to translate this into limits on the energy scale at the end of inflation. General constraints are given, and then tighter constraints based on physically motivated assumptions regarding the allowed forms of density perturbation and gravitational wave spectra. These are also compared with the values of familiar models.

  7. Parallel Computing in SCALE

    SciTech Connect

    DeHart, Mark D; Williams, Mark L; Bowman, Stephen M

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement…

  8. Composite rating scales.

    PubMed

    Martinez-Martin, Pablo

    2010-02-15

    Rating scales are instruments that are very frequently used by clinicians to perform patient assessments. Typically, rating scales grade the attribute on an ordinal level of measurement, i.e., a rank ordering, meaning that the numbers assigned to the different ranks (item scores) do not represent 'real numbers' or 'physical magnitudes'. Single-item scales have some advantages, such as simplicity and low respondent burden, but they may also suffer from disadvantages, such as ambiguous score meanings and low responsiveness. Multi-item scales, in contrast, seem more adequate for assessment of complex constructs, allowing for detailed evaluation. Total scores representing the value of the construct may be quite precise and thus the responsiveness of the scale may be high. The most common strategy for obtaining the total score is the sum of the item scores, a strategy that constitutes one of the most important problems with these types of scales. A summative score of ordinal figures is not a 'real magnitude' and may have little sense. This paper is a review of the theoretical frameworks of the main theories used to develop rating scales (Classical Test Theory and Item Response Theory). Bearing in mind that no alternative is perfect, additional research in this field and judicious decisions are called for.
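
    As a concrete illustration of the summative-score strategy (and the ordinal caveat) discussed above, a minimal sketch with made-up item responses:

        # Ordinal Likert-type item scores and the conventional summative total.
        # Summing ranks treats them as if they were interval-scaled, which is
        # exactly the concern the abstract raises.
        responses = {"item1": 3, "item2": 4, "item3": 2, "item4": 5}   # ranks on a 1-5 scale

        total_score = sum(responses.values())        # conventional total score
        mean_score = total_score / len(responses)    # per-item average, same caveat

        print(total_score, round(mean_score, 2))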

  9. Allometric Scaling in Biology

    NASA Astrophysics Data System (ADS)

    Banavar, Jayanth

    2009-03-01

    The unity of life is expressed not only in the universal basis of inheritance and energetics at the molecular level, but also in the pervasive scaling of traits with body size at the whole-organism level. More than 75 years ago, Kleiber and Brody and Proctor independently showed that the metabolic rates, B, of mammals and birds scale as the three-quarter power of their mass, M. Subsequent studies showed that most biological rates and times scale as M^-1/4 and M^1/4, respectively, and that these so-called quarter-power scaling relations hold for a variety of organisms, from unicellular prokaryotes and eukaryotes to trees and mammals. The wide applicability of Kleiber's law, across the 22 orders of magnitude of body mass from minute bacteria to giant whales and sequoias, raises the hope that there is some simple general explanation that underlies the incredible diversity of form and function. We will present a general theoretical framework for understanding the relationship between metabolic rate, B, and body mass, M. We show how the pervasive quarter-power biological scaling relations arise naturally from optimal directed resource supply systems. This framework robustly predicts that: 1) whole organism power and resource supply rate, B, scale as M^3/4; 2) most other rates, such as heart rate and maximal population growth rate, scale as M^-1/4; 3) most biological times, such as blood circulation time and lifespan, scale as M^1/4; and 4) the average velocity of flow through the network, v, such as the speed of blood and oxygen delivery, scales as M^1/12. Our framework is valid even when there is no underlying network. Our theory is applicable to unicellular organisms as well as to large animals and plants. This work was carried out in collaboration with Amos Maritan along with Jim Brown, John Damuth, Melanie Moses, Andrea Rinaldo, and Geoff West.
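    The quarter-power relations quoted above can be turned into back-of-the-envelope numbers. The sketch below is an illustration only, not the authors' framework: the proportionality constants are arbitrary placeholders, and it simply evaluates B ~ M^3/4, rates ~ M^-1/4, times ~ M^1/4, and flow velocity ~ M^1/12 across several body masses.

```python
# Illustrative quarter-power scaling; the prefactors (1.0) are arbitrary placeholders.
masses_kg = [0.001, 1.0, 1000.0]   # from a gram-scale organism to a tonne-scale mammal

for M in masses_kg:
    B = 1.0 * M ** 0.75          # metabolic rate, B ~ M^(3/4)
    rate = 1.0 * M ** -0.25      # biological rates (e.g. heart rate), ~ M^(-1/4)
    time = 1.0 * M ** 0.25       # biological times (e.g. lifespan), ~ M^(1/4)
    v = 1.0 * M ** (1.0 / 12.0)  # average flow velocity through the network, ~ M^(1/12)
    print(f"M={M:8.3f} kg  B~{B:8.3f}  rate~{rate:7.3f}  time~{time:6.3f}  v~{v:6.3f}")
```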

  10. Sulfate scale dissolution

    SciTech Connect

    Morris, R.L.; Paul, J.M.

    1992-01-28

    This patent describes a method for removing barium sulfate scale. It comprises contacting the scale with an aqueous solution having a pH of about 8 to about 14 and consisting essentially of a chelating agent comprising a polyaminopolycarboxylic acid or salt of such an acid in a concentration of 0.1 to 1.0 M, and anions of a monocarboxylic acid selected from mercaptoacetic acid, hydroxyacetic acid, aminoacetic acid, or salicylic acid in a concentration of 0.1 to 1.0 M and which is soluble in the solution under the selected pH conditions, to dissolve the scale.

  11. Methods of measuring adhesion for thermally grown oxide scales

    SciTech Connect

    Hou, P.Y.; Atkinson, A.

    1994-06-01

    High temperature alloys and coatings rely on the formation of adherent scales to protect against further oxidation, but scale spallation is often problematic. Despite the technical importance of the problem, 'practical adhesion', which refers to the separation of the oxide from the metal, has mainly been treated qualitatively in the past. Various techniques now exist such that the subject can be assessed in quantitative or semi-quantitative terms. Some of the techniques are described in this paper, and their weaknesses and strengths are discussed. The experimental methods addressed here include: tensile pulling, micro-indentation, scratch test, residual stress induced delamination, laser or shock wave induced spallation, double cantilever beam and several 4-point beam bending approaches. To date, there is no universal, easy test for oxide adhesion measurement that can provide reproducible information on interfacial fracture energy for a variety of oxide/metal systems. Much experimentation is still needed to increase confidence in many of the existing tests, and the fundamental mechanics for some present techniques also require further development.

  12. Scaling in sensitivity analysis

    USGS Publications Warehouse

    Link, W.A.; Doherty, P.F.

    2002-01-01

    Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
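    To make the scaling issue concrete, the sketch below computes the textbook sensitivities and elasticities of λ for a hypothetical 2x2 projection matrix. The matrix entries are invented, and this is the standard eigenvector formulation (sensitivity = v_i w_j / <v, w>, elasticity = (a_ij/λ) times the sensitivity), not the scaling criterion proposed by the authors.

```python
import numpy as np

# Hypothetical 2x2 population projection matrix (top row: fertilities; bottom row: survival).
A = np.array([[0.0, 1.5],
              [0.6, 0.8]])

eigvals, right = np.linalg.eig(A)
i = np.argmax(eigvals.real)
lam = eigvals.real[i]             # finite rate of population increase, lambda
w = np.abs(right[:, i].real)      # stable stage distribution (right eigenvector)

eigvals_t, left = np.linalg.eig(A.T)
j = np.argmax(eigvals_t.real)
v = np.abs(left[:, j].real)       # reproductive values (left eigenvector)

S = np.outer(v, w) / (v @ w)      # sensitivities: d(lambda)/d(a_ij)
E = (A / lam) * S                 # elasticities: proportional sensitivities

print("lambda =", round(lam, 3))
print("sensitivities:\n", S.round(3))
print("elasticities (sum to 1):\n", E.round(3))
```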

  13. Lifshitz scale anomalies

    NASA Astrophysics Data System (ADS)

    Arav, Igal; Chapman, Shira; Oz, Yaron

    2015-02-01

    We analyse scale anomalies in Lifshitz field theories, formulated as the relative cohomology of the scaling operator with respect to foliation preserving diffeomorphisms. We construct a detailed framework that enables us to calculate the anomalies for any number of spatial dimensions, and for any value of the dynamical exponent. We derive selection rules, and establish the anomaly structure in diverse universal sectors. We present the complete cohomologies for various examples in one, two and three space dimensions for several values of the dynamical exponent. Our calculations indicate that all the Lifshitz scale anomalies are trivial descents, called B-type in the terminology of conformal anomalies. However, not all the trivial descents are cohomologically non-trivial. We compare the conformal anomalies to Lifshitz scale anomalies with a dynamical exponent equal to one.

  14. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  15. Reconsidering earthquake scaling

    NASA Astrophysics Data System (ADS)

    Gomberg, J.; Wech, A.; Creager, K.; Obara, K.; Agnew, D.

    2016-06-01

    The relationship (scaling) between scalar moment, M0, and duration, T, potentially provides key constraints on the physics governing fault slip. The prevailing interpretation of M0-T observations proposes different scaling for fast (earthquakes) and slow (mostly aseismic) slip populations and thus fundamentally different driving mechanisms. We show that a single model of slip events within bounded slip zones may explain nearly all fast and slow slip M0-T observations, and both slip populations have a change in scaling, where the slip area growth changes from 2-D when too small to sense the boundaries to 1-D when large enough to be bounded. We present new fast and slow slip M0-T observations that sample the change in scaling in each population, which are consistent with our interpretation. We suggest that a continuous but bimodal distribution of slip modes exists and M0-T observations alone may not imply a fundamental difference between fast and slow slip.

  16. Scaling in Columnar Joints

    NASA Astrophysics Data System (ADS)

    Morris, Stephen

    2007-03-01

    Columnar jointing is a fracture pattern common in igneous rocks in which cracks self-organize into a roughly hexagonal arrangement, leaving behind an ordered colonnade. We report observations of columnar jointing in a laboratory analog system, desiccated corn starch slurries. Using measurements of moisture density, evaporation rates, and fracture advance rates, we suggest an advective-diffusive system is responsible for the rough scaling behavior of columnar joints. This theory explains the order of magnitude difference in scales between jointing in lavas and in starches. We investigated the scaling of average columnar cross-sectional areas in experiments where the evaporation rate was fixed using feedback methods. Our results suggest that the column area at a particular depth is related to both the current conditions, and hysteretically to the geometry of the pattern at previous depths. We argue that there exists a range of stable column scales allowed for any particular evaporation rate.

  17. Digital scale converter

    DOEpatents

    Upton, Richard G.

    1978-01-01

    A digital scale converter is provided for binary coded decimal (BCD) conversion. The converter may be programmed to convert a BCD value of a first scale to the equivalent value of a second scale according to a known ratio. The value to be converted is loaded into a first BCD counter and counted down to zero while a second BCD counter registers counts from zero or an offset value depending upon the conversion. Programmable rate multipliers are used to generate pulses at selected rates to the counters for the proper conversion ratio. The value present in the second counter at the time the first counter is counted to the zero count is the equivalent value of the second scale. This value may be read out and displayed on a conventional seven-segment digital display.
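    A software analogue of the counter-and-rate-multiplier scheme described above is sketched below. This is not the patented circuit, only an illustration of the idea: one register is counted down to zero while a second register, optionally preloaded with an offset, accumulates output counts at a programmed ratio. The Fahrenheit-to-Celsius example and the function name are hypothetical.

```python
def convert(value, num, den, offset=0):
    """Emulate the two-counter scheme: count `value` down to zero and, for every
    input count, accumulate num/den of an output count in a second register that
    starts at `offset` (integer arithmetic only, as a BCD counter chain would use)."""
    out = offset
    acc = 0
    while value > 0:
        value -= 1            # first counter counted down toward zero
        acc += num            # rate multiplier accumulates the conversion ratio
        while acc >= den:     # emit an output pulse each time a full count accrues
            acc -= den
            out += 1
    return out

# Hypothetical use: Fahrenheit to Celsius, ratio 5/9 applied after subtracting 32.
print(convert(212 - 32, 5, 9))   # -> 100
```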

  18. Magnetron injection gun scaling

    NASA Astrophysics Data System (ADS)

    Lawson, W.

    1988-04-01

    A set of tradeoff equations was simplified to obtain scaling laws for magnetron injection guns (MIGs). The constraints are chosen to examine the maximum-peak-power capabilities of MIGs. The scaling laws are compared with exact solutions of the design equations and are supported by MIG simulations in which each MIG is designed to double the beam power of an existing design by adjusting one of the four fundamental parameters.

  19. Ensemble Pulsar Time Scale

    NASA Astrophysics Data System (ADS)

    Yin, D. S.; Gao, Y. P.; Zhao, S. H.

    2016-05-01

    Millisecond pulsars can generate another type of time scale that is totally independent of the atomic time scale, because the physical mechanisms of the pulsar time scale and the atomic time scale are quite different from each other. Usually the pulsar timing observational data are not evenly sampled, and the intervals between data points range from several hours to more than half a month. Moreover, these data sets are sparse. All of this makes it difficult to generate an ensemble pulsar time scale. Hence, a new algorithm to calculate the ensemble pulsar time scale is proposed. Firstly, we use cubic spline interpolation to densify the data set, and make the intervals between data points even. Then, we employ the Vondrak filter to smooth the data set, and get rid of high-frequency noise, and finally adopt the weighted average method to generate the ensemble pulsar time scale. The pulsar timing residuals represent the clock difference between the pulsar time and atomic time, and high-precision pulsar timing data give this clock difference with a high signal-to-noise ratio, which is fundamental to generating pulsar time. We use the latest released NANOGRAV (North American Nanohertz Observatory for Gravitational Waves) 9-year data set to generate the ensemble pulsar time scale. This data set is from the newest NANOGRAV data release, which includes 9-year observational data of 37 millisecond pulsars using the 100-meter Green Bank telescope and 305-meter Arecibo telescope. We find that the algorithm used in this paper can lower the influence caused by noises in timing residuals, and improve the long-term stability of pulsar time. Results show that the long-term (> 1 yr) frequency stability of the pulsar time is better than 3.4×10^-15.
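    The three-step procedure described above (densify with a cubic spline, smooth, then form a weighted average) can be outlined as follows. This is an illustrative sketch, not the authors' implementation: the timing residuals and weights are synthetic, and a simple moving average stands in for the Vondrak filter, which is not available in standard scientific Python libraries.

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)

# Synthetic, unevenly sampled timing residuals (pulsar time minus atomic time, microseconds)
# for three hypothetical pulsars.
pulsars = []
for _ in range(3):
    t = np.sort(rng.uniform(0.0, 3000.0, size=60))              # observation epochs in days
    resid = 0.5 * np.sin(t / 400.0) + 0.1 * rng.normal(size=t.size)
    pulsars.append((t, resid))

grid = np.linspace(0.0, 3000.0, 600)                             # common, evenly spaced grid

def smooth(y, width=15):
    """Stand-in for the Vondrak filter: a simple moving average."""
    return np.convolve(y, np.ones(width) / width, mode="same")

densified = []
for t, resid in pulsars:
    spline = CubicSpline(t, resid)             # step 1: densify and regularize the sampling
    densified.append(smooth(spline(grid)))     # step 2: suppress high-frequency noise

# Step 3: weighted average (placeholder weights; in practice they would reflect
# each pulsar's timing precision).
weights = np.array([0.5, 0.3, 0.2])
ensemble = np.average(np.vstack(densified), axis=0, weights=weights)
print(ensemble[:5])
```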

  20. The Improbability scale

    SciTech Connect

    Ritchie, David J.; /Fermilab

    2005-03-01

    The Improbability Scale (IS) is proposed as a way of communicating to the general public the improbability (and by implication, the probability) of events predicted as the result of scientific research. Through the use of the Improbability Scale, the public will be able to evaluate more easily the relative risks of predicted events and draw proper conclusions when asked to support governmental and public policy decisions arising from that research.

  1. Scaling, Universality, and Geomorphology

    NASA Astrophysics Data System (ADS)

    Dodds, Peter Sheridan; Rothman, Daniel H.

    Theories of scaling apply wherever similarity exists across many scales. This similarity may be found in geometry and in dynamical processes. Universality arises when the qualitative character of a system is sufficient to quantitatively predict its essential features, such as the exponents that characterize scaling laws. Within geomorphology, two areas where the concepts of scaling and universality have found application are the geometry of river networks and the statistical structure of topography. We begin this review with a pedagogical presentation of scaling and universality. We then describe recent progress made in applying these ideas to networks and topography. This overview leads to a synthesis that attempts a classification of surface and network properties based on generic mechanisms and geometric constraints. We also briefly review how scaling and universality have been applied to related problems in sedimentology-specifically, the origin of stromatolites and the relation of the statistical properties of submarine-canyon topography to the size distribution of turbidite deposits. Throughout the review, our intention is to elucidate not only the problems that can be solved using these concepts, but also those that cannot.

  2. Earthquake Scaling Relations

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; Boettcher, M.; Richardson, E.

    2002-12-01

    Using scaling relations to understand nonlinear geosystems has been an enduring theme of Don Turcotte's research. In particular, his studies of scaling in active fault systems have led to a series of insights about the underlying physics of earthquakes. This presentation will review some recent progress in developing scaling relations for several key aspects of earthquake behavior, including the inner and outer scales of dynamic fault rupture and the energetics of the rupture process. The proximate observations of mining-induced, friction-controlled events obtained from in-mine seismic networks have revealed a lower seismicity cutoff at a seismic moment Mmin near 10^9 Nm and a corresponding upper frequency cutoff near 200 Hz, which we interpret in terms of a critical slip distance for frictional drop of about 10^-4 m. Above this cutoff, the apparent stress scales as M^1/6 up to magnitudes of 4-5, consistent with other near-source studies in this magnitude range (see special session S07, this meeting). Such a relationship suggests a damage model in which apparent fracture energy scales with the stress intensity factor at the crack tip. Under the assumption of constant stress drop, this model implies an increase in rupture velocity with seismic moment, which successfully predicts the observed variation in corner frequency and maximum particle velocity. Global observations of oceanic transform faults (OTFs) allow us to investigate a situation where the outer scale of earthquake size may be controlled by dynamics (as opposed to geologic heterogeneity). The seismicity data imply that the effective area for OTF moment release, AE, depends on the thermal state of the fault but is otherwise independent of the fault's average slip rate; i.e., AE ~ AT, where AT is the area above a reference isotherm. The data are consistent with β = 1/2 below an upper cutoff moment Mmax that increases with AT and yield the interesting scaling relation Amax ~ AT^1/2. Taken together, the OTF

  3. Comparative interrater reliability of Asian Stroke Disability Scale, modified Rankin Scale and Barthel Index in patients with brain infarction

    PubMed Central

    Ghandehari, Kavian; Ghandehari, Kosar; Saffarian-Toosi, Ghazaleh; Masoudinezhad, Shahram; Yazdani, Siamak; Nooraddin, Ali; Ebrahimzadeh, Saeed; Ahmadi, Fahimeh; Abrishamchi, Fatemeh

    2012-01-01

    BACKGROUND This study tried to develop an Asian Stroke Disability Scale (ASDS) and compared its interrater reliability with the modified Rankin Scale (mRS) and Barthel Index (BI). METHODS Three items including self-care, mobility, and daily activities were selected as variables for development of the ASDS. The variables were provisionally graded on a 2- to 4-point scale based on the importance of each item. Each of the variables was categorized into 3 categories. Afterward, 125 rater-patient assessments for each scale (mRS, BI, and ASDS) were performed on 25 stroke patients by 5 raters. For categorization of functional impairment as minor or major, the scores of mRS, BI and ASDS were categorized as ≤ 2 and > 2, < 90 and ≥ 90, and < 3 and ≥ 3, respectively. RESULTS The quantitative variability of BI, mRS, and ASDS scores was not significant (P = 0.379; P = 0.780; and P = 0.835, respectively). Interrater variability of mRS, BI, and ASDS scores based on qualitative categorization was not significant (P = 1.000; P = 0.978; and P = 0.901, respectively). Paired interrater variability of mRS, BI, and ASDS scores based on qualitative categorization was not significant (P > 0.05). CONCLUSION The ASDS is easy to use, requires less than 1 minute to complete and is as valid as mRS and BI in assessment of functional impairment of patients with stroke. PMID:23359790

  4. Fast ignition breakeven scaling.

    SciTech Connect

    Slutz, Stephen A.; Vesey, Roger Alan

    2005-01-01

    A series of numerical simulations have been performed to determine scaling laws for fast ignition break even of a hot spot formed by energetic particles created by a short pulse laser. Hot spot break even is defined to be when the fusion yield is equal to the total energy deposited in the hot spot through both the initial compression and the subsequent heating. In these simulations, only a small portion of a previously compressed mass of deuterium-tritium fuel is heated on a short time scale, i.e., the hot spot is tamped by the cold dense fuel which surrounds it. The hot spot tamping reduces the minimum energy required to obtain break even as compared to the situation where the entire fuel mass is heated, as was assumed in a previous study [S. A. Slutz, R. A. Vesey, I. Shoemaker, T. A. Mehlhorn, and K. Cochrane, Phys. Plasmas 7, 3483 (2004)]. The minimum energy required to obtain hot spot break even is given approximately by the scaling law E_T = 7.5(ρ/100)^-1.87 kJ for tamped hot spots, as compared to the previously reported scaling of E_UT = 15.3(ρ/100)^-1.5 kJ for untamped hot spots. The size of the compressed fuel mass and the focusability of the particles generated by the short pulse laser determine which scaling law to use for an experiment designed to achieve hot spot break even.
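    The two scaling laws quoted above are easy to compare numerically. The sketch below simply evaluates the tamped and untamped break-even energies over a few compressed-fuel densities; the density values are arbitrary illustration points, and units of g/cm^3 for ρ are assumed since the abstract does not state them.

```python
# Hot-spot break-even scaling laws from the abstract (rho assumed in g/cm^3, energies in kJ).
def e_tamped(rho):
    return 7.5 * (rho / 100.0) ** -1.87

def e_untamped(rho):
    return 15.3 * (rho / 100.0) ** -1.5

for rho in (100.0, 300.0, 500.0):   # illustrative compressed-fuel densities
    print(f"rho = {rho:5.0f}  tamped = {e_tamped(rho):6.2f} kJ  "
          f"untamped = {e_untamped(rho):6.2f} kJ")
```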

  5. Scaling of Thermoacoustic Refrigerators

    NASA Astrophysics Data System (ADS)

    Li, Y.; Zeegers, J. C. H.; ter Brake, H. J. M.

    2008-03-01

    The possibility of scaling-down thermoacoustic refrigerators is theoretically investigated. Standing-wave systems are considered as well as traveling-wave. In the former case, a reference system is taken that consists of a resonator tube (50 cm) with a closed end and a PVC stack (length 5 cm). Helium is used at a mean pressure of 10 bar and an amplitude of 1 bar. The resulting operating frequency is 1 kHz. The variation of the performance of the refrigerator when scaled down in size is computed under the prerequisites that the temperature drop over the stack or the energy flux or its density are fixed. The analytical results show that there is a limitation in scaling-down a standing-wave thermoacoustic refrigerator due to heat conduction. Similar scaling trends are considered in traveling-wave refrigerators. The traveling-wave reference system consists of a feedback inertance tube of 0.567 m long, inside diameter 78 mm, a compliance volume of 2830 cm3 and a 24 cm thermal buffer tube. The regenerator is sandwiched between two heat exchangers. The system is operated at 125 Hz and filled with 30 bar helium gas. Again, the thermal conductance forms a practical limitation in down-scaling.

  6. Universities scale like cities.

    PubMed

    van Raan, Anthony F J

    2013-01-01

    Recent studies of urban scaling show that important socioeconomic city characteristics such as wealth and innovation capacity exhibit a nonlinear, particularly a power law scaling with population size. These nonlinear effects are common to all cities, with similar power law exponents. These findings mean that the larger the city, the more disproportionately it is a place of wealth and innovation. Local properties of cities cause a deviation from the expected behavior as predicted by the power law scaling. In this paper we demonstrate that universities show a similar behavior to cities in the distribution of the 'gross university income' in terms of total number of citations over 'size' in terms of total number of publications. Moreover, the power law exponents for university scaling are comparable to those for urban scaling. We find that deviations from the expected behavior can indeed be explained by specific local properties of universities, particularly the field-specific composition of a university, and its quality in terms of field-normalized citation impact. By studying both the set of the 500 largest universities worldwide and a specific subset of these 500 universities, the top-100 European universities, we are also able to distinguish between properties of universities with as well as without selection of one specific local property, the quality of a university in terms of its average field-normalized citation impact. It also reveals an interesting observation concerning the working of a crucial property in networked systems, preferential attachment.

  7. Fire toxicity scaling

    SciTech Connect

    Braun, E.; Levin, B.C.; Paabo, M.; Gurman, J.; Holt, T.

    1987-02-01

    The toxicity of the thermal-decomposition products from two flexible polyurethane foams (with and without a fire retardant) and a cotton upholstery fabric was evaluated by a series of small-scale tests and large-scale single mock-up upholstery chair tests during smoldering or flaming decomposition. In addition, other fire property data such as rates of heat release, effective heats of combustion, specific gas species yields, and smoke obscuration were measured. The degree of toxicity observed during and following the flaming tests (both large-scale room burns and the NBS Toxicity Tests) could be explained by a 3-Gas Model which includes the combined toxicological effects of CO, CO2, and HCN. Essentially, no animal deaths were noted during the thirty-minute exposures to the non-flaming or smoldering combustion products produced in the NBS Toxicity Test Method or the large-scale room test. In the large-scale room tests, little toxicological difference was noted between decomposition products from the burn room and a second room 12 meters away.

  8. Full Scale Tunnel model

    NASA Technical Reports Server (NTRS)

    1929-01-01

    Interior view of Full-Scale Tunnel (FST) model. (Small human figures have been added for scale.) On June 26, 1929, Elton W. Miller wrote to George W. Lewis proposing the construction of a model of the full-scale tunnel. 'The excellent energy ratio obtained in the new wind tunnel of the California Institute of Technology suggests that before proceeding with our full scale tunnel design, we ought to investigate the effect on energy ratio of such factors as: 1. small included angle for the exit cone; 2. carefully designed return passages of circular section as far as possible, without sudden changes in cross sections; 3. tightness of walls. It is believed that much useful information can be obtained by building a model of about 1/16 scale, that is, having a closed throat of 2 ft. by 4 ft. The outside dimensions would be about 12 ft. by 25 ft. in plan and the height 4 ft. Two propellers will be required about 28 in. in diameter, each to be driven by direct current motor at a maximum speed of 4500 R.P.M. Provision can be made for altering the length of certain portions, particularly the exit cone, and possibly for the application of boundary layer control in order to effect satisfactory air flow.

  9. Atomic Scale Plasmonic Switch.

    PubMed

    Emboras, Alexandros; Niegemann, Jens; Ma, Ping; Haffner, Christian; Pedersen, Andreas; Luisier, Mathieu; Hafner, Christian; Schimmel, Thomas; Leuthold, Juerg

    2016-01-13

    The atom sets an ultimate scaling limit to Moore's law in the electronics industry. While electronics research already explores atomic-scale devices, photonics research still deals with devices at the micrometer scale. Here we demonstrate that photonic scaling, similar to electronics, is only limited by the atom. More precisely, we introduce an electrically controlled plasmonic switch operating at the atomic scale. The switch allows for fast and reproducible switching by means of the relocation of an individual or, at most, a few atoms in a plasmonic cavity. Depending on the location of the atom, either of two distinct plasmonic cavity resonance states is supported. Experimental results show reversible digital optical switching with an extinction ratio of 9.2 dB and operation at room temperature up to MHz with femtojoule (fJ) power consumption for a single switch operation. This demonstration of an integrated quantum device allowing control of photons at the atomic level opens intriguing perspectives for a fully integrated and highly scalable chip platform, a platform where optics, electronics, and memory may be controlled at the single-atom level.

  10. Scale adaptive compressive tracking.

    PubMed

    Zhao, Pengpeng; Cui, Shaohui; Gao, Min; Fang, Dan

    2016-01-01

    Recently, the compressive tracking (CT) method (Zhang et al. in Proceedings of European conference on computer vision, pp 864-877, 2012) has attracted much attention due to its high efficiency, but it cannot deal well with scale-changing objects because of its constant tracking box. To address this issue, in this paper we propose a scale adaptive CT approach, which adaptively adjusts the scale of the tracking box with the size variation of the objects. Our method significantly improves CT in three aspects: Firstly, the scale of the tracking box is adaptively adjusted according to the size of the objects. Secondly, in the CT method, all the compressive features are assumed to be independent and to contribute equally to the classifier. Actually, different compressive features have different confidence coefficients. In our proposed method, the confidence coefficients of features are computed and used to weight their contributions to the classifier. Finally, in the CT method, the learning parameter λ is constant, which can result in large tracking drift under object occlusion or large appearance variation. In our proposed method, a variable learning parameter λ is adopted, which can be adjusted according to the object appearance variation rate. Extensive experiments on the CVPR2013 tracking benchmark demonstrate the superior performance of the proposed method compared to state-of-the-art tracking algorithms. PMID:27386298
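    One of the three modifications, the variable learning parameter, can be illustrated with a toy update rule in which the classifier's running feature statistics are blended with the current frame's statistics using a λ that shrinks as the appearance changes faster. The mapping, thresholds, and variable names below are invented for illustration and are not the authors' formulation.

```python
import numpy as np

def adaptive_lambda(variation_rate, lam_min=0.75, lam_max=0.95):
    """Map an appearance-variation rate in [0, 1] to a learning parameter:
    faster appearance change -> smaller lambda -> faster model update."""
    return lam_max - (lam_max - lam_min) * np.clip(variation_rate, 0.0, 1.0)

def update_model(mu_old, mu_new, variation_rate):
    """Blend running feature means with the current frame's estimates."""
    lam = adaptive_lambda(variation_rate)
    return lam * mu_old + (1.0 - lam) * mu_new

mu = np.zeros(4)                                  # running means of 4 compressive features
frame_estimate = np.array([1.0, 0.5, -0.2, 0.3])  # estimates from the current frame
print(update_model(mu, frame_estimate, variation_rate=0.1))   # slow appearance change
print(update_model(mu, frame_estimate, variation_rate=0.9))   # rapid appearance change
```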

  11. No-scale inflation

    NASA Astrophysics Data System (ADS)

    Ellis, John; Garcia, Marcos A. G.; Nanopoulos, Dimitri V.; Olive, Keith A.

    2016-05-01

    Supersymmetry is the most natural framework for physics above the TeV scale, and the corresponding framework for early-Universe cosmology, including inflation, is supergravity. No-scale supergravity emerges from generic string compactifications and yields a non-negative potential, and is therefore a plausible framework for constructing models of inflation. No-scale inflation naturally yields predictions similar to those of the Starobinsky model based on R + R^2 gravity, with a tilted spectrum of scalar perturbations, n_s ~ 0.96, and small values of the tensor-to-scalar perturbation ratio r < 0.1, as favoured by Planck and other data on the cosmic microwave background (CMB). Detailed measurements of the CMB may provide insights into the embedding of inflation within string theory as well as its links to collider physics.

  12. Scales of rock permeability

    NASA Astrophysics Data System (ADS)

    Guéguen, Y.; Gavrilenko, P.; Le Ravalec, M.

    1996-05-01

    Permeability is a transport property which is currently measured in Darcy units. Although this unit is very convenient for most purposes, its use prevents one from recognizing that permeability has units of length squared. Physically, the square root of permeability can thus be seen as a characteristic length or a characteristic pore size. At the laboratory scale, the identification of this characteristic length is a good example of how experimental measurements and theoretical modelling can be integrated. Three distinct identifications are of current use, relying on three different techniques: image analysis of thin sections, mercury porosimetry and nitrogen adsorption. In each case, one or several theoretical models allow us to derive permeability from the experimental data (equivalent channel models, statistical models, effective media models, percolation and network models). Permeability varies with pressure and temperature, and this is a decisive point for any extrapolation to crustal conditions. As far as pressure is concerned, most of the effect is due to cracks, and a model which does not incorporate this fact will miss its goal. Temperature-induced modifications can be the result of several processes: thermal cracking (due to thermal expansion mismatch and anisotropy, or to fluid pressure build up) and pressure solution are the two main ones. Experimental data on pressure and temperature effects are difficult to obtain, but they are urgently needed. Finally, an important issue is: to what extent are these small-scale data and models relevant when considering formations at the oil reservoir scale, or at the crust scale? At larger scales, the identification of the characteristic scale is also a major goal, which is examined here.

  13. Angular Scaling In Jets

    SciTech Connect

    Jankowiak, Martin; Larkoski, Andrew J.; /SLAC

    2012-02-17

    We introduce a jet shape observable defined for an ensemble of jets in terms of two-particle angular correlations and a resolution parameter R. This quantity is infrared and collinear safe and can be interpreted as a scaling exponent for the angular distribution of mass inside the jet. For small R it is close to the value 2 as a consequence of the approximately scale invariant QCD dynamics. For large R it is sensitive to non-perturbative effects. We describe the use of this correlation function for tests of QCD, for studying underlying event and pile-up effects, and for tuning Monte Carlo event generators.

  14. Scale invariance in biophysics

    NASA Astrophysics Data System (ADS)

    Stanley, H. Eugene

    2000-06-01

    In this general talk, we offer an overview of some problems of interest to biophysicists, medical physicists, and econophysicists. These include DNA sequences, brain plaques in Alzheimer patients, heartbeat intervals, and time series giving price fluctuations in economics. These problems have the common feature that they exhibit features that appear to be scale invariant. Particularly vexing is the problem that some of these scale invariant phenomena are not stationary: their statistical properties vary from one time interval to the next or from one position to the next. We will discuss methods, such as wavelet methods and multifractal methods, to cope with these problems.

  15. xi-scaling

    SciTech Connect

    Gunion, J.F.

    1980-04-01

    A class of purely kinematical corrections to xi-scaling is exposed. These corrections are inevitably present in any realistic hadron model with spin and gauge invariance and lead to phenomenologically important M_hadron^2/Q^2 corrections to Nachtmann moments.

  16. Scale, Composition, and Technology

    ERIC Educational Resources Information Center

    Victor, Peter A.

    2009-01-01

    Scale (gross domestic product), composition (goods and services), and technology (impacts per unit of goods and services) in combination are the proximate determinants in an economy of the resources used, wastes generated, and land transformed. In this article, we examine relationships among these determinants to understand better the contribution…

  17. Scaling the Salary Heights.

    ERIC Educational Resources Information Center

    McNamee, Mike

    1986-01-01

    Federal cutbacks have created new demand for fund-raisers everywhere. Educational fund-raisers are thinking about "pay for performance"--incentive-based pay plans that can help them retain, reward, and motivate talented fund raisers within the tight pay scales common at colleges and universities. (MLW)

  18. Build an Interplanetary Scale.

    ERIC Educational Resources Information Center

    Matthews, Catherine; And Others

    1997-01-01

    Describes an activity in which students use a bathroom scale and a long board to see how their weight changes on other planets and the moon. Materials list, procedures, tables of planet radii, comparative values, and gravitational ratios are provided. (DDR)
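    The arithmetic behind the activity above is a single multiplication: a weight measured on Earth times each body's surface-gravity ratio. The short sketch below does that calculation with rounded, approximate ratio values supplied here only for illustration (the article provides its own tables).

```python
# Approximate surface gravity relative to Earth (rounded illustrative values).
gravity_ratio = {
    "Moon": 0.17, "Mercury": 0.38, "Mars": 0.38, "Venus": 0.91,
    "Saturn": 1.06, "Neptune": 1.14, "Jupiter": 2.5,
}

earth_weight_lb = 100.0   # reading from the bathroom scale on Earth
for body, ratio in gravity_ratio.items():
    print(f"{body:8s}: {earth_weight_lb * ratio:6.1f} lb")
```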

  19. Fundamentals of Zoological Scaling.

    ERIC Educational Resources Information Center

    Lin, Herbert

    1982-01-01

    The following animal characteristics are considered to determine how properties and characteristics of various systems change with system size (scaling): skeletal weight, speed of running, height and range of jumping, food consumption, heart rate, lifetime, locomotive efficiency, frequency of wing-flapping, and maximum sizes of flying and hovering…

  20. Allometric scaling of countries

    NASA Astrophysics Data System (ADS)

    Zhang, Jiang; Yu, Tongkui

    2010-11-01

    As huge complex systems consisting of geographic regions, natural resources, people and economic entities, countries follow the allometric scaling law that is ubiquitous in ecological and urban systems. We systematically investigated the allometric scaling relationships between a large number of macroscopic properties and the geographic (area), demographic (population) and economic (GDP, gross domestic product) sizes of countries, respectively. We found that most of the economic, trade, energy consumption, and communication related properties have significant super-linear (the exponent is larger than 1) or nearly linear allometric scaling relations with the GDP. Meanwhile, the geographic (arable area, natural resources, etc.), demographic (labor force, military age population, etc.) and transportation-related properties (road length, airports) have significant and sub-linear (the exponent is smaller than 1) allometric scaling relations with area. Several differences between countries and cities in their power law relations with population were pointed out. First, population increases sub-linearly with area in countries. Second, the GDP increases linearly in countries, rather than super-linearly as in cities. Finally, electricity or oil consumption per capita increases with population faster than in cities.

  1. Scaling up Education Reform

    ERIC Educational Resources Information Center

    Gaffney, Jon D. H.; Richards, Evan; Kustusch, Mary Bridget; Ding, Lin; Beichner, Robert J.

    2008-01-01

    The SCALE-UP (Student-Centered Activities for Large Enrollment for Undergraduate Programs) project was developed to implement reforms designed for small classes into large physics classes. Over 50 schools across the country, ranging from Wake Technical Community College to Massachusetts Institute of Technology (MIT), have adopted it for classes of…

  2. Scaling up Psycholinguistics

    ERIC Educational Resources Information Center

    Smith, Nathaniel J.

    2011-01-01

    This dissertation contains several projects, each addressing different questions with different techniques. In chapter 1, I argue that they are unified thematically by their goal of "scaling up psycholinguistics"; they are all aimed at analyzing large data-sets using tools that reveal patterns to propose and test mechanism-neutral hypotheses about…

  3. The Infant Rating Scale.

    ERIC Educational Resources Information Center

    Lindsay, G. A.

    1980-01-01

    A study was made of the usefulness of the Infant Rating Scale (IRS) in the early identification of learning difficulties. Thirteen hundred five-year-olds were rated by their teachers after one term in school. The structure of the IRS, its reliability, and predictive validity are examined. (Author/SJL)

  4. The Spiritual Competency Scale

    ERIC Educational Resources Information Center

    Robertson, Linda A.

    2010-01-01

    This study describes the development of the Spiritual Competency Scale, which was based on the Association for Spiritual, Ethical and Religious Values in Counseling's original Spiritual Competencies. Participants were 662 counseling students from religiously based and secular universities nationwide. Exploratory factor analysis revealed a 22-item,…

  5. Scales of mantle heterogeneity

    NASA Astrophysics Data System (ADS)

    Moore, J. C.; Akber-Knutson, S.; Konter, J.; Kellogg, J.; Hart, S.; Kellogg, L. H.; Romanowicz, B.

    2004-12-01

    A long-standing question in mantle dynamics concerns the scale of heterogeneity in the mantle. Mantle convection tends to both destroy (through stirring) and create (through melt extraction and subduction) heterogeneity in bulk and trace element composition. Over time, these competing processes create variations in geochemical composition along mid-oceanic ridges and among oceanic islands, spanning a range of scales from extremely long wavelength (for example, the DUPAL anomaly) to very small scale (for example, variations amongst melt inclusions). While geochemical data and seismic observations can be used to constrain the length scales of mantle heterogeneity, dynamical mixing calculations can illustrate the processes and timescales involved in stirring and mixing. At the Summer 2004 CIDER workshop on Relating Geochemical and Seismological Heterogeneity in the Earth's Mantle, an interdisciplinary group evaluated scales of heterogeneity in the Earth's mantle using a combined analysis of geochemical data, seismological data and results of numerical models of mixing. We mined the PetDB database for isotopic data from glass and whole rock analyses for the Mid-Atlantic Ridge (MAR) and the East Pacific Rise (EPR), projecting them along the ridge length. We examined Sr isotope variability along the East Pacific rise by looking at the difference in Sr ratio between adjacent samples as a function of distance between the samples. The East Pacific Rise exhibits an overall bowl shape of normal MORB characteristics, with higher values in the higher latitudes (there is, however, an unfortunate gap in sampling, roughly 2000 km long). These background characteristics are punctuated with spikes in values at various locations, some, but not all of which are associated with off-axis volcanism. A Lomb-Scargle periodogram for unevenly spaced data was utilized to construct a power spectrum of the scale lengths of heterogeneity along both ridges. Using the same isotopic systems (Sr, Nd
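    The Lomb-Scargle periodogram mentioned above for unevenly spaced data is available in SciPy. The sketch below applies it to a synthetic along-ridge isotope profile; the sampling positions, signal, and wavelength band are invented stand-ins for the ridge data described in the abstract.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)

# Synthetic, unevenly spaced along-ridge samples: distance in km and an isotope anomaly.
x = np.sort(rng.uniform(0.0, 5000.0, size=200))
signal = 0.8 * np.sin(2 * np.pi * x / 1200.0) + 0.3 * rng.normal(size=x.size)

# Evaluate the periodogram over heterogeneity wavelengths from 200 to 4000 km.
wavelengths = np.linspace(200.0, 4000.0, 500)
ang_freqs = 2 * np.pi / wavelengths                 # lombscargle expects angular frequencies
power = lombscargle(x, signal - signal.mean(), ang_freqs)

print("dominant wavelength ~", float(wavelengths[np.argmax(power)]), "km")
```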

  6. Scaling Applications in hydrology

    NASA Astrophysics Data System (ADS)

    Gebremichael, Mekonnen

    2010-05-01

    Besides downscaling applications, scaling properties of hydrological fields can be used to address a variety of research questions. In this presentation, we will use scaling properties to address questions related to satellite evapotranspiration algorithms, precipitation-streamflow relationships, and hydrological model calibration. Most of the existing satellite-based evapotranspiration (ET) algorithms have been developed using fine-resolution Landsat TM and ASTER data. However, these algorithms are often applied to coarse-resolution MODIS data. Our results show that applying the satellite-based algorithms, which are developed at ASTER resolution, to MODIS resolution leads to ET estimates that (1) preserve the overall spatial pattern (spatial correlation in excess of 0.90), (2) increase the spatial standard deviation and maximum value, and (3) have modest conditional bias: they underestimate low ET rates (< 1 mm/day) and overestimate high ET rates; the overestimation is within 20%. The results emphasize the need for exploring alternatives for estimation of ET from MODIS. Understanding the relationship between the scaling properties of precipitation and streamflow is important in a number of applications. We present the results of a detailed river flow fluctuation analysis on daily records from 14 stations in the Flint River basin in Georgia in the United States, with a focus on the effect of watershed area on the long memory of river flow fluctuations. The areas of the watersheds draining to the stations range from 22 km2 to 19,606 km2. Results show that large watersheds have more persistent flow fluctuations and stronger long-term (time greater than the scale break point) memory than small watersheds, while the precipitation time series shows weak long-term correlation. We conclude that a watershed acts as a 'filter' for 'white noise' precipitation, with more significant filtering in the case of large watersheds. Finally, we compare the scaling properties of simulated and observed spatial soil

  7. Scaling macroscopic aquatic locomotion

    NASA Astrophysics Data System (ADS)

    Gazzola, Mattia; Argentina, Médéric; Mahadevan, L.

    2014-10-01

    Inertial aquatic swimmers that use undulatory gaits range in length L from a few millimetres to 30 metres, across a wide array of biological taxa. Using elementary hydrodynamic arguments, we uncover a unifying mechanistic principle characterizing their locomotion by deriving a scaling relation that links swimming speed U to body kinematics (tail beat amplitude A and frequency ω) and fluid properties (kinematic viscosity ν). This principle can be simply couched as the power law Re ~ Sw^α, where Re = UL/ν >> 1 and Sw = ωAL/ν, with α = 4/3 for laminar flows, and α = 1 for turbulent flows. Existing data from over 1,000 measurements on fish, amphibians, larvae, reptiles, mammals and birds, as well as direct numerical simulations are consistent with our scaling. We interpret our results as the consequence of the convergence of aquatic gaits to the performance limits imposed by hydrodynamics.
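    The scaling law above can be inverted into a rough speed estimate from body kinematics: compute Sw = ωAL/ν, take Re ≈ Sw^α, then U = Re·ν/L. The sketch below does this for an invented fish-like swimmer; the prefactor is taken as 1 and the laminar-to-turbulent cutoff is an assumed round number, so the result is an order-of-magnitude illustration rather than a fitted prediction.

```python
import math

# Invented swimmer kinematics; prefactor of the Re ~ Sw^alpha law assumed to be 1.
nu = 1.0e-6                      # kinematic viscosity of water, m^2/s
L = 0.3                          # body length, m
A = 0.06                         # tail-beat amplitude, m
omega = 2.0 * math.pi * 2.5      # tail-beat angular frequency, rad/s (2.5 Hz)

Sw = omega * A * L / nu
alpha = 4.0 / 3.0 if Sw < 1.0e4 else 1.0   # assumed crossover from laminar to turbulent
Re = Sw ** alpha
U = Re * nu / L

print(f"Sw = {Sw:.3g}, Re = {Re:.3g}, predicted speed U ~ {U:.2f} m/s")
```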

  8. Scaling macroscopic aquatic locomotion

    NASA Astrophysics Data System (ADS)

    Gazzola, Mattia; Argentina, Mederic; Mahadevan, Lakshminarayanan

    2014-11-01

    Inertial aquatic swimmers that use undulatory gaits range in length L from a few millimeters to 30 meters, across a wide array of biological taxa. Using elementary hydrodynamic arguments, we uncover a unifying mechanistic principle characterizing their locomotion by deriving a scaling relation that links swimming speed U to body kinematics (tail beat amplitude A and frequency ω) and fluid properties (kinematic viscosity ν). This principle can be simply couched as the power law Re ~ Sw^α, where Re = UL/ν >> 1 and Sw = ωAL/ν, with α = 4/3 for laminar flows, and α = 1 for turbulent flows. Existing data from over 1000 measurements on fish, amphibians, larvae, reptiles, mammals and birds, as well as direct numerical simulations are consistent with our scaling. We interpret our results as the consequence of the convergence of aquatic gaits to the performance limits imposed by hydrodynamics.

  9. ELECTRONIC PULSE SCALING CIRCUITS

    DOEpatents

    Cooke-Yarborough, E.H.

    1958-11-18

    Electronic pulse scaling circuits of the kind comprising a series of bi-stable elements connected in sequence, usually in the form of a ring so as to be cyclically repetitive at the highest scaling factor, are described. The scaling circuit comprises a ring system of bi-stable elements each arranged on turn-off to cause a succeeding element of the ring to be turned on, and one being arranged on turn-off to cause a further element of the ring to be turned on. In addition, separate means are provided for applying a turn-off pulse to all the elements simultaneously, and for resetting the elements to a starting condition at the end of each cycle.

  10. An elastica arm scale.

    PubMed

    Bosi, F; Misseroni, D; Dal Corso, F; Bigoni, D

    2014-09-01

    The concept of a 'deformable arm scale' (completely different from a traditional rigid arm balance) is theoretically introduced and experimentally validated. The idea is not intuitive, but is the result of nonlinear equilibrium kinematics of rods inducing configurational forces, so that deflection of the arms becomes necessary for equilibrium, which would be impossible for a rigid system. In particular, the rigid arms of usual scales are replaced by a flexible elastic lamina, free to slide in a frictionless and inclined sliding sleeve, which can reach a unique equilibrium configuration when two vertical dead loads are applied. Prototypes designed to demonstrate the feasibility of the system show a high accuracy in the measurement of load within a certain range of use. Finally, we show that the presented results are strongly related to snaking of confined beams, with implications for locomotion of serpents, plumbing and smart oil drilling. PMID:25197248

  11. Fundamentals of zoological scaling

    NASA Astrophysics Data System (ADS)

    Lin, Herbert

    1982-01-01

    Most introductory physics courses emphasize highly idealized problems with unique well-defined answers. Though many textbooks complement these problems with estimation problems, few books present anything more than an elementary discussion of scaling. This paper presents some fundamentals of scaling in the zoological domain—a domain complex by any standard, but one also well suited to illustrate the power of very simple physical ideas. We consider the following animal characteristics: skeletal weight, speed of running, height and range of jumping, food consumption, heart rate, lifetime, locomotive efficiency, frequency of wing flapping, and maximum sizes of animals that fly and hover. These relationships are compared to zoological data and everyday experience, and match reasonably well.

  12. The Extragalactic Distance Scale

    NASA Astrophysics Data System (ADS)

    Livio, Mario; Donahue, Megan; Panagia, Nino

    1997-07-01

    Participants; Preface; Foreword; Early history of the distance scale problem, S. van den Bergh; Cosmology: From Hubble to HST, M. S. Turner; Age constraints nucleocosmochronology, J. Truran; The ages of globular clusters, P. Demarque; The linearity of the Hubble flow, M. Postman; Gravitational lensing and the extragalactic distance scale, R. D. Blandford and T. Kundic; Using the cosmic microwave background to constrain the Hubble constant, A. Lasenby and M. Jones; Cepheids as distance indicators, N. R. Tanvir; The I-band Tully-Fisher relation and the Hubble constant, R. Giovanelli; The calibration of type Ia supernovae as standard candles, A. Saha; Focusing in on the Hubble constant, G. A. Tammann & M. Federspiel; Interim report on the calibration of the Tully-Fisher relation in the HST Key Project to measure the Hubble constant, J. Mould et al.; Hubble Space Telescope Key Project on the extragalactic distance scale, W. L. Freedman, B. F. Madore and R. C. Kennicutt; Novae as distance indicators, M. Livio; Verifying the planetary nebula luminosity function method, G. H. Jacoby; On the possible use of radio supernovae for distance determinations, K. W. Weiler et al.; Post-AGB stars as standard candles, H. Bond; Helium core flash at the tip of the red giant branch: a population II distance indicator, B. F. Madore, W. L. Freedman and S. Sakai; Globular clusters as distance indicators, B. C. Whitmore; Detached eclipsing binaries as primary distance and age indicators, B. Paczynski; Light echoes: geometric measurement of galaxy distances, W. B. Sparks; The SBF survey of galaxy distances, J. L. Tonry; Extragalactic distance scales: The long and short of it, V. Trimble.

  13. Earthquake Apparent Stress Scaling

    NASA Astrophysics Data System (ADS)

    Walter, W. R.; Mayeda, K.; Ruppert, S.

    2002-12-01

    There is currently a disagreement within the geophysical community on the way earthquake energy scales with magnitude. One set of recent papers finds evidence that energy release per seismic moment (apparent stress) is constant (e.g. Choy and Boatwright, 1995; McGarr, 1999; Ide and Beroza, 2001). Another set of recent papers finds the apparent stress increases with magnitude (e.g. Kanamori et al., 1993; Abercrombie, 1995; Mayeda and Walter, 1996; Izutani and Kanamori, 2001). The resolution of this issue is complicated by the difficulty of accurately accounting for and determining the seismic energy radiated by earthquakes over a wide range of event sizes in a consistent manner. We have just started a project to reexamine this issue by analyzing aftershock sequences in the Western U.S. and Turkey using two different techniques. First we examine the observed regional S-wave spectra by fitting with a parametric model (Walter and Taylor, 2002) with and without variable stress drop scaling. Because the aftershock sequences have common stations and paths we can examine the S-wave spectra of events by size to determine what type of apparent stress scaling, if any, is most consistent with the data. Second we use regional coda envelope techniques (e.g. Mayeda and Walter, 1996; Mayeda et al, 2002) on the same events to directly measure energy and moment. The coda technique corrects for path and site effects using an empirical Green function technique and independent calibration with surface wave derived moments. Our hope is that by carefully analyzing a very large number of events in a consistent manner using two different techniques we can start to resolve this apparent stress scaling issue. This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under Contract No. W-7405-Eng-48.

  14. Earthquake Apparent Stress Scaling

    NASA Astrophysics Data System (ADS)

    Mayeda, K.; Walter, W. R.

    2003-04-01

    There is currently a disagreement within the geophysical community on the way earthquake energy scales with magnitude. One set of recent papers finds evidence that energy release per seismic moment (apparent stress) is constant (e.g. Choy and Boatwright, 1995; McGarr, 1999; Ide and Beroza, 2001). Another set of recent papers finds the apparent stress increases with magnitude (e.g. Kanamori et al., 1993; Abercrombie, 1995; Mayeda and Walter, 1996; Izutani and Kanamori, 2001). The resolution of this issue is complicated by the difficulty of accurately accounting for and determining the seismic energy radiated by earthquakes over a wide range of event sizes in a consistent manner. We have just started a project to reexamine this issue by applying the same methodology to a series of datasets that spans roughly 10 orders in seismic moment, M0. We will summarize recent results using the coda envelope methodology of Mayeda et al. (2003), which provides the most stable source spectral estimates to date. This methodology eliminates the complicating effects of lateral path heterogeneity, source radiation pattern, directivity, and site response (e.g., amplification, f-max and kappa). We find that in tectonically active continental crustal areas the total radiated energy scales as M0^0.25, whereas in regions of relatively younger oceanic crust, the stress drop is generally lower and exhibits a 1-to-1 scaling with moment. In addition to answering a fundamental question in earthquake source dynamics, this study addresses how one would scale small earthquakes in a particular region up to a future, more damaging earthquake. This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under Contract No. W-7405-Eng-48.

  15. Extreme Scale Visual Analytics

    SciTech Connect

    Steed, Chad A; Potok, Thomas E; Pullum, Laura L; Ramanathan, Arvind; Shipman, Galen M; Thornton, Peter E; Potok, Thomas E

    2013-01-01

    Given the scale and complexity of today's data, visual analytics is rapidly becoming a necessity rather than an option for comprehensive exploratory analysis. In this paper, we provide an overview of three applications of visual analytics for addressing the challenges of analyzing climate, text streams, and biosurveillance data. These systems feature varying levels of interaction and high performance computing technology integration to permit exploratory analysis of large and complex data of global significance.

  16. Beyond the Planck Scale

    SciTech Connect

    Giddings, Steven B.

    2009-12-15

    I outline motivations for believing that important quantum gravity effects lie beyond the Planck scale at both higher energies and longer distances and times. These motivations arise in part from the study of ultra-high energy scattering, and also from considerations in cosmology. I briefly summarize some inferences about such ultra-planckian physics, and clues we might pursue towards the principles of a more fundamental theory addressing the known puzzles and paradoxes of quantum gravity.

  17. Is this scaling nonlinear?

    PubMed

    Leitão, J C; Miotto, J M; Gerlach, M; Altmann, E G

    2016-07-01

    One of the most celebrated findings in complex systems in the last decade is that different indexes y (e.g. patents) scale nonlinearly with the population x of the cities in which they appear, i.e. y ∼ x^β, β ≠ 1. More recently, the generality of this finding has been questioned in studies that used new databases and different definitions of city boundaries. In this paper, we investigate the existence of nonlinear scaling, using a probabilistic framework in which fluctuations are accounted for explicitly. In particular, we show that this allows not only to (i) estimate β and confidence intervals, but also to (ii) quantify the evidence in favour of β ≠ 1 and (iii) test the hypothesis that the observations are compatible with the nonlinear scaling. We employ this framework to compare five different models to 15 different datasets and we find that the answers to points (i)-(iii) crucially depend on the fluctuations contained in the data, on how they are modelled, and on the fact that the city sizes are heavy-tailed distributed. PMID:27493764
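    A minimal version of point (i), estimating β with a confidence interval, is sketched below as an ordinary least-squares fit in log-log space on synthetic city data. The probabilistic models compared in the paper treat fluctuations explicitly and differ from this baseline; the data and parameter values here are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "cities": heavy-tailed populations x and an index y ~ x^beta with noise.
true_beta = 1.15
x = rng.lognormal(mean=11.0, sigma=1.0, size=300)            # city populations
y = 0.01 * x ** true_beta * rng.lognormal(0.0, 0.3, size=300)

# OLS fit of log y = log c + beta * log x.
X = np.column_stack([np.ones(x.size), np.log(x)])
coef, _, _, _ = np.linalg.lstsq(X, np.log(y), rcond=None)
beta_hat = coef[1]

# Approximate standard error and 95% confidence interval for beta.
resid = np.log(y) - X @ coef
sigma2 = resid @ resid / (x.size - 2)
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
print(f"beta = {beta_hat:.3f} +/- {1.96 * se:.3f} (95% CI)")
```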

  18. Is this scaling nonlinear?

    PubMed Central

    2016-01-01

    One of the most celebrated findings in complex systems in the last decade is that different indexes y (e.g. patents) scale nonlinearly with the population x of the cities in which they appear, i.e. y ∼ x^β, β ≠ 1. More recently, the generality of this finding has been questioned in studies that used new databases and different definitions of city boundaries. In this paper, we investigate the existence of nonlinear scaling, using a probabilistic framework in which fluctuations are accounted for explicitly. In particular, we show that this allows not only to (i) estimate β and confidence intervals, but also to (ii) quantify the evidence in favour of β ≠ 1 and (iii) test the hypothesis that the observations are compatible with the nonlinear scaling. We employ this framework to compare five different models to 15 different datasets and we find that the answers to points (i)–(iii) crucially depend on the fluctuations contained in the data, on how they are modelled, and on the fact that the city sizes are heavy-tailed distributed. PMID:27493764

  19. Scaling body size fluctuations

    PubMed Central

    Giometto, Andrea; Altermatt, Florian; Carrara, Francesco; Maritan, Amos; Rinaldo, Andrea

    2013-01-01

    The size of an organism matters for its metabolic, growth, mortality, and other vital rates. Scale-free community size spectra (i.e., size distributions regardless of species) are routinely observed in natural ecosystems and are the product of intra- and interspecies regulation of the relative abundance of organisms of different sizes. Intra- and interspecies distributions of body sizes are thus major determinants of ecosystems’ structure and function. We show experimentally that single-species mass distributions of unicellular eukaryotes covering different phyla exhibit both characteristic sizes and universal features over more than four orders of magnitude in mass. Remarkably, we find that the mean size of a species is sufficient to characterize its size distribution fully and that the latter has a universal form across all species. We show that an analytical physiological model accounts for the observed universality, which can be synthesized in a log-normal form for the intraspecies size distributions. We also propose how ecological and physiological processes should interact to produce scale-invariant community size spectra and discuss the implications of our results on allometric scaling laws involving body mass. PMID:23487793

  20. Urban scaling in Europe.

    PubMed

    Bettencourt, Luís M A; Lobo, José

    2016-03-01

    Over the last few decades, in disciplines as diverse as economics, geography and complex systems, a perspective has arisen proposing that many properties of cities are quantitatively predictable due to agglomeration or scaling effects. Using new harmonized definitions for functional urban areas, we examine to what extent these ideas apply to European cities. We show that while most large urban systems in Western Europe (France, Germany, Italy, Spain, UK) approximately agree with theoretical expectations, the small number of cities in each nation and their natural variability preclude drawing strong conclusions. We demonstrate how this problem can be overcome so that cities from different urban systems can be pooled together to construct larger datasets. This leads to a simple statistical procedure to identify urban scaling relations, which then clearly emerge as a property of European cities. We compare the predictions of urban scaling to Zipf's law for the size distribution of cities and show that while the former holds well the latter is a poor descriptor of European cities. We conclude with scenarios for the size and properties of future pan-European megacities and their implications for the economic productivity, technological sophistication and regional inequalities of an integrated European urban system. PMID:26984190

  1. Urban scaling in Europe

    PubMed Central

    Bettencourt, Luís M. A.; Lobo, José

    2016-01-01

    Over the last few decades, in disciplines as diverse as economics, geography and complex systems, a perspective has arisen proposing that many properties of cities are quantitatively predictable due to agglomeration or scaling effects. Using new harmonized definitions for functional urban areas, we examine to what extent these ideas apply to European cities. We show that while most large urban systems in Western Europe (France, Germany, Italy, Spain, UK) approximately agree with theoretical expectations, the small number of cities in each nation and their natural variability preclude drawing strong conclusions. We demonstrate how this problem can be overcome so that cities from different urban systems can be pooled together to construct larger datasets. This leads to a simple statistical procedure to identify urban scaling relations, which then clearly emerge as a property of European cities. We compare the predictions of urban scaling to Zipf's law for the size distribution of cities and show that while the former holds well the latter is a poor descriptor of European cities. We conclude with scenarios for the size and properties of future pan-European megacities and their implications for the economic productivity, technological sophistication and regional inequalities of an integrated European urban system. PMID:26984190

  2. Mechanism for salt scaling

    NASA Astrophysics Data System (ADS)

    Valenza, John J., II

    Salt scaling is superficial damage caused by freezing a saline solution on the surface of a cementitious body. The damage consists of the removal of small chips or flakes of binder. The discovery of this phenomenon in the early 1950s prompted hundreds of experimental studies, which clearly elucidated the characteristics of this damage. In particular it was shown that a pessimum salt concentration exists, where a moderate salt concentration (˜3%) results in the most damage. Despite the numerous studies, the mechanism responsible for salt scaling has not been identified. In this work it is shown that salt scaling is a result of the large thermal expansion mismatch between ice and the cementitious body, and that the mechanism responsible for damage is analogous to glue-spalling. When ice forms on a cementitious body a bi-material composite is formed. The thermal expansion coefficient of the ice is ˜5 times that of the underlying body, so when the temperature of the composite is lowered below the melting point, the ice goes into tension. Once this stress exceeds the strength of the ice, cracks initiate in the ice and propagate into the surface of the cementitious body, removing a flake of material. The glue-spall mechanism accounts for all of the characteristics of salt scaling. In particular, a theoretical analysis is presented which shows that the pessimum concentration is a consequence of the effect of brine pockets on the mechanical properties of ice, and that the damage morphology is accounted for by fracture mechanics. Finally, empirical evidence is presented that proves that the glue-spall mechanism is the primary cause of salt scaling. The primary experimental tool used in this study is a novel warping experiment, where a pool of liquid is formed on top of a thin (˜3 mm) plate of cement paste. Stresses in the plate, including thermal expansion mismatch, result in warping of the plate, which is easily detected. This technique revealed the existence of
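    The thermal-expansion-mismatch argument can be made concrete with a back-of-the-envelope stress estimate for a fully constrained ice layer; all material constants below are rough textbook-order values chosen for illustration, not figures from this work.

```python
# Back-of-the-envelope stress in an ice layer bonded to cement paste on cooling
# (glue-spall argument).  All values are approximate, for illustration only.
alpha_ice = 50e-6     # 1/K, thermal expansion coefficient of ice (order of magnitude)
alpha_paste = 10e-6   # 1/K, thermal expansion coefficient of cement paste
E_ice = 9e9           # Pa, Young's modulus of ice
delta_T = 20.0        # K of cooling below the freezing point

mismatch_strain = (alpha_ice - alpha_paste) * delta_T
stress = E_ice * mismatch_strain   # fully constrained layer, no stress relaxation assumed
print(f"mismatch strain ~ {mismatch_strain:.1e}, ice stress ~ {stress / 1e6:.1f} MPa")
# ~7 MPa, well above the ~1 MPa tensile strength of ice, so cracking is expected
```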

  3. The Practicality of Behavioral Observation Scales, Behavioral Expectation Scales, and Trait Scales.

    ERIC Educational Resources Information Center

    Wiersma, Uco; Latham, Gary P.

    1986-01-01

    The practicality of three appraisal instruments was measured in terms of user preference, namely, behavioral observation scales (BOS), behavioral expectation scales (BES), and trait scales. In all instances, BOS were preferred to BES, and in all but two instances, BOS were viewed as superior to trait scales. (Author/ABB)

  4. Comparing the theoretical versions of the Beaufort scale, the T-Scale and the Fujita scale

    NASA Astrophysics Data System (ADS)

    Meaden, G. Terence; Kochev, S.; Kolendowicz, L.; Kosa-Kiss, A.; Marcinoniene, Izolda; Sioutas, Michalis; Tooming, Heino; Tyrrell, John

    2007-02-01

    2005 is the bicentenary of the Beaufort Scale and its wind-speed codes: the marine version in 1805 and the land version later. In the 1920s when anemometers had come into general use, the Beaufort Scale was quantified by a formula based on experiment. In the early 1970s two tornado wind-speed scales were proposed: (1) an International T-Scale based on the Beaufort Scale; and (2) Fujita's damage scale developed for North America. The International Beaufort Scale and the T-Scale share a common root in having an integral theoretical relationship with an established scientific basis, whereas Fujita's Scale introduces criteria that make its intensities non-integral with Beaufort. Forces on the T-Scale, where T stands for Tornado force, span the range 0 to 10, which is highly useful worldwide. The shorter range of Fujita's Scale (0 to 5) is acceptable for American use but less convenient elsewhere. To illustrate the simplicity of the decimal T-Scale, mean hurricane wind speed of Beaufort 12 is T2 on the T-Scale but F1.121 on the F-Scale; while a tornado wind speed of T9 (= B26) becomes F4.761. However, the three wind scales can be unified by either making F-Scale numbers exactly half the magnitude of T-Scale numbers [i.e. F'half = T / 2 = (B / 4) - 2] or by doubling the numbers of this revised version to give integral equivalence with the T-Scale. The result is a decimal formula F'double = T = (B / 2) - 4 named the TF-Scale where TF stands for Tornado Force. This harmonious 10-digit scale has all the criteria needed for worldwide practical effectiveness.
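    Because the abstract quotes the conversion formulas explicitly, they can be coded directly; the sketch below simply encodes those linear relations and checks them against the worked values quoted above (the exact Fujita wind formula is nonlinear and is not reproduced here).

```python
def t_from_beaufort(b):
    """T-Scale force from Beaufort force: T = (B / 2) - 4."""
    return b / 2.0 - 4.0

def f_half(b):
    """'Halved' F-style number: F'half = T / 2 = (B / 4) - 2."""
    return t_from_beaufort(b) / 2.0

def tf_scale(b):
    """Proposed TF-Scale: doubling F'half recovers T, i.e. TF = T = (B / 2) - 4."""
    return 2.0 * f_half(b)

# Worked values quoted above: Beaufort 12 (mean hurricane wind) is T2; T9 is B26
assert t_from_beaufort(12) == 2.0 and t_from_beaufort(26) == 9.0
assert tf_scale(12) == t_from_beaufort(12)
print(f_half(12))  # 1.0; the quoted F1.121 comes from Fujita's own nonlinear wind formula
```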

  5. Small-scale strength

    SciTech Connect

    Anderson, J.L.

    1995-11-01

    In the world of power project development there is a market for smaller scale cogeneration projects in the range of 1MW to 10MW. In the European Union alone, this range will account for about $25 Billion in value over the next 10 years. By adding the potential that exists in Eastern Europe, the numbers are even more impressive. In Europe, only about 7 percent of needed electrical power is currently produced through cogeneration installations; this is expected to change to around 15 percent by the year 2000. Less than one year ago, two equipment manufacturers formed Dutch Power Partners (DPP) to focus on the market for industrial cogeneration throughout Europe.

  6. Reconsidering Fault Slip Scaling

    NASA Astrophysics Data System (ADS)

    Gomberg, J. S.; Wech, A.; Creager, K. C.; Obara, K.; Agnew, D. C.

    2015-12-01

    The scaling of fault slip events given by the relationship between the scalar moment M0, and duration T, potentially provides key constraints on the underlying physics controlling slip. Many studies have suggested that measurements of M0 and T are related as M0=KfT3 for 'fast' slip events (earthquakes) and M0=KsT for 'slow' slip events, in which Kf and Ks are proportionality constants, although some studies have inferred intermediate relations. Here 'slow' and 'fast' refer to slip front propagation velocities, either so slow that seismic radiation is too small or long period to be measurable or fast enough that dynamic processes may be important for the slip process and measurable seismic waves radiate. Numerous models have been proposed to explain the differing M0-T scaling relations. We show that a single, simple dislocation model of slip events within a bounded slip zone may explain nearly all M0-T observations. Rather than different scaling for fast and slow populations, we suggest that within each population the scaling changes from M0 proportional to T3 to T when the slipping area reaches the slip zone boundaries and transitions from unbounded, 2-dimensional to bounded, 1-dimensional growth. This transition has not been apparent previously for slow events because data have sampled only the bounded regime and may be obscured for earthquakes when observations from multiple tectonic regions are combined. We have attempted to sample the expected transition between bounded and unbounded regimes for the slow slip population, measuring tremor cluster parameters from catalogs for Japan and Cascadia and using them as proxies for small slow slip event characteristics. For fast events we employed published earthquake slip models. Observations corroborate our hypothesis, but highlight observational difficulties. We find that M0-T observations for both slow and fast slip events, spanning 12 orders of magnitude in M0, are consistent with a single model based on dislocation
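    The two end-member relations M0 = Kf T^3 and M0 = Ks T can be combined into the kind of single crossover description the abstract argues for; the piecewise form and the placeholder constants below are illustrative assumptions, not the authors' dislocation model.

```python
import numpy as np

def moment_from_duration(T, K=1.0, T_bound=10.0):
    """Illustrative moment-duration scaling with a growth-regime crossover.

    Below T_bound the slipping patch grows in two dimensions (M0 ~ T**3);
    above it growth is confined to one dimension (M0 ~ T).  K and T_bound
    are placeholder constants, not values from the study, and the two
    branches are matched at T_bound for continuity.
    """
    T = np.asarray(T, dtype=float)
    return np.where(T < T_bound, K * T**3, K * T_bound**2 * T)

durations = np.logspace(-1, 3, 5)   # arbitrary units
print(moment_from_duration(durations))
```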

  7. Soil organic carbon across scales.

    PubMed

    O'Rourke, Sharon M; Angers, Denis A; Holden, Nicholas M; McBratney, Alex B

    2015-10-01

    Mechanistic understanding of scale effects is important for interpreting the processes that control the global carbon cycle. Greater attention should be given to scale in soil organic carbon (SOC) science so that we can devise better policy to protect/enhance existing SOC stocks and ensure sustainable use of soils. Global issues such as climate change require consideration of SOC stock changes at the global and biosphere scale, but human interaction occurs at the landscape scale, with consequences at the pedon, aggregate and particle scales. This review evaluates our understanding of SOC across all these scales in the context of the processes involved in SOC cycling at each scale and with emphasis on stabilizing SOC. Current synergy between science and policy is explored at each scale to determine how well each is represented in the management of SOC. An outline of how SOC might be integrated into a framework of soil security is examined. We conclude that SOC processes at the biosphere to biome scales are not well understood. Instead, SOC has come to be viewed as a large-scale pool subject to carbon flux. Better understanding exists for SOC processes operating at the scales of the pedon, aggregate and particle. At the landscape scale, the influence of large- and small-scale processes has the greatest interaction and is exposed to the greatest modification through agricultural management. Policy implemented at regional or national scale tends to focus at the landscape scale without due consideration of the larger scale factors controlling SOC or the impacts of policy for SOC at the smaller SOC scales. What is required is a framework that can be integrated across a continuum of scales to optimize SOC management.

  8. Soil organic carbon across scales.

    PubMed

    O'Rourke, Sharon M; Angers, Denis A; Holden, Nicholas M; McBratney, Alex B

    2015-10-01

    Mechanistic understanding of scale effects is important for interpreting the processes that control the global carbon cycle. Greater attention should be given to scale in soil organic carbon (SOC) science so that we can devise better policy to protect/enhance existing SOC stocks and ensure sustainable use of soils. Global issues such as climate change require consideration of SOC stock changes at the global and biosphere scale, but human interaction occurs at the landscape scale, with consequences at the pedon, aggregate and particle scales. This review evaluates our understanding of SOC across all these scales in the context of the processes involved in SOC cycling at each scale and with emphasis on stabilizing SOC. Current synergy between science and policy is explored at each scale to determine how well each is represented in the management of SOC. An outline of how SOC might be integrated into a framework of soil security is examined. We conclude that SOC processes at the biosphere to biome scales are not well understood. Instead, SOC has come to be viewed as a large-scale pool subject to carbon flux. Better understanding exists for SOC processes operating at the scales of the pedon, aggregate and particle. At the landscape scale, the influence of large- and small-scale processes has the greatest interaction and is exposed to the greatest modification through agricultural management. Policy implemented at regional or national scale tends to focus at the landscape scale without due consideration of the larger scale factors controlling SOC or the impacts of policy for SOC at the smaller SOC scales. What is required is a framework that can be integrated across a continuum of scales to optimize SOC management. PMID:25918852

  9. Scaling in Transportation Networks

    PubMed Central

    Louf, Rémi; Roth, Camille; Barthelemy, Marc

    2014-01-01

    Subway systems span most large cities, and railway networks most countries in the world. These networks are fundamental in the development of countries and their cities, and it is therefore crucial to understand their formation and evolution. However, if the topological properties of these networks are fairly well understood, how they relate to population and socio-economical properties remains an open question. We propose here a general coarse-grained approach, based on a cost-benefit analysis that accounts for the scaling properties of the main quantities characterizing these systems (the number of stations, the total length, and the ridership) with the substrate's population, area and wealth. More precisely, we show that the length, number of stations and ridership of subways and rail networks can be estimated knowing the area, population and wealth of the underlying region. These predictions are in good agreement with data gathered for about subway systems and more than railway networks in the world. We also show that train networks and subway systems can be described within the same framework, but with a fundamental difference: while the interstation distance seems to be constant and determined by the typical walking distance for subways, the interstation distance for railways scales with the number of stations. PMID:25029528

  10. Static Scale Conversion (SSC)

    2007-01-19

    The Static Scale Conversion (SSC) software is a unique enhancement to the AIMVEE system. It enables an SSC to weigh and measure vehicles and cargo dynamically (i.e., as they pass over the large scale). Included in the software is the AIMVEE computer code base. The SSC and AIMVEE computer system electronically continue to retrieve deployment information, identify vehicles automatically and determine total weight, individual axle weights, axle spacing and center-of-balance for any wheeled vehicle in motion. The AIMVEE computer code system can also perform these functions statically for both wheeled vehicles and cargo with information. The AIMVEE computer code system incorporates digital images and applies cubing algorithms to determine length, width, height for cubic dimensions of both vehicle and cargo. Once all this information is stored, it electronically links to data collection and dissemination systems to provide “actual” weight and measurement information for planning, deployment, and in-transit visibility.

  11. Static Scale Conversion (SSC)

    SciTech Connect

    2007-01-19

    The Static Scale Conversion (SSC) software is a unique enhancement to the AIMVEE system. It enables an SSC to weigh and measure vehicles and cargo dynamically (i.e., as they pass over the large scale). Included in the software is the AIMVEE computer code base. The SSC and AIMVEE computer system electronically continue to retrieve deployment information, identify vehicles automatically and determine total weight, individual axle weights, axle spacing and center-of-balance for any wheeled vehicle in motion. The AIMVEE computer code system can also perform these functions statically for both wheeled vehicles and cargo with information. The AIMVEE computer code system incorporates digital images and applies cubing algorithms to determine length, width, height for cubic dimensions of both vehicle and cargo. Once all this information is stored, it electronically links to data collection and dissemination systems to provide “actual” weight and measurement information for planning, deployment, and in-transit visibility.

  12. Aging scaled Brownian motion.

    PubMed

    Safdari, Hadiseh; Chechkin, Aleksei V; Jafari, Gholamreza R; Metzler, Ralf

    2015-04-01

    Scaled Brownian motion (SBM) is widely used to model anomalous diffusion of passive tracers in complex and biological systems. It is a highly nonstationary process governed by the Langevin equation for Brownian motion, however, with a power-law time dependence of the noise strength. Here we study the aging properties of SBM for both unconfined and confined motion. Specifically, we derive the ensemble and time averaged mean squared displacements and analyze their behavior in the regimes of weak, intermediate, and strong aging. A very rich behavior is revealed for confined aging SBM depending on different aging times and whether the process is sub- or superdiffusive. We demonstrate that the information on the aging factorizes with respect to the lag time and exhibits a functional form that is identical to the aging behavior of scale-free continuous time random walk processes. While SBM exhibits a disparity between ensemble and time averaged observables and is thus weakly nonergodic, strong aging is shown to effect a convergence of the ensemble and time averaged mean squared displacement. Finally, we derive the density of first passage times in the semi-infinite domain that features a crossover defined by the aging time. PMID:25974439
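    A minimal simulation sketch helps fix the model being analyzed: overdamped Langevin dynamics whose noise strength follows a power law in time, K(t) = α K_α t^(α-1), so that the ensemble mean squared displacement grows as ~2 K_α t^α. The Euler discretization and parameter values below are illustrative assumptions; aging (a delayed start of the measurement) is not included.

```python
import numpy as np

def simulate_sbm(n_steps=10_000, dt=1e-3, alpha=0.5, K_alpha=1.0, rng=None):
    """Euler scheme for scaled Brownian motion: dx = sqrt(2 * K(t)) dW,
    with K(t) = alpha * K_alpha * t**(alpha - 1).

    alpha < 1 gives subdiffusion, alpha > 1 superdiffusion; the ensemble
    mean squared displacement should grow as ~ 2 * K_alpha * t**alpha.
    """
    rng = rng if rng is not None else np.random.default_rng()
    t = dt * np.arange(1, n_steps + 1)          # start at dt to avoid t = 0
    K = alpha * K_alpha * t ** (alpha - 1.0)
    dx = np.sqrt(2.0 * K * dt) * rng.standard_normal(n_steps)
    return t, np.cumsum(dx)

# Ensemble-averaged MSD over a few hundred trajectories (rough numerical check)
rng = np.random.default_rng(1)
t, _ = simulate_sbm(rng=rng)
trajectories = np.array([simulate_sbm(rng=rng)[1] for _ in range(300)])
msd = (trajectories ** 2).mean(axis=0)          # should track 2 * K_alpha * t**0.5
print(msd[-1], 2.0 * t[-1] ** 0.5)              # the two numbers should be comparable
```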

  13. Returns to Scale and Economies of Scale: Further Observations.

    ERIC Educational Resources Information Center

    Gelles, Gregory M.; Mitchell, Douglas W.

    1996-01-01

    Maintains that most economics textbooks continue to repeat past mistakes concerning returns to scale and economies of scale under assumptions of constant and nonconstant input prices. Provides an adaptation for a calculus-based intermediate microeconomics class that demonstrates the pointwise relationship between returns to scale and economies of…

  14. Global scale precipitation from monthly to centennial scales: empirical space-time scaling analysis, anthropogenic effects

    NASA Astrophysics Data System (ADS)

    de Lima, Isabel; Lovejoy, Shaun

    2016-04-01

    The characterization of precipitation scaling regimes represents a key contribution to the improved understanding of space-time precipitation variability, which is the focus here. We conduct space-time scaling analyses of spectra and Haar fluctuations in precipitation, using three global scale precipitation products (one instrument based, one reanalysis based, one satellite and gauge based), from monthly to centennial scales and planetary down to several hundred kilometers in spatial scale. Results show the presence - similarly to other atmospheric fields - of an intermediate "macroweather" regime between the familiar weather and climate regimes: we characterize systematically the macroweather precipitation temporal and spatial, and joint space-time statistics and variability, and the outer scale limit of temporal scaling. These regimes qualitatively and quantitatively alternate in the way fluctuations vary with scale. In the macroweather regime, the fluctuations diminish with time scale (this is important for seasonal, annual, and decadal forecasts) while anthropogenic effects increase with time scale. Our approach determines the time scale at which the anthropogenic signal can be detected above the natural variability noise: the critical scale is about 20 - 40 yrs (depending on the product, on the spatial scale). This explains for example why studies that use data covering only a few decades do not easily give evidence of anthropogenic changes in precipitation, as a consequence of warming: the period is too short. Overall, while showing that precipitation can be modeled with space-time scaling processes, our results clarify the different precipitation scaling regimes and further allow us to quantify the agreement (and lack of agreement) of the precipitation products as a function of space and time scales. Moreover, this work contributes to clarify a basic problem in hydro-climatology, which is to measure precipitation trends at decadal and longer scales and to
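    Haar fluctuation analysis itself is simple to state: for a lag Δt, the fluctuation over an interval of that length is the mean of its second half minus the mean of its first half, and the scaling exponent is the log-log slope of the fluctuation versus the lag. A minimal sketch, assuming an evenly sampled series and non-overlapping windows (actual implementations differ in such details):

```python
import numpy as np

def haar_fluctuations(series, lags):
    """Mean absolute Haar fluctuation of an evenly sampled series at each lag.

    For a lag of n samples, the fluctuation over an interval is the mean of
    its second half minus the mean of its first half; non-overlapping
    windows are used here.  The scaling exponent is the log-log slope of
    the result versus the lag.
    """
    series = np.asarray(series, dtype=float)
    means = []
    for n in lags:
        half = n // 2
        flucts = [abs(series[s + half:s + n].mean() - series[s:s + half].mean())
                  for s in range(0, len(series) - n + 1, n)]
        means.append(np.mean(flucts))
    return np.array(means)

# Synthetic check with white noise: fluctuations should fall off roughly as lag**-0.5
rng = np.random.default_rng(2)
lags = [2, 4, 8, 16, 32, 64]
print(haar_fluctuations(rng.standard_normal(1200), lags))
```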

  15. Absolute flux scale for radioastronomy

    SciTech Connect

    Ivanov, V.P.; Stankevich, K.S.

    1986-07-01

    The authors propose and provide support for a new absolute flux scale for radio astronomy, which is not encumbered with the inadequacies of the previous scales. In constructing it the method of relative spectra was used (a powerful tool for choosing reference spectra). A review is given of previous flux scales. The authors compare the AIS scale with the scale they propose. Both scales are based on absolute measurements by the "artificial moon" method, and they are practically coincident in the range from 0.96 to 6 GHz. At frequencies above 6 GHz and below 0.96 GHz, the AIS scale is overestimated because of incorrect extrapolation of the spectra of the primary and secondary standards. The major results which have emerged from this review of absolute scales in radio astronomy are summarized.

  16. [Research progress on hydrological scaling].

    PubMed

    Liu, Jianmei; Pei, Tiefan

    2003-12-01

    With the development of hydrology and the extending effect of mankind on environment, scale issue has become a great challenge to many hydrologists due to the stochasticism and complexity of hydrological phenomena and natural catchments. More and more concern has been given to the scaling issues to gain a large-scale (or small-scale) hydrological characteristic from a certain known catchments, but hasn't been solved successfully. The first part of this paper introduced some concepts about hydrological scale, scale issue and scaling. The key problem is the spatial heterogeneity of catchments and the temporal and spatial variability of hydrological fluxes. Three approaches to scale were put forward in the third part, which were distributed modeling, fractal theory and statistical self similarity analyses. Existing problems and future research directions were proposed in the last part.

  17. MULTIPLE SCALES FOR SUSTAINABLE RESULTS

    EPA Science Inventory

    This session will highlight recent research that incorporates the use of multiple scales and innovative environmental accounting to better inform decisions that affect sustainability, resilience, and vulnerability at all scales. Effective decision-making involves assessment at mu...

  18. Earthquake impact scale

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.S.; Marano, K.D.; Bausch, D.

    2011-01-01

    With the advent of the USGS prompt assessment of global earthquakes for response (PAGER) system, which rapidly assesses earthquake impacts, U.S. and international earthquake responders are reconsidering their automatic alert and activation levels and response procedures. To help facilitate rapid and appropriate earthquake response, an Earthquake Impact Scale (EIS) is proposed on the basis of two complementary criteria. On the basis of the estimated cost of damage, one is most suitable for domestic events; the other, on the basis of estimated ranges of fatalities, is generally more appropriate for global events, particularly in developing countries. Simple thresholds, derived from the systematic analysis of past earthquake impact and associated response levels, are quite effective in communicating predicted impact and response needed after an event through alerts of green (little or no impact), yellow (regional impact and response), orange (national-scale impact and response), and red (international response). Corresponding fatality thresholds for yellow, orange, and red alert levels are 1, 100, and 1,000, respectively. For damage impact, yellow, orange, and red thresholds are triggered by estimated losses reaching $1M, $100M, and $1B, respectively. The rationale for a dual approach to earthquake alerting stems from the recognition that relatively high fatalities, injuries, and homelessness predominate in countries in which local building practices typically lend themselves to high collapse and casualty rates, and these impacts lend to prioritization for international response. In contrast, financial and overall societal impacts often trigger the level of response in regions or countries in which prevalent earthquake resistant construction practices greatly reduce building collapse and resulting fatalities. Any newly devised alert, whether economic- or casualty-based, should be intuitive and consistent with established lexicons and procedures. Useful alerts should
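    The published thresholds translate directly into an alert-level lookup; the sketch below encodes the fatality and economic-loss thresholds quoted above, while the rule for combining the two criteria is an assumption added for illustration.

```python
def alert_from_fatalities(estimated_fatalities):
    """Alert color from estimated fatalities (1 / 100 / 1,000 thresholds)."""
    if estimated_fatalities >= 1_000:
        return "red"
    if estimated_fatalities >= 100:
        return "orange"
    if estimated_fatalities >= 1:
        return "yellow"
    return "green"

def alert_from_losses(estimated_losses_usd):
    """Alert color from estimated economic losses ($1M / $100M / $1B thresholds)."""
    if estimated_losses_usd >= 1e9:
        return "red"
    if estimated_losses_usd >= 1e8:
        return "orange"
    if estimated_losses_usd >= 1e6:
        return "yellow"
    return "green"

# Illustrative combination rule (an assumption): report the more severe of the two
SEVERITY = ["green", "yellow", "orange", "red"]

def combined_alert(fatalities, losses_usd):
    return max(alert_from_fatalities(fatalities),
               alert_from_losses(losses_usd),
               key=SEVERITY.index)

print(combined_alert(fatalities=20, losses_usd=5e8))   # -> "orange"
```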

  19. Content validation of a standardized algorithm for ostomy care.

    PubMed

    Beitz, Janice; Gerlach, Mary; Ginsburg, Pat; Ho, Marianne; McCann, Eileen; Schafer, Vickie; Scott, Vera; Stallings, Bobbie; Turnbull, Gwen

    2010-10-01

    The number of ostomy care clinician experts is limited and the majority of ostomy care is provided by non-specialized clinicians or unskilled caregivers and family. The purpose of this study was to obtain content validation data for a new standardized algorithm for ostomy care developed by expert wound ostomy continence nurse (WOCN) clinicians. After face validity was established using overall review and suggestions from WOCN experts, 166 WOCNs self-identified as having expertise in ostomy care were surveyed online for 6 weeks in 2009. Using a cross-sectional, mixed methods study design and a 30-item instrument with a 4-point Likert-type scale, the participants were asked to quantify the degree of validity of the Ostomy Algorithm's decisions and components. Participants' open-ended comments also were thematically analyzed. Using a scale of 1 to 4, the mean score of the entire algorithm was 3.8 (4 = relevant/very relevant). The algorithm's content validity index (CVI) was 0.95 (out of 1.0). Individual component mean scores ranged from 3.59 to 3.91. Individual CVIs ranged from 0.90 to 0.98. Qualitative data analysis revealed themes of difficulty associated with algorithm formatting, especially orientation and use of the Studio Alterazioni Cutanee Stomali (Study on Peristomal Skin Lesions [SACS™ Instrument]) and the inability of algorithms to capture all individual patient attributes affecting ostomy care. Positive themes included content thoroughness and the helpful clinical photos. Suggestions were offered for algorithm improvement. Study results support the strong content validity of the algorithm and research to ascertain its construct validity and effect on care outcomes is warranted.
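    A content validity index of the kind reported here is conventionally computed per item as the proportion of expert raters scoring the item 3 or 4 on the 4-point relevance scale, with the scale-level CVI taken as the average of the item CVIs; the small rating matrix below is invented for illustration and is not the study's data.

```python
import numpy as np

def item_cvi(ratings):
    """Item-level CVI: share of raters scoring the item 3 or 4 on a 4-point scale."""
    return (np.asarray(ratings) >= 3).mean(axis=0)

def scale_cvi(ratings):
    """Scale-level CVI as the average of the item CVIs (S-CVI/Ave convention)."""
    return item_cvi(ratings).mean()

# Invented ratings: 5 expert raters x 4 items, each on the 1-4 relevance scale
ratings = np.array([
    [4, 3, 4, 2],
    [4, 4, 3, 2],
    [3, 4, 4, 4],
    [4, 3, 4, 3],
    [4, 4, 3, 4],
])
print(item_cvi(ratings))   # [1.0, 1.0, 1.0, 0.6]
print(scale_cvi(ratings))  # 0.9
```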

  20. Scaling aircraft noise perception.

    NASA Technical Reports Server (NTRS)

    Ollerhead, J. B.

    1973-01-01

    Following a brief review of the background to the study, an extensive experiment is described which was undertaken to assess the practical differences between numerous alternative methods for calculating the perceived levels of individual aircraft flyover sounds. One hundred and twenty recorded sounds, including jets, turboprops, piston aircraft and helicopters, were rated by a panel of subjects in a pair comparison test. The results were analyzed to evaluate a number of noise rating procedures, in terms of their ability to accurately estimate both relative and absolute perceived noise levels over a wider dynamic range (84-115 dB SPL) than had generally been used in previous experiments. Performances of the different scales were examined in detail for different aircraft categories, and the merits of different band level summation procedures, frequency weighting functions, duration and tone corrections were investigated.

  1. Indian scales and inventories

    PubMed Central

    Venkatesan, S.

    2010-01-01

    This conceptual, perspective and review paper on Indian scales and inventories begins with clarification on the historical and contemporary meanings of psychometry before linking itself to the burgeoning field of clinimetrics in their applications to the practice of clinical psychology and psychiatry. Clinimetrics is explained as a changing paradigm in the design, administration, and interpretation of quantitative tests, techniques or procedures applied to measurement of clinical variables, traits and processes. As an illustrative sample, this article assembles a bibliographic survey of about 105 out of 2582 research papers (4.07%) scanned through 51 back dated volumes covering 185 issues related to clinimetry as reviewed across a span of over fifty years (1958-2009) in the Indian Journal of Psychiatry. A content analysis of the contributions across distinct categories of mental measurements is explained before linkages are proposed for future directions along these lines. PMID:21836709

  2. Biological scaling and physics.

    PubMed

    Rau, A R P

    2002-09-01

    Kleiber's law in biology states that the specific metabolic rate (metabolic rate per unit mass) scales as M^(-1/4) in terms of the mass M of the organism. A long-standing puzzle is the (-1/4) power in place of the usual expectation of (-1/3) based on the surface-to-volume ratio in three dimensions. While recent papers by physicists have focused exclusively on geometry in attempting to explain the puzzle, we consider here a specific law of physics that governs fluid flow to show how the (-1/4) power arises under certain conditions. More generally, such a line of approach that identifies a specific physical law as involved and then examines the implications of a power law may illuminate better the role of physics in biology.
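    The quarter-power claim is easy to state numerically: for a given mass ratio, compare the drop in specific metabolic rate predicted by M^(-1/4) with the naive surface-to-volume expectation M^(-1/3). The masses below are arbitrary illustration values.

```python
# Specific metabolic rate ratios for a 10,000-fold mass difference, comparing
# Kleiber's -1/4 exponent with the naive surface-to-volume -1/3 expectation.
m_small, m_large = 0.02, 200.0              # kg, arbitrary "mouse-like" and "human-like" masses
mass_ratio = m_large / m_small              # 10,000

kleiber = mass_ratio ** (-1 / 4)            # ~0.10: per-gram rate ~10x lower in the large animal
surface_to_volume = mass_ratio ** (-1 / 3)  # ~0.046: would predict ~22x lower
print(f"M^(-1/4): {kleiber:.3f}   M^(-1/3): {surface_to_volume:.3f}")
```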

  3. Galactic-scale civilization

    NASA Technical Reports Server (NTRS)

    Kuiper, T. B. H.

    1980-01-01

    Evolutionary arguments are presented in favor of the existence of civilization on a galactic scale. Patterns of physical, chemical, biological, social and cultural evolution leading to increasing levels of complexity are pointed out and explained thermodynamically in terms of the maximization of free energy dissipation in the environment of the organized system. The possibility of the evolution of a global and then a galactic human civilization is considered, and probabilities that the galaxy is presently in its colonization state and that life could have evolved to its present state on earth are discussed. Fermi's paradox of the absence of extraterrestrials in light of the probability of their existence is noted, and a variety of possible explanations is indicated. Finally, it is argued that although mankind may be the first occurrence of intelligence in the galaxy, it is unjustified to presume that this is so.

  4. L-Scaling: An Update.

    ERIC Educational Resources Information Center

    Blankmeyer, Eric

    L-scaling is introduced as a technique for determining the weights in weighted averages or scaled scores for T joint observations on K variables. The technique is so named because of its formal resemblance to the Leontief matrix of mathematical economics. L-scaling is compared to several widely-used procedures for data reduction, and the…

  5. The Gains from Vertical Scaling

    ERIC Educational Resources Information Center

    Briggs, Derek C.; Domingue, Ben

    2013-01-01

    It is often assumed that a vertical scale is necessary when value-added models depend upon the gain scores of students across two or more points in time. This article examines the conditions under which the scale transformations associated with the vertical scaling process would be expected to have a significant impact on normative interpretations…

  6. Westside Test Anxiety Scale Validation

    ERIC Educational Resources Information Center

    Driscoll, Richard

    2007-01-01

    The Westside Test Anxiety Scale is a brief, ten item instrument designed to identify students with anxiety impairments who could benefit from an anxiety-reduction intervention. The scale items cover self-assessed anxiety impairment and cognitions which can impair performance. Correlations between anxiety-reduction as measured by the scale and…

  7. Full Scale Tunnel (FST)

    NASA Technical Reports Server (NTRS)

    1930-01-01

    Construction of Full Scale Tunnel (FST). In November 1929, Smith DeFrance submitted his recommendations for the general design of the Full Scale Wind Tunnel. The last on his list concerned the division of labor required to build this unusual facility. He believed the job had five parts and described them as follows: 'It is proposed that invitations be sent out for bids on five groups of items. The first would be for one contract on the complete structure; second the same as first, including the erection of the cones but not the fabrication, since this would be more of a shipyard job; third would cover structural steel, cover, sash and doors, but not cones or foundation; fourth, foundations; and fifth, fabrication of cones.' DeFrance's memorandum prompted the NACA to solicit estimates from a large number of companies. Preliminary designs and estimates were prepared and submitted to the Bureau of the Budget and Congress appropriated funds on February 20, 1929. The main construction contract with the J.A. Jones Company of Charlotte, North Carolina was signed one year later on February 12, 1930. It was a peculiar structure as the building's steel framework is visible on the outside of the building. DeFrance described this in NACA TR No. 459: 'The entire equipment is housed in a structure, the outside walls of which serve as the outer walls of the return passages. The over-all length of the tunnel is 434 feet 6 inches, the width 222 feet, and the maximum height 97 feet. The framework is of structural steel....' (pp. 292-293)

  8. Full Scale Tunnel (FST)

    NASA Technical Reports Server (NTRS)

    1930-01-01

    Construction of Full-Scale Tunnel (FST). In November 1929, Smith DeFrance submitted his recommendations for the general design of the Full Scale Wind Tunnel. The last on his list concerned the division of labor required to build this unusual facility. He believed the job had five parts and described them as follows: 'It is proposed that invitations be sent out for bids on five groups of items. The first would be for one contract on the complete structure; second the same as first, including the erection of the cones but not the fabrication, since this would be more of a shipyard job; third would cover structural steel, cover, sash and doors, but not cones or foundation; fourth, foundations; and fifth, fabrication of cones.' DeFrance's memorandum prompted the NACA to solicit estimates from a large number of companies. Preliminary designs and estimates were prepared and submitted to the Bureau of the Budget and Congress appropriated funds on February 20, 1929. The main construction contract with the J.A. Jones Company of Charlotte, North Carolina was signed one year later on February 12, 1930. It was a peculiar structure as the building's steel framework is visible on the outside of the building. DeFrance described this in NACA TR No. 459: 'The entire equipment is housed in a structure, the outside walls of which serve as the outer walls of the return passages. The over-all length of the tunnel is 434 feet 6 inches, the width 222 feet, and the maximum height 97 feet. The framework is of structural steel....' (pp. 292-293).

  9. Scaling up: Assessing social impacts at the macro-scale

    SciTech Connect

    Schirmer, Jacki

    2011-04-15

    Social impacts occur at various scales, from the micro-scale of the individual to the macro-scale of the community. Identifying the macro-scale social changes that result from an impacting event is a common goal of social impact assessment (SIA), but is challenging as multiple factors simultaneously influence social trends at any given time, and there are usually only a small number of cases available for examination. While some methods have been proposed for establishing the contribution of an impacting event to macro-scale social change, they remain relatively untested. This paper critically reviews methods recommended to assess macro-scale social impacts, and proposes and demonstrates a new approach. The 'scaling up' method involves developing a chain of logic linking change at the individual/site scale to the community scale. It enables a more problematised assessment of the likely contribution of an impacting event to macro-scale social change than previous approaches. The use of this approach in a recent study of change in dairy farming in south east Australia is described.

  10. The Cross-Scale Mission

    SciTech Connect

    Baumjohann, W.; Nakamura, R.; Horbury, T.; Schwartz, S.; Canu, P.; Roux, A.; Vaivads, A.

    2009-06-16

    Collisionless space plasmas exhibit complex behavior on many scales. Fortunately, one can identify a small number of processes and phenomena, essentially shocks, reconnection and turbulence that play a predominant role in the dynamics of a plasma. These processes act to transfer energy between locations, scales and modes, a transfer characterized by variability and three-dimensional structure on at least three scales: electron kinetic, ion kinetic and fluid scale. The nonlinear interaction between physical processes at these scales is the key to understanding these phenomena. Current and upcoming multi-spacecraft missions such as Cluster, THEMIS, and MMS only study three-dimensional variations on one scale at any given time, but one needs to measure the three scales simultaneously to understand the energy transfer processes and the coupling and interaction between the different scales. A mission called Cross-Scale would comprise three nested groups, each consisting of up to four spacecraft. Each group would have a different spacecraft separation, at approximately the electron and ion gyro radii, and at the larger magnetohydrodynamic or fluid scale. One would therefore be able to measure simultaneously variations on all three important physical scales, for the first time. With the spacecraft traversing key regions of near-Earth space, namely solar wind, bow shock, magnetosheath, magnetopause and magnetotail, all three aforementioned processes can be studied.

  11. Composite Health Plan Quality Scales

    PubMed Central

    Caldis, Todd

    2007-01-01

    This study employs exploratory factor analysis and scale construction methods with commercial Health Plan Employers Data Information Set (HEDIS®) process of care and outcome measures from 1999 to uncover evidence for a unidimensional composite health maintenance organization (HMO) quality scale. Summated scales by categories of care are created and are then used in a factor analysis that has a single factor solution. The category of care scales were used to construct a summated composite scale which exhibits strong evidence of internal consistency (alpha= 0.90). External validity of the composite quality scale was checked by regressing the composite scale on Consumer Assessment of Healthcare Providers and Systems (CAHPS®) survey results for 1999. PMID:17645158
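    The construction described here (summated category-of-care scales, a single-factor solution, then a composite) can be mimicked in a few lines; the sketch below uses scikit-learn's FactorAnalysis on invented plan-level data and should not be read as the study's actual HEDIS processing.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)

# Invented data: 200 health plans, 8 quality measures driven by one latent factor
latent = rng.normal(size=(200, 1))
loadings = rng.uniform(0.5, 0.9, size=(1, 8))
measures = latent @ loadings + 0.4 * rng.normal(size=(200, 8))

# One-factor solution, mirroring the single-factor result described above
fa = FactorAnalysis(n_components=1).fit(measures)
factor_scores = fa.transform(measures)[:, 0]

# Equal-weight summated composite (z-scored items) for comparison
z = (measures - measures.mean(axis=0)) / measures.std(axis=0)
composite = z.mean(axis=1)
print(abs(np.corrcoef(factor_scores, composite)[0, 1]))  # close to 1 when one factor dominates
```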

  12. Solar system to scale

    NASA Astrophysics Data System (ADS)

    Gerwig López, Susanne

    2016-04-01

    One of the most important successes in astronomical observations has been to determine the limit of the Solar System. It is said that the first man able to measure the distance Earth-Sun with only a very slight mistake, in the second century BC, was the wise Greek man Aristarco de Samos. Thanks to Newton's law of universal gravitation, it was possible to measure, with a small margin of error, the distances between the Sun and the planets. Twelve-year-old students are very interested in everything related to the universe. However, it seems too difficult to imagine and understand the real distances among the different celestial bodies. To learn the differences among the inner and outer planets and how far away the outer ones are, I decided to have my pupils work on the sizes and the distances in our solar system by constructing it to scale. The purpose is to reproduce our solar system to scale on a cardboard. The procedure is very easy and simple. Students of first year of ESO (12 year-old) receive the instructions in a sheet of paper (things they need: a black cardboard, a pair of scissors, colored pencils, a ruler, adhesive tape, glue, the photocopies of the planets and satellites, the measurements they have to use). In another photocopy they get the pictures of the edge of the sun, the planets, dwarf planets and some satellites, which they have to color, cut and stick on the cardboard. This activity is planned for both Spanish and bilingual learning students as a science project. Depending on the group, they will receive these instructions in Spanish or in English. When the time is over, the students bring their works on their cardboard to the class. They obtain a final mark: passing, good or excellent, depending on the accuracy of the measurements, the position of all the celestial bodies, the asteroids belts, personal contributions, etc. If any of the students has not followed the instructions they get the chance to remake it again properly, in order not

  13. Enhanced superconductivity at the interface of W/Sr 2RuO4 point contacts

    NASA Astrophysics Data System (ADS)

    Wang, He; Lou, Weijian; Luo, Jiawei; Wei, Jian; Liu, Y.; Ortmann, J. E.; Mao, Z. Q.

    2015-05-01

    Differential resistance measurements are conducted for point contacts (PCs) between the Sr2RuO4 (SRO) single crystal and the tungsten tip approaching along the c axis direction of the crystal. Since the contact is made at liquid helium temperature and the tungsten tip is hard enough to penetrate through the surface layer, consistent superconducting features are observed. First, with the tip pushed towards the crystal, the zero-bias conductance peak (ZBCP) due to Andreev reflection at the normal-superconducting interface increases from 3% to more than 20%, much larger than previously reported, and extends to temperatures higher than the bulk transition temperature. Reproducible ZBCP within 0.2 mV may also help determine the gap value of SRO, on which no consensus has been reached. Second, the logarithmic background can be fitted with the Altshuler-Aronov theory of electron-electron interaction for tunneling into quasi-two-dimensional electron systems. Feasibility of such fitting confirms that spectroscopic information such as density of states is probed, and electronic temperature retrieved from such fitting can be important to analyze the PC spectra. Third, at bias much higher than 0.2 mV there are conductance dips due to the critical current effect. These dips persist up to 6.2 K, possibly due to enhanced superconductivity under uniaxial pressure.

  14. Study of concrete's behavior under 4-point bending load using Coda Wave Interferometry (CWI) analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Abraham, O.; Chapeleau, X.; Cottineau, L.-M.; Tournat, V.; Le Duff, A.; Lascoup, B.; Durand, O.

    2013-01-01

    Coda Wave Interferometry (CWI) is an ultrasonic NDT method suitable for complex materials such as concrete that can precisely measure small propagation velocity variations (of the order of 10^(-2)%). By measuring the variation of propagation velocity in concrete caused by acoustoelasticity, CWI analysis can be used to monitor concrete's internal stress level. For the first time, CWI is used to measure propagation velocity variations due to a stress field in a concrete beam under a four-point bending test, which simultaneously contains compressive and tensile stress. Embedded optical-fiber sensors and strain gauges are used in the experiment in order to confirm and validate the CWI analysis results. Thermocouples are also embedded into the concrete beams for monitoring internal temperature fluctuations.
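    One common way to extract the small relative velocity changes that CWI targets is the stretching method: resample the reference coda on a stretched time axis and pick the stretch factor that best correlates with the perturbed coda. The sketch below implements that generic estimator on synthetic data; it is not necessarily the processing used in this study.

```python
import numpy as np

def stretching_dv_over_v(reference, perturbed, dt, eps_grid=None):
    """Estimate dv/v between two coda records with the stretching method.

    For a homogeneous velocity change dv/v, the perturbed coda is the
    reference with its time axis compressed: u1(t) = u0(t * (1 + dv/v)).
    Candidate eps values are scanned; the one maximizing the correlation
    with the perturbed trace is returned together with that correlation.
    """
    if eps_grid is None:
        eps_grid = np.linspace(-1e-3, 1e-3, 201)
    t = dt * np.arange(len(reference))
    ccs = [np.corrcoef(np.interp(t * (1.0 + eps), t, reference), perturbed)[0, 1]
           for eps in eps_grid]
    best = int(np.argmax(ccs))
    return eps_grid[best], ccs[best]

# Synthetic check: impose dv/v = +5e-4 on a decaying, noisy "coda" and recover it
rng = np.random.default_rng(4)
dt, n = 1e-3, 4000
t = dt * np.arange(n)
reference = np.sin(2 * np.pi * 40 * t) * np.exp(-t / 2) + 0.01 * rng.standard_normal(n)
perturbed = np.interp(t * (1.0 + 5e-4), t, reference)
print(stretching_dv_over_v(reference, perturbed, dt))   # roughly (5e-4, ~1.0)
```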

  15. Analysis and experiments for composite laminates with holes and subjected to 4-point bending

    NASA Technical Reports Server (NTRS)

    Shuart, M. J.; Prasad, C. B.

    1990-01-01

    Analytical and experimental results are presented for composite laminates with a hole and subjected to four-point bending. A finite-plate analysis is used to predict moment and strain distributions for six-layer quasi-isotropic laminates and transverse-ply laminates. Experimental data are compared with the analytical results. Experimental and analytical strain results show good agreement for the quasi-isotropic laminates. Failure of the two types of composite laminates is described, and failure strain results are presented as a function of normalized hole diameter. The failure results suggest that the initial failure mechanisms for laminates subjected to four-point bending are similar to the initial failure mechanisms for corresponding laminates subjected to uniaxial in-plane loadings.

  16. Enhanced superconductivity at the interface of W/Sr2RuO4 point contact

    NASA Astrophysics Data System (ADS)

    Wei, Jian; Wang, He; Lou, Weijian; Luo, Jiawei; Liu, Ying; Ortmann, J. E.; Mao, Z. Q.

    Differential resistance measurements are conducted for point contacts (PCs) between the Sr2RuO4 (SRO) single crystal and the tungsten tip. Since the tungsten tip is hard enough to penetrate through the surface layer, consistent superconducting features are observed. Firstly, with the tip pushed towards the crystal, the zero bias conductance peak (ZBCP) due to Andreev reflection at the normal-superconducting interface increases from 3% to more than 20%, much larger than previously reported, and extends to temperatures higher than the bulk transition temperature. Reproducible ZBCP within 0.2 mV may also help determine the gap value of SRO, on which no consensus has been reached. Secondly, the logarithmic background can be fitted with the Altshuler-Aronov theory of electron-electron interaction for tunneling into quasi-two-dimensional electron systems. Feasibility of such fitting confirms that spectroscopic information like density of states is probed, and electronic temperature retrieved from such fitting can be important to analyse the PC spectra. Third, at bias much higher than 0.2 mV there are conductance dips due to the critical current effect and these dips persist up to 6.2 K. For more details see. National Basic Research Program of China (973 Program) through Grant No. 2011CBA00106 and No. 2012CB927400.

  17. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring, closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  18. Tipping the scales.

    PubMed

    1998-12-01

    In the US, the October 1998 murder of a physician who performed abortions was an outward manifestation of the insidious battle against legal abortion being waged by radical Christian social conservatives seeking to transform the US democracy into a theocracy. This movement has been documented in a publication entitled, "Tipping the Scales: The Christian Right's Legal Crusade Against Choice" produced as a result of a 4-year investigation conducted by The Center for Reproductive Law and Policy. This publication describes how these fundamentalists have used sophisticated legal, lobbying, and communication strategies to further their goals of challenging the separation of church and state, opposing family planning and sexuality education that is not based solely on abstinence, promoting school prayer, and restricting homosexual rights. The movement has resulted in the introduction of more than 300 anti-abortion bills in states, 50 of which have passed in 23 states. Most Christian fundamentalist groups provide free legal representation to abortion clinic terrorists, and some groups solicit women to bring specious malpractice claims against providers. Sophisticated legal tactics are used by these groups to remove the taint of extremism and mask the danger posed to US constitutional principles being posed by "a well-financed and zealous brand of radical lawyers and their supporters." PMID:12294553

  19. Scaling of structural failure

    SciTech Connect

    Bazant, Z.P.; Chen, Er-Ping

    1997-01-01

    This article attempts to review the progress achieved in the understanding of scaling and size effect in the failure of structures. Particular emphasis is placed on quasibrittle materials for which the size effect is complicated. Attention is focused on three main types of size effects, namely the statistical size effect due to randomness of strength, the energy release size effect, and the possible size effect due to fractality of fracture or microcracks. Definitive conclusions on the applicability of these theories are drawn. Subsequently, the article discusses the application of the known size effect law for the measurement of material fracture properties, and the modeling of the size effect by the cohesive crack model, nonlocal finite element models and discrete element models. Extensions to compression failure and to the rate-dependent material behavior are also outlined. The damage constitutive law needed for describing a microcracked material in the fracture process zone is discussed. Various applications to quasibrittle materials, including concrete, sea ice, fiber composites, rocks and ceramics are presented.

  20. SPACE BASED INTERCEPTOR SCALING

    SciTech Connect

    G. CANAVAN

    2001-02-01

    Space Based Interceptors (SBI) have ranges that are adequate to address rogue ICBMs. They are not overly sensitive to 30-60 s delay times. Current technologies would support boost phase intercept with about 150 interceptors. Higher acceleration and velocity could reduce that number by about a factor of 3 at the cost of heavier and more expensive Kinetic Kill Vehicles (KKVs). 6g SBI would reduce optimal constellation costs by about 35%; 8g SBI would reduce them another 20%. Interceptor ranges fall rapidly with theater missile range. Constellations increase significantly for ranges under 3,000 km, even with advanced interceptor technology. For distributed launches, these estimates recover earlier strategic scalings, which demonstrate the improved absentee ratio for larger or multiple launch areas. Constellations increase with the number of missiles and the number of interceptors launched at each. The economic estimates above suggest that two SBI per missile with a modest midcourse underlay is appropriate. The SBI KKV technology would appear to be common for space- and surface-based boost phase systems, and could have synergisms with improved midcourse intercept and discrimination systems. While advanced technology could be helpful in reducing costs, particularly for short range theater missiles, current technology appears adequate for pressing rogue ICBM, accidental, and unauthorized launches.

  1. Scaling and Urban Growth

    NASA Astrophysics Data System (ADS)

    Benguigui, L.; Czamanski, D.; Marinov, M.

    This paper presents an analysis of the growth of towns in the Tel Aviv metropolis. It indicates a similarity in the variation of populations so that the population functions can be scaled and superposed one onto the other. This is a strong indication that the growth mechanism for all these towns is the same. Two different models are presented to interpret the population growth: one is an analytic model while the other is a computer simulation. In the dynamic analytic model, we introduced the concept of characteristic time. The growth has two parts: in the first, the derivative is an increasing function, the town is very attractive and there is a short delay between the decision to build and the complete realization of the process. At this time, there is no shortage of land. However, around a specific time, the delay begins to increase and there is a lack of available land. The rate of the population variation decreases until saturation. The two models give a good quantitative description.

  2. Industrial scale gene synthesis.

    PubMed

    Notka, Frank; Liss, Michael; Wagner, Ralf

    2011-01-01

    The most recent developments in the area of deep DNA sequencing and downstream quantitative and functional analysis are rapidly adding a new dimension to understanding biochemical pathways and metabolic interdependencies. These increasing insights pave the way to designing new strategies that address public needs, including environmental applications and therapeutic inventions, or novel cell factories for sustainable and reconcilable energy or chemicals sources. Adding yet another level is building upon nonnaturally occurring networks and pathways. Recent developments in synthetic biology have created economic and reliable options for designing and synthesizing genes, operons, and eventually complete genomes. Meanwhile, high-throughput design and synthesis of extremely comprehensive DNA sequences have evolved into an enabling technology already indispensable in various life science sectors today. Here, we describe the industrial perspective of modern gene synthesis and its relationship with synthetic biology. Gene synthesis contributed significantly to the emergence of synthetic biology by not only providing the genetic material in high quality and quantity but also enabling its assembly, according to engineering design principles, in a standardized format. Synthetic biology on the other hand, added the need for assembling complex circuits and large complexes, thus fostering the development of appropriate methods and expanding the scope of applications. Synthetic biology has also stimulated interdisciplinary collaboration as well as integration of the broader public by addressing socioeconomic, philosophical, ethical, political, and legal opportunities and concerns. The demand-driven technological achievements of gene synthesis and the implemented processes are exemplified by an industrial setting of large-scale gene synthesis, describing production from order to delivery.

  3. Plague and Climate: Scales Matter

    PubMed Central

    Ben Ari, Tamara; Neerinckx, Simon; Gage, Kenneth L.; Kreppel, Katharina; Laudisoit, Anne; Leirs, Herwig; Stenseth, Nils Chr.

    2011-01-01

    Plague is enzootic in wildlife populations of small mammals in central and eastern Asia, Africa, South and North America, and has been recognized recently as a reemerging threat to humans. Its causative agent Yersinia pestis relies on wild rodent hosts and flea vectors for its maintenance in nature. Climate influences all three components (i.e., bacteria, vectors, and hosts) of the plague system and is a likely factor to explain some of plague's variability from small and regional to large scales. Here, we review effects of climate variables on plague hosts and vectors from individual or population scales to studies on the whole plague system at a large scale. Upscaled versions of small-scale processes are often invoked to explain plague variability in time and space at larger scales, presumably because similar scale-independent mechanisms underlie these relationships. This linearity assumption is discussed in the light of recent research that suggests some of its limitations. PMID:21949648

  4. Psychometric properties of the Scale for Quality Evaluation of the Bachelor Degree in Nursing Version 2 (QBN 2).

    PubMed

    Macale, Loreana; Scialò, Gennaro; Di Sarra, Luca; De Marinis, Maria Grazia; Rocco, Gennaro; Vellone, Ercole; Alvaro, Rosaria

    2014-03-01

    To evaluate the variables that affect nursing education, it is important for nursing educators to have valid and reliable instruments that can measure the perceived quality of the Bachelor Degree in Nursing. This study tested the psychometric properties of the Scale for Quality Evaluation of the Bachelor Degree in Nursing using a descriptive design. Participants were first-, second- and third-year students of the Bachelor Degree in Nursing Science from three Italian universities. The Scale for Quality Evaluation of the Bachelor Degree in Nursing consists of 65 items that use a 4-point Likert scale ranging from "strongly disagree" to "strongly agree". The instrument comes from a prior version with 41 items that were modified and integrated with 24 items to improve reliability. Six hundred and fifty questionnaires were completed and considered for the present study. The mean age of the students was 24.63 years, and 65.5% were females. Reliability of the scale was very high (Cronbach's alpha = 0.96). Construct validity was tested with a factor analysis that yielded 7 factors. The Scale for Quality Evaluation of the Bachelor Degree in Nursing, although requiring further studies, represents a useful instrument to measure the quality of the Bachelor Nursing Degree. PMID:23810577
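
    Since the headline reliability figure is a Cronbach's alpha of 0.96 over 65 Likert items, a short stand-alone computation of that coefficient is sketched below. The responses are simulated placeholders; only the matrix dimensions echo the abstract.

        import numpy as np

        def cronbach_alpha(items: np.ndarray) -> float:
            """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1)
            total_var = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

        rng = np.random.default_rng(0)
        # 650 students x 65 items scored 1-4 ("strongly disagree" ... "strongly agree").
        # Random responses give a low alpha; real, internally consistent items give
        # values such as the 0.96 reported above.
        scores = rng.integers(1, 5, size=(650, 65))
        alpha = cronbach_alpha(scores)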

  5. Mechanically reliable scales and coatings

    SciTech Connect

    Tortorelli, P.F.; Alexander, K.B.

    1995-07-01

    As the first stage in examining the mechanical reliability of protective surface oxides, the behavior of alumina scales formed on iron-aluminum alloys during high-temperature cyclic oxidation was characterized in terms of damage and spallation tendencies. Scales were thermally grown on specimens of three iron-aluminum compositions using a series of exposures to air at 1000°C. Gravimetric data and microscopy revealed substantially better integrity and adhesion of the scales grown on an alloy containing zirconium. The use of polished (rather than just ground) specimens resulted in scales that were more suitable for subsequent characterization of mechanical reliability.

  6. DARHT Radiographic Grid Scale Correction

    SciTech Connect

    Warthen, Barry J.

    2015-02-13

    Recently it became apparent that the radiographic grid which has been used to calibrate the dimensional scale of DARHT radiographs was not centered at the location where the objects have been centered. This offset produced an error of 0.188% in the dimensional scaling of the radiographic images processed using the assumption that the grid and objects had the same center. This paper will show the derivation of the scaling correction, explain how new radiographs are being processed to account for the difference in location, and provide the details of how to correct radiographic images processed with the erroneous scale factor.
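
    For orientation only, a point-projection argument shows how an axial offset between the calibration grid and the object plane turns into a multiplicative scale error; the geometry and symbols below are assumptions, and only the 0.188% figure comes from the abstract.

        % Magnification of a plane at distance z from the source, detector at distance D (assumed geometry):
        M(z) = \frac{D}{z}, \qquad
        \frac{M(z_{\mathrm{grid}})}{M(z_{\mathrm{object}})} - 1
          = \frac{z_{\mathrm{object}}}{z_{\mathrm{grid}}} - 1
          \approx \frac{\Delta z}{z_{\mathrm{grid}}} \approx 0.188\%,
        \qquad
        \text{corrected dimension} = \text{measured dimension} \times (1 \pm 0.00188).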

  7. Limitations in thermal scale modeling

    NASA Technical Reports Server (NTRS)

    Macgregor, R. K.

    1971-01-01

    Thermal scale modeling limitations for radiation-conduction systems of unmanned spacecraft, discussing material thermal properties, model dimensions, instrumentation effects, and environment simulation

  8. Mixed scale joint graphical lasso.

    PubMed

    Pircalabelu, Eugen; Claeskens, Gerda; Waldorp, Lourens J

    2016-10-01

    We have developed a method for estimating brain networks from fMRI datasets that have not all been measured using the same set of brain regions. Some of the coarse-scale regions have been split into smaller subregions. The proposed penalized estimation procedure selects undirected graphical models with similar structures that combine information from several subjects and several coarseness scales. Both within-scale edges and between-scale edges that identify possible connections between a large region and its subregions are estimated. PMID:27324414
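
    For orientation, the sketch below runs a standard single-subject, single-scale graphical lasso with scikit-learn; it is not the authors' joint mixed-scale estimator, which additionally couples several subjects and coarseness scales and estimates between-scale edges. The data are placeholders.

        import numpy as np
        from sklearn.covariance import GraphicalLassoCV

        rng = np.random.default_rng(1)
        X = rng.standard_normal((200, 30))   # 200 fMRI time points, 30 regions (placeholder data)

        model = GraphicalLassoCV().fit(X)
        precision = model.precision_         # sparse inverse covariance (conditional dependencies)
        # Undirected edges of the estimated graphical model:
        edges = (np.abs(precision) > 1e-6) & ~np.eye(30, dtype=bool)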

  9. Validating Large Scale Networks Using Temporary Local Scale Networks

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The USDA NRCS Soil Climate Analysis Network and NOAA Climate Reference Networks are nationwide meteorological and land surface data networks with soil moisture measurements in the top layers of soil. There is considerable interest in scaling these point measurements to larger scales for validating ...

  10. INTERIOR VIEW SHOWING BATCH SCALES. SERIES OF FIVE SCALES WITH ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    INTERIOR VIEW SHOWING BATCH SCALES. SERIES OF FIVE SCALES WITH SIX DIFFERENT MATERIALS. MIX SIFTED DOWN FROM SILOS ABOVE. INGREDIENTS: SAND, SODA ASH, DOLOMITE LIMESTONE, NEPHELINE SYENITE, SALT CAKE. - Chambers-McKee Window Glass Company, Batch Plant, Clay Avenue Extension, Jeannette, Westmoreland County, PA

  11. Drift Scale THM Model

    SciTech Connect

    J. Rutqvist

    2004-10-07

    This model report documents the drift scale coupled thermal-hydrological-mechanical (THM) processes model development and presents simulations of the THM behavior in fractured rock close to emplacement drifts. The modeling and analyses are used to evaluate the impact of THM processes on permeability and flow in the near-field of the emplacement drifts. The results from this report are used to assess the importance of THM processes on seepage and support in the model reports ''Seepage Model for PA Including Drift Collapse'' and ''Abstraction of Drift Seepage'', and to support arguments for exclusion of features, events, and processes (FEPs) in the analysis reports ''Features, Events, and Processes in Unsaturated Zone Flow and Transport'' and ''Features, Events, and Processes: Disruptive Events''. The total system performance assessment (TSPA) calculations do not use any output from this report. Specifically, the coupled THM process model is applied to simulate the impact of THM processes on hydrologic properties (permeability and capillary strength) and flow in the near-field rock around a heat-releasing emplacement drift. The heat generated by the decay of radioactive waste results in elevated rock temperatures for thousands of years after waste emplacement. Depending on the thermal load, these temperatures are high enough to cause boiling conditions in the rock, resulting in water redistribution and altered flow paths. These temperatures will also cause thermal expansion of the rock, with the potential of opening or closing fractures and thus changing fracture permeability in the near-field. Understanding the THM coupled processes is important for the performance of the repository because the thermally induced permeability changes potentially affect the magnitude and spatial distribution of percolation flux in the vicinity of the drift, and hence the seepage of water into the drift. This is important because a sufficient amount of water must be available within a

  12. Nonrelativistic scale anomaly, and composite operators with complex scaling dimensions

    SciTech Connect

    Moroz, Sergej

    2011-05-15

    Research Highlights: > Nonrelativistic scale anomaly leads to operators with complex scaling dimensions. > We study an operator O = ψψ in quantum mechanics with a 1/r^2 potential. > The propagator of the composite operator is analytically computed. - Abstract: It is demonstrated that a nonrelativistic quantum scale anomaly manifests itself in the appearance of composite operators with complex scaling dimensions. In particular, we study nonrelativistic quantum mechanics with an inverse square potential and consider a composite s-wave operator O = ψψ. We analytically compute the scaling dimension of this operator and determine the propagator ⟨0|T O O^+|0⟩. The operator O represents an infinite tower of bound states with a geometric energy spectrum. Operators with higher angular momenta are briefly discussed.
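
    The connection between a complex scaling dimension and a geometric bound-state spectrum can be stated generically; the relation below is the standard discrete-scale-invariance argument, not a formula quoted from the paper. An exponent with imaginary part ν implies

        \Delta = \Delta_R \pm i\,\nu
        \;\;\Longrightarrow\;\;
        E_{n+1} = e^{-2\pi/\nu}\, E_n , \qquad n = 0, 1, 2, \dots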

  13. Exploratory and confirmatory factor analysis of the Willingness to Eat Whole Grains Questionnaire: A measure of young adults' attitudes toward consuming whole grain foods.

    PubMed

    Tuuri, Georgianna; Cater, Melissa; Craft, Brittany; Bailey, Ariana; Miketinas, Derek

    2016-10-01

    Whole grains are recommended by dietary guidelines because of their health-promoting properties, yet attitudes toward consuming these foods have not been examined. This study developed and validated a questionnaire to estimate willingness to consume whole grain foods. Focus group interviews with high school students and input from nutrition educators produced a list of 10 whole grain items that were included in the "Willingness to Eat Whole Grains Questionnaire". Young adult university students 18-29 years of age indicated their willingness to consume each of the whole grain foods using a 4-point, Likert-type scale with responses ranging from "always unwilling" to "always willing" and a fifth option of "never eaten". Participants' age, race/ethnicity, and gender were collected. Data were examined using exploratory factor analysis (EFA), confirmatory factor analysis (CFA), and test-retest reliability. The EFA test (n = 266; 65% female; 69% white) using principal axis factoring returned a single factor that included all survey items and explained 58.3% of the variance. The CFA (n = 252; 62% female, 74% white) supported a single-factor solution: χ²(35) = 80.57; RMSEA = 0.07; Comparative Fit Index = 0.92; Tucker-Lewis Index = 0.90; and SRMR = 0.05. The questionnaire, administered on two occasions separated by two weeks to 36 university students, demonstrated good test-retest reliability (r = 0.87, p < 0.0001). The "Willingness to Eat Whole Grains Questionnaire" had good face validity when used with a young adult population and will be a useful tool to help nutrition educators examine attitudes toward consuming nutrient-rich whole grain foods.
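
    Two of the simpler quantities reported above can be reproduced with a few lines of NumPy/SciPy: the test-retest correlation and a rough one-factor adequacy check via the leading eigenvalue of the inter-item correlation matrix. These are crude stand-ins for the full EFA/CFA machinery, and all data below are simulated placeholders.

        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(2)

        # Test-retest reliability: same 36 respondents, two administrations two weeks apart.
        time1 = rng.integers(1, 5, size=(36, 10)).sum(axis=1).astype(float)  # total scores, wave 1
        time2 = time1 + rng.normal(0.0, 1.5, size=36)                        # wave 2 (placeholder noise)
        r, p = pearsonr(time1, time2)                                        # abstract reports r = 0.87

        # One-factor check: share of variance carried by the first eigenvalue of the
        # item correlation matrix (the EFA above reported 58.3% for a single factor).
        items = rng.integers(1, 5, size=(266, 10)).astype(float)
        corr = np.corrcoef(items, rowvar=False)
        eigvals = np.linalg.eigvalsh(corr)[::-1]
        first_factor_share = eigvals[0] / eigvals.sum()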

  14. Drinking behaviors by stress level in Korean university students.

    PubMed

    Chung, Hye-Kyung; Lee, Hae-Young

    2012-04-01

    The purposes of this study are to estimate the stress level of university students, and to verify the relationships between stress level and drinking behavior. A questionnaire survey was administered to 430 university students in the Gangwon area in Korea from November 5 to November 28, 2008, and data from 391 students were used for the final statistical analysis. The most stressful factor was "Worry about academic achievements" (2.86 on a 4-point Likert-type scale). The subjects were divided into two groups, a low stress group (≤ 65.0) and a high stress group (≥ 66.0), by the mean value (65.1) and median value (66.0) of the stress levels. The drinking frequency was not different between the two stress groups, but the amount of alcohol consumption was significantly different (P < 0.05). The proportion of students reporting drinking "7 glasses or over" was higher in the lower stress group than in the higher stress group. In addition, factor 6, "Lack of learning ability", was negatively correlated with drinking frequency and the amount of alcohol consumption (P < 0.05), and factor 3, "Worry about academic achievements", was negatively correlated with the amount of drinking (P < 0.05). The major motive for drinking was "When overjoyed or there is something to celebrate" (2.62), and the main expected effect of drinking was "Drinking enables me to get together with people and shape my sociability" (2.73). The higher stress group showed significantly higher scores on several items in the categories of motives (P < 0.01), negative experience (P < 0.05), and expected effects (P < 0.05) of drinking than the lower stress group. Our results imply that university students at the lower stress level may drink more from social motives in positive drinking environments, while those at the higher stress level may engage in more problematic drinking despite their smaller amount of alcohol consumption.
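
    The group construction and comparisons described above follow a common pattern: split the sample at the mean/median stress score, then compare drinking measures between the groups. A minimal sketch with placeholder data is given below; the cut points come from the abstract, everything else is illustrative.

        import numpy as np
        from scipy.stats import mannwhitneyu, chi2_contingency

        rng = np.random.default_rng(3)
        stress = rng.normal(65.1, 10.0, size=391)   # total stress scores (mean taken from the abstract)
        drinks = rng.integers(0, 10, size=391)      # glasses per occasion (placeholder)

        low = stress <= 65.0                        # lower stress group
        high = stress >= 66.0                       # higher stress group

        # Amount of alcohol consumed, compared between the two stress groups.
        u_stat, p_amount = mannwhitneyu(drinks[low], drinks[high])

        # Drinking-frequency categories by group (placeholder contingency counts).
        table = np.array([[40, 80, 60],
                          [35, 85, 70]])
        chi2, p_freq, dof, expected = chi2_contingency(table)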

  15. Scale invariance vs conformal invariance

    NASA Astrophysics Data System (ADS)

    Nakayama, Yu

    2015-03-01

    In this review article, we discuss the distinction and possible equivalence between scale invariance and conformal invariance in relativistic quantum field theories. Under some technical assumptions, we can prove that scale invariant quantum field theories in d = 2 space-time dimensions necessarily possess the enhanced conformal symmetry. The use of the conformal symmetry is well appreciated in the literature, but the fact that all the scale invariant phenomena in d = 2 space-time dimensions enjoy the conformal property relies on the deep structure of the renormalization group. The outstanding question is whether this feature is specific to d = 2 space-time dimensions or it holds in higher dimensions, too. As of January 2014, our consensus is that there is no known example of scale invariant but non-conformal field theories in d = 4 space-time dimensions under the assumptions of (1) unitarity, (2) Poincaré invariance (causality), (3) discrete spectrum in scaling dimensions, (4) existence of scale current and (5) unbroken scale invariance in the vacuum. We have a perturbative proof of the enhancement of conformal invariance from scale invariance based on the higher dimensional analogue of Zamolodchikov's c-theorem, but the non-perturbative proof is yet to come. As a reference we have tried to collect as many interesting examples of scale invariance in relativistic quantum field theories as possible in this article. We give a complementary holographic argument based on the energy-condition of the gravitational system and the space-time diffeomorphism in order to support the claim of the symmetry enhancement. We believe that the possible enhancement of conformal invariance from scale invariance reveals the sublime nature of the renormalization group and space-time with holography. This review is based on a lecture note on scale invariance vs conformal invariance, on which the author gave lectures at Taiwan Central University for the 5th Taiwan School on Strings and

  16. The Callier-Azusa Scale.

    ERIC Educational Resources Information Center

    Stillman, Robert D., Ed.

    Presented is the Callier-Azusa Scale designed to aid in the assessment of deaf-blind and multihandicapped children in the areas of motor development, perceptual abilities, daily living skills, language development, and socialization. The scale is said to be predicated on the assumption that given the appropriate environment all children follow the…

  17. A Scale of Mobbing Impacts

    ERIC Educational Resources Information Center

    Yaman, Erkan

    2012-01-01

    The aim of this research was to develop the Mobbing Impacts Scale and to examine its validity and reliability analyses. The sample of study consisted of 509 teachers from Sakarya. In this study construct validity, internal consistency, test-retest reliabilities and item analysis of the scale were examined. As a result of factor analysis for…

  18. Rating scales for musician's dystonia

    PubMed Central

    Berque, Patrice; Jabusch, Hans-Christian; Altenmüller, Eckart; Frucht, Steven J.

    2013-01-01

    Musician's dystonia (MD) is a focal adult-onset dystonia most commonly involving the hand. It has much greater relative prevalence than non-musician’s focal hand dystonias, exhibits task specificity at the level of specific musical passages, and is a particularly difficult form of dystonia to treat. For most MD patients, the diagnosis confirms the end of their music performance careers. Research on treatments and pathophysiology is contingent upon measures of motor function abnormalities. In this review, we comprehensively survey the literature to identify the rating scales used in MD and the distribution of their use. We also summarize the extent to which the scales have been evaluated for their clinical utility, including reliability, validity, sensitivity, specificity to MD, and practicality for a clinical setting. Out of 135 publications, almost half (62) included no quantitative measures of motor function. The remaining 73 studies used a variety of choices from among 10 major rating scales. Most used subjective scales involving either patient or clinician ratings. Only 25% (18) of the studies used objective scales. None of the scales has been completely and rigorously evaluated for clinical utility. Whether studies involved treatments or pathophysiologic assays, there was a heterogeneous choice of rating scales used with no clear standard. As a result, the collective interpretive value of those studies is limited because the results are confounded by measurement effects. We suggest that the development and widespread adoption of a new clinically useful rating scale is critical for accelerating basic and clinical research in MD. PMID:23884039

  19. Voice, Schooling, Inequality, and Scale

    ERIC Educational Resources Information Center

    Collins, James

    2013-01-01

    The rich studies in this collection show that the investigation of voice requires analysis of "recognition" across layered spatial-temporal and sociolinguistic scales. I argue that the concepts of voice, recognition, and scale provide insight into contemporary educational inequality and that their study benefits, in turn, from paying attention to…

  20. Contrast Analysis for Scale Differences.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; And Others

    Research on tests for scale equality have focused exclusively on an overall test statistic and have not examined procedures for identifying specific differences in multiple group designs. The present study compares four contrast analysis procedures for scale differences in the single factor four-group design: (1) Tukey HSD; (2) Kramer-Tukey; (3)…

  1. The Differentiated Classroom Observation Scale

    ERIC Educational Resources Information Center

    Cassady, Jerrell C.; Neumeister, Kristie L. Speirs; Adams, Cheryll M.; Cross, Tracy L.; Dixon, Felicia A.; Pierce, Rebecca L.

    2004-01-01

    This article presents a new classroom observation scale that was developed to examine the differential learning activities and experiences of gifted children educated in regular classroom settings. The Differentiated Classroom Observation Scale (DCOS) is presented in total, with clarification of the coding practices and strategies. Although the…

  2. Very Large Scale Integration (VLSI).

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  3. Scale Shrinkage in Vertical Equating.

    ERIC Educational Resources Information Center

    Camilli, Gregory; And Others

    1993-01-01

    Three potential causes of scale shrinkage (measurement error, restriction of range, and multidimensionality) in item response theory vertical equating are discussed, and a more comprehensive model-based approach to establishing vertical scales is described. Test data from the National Assessment of Educational Progress are used to illustrate the…

  4. Multi-scale Material Appearance

    NASA Astrophysics Data System (ADS)

    Wu, Hongzhi

    Modeling and rendering the appearance of materials is important for a diverse range of applications of computer graphics - from automobile design to movies and cultural heritage. The appearance of materials varies considerably at different scales, posing significant challenges due to the sheer complexity of the data, as well as the need to maintain inter-scale consistency constraints. This thesis presents a series of studies around the modeling, rendering and editing of multi-scale material appearance. To efficiently render material appearance at multiple scales, we develop an object-space precomputed adaptive sampling method, which precomputes a hierarchy of view-independent points that preserve multi-level appearance. To support bi-scale material appearance design, we propose a novel reflectance filtering algorithm, which rapidly computes the large-scale appearance from small-scale details, by exploiting the low-rank structures of Bidirectional Visible Normal Distribution Functions and pre-rotated Bidirectional Reflectance Distribution Functions in the matrix formulation of the rendering algorithm. This approach can guide the physical realization of appearance, as well as the modeling of real-world materials using very sparse measurements. Finally, we present a bi-scale-inspired high-quality general representation for material appearance described by Bidirectional Texture Functions. Our representation is at once compact, easily editable, and amenable to efficient rendering.
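
    The reflectance-filtering idea above rests on low-rank structure: if the relevant matrix factors into a few components, a linear filter only has to touch those components rather than the full matrix. A generic NumPy sketch of that observation follows; the matrix sizes and contents are placeholders, not the thesis' BVNDF/BRDF data.

        import numpy as np

        rng = np.random.default_rng(4)
        A = rng.standard_normal((1024, 8)) @ rng.standard_normal((8, 1024))  # intrinsically rank-8

        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        k = 8
        A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]      # rank-k approximation (exact here)

        # A linear filter F applied to A only needs to act on the k left factors:
        F = rng.standard_normal((1024, 1024)) / 1024.0
        filtered_fast = (F @ (U[:, :k] * s[:k])) @ Vt[:k, :]  # filter k columns instead of 1024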

  5. Evaluation of Behavioral Expectation Scales.

    ERIC Educational Resources Information Center

    Zedeck, Sheldon; Baker, Henry T.

    Behavioral Expectation Scales developed by Smith and Kendall were evaluated. Results indicated slight interrater reliability between Head Nurses and Supervisors, moderate dependence among five performance dimensions, and correlation between two scales and tenure. Results are discussed in terms of procedural problems, critical incident problems,…

  6. OCCUPATIONAL ASPIRATION SCALE FOR FEMALES.

    ERIC Educational Resources Information Center

    JEFFS, GEORGE A.

    OCCUPATIONAL TITLES USABLE IN ASSESSING OCCUPATIONAL GOALS OF SENIOR HIGH SCHOOL FEMALES WERE SELECTED AS THE FIRST STEP IN ESTABLISHING AN OCCUPATIONAL ASPIRATION SCALE FOR FEMALES. A LIST OF 117 OCCUPATIONAL TITLES, COMPILED FROM THREE PREVIOUS STUDIES AND "THE DICTIONARY OF OCCUPATIONAL TITLES," WAS RATED ON A SIX-LEVEL SCALE AS TO ITS GENERAL…

  7. Children's Scale Errors with Tools

    ERIC Educational Resources Information Center

    Casler, Krista; Eshleman, Angelica; Greene, Kimberly; Terziyan, Treysi

    2011-01-01

    Children sometimes make "scale errors," attempting to interact with tiny object replicas as though they were full size. Here, we demonstrate that instrumental tools provide special insight into the origins of scale errors and, moreover, into the broader nature of children's purpose-guided reasoning and behavior with objects. In Study 1, 1.5- to…

  8. Spiritual Competency Scale: Further Analysis

    ERIC Educational Resources Information Center

    Dailey, Stephanie F.; Robertson, Linda A.; Gill, Carman S.

    2015-01-01

    This article describes a follow-up analysis of the Spiritual Competency Scale, which initially validated ASERVIC's (Association for Spiritual, Ethical and Religious Values in Counseling) spiritual competencies. The study examined whether the factor structure of the Spiritual Competency Scale would be supported by participants (i.e., ASERVIC…

  9. Convergent Validity of Four Innovativeness Scales.

    ERIC Educational Resources Information Center

    Goldsmith, Ronald E.

    1986-01-01

    Four scales of innovativeness were administered to two samples of undergraduate students: the Open Processing Scale, Innovativeness Scale, innovation subscale of the Jackson Personality Inventory, and Kirton Adaption-Innovation Inventory. Intercorrelations indicated the scales generally exhibited convergent validity. (GDC)

  10. Important Scaling Parameters for Testing Model-Scale Helicopter Rotors

    NASA Technical Reports Server (NTRS)

    Singleton, Jeffrey D.; Yeager, William T., Jr.

    1998-01-01

    An investigation into the effects of aerodynamic and aeroelastic scaling parameters on model scale helicopter rotors has been conducted in the NASA Langley Transonic Dynamics Tunnel. The effect of varying Reynolds number, blade Lock number, and structural elasticity on rotor performance has been studied and the performance results are discussed herein for two different rotor blade sets at two rotor advance ratios. One set of rotor blades was rigid and the other set was dynamically scaled to be representative of a main rotor design for a utility-class helicopter. The investigation was conducted in a heavy gas test medium; varying the test-medium density permits the acquisition of data for several Reynolds number and Lock number combinations.
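
    The textbook definitions below (not taken from the paper) show why a variable-density test medium gives access to several Reynolds number and Lock number combinations: both parameters are proportional to the test-medium density, so density and rotor speed can be traded against each other.

        \gamma = \frac{\rho\, a\, c\, R^{4}}{I_{\beta}} \quad \text{(blade Lock number)},
        \qquad
        Re = \frac{\rho\, V\, c}{\mu} \quad \text{(blade Reynolds number)},

    where ρ is the test-medium density, a the section lift-curve slope, c the blade chord, R the rotor radius, I_β the blade flapping inertia, V a reference velocity, and μ the dynamic viscosity.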

  11. Scale effect on overland flow connectivity at the plot scale

    NASA Astrophysics Data System (ADS)

    Peñuela, A.; Javaux, M.; Bielders, C. L.

    2013-01-01

    A major challenge in present-day hydrological sciences is to enhance the performance of existing distributed hydrological models through a better description of subgrid processes, in particular the subgrid connectivity of flow paths. The Relative Surface Connection (RSC) function was proposed by Antoine et al. (2009) as a functional indicator of runoff flow connectivity. For a given area, it expresses the percentage of the surface connected to the outflow boundary (C) as a function of the degree of filling of the depression storage. This function explicitly integrates the flow network at the soil surface and hence provides essential information regarding the flow paths' connectivity. It has been shown that this function could help improve the modeling of the hydrograph at the square meter scale, yet it is unknown how the scale affects the RSC function, and whether and how it can be extrapolated to other scales. The main objective of this research is to study the scale effect on overland flow connectivity (RSC function). For this purpose, digital elevation data of a real field (9 × 3 m) and three synthetic fields (6 × 6 m) with contrasting hydrological responses were used, and the RSC function was calculated at different scales by changing the length (l) or width (w) of the field. To different extents depending on the microtopography, border effects were observed for the smaller scales when decreasing l or w, which resulted in a strong decrease or increase of the maximum depression storage, respectively. There was no scale effect on the RSC function when changing w, but a remarkable scale effect was observed in the RSC function when changing l. In general, for a given degree of filling of the depression storage, C decreased as l increased, the change in C being inversely proportional to the change in l. However, this observation applied only up to approx. 50-70% (depending on the hydrological response of the field) of filling of depression storage, after which no
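
    A crude numerical illustration of the RSC idea is sketched below: as depression storage fills, an increasing share of the surface can pass flow, and the connected fraction C is the share of cells linked (through 4-neighbour paths of "spilling" cells) to the outflow boundary. The storage-deficit field is synthetic, and real RSC computations route water over measured microtopography (Antoine et al., 2009).

        import numpy as np
        from scipy.ndimage import label

        rng = np.random.default_rng(5)
        deficit = rng.exponential(2.0, size=(60, 180))  # local depression storage in mm (placeholder)
        outlet_row = deficit.shape[0] - 1               # downslope boundary of the plot

        total_storage = deficit.sum()
        rsc_curve = []
        for fill in np.linspace(0.0, 1.0, 11):
            depth = np.quantile(deficit, fill)          # uniform ponding depth (simplification)
            spilling = deficit <= depth                 # cells whose storage is full and can pass flow
            labels, _ = label(spilling)                 # 4-connected components of spilling cells
            outlet_ids = np.unique(labels[outlet_row, :])
            outlet_ids = outlet_ids[outlet_ids > 0]
            connected = spilling & np.isin(labels, outlet_ids)
            C = connected.mean()                        # fraction of the surface connected to the outlet
            filled_fraction = np.minimum(deficit, depth).sum() / total_storage
            rsc_curve.append((filled_fraction, C))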

  12. Scale effect on overland flow connectivity, at the interill scale

    NASA Astrophysics Data System (ADS)

    Penuela Fernandez, A.; Bielders, C.; Javaux, M.

    2012-04-01

    The relative surface connection function (RSC) was proposed by Antoine et al. (2009) as a functional indicator of runoff flow connectivity. For a given area, it expresses the percentage of the surface connected to the outlet (C) as a function of the degree of filling of the depression storage. This function explicitly integrates the flow network at the soil surface and hence provides essential information regarding the flow paths' connectivity. It has been shown that this function could help improve the modeling of the hydrograph at the square meter scale, yet it is unknown how the scale affects the RSC function, and whether and how it can be extrapolated to other scales. The main objective of this research is to study the scale effect on overland flow connectivity (RSC function). For this purpose, digital elevation data of a real field (9 x 3 m) and three synthetic fields (6 x 6 m) with contrasting hydrological responses were used, and the RSC function was calculated at different scales by changing the length (L) or width (l) of the field. Border effects were observed for the smaller scales. In most cases, for L or l smaller than 750 mm, increasing L or l resulted in a strong increase or decrease of the maximum depression storage, respectively. There was no scale effect on the RSC function when changing l. On the contrary, a remarkable scale effect was observed in the RSC function when changing L. In general, for a given degree of filling of the depression storage, C decreased as L increased. This change in C was inversely proportional to the change in L. This observation applied only up to approx. 50-70% (depending on the hydrological response of the field) of filling of depression storage, after which no correlation was found between C and L. The results of this study help identify the critical scale to study overland flow connectivity. At scales larger than the critical scale, the RSC function showed a great potential to be extrapolated to other scales.

  13. Scale effect on overland flow connectivity at the plot scale

    NASA Astrophysics Data System (ADS)

    Peñuela, A.; Javaux, M.; Bielders, C. L.

    2012-06-01

    A major challenge in present-day hydrological sciences is to enhance the performance of existing distributed hydrological models through a better description of subgrid processes, in particular the subgrid connectivity of flow paths. The relative surface connection function (RSC) was proposed by Antoine et al. (2009) as a functional indicator of runoff flow connectivity. For a given area, it expresses the percentage of the surface connected to the outflow boundary (C) as a function of the degree of filling of the depression storage. This function explicitly integrates the flow network at the soil surface and hence provides essential information regarding the flow paths' connectivity. It has been shown that this function could help improve the modeling of the hydrograph at the square meter scale, yet it is unknown how the scale affects the RSC function, and whether and how it can be extrapolated to other scales. The main objective of this research is to study the scale effect on overland flow connectivity (RSC function). For this purpose, digital elevation data of a real field (9 × 3 m) and three synthetic fields (6 × 6 m) with contrasting hydrological responses were used, and the RSC function was calculated at different scales by changing the length (l) or width (w) of the field. Border effects, to different extents depending on the microtopography, were observed for the smaller scales, when decreasing l or w, which resulted in a strong decrease or increase of the maximum depression storage, respectively. There was no scale effect on the RSC function when changing w. On the contrary, a remarkable scale effect was observed in the RSC function when changing l. In general, for a given degree of filling of the depression storage, C decreased as l increased. This change in C was inversely proportional to the change in l. This observation applied only up to approx. 50-70% (depending on the hydrological response of the field) of filling of depression storage, after which

  14. 27 CFR 19.186 - Package scales.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2012-04-01 2012-04-01 false Package scales. 19.186... Package Scale and Pipeline Requirements § 19.186 Package scales. Proprietors must ensure that scales used.... However, if a scale is not used during a 6-month period, it is only necessary to test the scale prior...

  15. 27 CFR 19.186 - Package scales.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2014-04-01 2014-04-01 false Package scales. 19.186... Package Scale and Pipeline Requirements § 19.186 Package scales. Proprietors must ensure that scales used.... However, if a scale is not used during a 6-month period, it is only necessary to test the scale prior...

  16. 27 CFR 19.186 - Package scales.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2011-04-01 2011-04-01 false Package scales. 19.186... Package Scale and Pipeline Requirements § 19.186 Package scales. Proprietors must ensure that scales used.... However, if a scale is not used during a 6-month period, it is only necessary to test the scale prior...

  17. 27 CFR 19.186 - Package scales.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2013-04-01 2013-04-01 false Package scales. 19.186... Package Scale and Pipeline Requirements § 19.186 Package scales. Proprietors must ensure that scales used.... However, if a scale is not used during a 6-month period, it is only necessary to test the scale prior...

  18. Scale-dependent halo bias from scale-dependent growth

    SciTech Connect

    Parfrey, Kyle; Hui, Lam; Sheth, Ravi K.

    2011-03-15

    We derive a general expression for the large-scale halo bias, in theories with a scale-dependent linear growth, using the excursion set formalism. Such theories include modified-gravity models, and models in which the dark energy clustering is non-negligible. A scale dependence is imprinted in both the formation and evolved biases by the scale-dependent growth. Mergers are accounted for in our derivation, which thus extends earlier work which focused on passive evolution. There is a simple analytic form for the bias for those theories in which the nonlinear collapse of perturbations is approximately the same as in general relativity. As an illustration, we apply our results to a simple Yukawa modification of gravity, and use Sloan Digital Sky Survey measurements of the clustering of luminous red galaxies to constrain the theory's parameters.

  19. Concordance among anticholinergic burden scales

    PubMed Central

    Naples, Jennifer G.; Marcum, Zachary A.; Perera, Subashan; Gray, Shelly L.; Newman, Anne B.; Simonsick, Eleanor M.; Yaffe, Kristine; Shorr, Ronald I.; Hanlon, Joseph T.

    2015-01-01

    Background There is no gold standard to assess potential anticholinergic burden of medications. Objectives To evaluate concordance among five commonly used anticholinergic scales. Design Cross-sectional secondary analysis. Setting Pittsburgh, PA, and Memphis, TN. Participants 3,055 community-dwelling older adults aged 70–79 with baseline medication data from the Health, Aging, and Body Composition study. Measurements Any use, weighted scores, and total standardized daily dosage were calculated using five anticholinergic measures (i.e., Anticholinergic Cognitive Burden [ACB] Scale, Anticholinergic Drug Scale [ADS], Anticholinergic Risk Scale [ARS], Drug Burden Index anticholinergic component [DBI-ACh], and Summated Anticholinergic Medications Scale [SAMS]). Concordance was evaluated with kappa statistics and Spearman rank correlations. Results Any anticholinergic use in rank order was 51% for the ACB, 43% for the ADS, 29% for the DBI-ACh, 23% for the ARS, and 16% for the SAMS. Kappa statistics for all pairwise use comparisons ranged from 0.33 to 0.68. Similarly, concordance as measured by weighted kappa statistics ranged from 0.54 to 0.70 among the three scales not incorporating dosage (ADS, ARS, and ACB). Spearman rank correlation between the DBI-ACh and SAMS was 0.50. Conclusions Only low to moderate concordance was found among the five anticholinergic scales. Future research is needed to examine how these differences in measurement impact their predictive validity with respect to clinically relevant outcomes, such as cognitive impairment. PMID:26480974
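
    The agreement statistics named above are available directly in scikit-learn and SciPy; a minimal sketch with placeholder ratings is shown below (the real analysis scored the five published scales against Health, Aging, and Body Composition medication data).

        import numpy as np
        from sklearn.metrics import cohen_kappa_score
        from scipy.stats import spearmanr

        rng = np.random.default_rng(6)
        n = 3055

        # Agreement on "any anticholinergic use" between two scales (placeholder 0/1 flags).
        acb_use = rng.integers(0, 2, size=n)
        ads_use = (acb_use + (rng.random(n) < 0.2)) % 2
        kappa_use = cohen_kappa_score(acb_use, ads_use)

        # Agreement on weighted burden scores (placeholder ordinal 0-3 scores).
        acb_score = rng.integers(0, 4, size=n)
        ads_score = np.clip(acb_score + rng.integers(-1, 2, size=n), 0, 3)
        weighted_kappa = cohen_kappa_score(acb_score, ads_score, weights="linear")

        # Rank correlation between the two dose-based measures (placeholders).
        dbi = rng.random(n)
        sams = dbi + rng.normal(0.0, 0.3, size=n)
        rho, p = spearmanr(dbi, sams)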

  20. Estimating Plot Scale Impacts on Watershed Scale Management

    NASA Astrophysics Data System (ADS)

    Shope, C. L.; Fleckenstein, J. H.; Tenhunen, J. D.; Peiffer, S.; Huwe, B.

    2010-12-01

    Over recent decades, land and resource use as well as climate change have been implicated in reduced ecosystem services (i.e., high-quality water yield, biodiversity, agricultural and forest products). The prediction of ecosystem services expected under future land use decisions and changing climate conditions has become increasingly important. Complex policy and management decisions require the integration of physical, economic, and social data over several scales to assess effects on water resources and ecology. Field-based meteorology, hydrology, biology, soil physics, plant production, solute and sediment transport, economic, and social behavior data were measured in a catchment of South Korea. A variety of models (Erosion-3D, HBV-Light, VS2DH, Hydrus, PIXGRO, DNDC, and Hydrogeosphere) are being used to simulate plot and field scale measurements within the catchment. Results from each of the local-scale models provide identification of sensitive, local-scale parameters which are then used as inputs into a large-scale watershed model. The experimental field data throughout the catchment were integrated with the spatially-distributed SWAT2005 model. Typically, macroscopic homogeneity and average effective model parameters are assumed when upscaling local-scale heterogeneous measurements to the watershed. The approach of our study was that the range in local-scale model parameter results can be used to define the sensitivity and uncertainty in the large-scale watershed model. The field-based and modeling framework described is being used to develop scenarios to examine spatial and temporal changes in land use practices and climatic effects on water quantity, water quality, and sediment transport. Development of accurate modeling scenarios requires understanding the social relationship between individual and policy-driven land management practices and the value of sustainable resources.

  1. SCALING PROPERTIES OF SMALL-SCALE FLUCTUATIONS IN MAGNETOHYDRODYNAMIC TURBULENCE

    SciTech Connect

    Perez, Jean Carlos; Mason, Joanne; Boldyrev, Stanislav; Cattaneo, Fausto E-mail: j.mason@exeter.ac.uk E-mail: cattaneo@flash.uchicago.edu

    2014-09-20

    Magnetohydrodynamic (MHD) turbulence in the majority of natural systems, including the interstellar medium, the solar corona, and the solar wind, has Reynolds numbers far exceeding the Reynolds numbers achievable in numerical experiments. Much attention is therefore drawn to the universal scaling properties of small-scale fluctuations, which can be reliably measured in the simulations and then extrapolated to astrophysical scales. However, in contrast with hydrodynamic turbulence, where the universal structure of the inertial and dissipation intervals is described by the Kolmogorov self-similarity, the scaling for MHD turbulence cannot be established based solely on dimensional arguments due to the presence of an intrinsic velocity scale—the Alfvén velocity. In this Letter, we demonstrate that the Kolmogorov first self-similarity hypothesis cannot be formulated for MHD turbulence in the same way it is formulated for the hydrodynamic case. Besides profound consequences for the analytical consideration, this also imposes stringent conditions on numerical studies of MHD turbulence. In contrast with the hydrodynamic case, the discretization scale in numerical simulations of MHD turbulence should decrease faster than the dissipation scale, in order for the simulations to remain resolved as the Reynolds number increases.
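
    For comparison, the hydrodynamic reference point that the Letter departs from is the Kolmogorov dissipation-scale estimate (a textbook relation, not a formula from the paper):

        \eta \sim L\, Re^{-3/4},

    so in hydrodynamic simulations the grid spacing only needs to track η; the argument above is that for MHD the discretization scale must shrink faster than the dissipation scale as the Reynolds number grows.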

  2. High performance oilfield scale inhibitors

    SciTech Connect

    Duccini, Y.; Dufour, A.; Hann, W.M.; Sanders, T.W.; Weinstein, B.

    1997-08-01

    Sea water often reacts with the formation water in offshore fields to produce barium, calcium and strontium sulfate deposits that hinder oil production. Newer fields often have more difficult to control scale problems than older ones, and current technology scale inhibitors are not able to control the deposits as well as needed. In addition, ever more stringent regulations designed to minimize the impact of inhibitors on the environment are being enacted. Three new inhibitors are presented that overcome many of the problems of older technology scale inhibitors.

  3. Semi-scaling cosmic strings

    SciTech Connect

    Vanchurin, Vitaly

    2010-11-01

    We develop a model of string dynamics with back-reaction from both scaling and non-scaling loops taken into account. The evolution of a string network is described by the distribution functions of coherence segments and kinks. We derive two non-linear equations which govern the evolution of the two distributions and solve them analytically in the limit of late times. We also show that the correlation function is an exponential, and solve the dynamics for the corresponding spectrum of scaling loops.

  4. Scaling of graphene integrated circuits.

    PubMed

    Bianchi, Massimiliano; Guerriero, Erica; Fiocco, Marco; Alberti, Ruggero; Polloni, Laura; Behnam, Ashkan; Carrion, Enrique A; Pop, Eric; Sordan, Roman

    2015-05-01

    The influence of transistor size reduction (scaling) on the speed of realistic multi-stage integrated circuits (ICs) represents the main performance metric of a given transistor technology. Despite extensive interest in graphene electronics, scaling efforts have so far focused on individual transistors rather than multi-stage ICs. Here we study the scaling of graphene ICs based on transistors from 3.3 to 0.5 μm gate lengths and with different channel widths, access lengths, and lead thicknesses. The shortest gate delay of 31 ps per stage was obtained in sub-micron graphene ring oscillators (ROs) oscillating at 4.3 GHz, which is the highest oscillation frequency obtained in any strictly low-dimensional material to date. We also derived the fundamental Johnson limit, showing that scaled graphene ICs could be used at high frequencies in applications with small voltage swing. PMID:25873359
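
    The per-stage delay and the ring-oscillator frequency quoted above are linked by the standard relation for an N-stage ring (textbook form, not specific to this paper):

        f_{\mathrm{osc}} = \frac{1}{2\, N\, \tau_{d}},

    so a per-stage delay of roughly 31 ps in a ring with a small number of stages lands in the few-GHz range reported.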

  5. Constructing cities, deconstructing scaling laws

    PubMed Central

    Arcaute, Elsa; Hatna, Erez; Ferguson, Peter; Youn, Hyejin; Johansson, Anders; Batty, Michael

    2015-01-01

    Cities can be characterized and modelled through different urban measures. Consistency within these observables is crucial in order to advance towards a science of cities. Bettencourt et al. have proposed that many of these urban measures can be predicted through universal scaling laws. We develop a framework to consistently define cities, using commuting to work and population density thresholds, and construct thousands of realizations of systems of cities with different boundaries for England and Wales. These serve as a laboratory for the scaling analysis of a large set of urban indicators. The analysis shows that population size alone does not provide us enough information to describe or predict the state of a city as previously proposed, indicating that the expected scaling laws are not corroborated. We found that most urban indicators scale linearly with city size, regardless of the definition of the urban boundaries. However, when nonlinear correlations are present, the exponent fluctuates considerably. PMID:25411405
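
    The scaling laws under test have the Bettencourt-style power-law form below; β > 1 is superlinear, β = 1 linear (the behaviour found here for most indicators), and β < 1 sublinear:

        Y = Y_0\, N^{\beta}, \qquad \log Y = \log Y_0 + \beta \log N ,

    where Y is an urban indicator and N the population of the city under a given boundary definition.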

  6. Scale locality of magnetohydrodynamic turbulence.

    PubMed

    Aluie, Hussein; Eyink, Gregory L

    2010-02-26

    We investigate the scale locality of cascades of conserved invariants at high kinetic and magnetic Reynolds numbers in the "inertial-inductive range" of magnetohydrodynamic (MHD) turbulence, where velocity and magnetic field increments exhibit suitable power-law scaling. We prove that fluxes of total energy and cross helicity (or, equivalently, fluxes of Elsässer energies) are dominated by the contributions of local triads. Flux of magnetic helicity may be dominated by nonlocal triads. The magnetic stretching term may also be dominated by nonlocal triads, but we prove that it can convert energy only between velocity and magnetic modes at comparable scales. We explain the disagreement with numerical studies that have claimed conversion nonlocally between disparate scales. We present supporting data from a 1024^3 simulation of forced MHD turbulence.

  7. Trends in Analytical Scale Separations.

    ERIC Educational Resources Information Center

    Jorgenson, James W.

    1984-01-01

    Discusses recent developments in the instrumentation and practice of analytical scale operations. Emphasizes detection devices and procedures in gas chromatography, liquid chromatography, electrophoresis, supercritical fluid chromatography, and field-flow fractionation. (JN)

  8. Pilot Scale Advanced Fogging Demonstration

    SciTech Connect

    Demmer, Rick L.; Fox, Don T.; Archiblad, Kip E.

    2015-01-01

    Experiments in 2006 developed a useful fog solution using three different chemical constituents. Optimization of the fog recipe and use of commercially available equipment were identified as needs that had not been addressed. During 2012 development work it was noted that low concentrations of the components hampered coverage and drying in the United Kingdom’s National Nuclear Laboratory’s testing much more so than was evident in the 2006 tests. In fiscal year 2014 the Idaho National Laboratory undertook a systematic optimization of the fogging formulation and conducted a non-radioactive, pilot scale demonstration using commercially available fogging equipment. While not as sophisticated as the equipment used in earlier testing, the new approach is much less expensive and readily available for smaller scale operations. Pilot scale testing was important to validate new equipment of an appropriate scale, optimize the chemistry of the fogging solution, and to realize the conceptual approach.

  9. Inflation in the scaling limit

    SciTech Connect

    Matarrese, S.; Ortolan, A.; Lucchin, F.

    1989-07-15

    We investigate the stochastic dynamics of the inflaton for a wide class of potentials leading either to chaotic or to power-law inflation. At late times the system enters a scaling regime where macroscopic order sets in: the field distribution sharply peaks around the classical slow-rollover configuration and curvature perturbations originate with a non-Gaussian scale-invariant statistics.

  10. Distributional Scaling in Heterogeneous Aquifers

    NASA Astrophysics Data System (ADS)

    Polsinelli, J. F.

    2015-12-01

    An investigation is undertaken into the fractal scaling properties of the piezometric head in a heterogeneous unconfined aquifer. The governing equations for the unconfined flow are derived from conservation of mass and the Darcy law. The Dupuit approximation will be used to model the dynamics. The spatially varying nature of the tendency to conduct flow (e.g. the hydraulic conductivity) is represented as a stochastic process. Experimental studies in the literature have indicated that the conductivity belongs to a class of non-stationary stochastic fields, called H-ss fields. The uncertainty in the soil parameters is imparted onto the flow variables; in groundwater investigations the potentiometric head will be a random function. The structure of the head field will be analyzed with an emphasis on the scaling properties. The scaling scheme for the modeling equations and the simulation procedure for the saturated hydraulic conductivity process will be explained, then the method will be validated through numerical experimentation using the USGS Modflow-2005 software. The results of the numerical simulations demonstrate that the head will exhibit multi-fractal scaling if the hydraulic conductivity exhibits multi-fractal scaling and the differential equations for the groundwater equation satisfy a particular set of scale invariance conditions.

  11. Scale-free primordial cosmology

    NASA Astrophysics Data System (ADS)

    Ijjas, Anna; Steinhardt, Paul J.; Loeb, Abraham

    2014-01-01

    The large-scale structure of the Universe suggests that the physics underlying its early evolution is scale-free. This was the historic motivation for the Harrison-Zel'dovich-Peebles spectrum and for inflation. Based on a hydrodynamical approach, we identify scale-free forms for the background equation of state for both inflationary and cyclic scenarios and use these forms to derive predictions for the spectral tilt and tensor-to-scalar ratio of primordial density perturbations. For the case of inflation, we find three classes of scale-free models with distinct predictions. Including all classes, we show that scale-free inflation predicts tensor-to-scalar ratio r > 10^-4. We show that the observationally favored class is theoretically disfavored because it suffers from an initial conditions problem and the hydrodynamical form of an unlikeliness problem similar to that identified recently for certain inflaton potentials. We contrast these results with those for scale-free cyclic models.

  12. Color constancy and hue scaling.

    PubMed

    Schultz, Sven; Doerschner, Katja; Maloney, Laurence T

    2006-01-01

    In this study, we used a hue scaling technique to examine human color constancy performance in simulated three-dimensional scenes. These scenes contained objects of various shapes and materials and a matte test patch at the center of the scene. Hue scaling settings were made for test patches under five different illuminations. Results show that subjects had nearly stable hue scalings for a given test surface across different illuminants. In a control experiment, only the test surfaces that belonged to one illumination condition were presented, blocked in front of a black background. Surprisingly, the hue scalings of the subjects in the blocked control experiment were not simply determined by the color codes of the test surface. Rather, they depended on the sequence of previously presented test stimuli. In contrast, subjects' hue scalings in a second control experiment (with order of presentations randomized) were completely determined by the color codes of the test surface. Our results show that hue scaling is a useful technique to investigate color constancy in a more phenomenological sense. Furthermore, the results from the blocked control experiment underline the important role of slow chromatic adaptation for color constancy.

  13. Scale-Dependent Dispersivity Explained Without Scale-Dependent Heterogeneity

    NASA Astrophysics Data System (ADS)

    Dhaliwal, P.; Engdahl, N. B.; Fogg, G. E.

    2011-12-01

    The observed scale-dependence of dispersivity has often been attributed to the scale-dependence of porous media heterogeneity. However, mass transfer between areas of high and low hydraulic conductivity and preferential solute migration may provide an alternative explanation for this phenomenon. To illustrate this point, we used geostatistical models representing the heterogeneity and interconnectedness of a typical aquifer system and plume modeling via a highly accurate random walk particle tracking method. The apparent dispersivity values were calculated using the statistical moments of the plumes. Apparent dispersivity was seen to grow from 0.01 m to 100 m over length scales of 0.06 m to 500 m even though heterogeneity scales and facies proportions were stationary and invariant with scale in the simulations. The results suggest that the increase in dispersivity was due solely to a stretching of the plume by two mechanisms. The first mechanism results from the diffusion of solute into areas of low conductivity and the second comes from the movement of solute through well-connected high-K channels. Under such conditions, an "asymptotic dispersivity" may never be reached.
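
    The apparent dispersivity referred to above is typically obtained from the spatial moments of the plume: for a plume advected a mean distance x̄, the longitudinal value is α_L = σ_x² / (2 x̄). The sketch below applies that relation to placeholder particle positions, not to the study's own simulations.

        import numpy as np

        def apparent_dispersivity(x: np.ndarray, x0: float = 0.0) -> float:
            """Apparent longitudinal dispersivity from particle positions x (metres)."""
            mean_displacement = x.mean() - x0
            return x.var(ddof=1) / (2.0 * mean_displacement)

        rng = np.random.default_rng(7)
        for L in [1.0, 10.0, 100.0, 500.0]:               # mean travel distances in metres
            # Placeholder plume whose spread grows faster than Fickian, so the
            # apparent dispersivity increases with travel distance, as in the abstract.
            x = rng.normal(loc=L, scale=np.sqrt(0.2 * L**1.3), size=10_000)
            alpha_L = apparent_dispersivity(x)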

  14. Scaling of extreme rainfall areas at a planetary scale.

    PubMed

    Devineni, Naresh; Lall, Upmanu; Xi, Chen; Ward, Philip

    2015-07-01

    Event magnitude and area scaling relationships for rainfall over different regions of the world have been presented in the literature for relatively short durations and over relatively small areas. In this paper, we present the first ever results on a global analysis of the scaling characteristics of extreme rainfall areas for durations ranging from 1 to 30 days. Broken power law models are fit in each case. The past work has been focused largely on the time and space scales associated with local and regional convection. The work presented here suggests that power law scaling may also apply to planetary-scale phenomena, such as frontal and monsoonal systems, and their interaction with local moisture recycling. Such features may have persistence over large areas corresponding to extreme rain and regional flood events. As a result, they lead to considerable hazard exposure. A caveat is that methods used for empirical power law identification have difficulties with edge effects due to finite domains. This leads to problems with robust model identification and interpretability of the underlying relationships. We use recent algorithms that aim to address some of these issues in a principled way. Theoretical research that could explain why such results may emerge across the world, as analyzed for the first time in this paper, is needed. PMID:26232980
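
    A "broken power law" here is a two-segment power law, i.e. a piecewise-linear fit in log-log coordinates with a break point; a minimal curve-fitting sketch with synthetic data is shown below (the actual event-area data and the edge-effect-aware estimators mentioned above are not reproduced).

        import numpy as np
        from scipy.optimize import curve_fit

        def broken_power_law(log_x, log_c, slope1, slope2, log_xb):
            """Continuous two-segment line in log-log coordinates, break at log_xb."""
            return np.where(log_x < log_xb,
                            log_c + slope1 * (log_x - log_xb),
                            log_c + slope2 * (log_x - log_xb))

        rng = np.random.default_rng(8)
        x = np.logspace(0, 4, 200)                       # e.g. exceedance area
        truth = broken_power_law(np.log10(x), 2.0, -0.4, -1.5, 2.0)
        y_obs = truth + rng.normal(0.0, 0.05, size=x.size)

        params, cov = curve_fit(broken_power_law, np.log10(x), y_obs,
                                p0=[1.0, -0.5, -1.0, 1.5])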

  15. SETI and astrobiology: The Rio Scale and the London Scale

    NASA Astrophysics Data System (ADS)

    Almár, Iván

    2011-11-01

    The public reaction to a discovery, the character of the corresponding risk communication, as well as the possible impact on science and society all depend on the character of the phenomenon discovered, on the method of discovery, on the distance to the phenomenon and, last but not least, on the reliability of the announcement itself. The Rio Scale - proposed together with Jill Tarter just a decade ago at an IAA symposium in Rio de Janeiro - attempts to quantify the relative importance of such a “low probability, high consequence event”, namely the announcement of an ETI discovery. After the publication of the book “The Eerie Silence” by Paul Davies it is necessary to examine how the recently suggested possible “technosignatures” or “technomarkers” mentioned in this book could be evaluated by the Rio Scale. The new London Scale, proposed at the Royal Society meeting in January 2010 in London, is a similar attempt to quantify the impact of an announcement regarding the discovery of ET life on an analogous ordinal scale between zero and ten. Here again the new concept of a “shadow biosphere” raised in this book deserves special attention, since a “weird form of life” found on Earth would not necessarily have an extraterrestrial origin; nevertheless it might be an important discovery in itself. Several arguments are presented that the methods, aims and targets of the “search for ET life” and the “search for ET intelligence” have recently been converging. The new problem is raised whether a unification of these two scales is necessary as a consequence of the convergence of the two subjects. Finally, it is suggested that experts in social sciences should take the structure of the respective scales into consideration when investigating, case by case, the possible effects of such discoveries on society.

  16. On the scaling of small-scale jet noise to large scale

    NASA Astrophysics Data System (ADS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-05-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or perceived noise level (PNL) noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10(exp 6) based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  18. On the scaling of small-scale jet noise to large scale

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-01-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or perceived noise level (PNL) noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10(exp 6) based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  20. Hidden scale invariance of metals

    NASA Astrophysics Data System (ADS)

    Hummel, Felix; Kresse, Georg; Dyre, Jeppe C.; Pedersen, Ulf R.

    2015-11-01

    Density functional theory (DFT) calculations of 58 liquid elements at their triple point show that most metals exhibit near proportionality between the thermal fluctuations of the virial and the potential energy in the isochoric ensemble. This demonstrates a general "hidden" scale invariance of metals making the condensed part of the thermodynamic phase diagram effectively one dimensional with respect to structure and dynamics. DFT computed density scaling exponents, related to the Grüneisen parameter, are in good agreement with experimental values for the 16 elements where reliable data were available. Hidden scale invariance is demonstrated in detail for magnesium by showing invariance of structure and dynamics. Computed melting curves of period three metals follow curves with invariance (isomorphs). The experimental structure factor of magnesium is predicted by assuming scale invariant inverse power-law (IPL) pair interactions. However, crystal packings of several transition metals (V, Cr, Mn, Fe, Nb, Mo, Ta, W, and Hg), most post-transition metals (Ga, In, Sn, and Tl), and the metalloids Si and Ge cannot be explained by the IPL assumption. The virial-energy correlation coefficients of iron and phosphorus are shown to increase at elevated pressures. Finally, we discuss how scale invariance explains the Grüneisen equation of state and a number of well-known empirical melting and freezing rules.

  1. Strength Scaling in Fiber Composites

    NASA Technical Reports Server (NTRS)

    Kellas, Sotiris; Morton, John

    1990-01-01

    A research program was initiated to study and isolate the factors responsible for scale effects in the tensile strength of graphite/epoxy composite laminates. Four layups were chosen with appropriate stacking sequences so as to highlight individual and interacting failure modes. Four scale sizes were selected for investigation: full scale, 3/4, 2/4, and 1/4 scale, with n = 4, 3, 2, and 1, respectively. The full-scale specimens were 32 plies thick, as compared to 24, 16, and 8 plies for the 3/4, 2/4, and 1/4 specimen sizes, respectively. Results were obtained in the form of tensile strength, stress-strain curves and damage development. Problems associated with strength degradation with increasing specimen size are isolated and discussed. Inconsistencies associated with strain measurements were also identified. Enhanced X-ray radiography was employed for damage evaluation following step loading. It was shown that fiber-dominated layups were less sensitive to scaling effects than matrix-dominated layups.

  2. Featured Invention: Laser Scaling Device

    NASA Technical Reports Server (NTRS)

    Dunn, Carol Anne

    2008-01-01

    In September 2003, NASA signed a nonexclusive license agreement with Armor Forensics, a subsidiary of Armor Holdings, Inc., for the laser scaling device under the Innovative Partnerships Program. Coupled with a measuring program, also developed by NASA, the unit provides crime scene investigators with the ability to shoot photographs at scale without having to physically enter the scene, analyzing details such as bloodspatter patterns and graffiti. This ability keeps the scene's components intact and pristine for the collection of information and evidence. The laser scaling device elegantly solved a pressing problem for NASA's shuttle operations team and also provided industry with a useful tool. For NASA, the laser scaling device is still used to measure divots or damage to the shuttle's external tank and other structures around the launchpad. When the invention also met similar needs within industry, the Innovative Partnerships Program provided information to Armor Forensics for licensing and marketing the laser scaling device. Jeff Kohler, technology transfer agent at Kennedy, added, "We also invited a representative from the FBI's special photography unit to Kennedy to meet with Armor Forensics and the innovator. Eventually the FBI ended up purchasing some units. Armor Forensics is also beginning to receive interest from DoD [Department of Defense] for use in military crime scene investigations overseas."

  3. Scaling Effect In Trade Network

    NASA Astrophysics Data System (ADS)

    Konar, M.; Lin, X.; Rushforth, R.; Ruddell, B. L.; Reimer, J.

    2015-12-01

    Scaling is an important issue in the physical sciences. Economic trade is increasingly of interest to the scientific community due to the natural resources (e.g. water, carbon, nutrients, etc.) embodied in traded commodities. Trade refers to the spatial and temporal redistribution of commodities, and is typically measured annually between countries. However, commodity exchange networks occur at many different scales, though data availability at finer temporal and spatial resolution is rare. Exchange networks may prove an important adaptation measure to cope with future climate and economic shocks. As such, it is essential to understand how commodity exchange networks scale, so that we can understand opportunities and roadblocks to the spatial and temporal redistribution of goods and services. To this end, we present an empirical analysis of trade systems across three spatial scales: global, sub-national in the United States, and county-scale in the United States. We compare and contrast the network properties, the self-sufficiency ratio, and performance of the gravity model of trade for these three exchange systems.

  4. Visions of Atomic Scale Tomography

    SciTech Connect

    Kelly, T. F.; Miller, Michael K; Rajan, Krishna; Ringer, S. P.

    2012-01-01

    A microscope, by definition, provides structural and analytical information about objects that are too small to see with the unaided eye. From the very first microscope, efforts to improve its capabilities and push them to ever-finer length scales have been pursued. In this context, it would seem that the concept of an ultimate microscope would have received much attention by now; but has it really ever been defined? Human knowledge extends to structures on a scale much finer than atoms, so it might seem that a proton-scale microscope or a quark-scale microscope would be the ultimate. However, we argue that an atomic-scale microscope is the ultimate for the following reason: the smallest building block for either synthetic structures or natural structures is the atom. Indeed, humans and nature both engineer structures with atoms, not quarks. So far as we know, all building blocks (atoms) of a given type are identical; it is the assembly of the building blocks that makes a useful structure. Thus, would a microscope that determines the position and identity of every atom in a structure with high precision and for large volumes be the ultimate microscope? We argue, yes. In this article, we consider how it could be built, and we ponder the answer to the equally important follow-on questions: who would care if it is built, and what could be achieved with it?

  5. Definition of a nucleophilicity scale.

    PubMed

    Jaramillo, Paula; Pérez, Patricia; Contreras, Renato; Tiznado, William; Fuentealba, Patricio

    2006-07-01

    This work deals with exploring some empirical scales of nucleophilicity. We have started by evaluating the experimental indices of nucleophilicity proposed by Legon and Millen on the basis of the measure of the force constants derived from vibrational frequencies using a probe dipole H-X (X = F, CN). The correlation of several theoretical parameters with this experimental scale has been evaluated. The theoretical parameters have been chosen as the minimum of the electrostatic potential V(min), the binding energy (BE) between the nucleophile and the H-X dipole, and the electrostatic potential measured at the position of the hydrogen atom V(H) when the nucleophile/H-X dipole complex is in its equilibrium geometry. All of them present good correlations with the experimental nucleophilicity scale. In addition, the BEs of the nucleophiles with two other Lewis acids (one hard, BF(3), and the other soft, BH(3)) have been evaluated. The results suggest that the Legon and Millen nucleophilicity scale and the electrostatic-potential-derived scales can describe, to a good approximation, the reactivity order of the nucleophiles only when the interaction with a probe electrophile is of the hard-hard type. For a covalent interaction that is orbital controlled, a new nucleophilicity index using information from the frontier orbitals of both the nucleophile and the electrophile has been proposed.

  6. The scaling of attention networks

    NASA Astrophysics Data System (ADS)

    Wang, Cheng-Jun; Wu, Lingfei

    2016-04-01

    We use clicks as a proxy of collective attention and construct networks to study the temporal dynamics of attention. In particular we collect the browsing records of millions of users on 1000 Web forums over two months. In the constructed networks, nodes are threads and edges represent the switching of users between threads within an hour. The investigated network properties include the number of threads N, the number of users UV, and the number of clicks PV. We find scaling functions PV ∼ UV^θ1, PV ∼ N^θ3, and UV ∼ N^θ2, in which the scaling exponents are always greater than 1. This means that (1) the studied networks maintain a self-similar flow structure in time, i.e., large networks are simply the scale-up versions of small networks; and (2) large networks are more "productive", in the sense that an average user would generate more clicks in the larger systems. We propose a revised version of Zipf's law to quantify the time-invariant flow structure of attention networks and relate it to the observed scaling properties. We also demonstrate the applied consequences of our research: forum classification based on scaling properties.
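
    A minimal sketch, using hypothetical counts, of how exponents such as θ1, θ2 and θ3 can be estimated as slopes of log-log regressions in the spirit of PV ∼ UV^θ1:

      import numpy as np

      # Hypothetical forum snapshots: threads N, unique users UV, clicks PV
      N = np.array([120, 350, 900, 2400, 6100])
      UV = np.array([300, 1000, 3000, 9500, 27000])
      PV = np.array([900, 3600, 12500, 46000, 150000])

      def scaling_exponent(x, y):
          # theta in y ~ x**theta, estimated as the slope of a log-log regression
          return np.polyfit(np.log(x), np.log(y), 1)[0]

      print("theta1 (PV ~ UV):", round(scaling_exponent(UV, PV), 2))
      print("theta2 (UV ~ N): ", round(scaling_exponent(N, UV), 2))
      print("theta3 (PV ~ N): ", round(scaling_exponent(N, PV), 2))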

  7. SCALE FORMATION IN CHRYSOPHYCEAN ALGAE

    PubMed Central

    Brown, R. Malcolm; Franke, Werner W.; Kleinig, Hans; Falk, Heinz; Sitte, Peter

    1970-01-01

    The cell wall of the marine chrysophycean alga Pleurochrysis scherfellii is composed of distinct wall fragments embedded in a gelatinous mass. The latter is a polysaccharide of pectic character which is rich in galactose and ribose. These wall fragments are identified as scales. They have been isolated and purified from the vegetative mother cell walls after zoospore formation. Their ultrastructure is described in an electron microscope study combining sectioning, freeze-etch, and negative staining techniques. The scales consist of a layer of concentrically arranged microfibrils (ribbons with cross-sections of 12 to 25 x 25 to 40 A) and underlying radial fibrils of similar dimensions. Such a network-plate is densely coated with particles which are assumed to be identical to the pectic component. The microfibrils are resistant to strong alkaline treatment and have been identified as cellulose by different methods, including sugar analysis after total hydrolysis, proton resonance spectroscopical examination (NMR spectroscopy) of the benzoylated product, and diverse histochemical tests. The formation and secretion of the scales can be followed along the maturing Golgi cisternae starting from a pronounced dilated "polymerization center" as a completely intracisternal process which ends in the exocytotic extrusion of the scales. The scales reveal the very same ultrastructure within the Golgi cisternae as they do in the cell wall. The present finding represents the first evidence on cellulose formation by the Golgi apparatus and is discussed in relation to a basic scheme for cellulose synthesis in plant cells in general. PMID:5513606

  8. Scales of Natural Flood Management

    NASA Astrophysics Data System (ADS)

    Nicholson, Alex; Quinn, Paul; Owen, Gareth; Hetherington, David; Piedra Lara, Miguel; O'Donnell, Greg

    2016-04-01

    The scientific field of Natural Flood Management (NFM) is receiving much attention and is now widely seen as a valid solution to sustainably manage flood risk whilst offering significant multiple benefits. However, few examples exist looking at NFM on a large scale (>10 km2). Well-implemented NFM has the effect of restoring more natural catchment hydrological and sedimentological processes, which in turn can have significant flood risk and WFD benefits for catchment waterbodies. These catchment-scale improvements in turn allow more 'natural' processes to be returned to rivers and streams, creating a more resilient system. Although certain NFM interventions may appear distant and disconnected from main-stem waterbodies, they will undoubtedly be contributing to WFD objectives at the catchment waterbody scale. This paper offers examples of NFM and explains how its benefits can be maximised through practical design across many scales (from individual feature up to the whole catchment). New tools are presented to assist in the selection of measures and their location, to appreciate the flooding benefit at the local catchment scale, and to show, through a Flood Impact Model, how best to reflect the impacts of local changes further downstream. The tools will be discussed in the context of our most recent experiences on NFM projects, including river catchments in the north east of England and in Scotland. This work has encouraged a more integrated approach to flood management planning that can use both traditional and novel NFM strategies in an effective and convincing way.

  9. Coping with Multiple Sclerosis Scale

    PubMed Central

    Parkerson, Holly A.; Kehler, Melissa D.; Sharpe, Donald

    2016-01-01

    Background: The Coping with Multiple Sclerosis Scale (CMSS) was developed to assess coping strategies specific to multiple sclerosis (MS). Despite its wide application in MS research, psychometric support for the CMSS remains limited to the initial factor analytic investigation by Pakenham in 2001. Methods: The current investigation assessed the factor structure and construct validity of the CMSS. Participants with MS (N = 453) completed the CMSS, as well as measures of disability related to MS (Multiple Sclerosis Impact Scale), quality of life (World Health Organization Quality of Life Brief Scale), and anxiety and depression (Hospital Anxiety and Depression Scale). Results: The original factor structure reported by Pakenham was a poor fit to the data. An alternate seven-factor structure was identified using exploratory factor analysis. Although there were some similarities with the existing CMSS subscales, differences in factor content and item loadings were found. Relationships between the revised CMSS subscales and additional measures were assessed, and the findings were consistent with previous research. Conclusions: Refinement of the CMSS is suggested, especially for subscales related to acceptance and avoidance strategies. Until further research is conducted on the revised CMSS, it is recommended that the original CMSS continue to be administered. Clinicians and researchers should be mindful of lack of support for the acceptance and avoidance subscales and should seek additional scales to assess these areas. PMID:27551244

  10. Galaxy clustering on large scales.

    PubMed

    Efstathiou, G

    1993-06-01

    I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real-space and redshift-space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳ 20 h^-1 Mpc, where the Hubble constant H0 = 100h km s^-1 Mpc^-1; 1 pc = 3.09 x 10^16 m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe.
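
    For context, the two-point correlation function ξ(r) referred to here is conventionally defined through the excess probability, relative to a random (Poisson) distribution, of finding a pair of galaxies in two volume elements separated by r; this is the standard definition, not a result specific to this paper:

      \[
        \mathrm{d}P \;=\; \bar{n}^{2}\,\bigl[\,1 + \xi(r)\,\bigr]\,\mathrm{d}V_{1}\,\mathrm{d}V_{2},
      \]

    where \bar{n} is the mean galaxy number density.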

  11. An environmentally friendly scale inhibitor

    SciTech Connect

    Dobbs, J.B.; Brown, J.M.

    1999-11-01

    This paper describes a method of inhibiting the formation of scales such as barium and strontium sulfate in low-pH aqueous systems, and calcium carbonate in systems containing high concentrations of dissolved iron. The chemical solution involves treating the aqueous system with an inhibitor designed to replace organic phosphonates. Typical low-pH aqueous systems where the inhibitor is particularly useful are oilfield produced water and resin-bed water softeners that form scale during low-pH acid regeneration operations. Downhole applications are recommended where high concentrations of dissolved iron are present in the produced water. This new approach to inhibition replaces typical organic phosphonates and polymers with a non-toxic, biodegradable scale inhibitor that performs in harsh environments.

  12. Softness Correlations Across Length Scales

    NASA Astrophysics Data System (ADS)

    Ivancic, Robert; Shavit, Amit; Rieser, Jennifer; Schoenholz, Samuel; Cubuk, Ekin; Durian, Douglas; Liu, Andrea; Riggleman, Robert

    In disordered systems, it is believed that mechanical failure begins with localized particle rearrangements. Recently, a machine learning method has been introduced to identify how likely a particle is to rearrange given its local structural environment, quantified by softness. We calculate the softness of particles in simulations of atomic Lennard-Jones mixtures, molecular Lennard-Jones oligomers, colloidal systems and granular systems. In each case, we find that the length scale characterizing spatial correlations of softness is approximately a particle diameter. These results provide a rationale for why localized rearrangements--whose size is presumably set by the scale of softness correlations--might occur in disordered systems across many length scales. Supported by DOE DE-FG02-05ER46199.

  13. Urban Transfer Entropy across Scales

    PubMed Central

    Murcio, Roberto

    2015-01-01

    The morphology of urban agglomeration is studied here in the context of information exchange between different spatio-temporal scales. Urban migration to and from cities is characterised as non-random and following non-random pathways. Cities are multidimensional non-linear phenomena, so understanding the relationships and connectivity between scales is important in determining how the interplay of local/regional urban policies may affect the distribution of urban settlements. In order to quantify these relationships, we follow an information theoretic approach using the concept of Transfer Entropy. Our analysis is based on a stochastic urban fractal model, which mimics urban growing settlements and migration waves. The results indicate how different policies could affect urban morphology in terms of the information generated across geographical scales. PMID:26207628

  14. Flavor hierarchies from dynamical scales

    NASA Astrophysics Data System (ADS)

    Panico, Giuliano; Pomarol, Alex

    2016-07-01

    One main obstacle for any beyond the SM (BSM) scenario solving the hierarchy problem is its potentially large contributions to electric dipole moments. An elegant way to avoid this problem is to have the light SM fermions couple to the BSM sector only through bilinears, f̄f. This possibility can be neatly implemented in composite Higgs models. We study the implications of dynamically generating the fermion Yukawa couplings at different scales, relating larger scales to lighter SM fermions. We show that all flavor and CP-violating constraints can be easily accommodated for a BSM scale of a few TeV, without requiring any extra symmetry. Contributions to B physics are mainly mediated by the top, giving a predictive pattern of deviations in ΔF = 2 and ΔF = 1 flavor observables that could be seen in future experiments.

  15. Critical Multicultural Education Competencies Scale: A Scale Development Study

    ERIC Educational Resources Information Center

    Acar-Ciftci, Yasemin

    2016-01-01

    The purpose of this study is to develop a scale in order to identify the critical multicultural education competencies of teachers. For this reason, first of all, drawing on the knowledge in the literature, a new conceptual framework was created with a deductive method based on critical theory, critical race theory and critical multicultural…

  16. Bath County Computer Attitude Scale: A Reliability and Validity Study.

    ERIC Educational Resources Information Center

    Moroz, Pauline A.; Nash, John B.

    The Bath County Computer Attitude Scale (BCCAS) has received limited attention concerning its reliability and validity with a U.S. adult population. As developed by G. G. Bear, H. C. Richards, and P. Lancaster in 1987, the instrument assessed attitudes toward computers in areas of computer use, computer-aided instruction, programming and technical…

  17. Childhood Career Development Scale: Scale Construction and Psychometric Properties

    ERIC Educational Resources Information Center

    Schultheiss, Donna E. Palladino; Stead, Graham B.

    2004-01-01

    The purpose of this investigation was to construct a theoretically driven and psychometrically sound childhood career development scale to measure career progress in fourth-through sixth-grade children. Super's nine dimensions (i.e., curiosity, exploration, information, key figures, interests, locus of control, time perspective, self-concept, and…

  18. Cavitation erosion size scale effects

    NASA Technical Reports Server (NTRS)

    Rao, P. V.; Buckley, D. H.

    1984-01-01

    Size scaling in cavitation erosion is a major problem confronting the design engineers of modern high speed machinery. An overview and erosion data analysis presented in this paper indicate that the size scale exponent n in the erosion rate relationship as a function of the size or diameter can vary from 1.7 to 4.9 depending on the type of device used. There is, however, a general agreement as to the values of n if the correlations are made with constant cavitation number.

  19. Scaling Properties of Universal Tetramers

    SciTech Connect

    Hadizadeh, M. R.; Yamashita, M. T.; Tomio, Lauro; Delfino, A.; Frederico, T.

    2011-09-23

    We evidence the existence of a universal correlation between the binding energies of successive four-boson bound states (tetramers), for large two-body scattering lengths (a), related to an additional scale not constrained by three-body Efimov physics. Relevant to ultracold atom experiments, the atom-trimer relaxation peaks for |a| → ∞ when the ratio between the tetramer and trimer energies is ≈ 4.6 and a new tetramer is formed. The new scale is also revealed for a < 0 by the prediction of a correlation between the positions of two successive peaks in the four-atom recombination process.

  20. Inflation at the electroweak scale

    NASA Technical Reports Server (NTRS)

    Knox, Lloyd; Turner, Michael S.

    1993-01-01

    We present a model for slow-rollover inflation where the vacuum energy that drives inflation is of the order of G_F^-2; unlike most models, the conversion of vacuum energy to radiation ('reheating') is moderately efficient. The scalar field responsible for inflation is a standard-model singlet, develops a vacuum expectation value of 4 x 10^6 GeV, has a mass of about 1 GeV, and can play a role in electroweak phenomena. We also discuss models where the energy scale of inflation is somewhat larger, but still well below the unification scale.

  1. Global scale predictability of floods

    NASA Astrophysics Data System (ADS)

    Weerts, Albrecht; Gijsbers, Peter; Sperna Weiland, Frederiek

    2016-04-01

    Flood (and storm surge) forecasting at the continental and global scale has only become possible in recent years (Emmerton et al., 2016; Verlaan et al., 2015) due to the availability of meteorological forecasts, global-scale precipitation products, and global-scale hydrologic and hydrodynamic models. Deltares has set up GLOFFIS, a research-oriented multi-model operational flood forecasting system based on Delft-FEWS, in an open experimental ICT facility called Id-Lab. In GLOFFIS both the W3RA and PCRGLOB-WB models are run in ensemble mode using GEFS and ECMWF-EPS (latency 2 days). GLOFFIS will be used for experiments into the predictability of floods (and droughts) and their dependency on initial state estimation, meteorological forcing and the hydrologic model used. Here we present initial results of verification of the ensemble flood forecasts derived with the GLOFFIS system. Emmerton, R., Stephens, L., Pappenberger, F., Pagano, T., Weerts, A., Wood, A., Salamon, P., Brown, J., Hjerdt, N., Donnelly, C., Cloke, H. Continental and Global Scale Flood Forecasting Systems, WIREs Water (accepted), 2016. Verlaan, M., De Kleermaeker, S., Buckman, L. GLOSSIS: Global storm surge forecasting and information system, Australasian Coasts & Ports Conference, 15-18 September 2015, Auckland, New Zealand.

  2. The Maternal Behavior Rating Scale.

    ERIC Educational Resources Information Center

    Mahoney, Gerald; And Others

    1986-01-01

    Independent ratings of videotaped sessions in which mothers (N=60) interacted with their mentally retarded children (ages 1-3) suggested that potentially important components of maternal behavior (child orientedness/pleasure and control) may be assessed with the seven-item short form of the Maternal Behavior Rating Scale. (JW)

  3. The Creative Processes Rating Scale.

    ERIC Educational Resources Information Center

    Kulp, Margaret; Tarter, Barbara J.

    1986-01-01

    Developed from research about and teacher experience with children and creativity, the Creative Processes Rating Scale was tested with 100 sixth graders and found to be an effective instrument (which can be used by teachers with no experience in art) for assessing the creative processes of children in the visual arts. (Author/CB)

  4. Nanotribology: Rubbing on Small Scale

    ERIC Educational Resources Information Center

    Dickinson, J. Thomas

    2005-01-01

    Nanometer-scale investigations offer the potential of providing first-principles understanding of tribo-systems in terms of fundamental intermolecular forces. Some of the basic issues and motivations for the use of scanning probes in the area of nanotribology are presented.

  5. SCALING: Wind Tunnel to Flight

    NASA Astrophysics Data System (ADS)

    Bushnell, Dennis M.

    2006-01-01

    Wind tunnels have wide-ranging functionality, including many applications beyond aeronautics, and historically have been the major source of information for technological aerodynamics/aeronautical applications. There are a myriad of scaling issues/differences from flight to wind tunnel, and their study and impacts are uneven and a function of the particular type of extant flow phenomena. Typically, the most serious discrepancies are associated with flow separation. The tremendous ongoing increases in numerical simulation capability are changing and in many aspects have changed the function of the wind tunnel from a (scaled) "predictor" to a source of computational calibration/validation information with the computation then utilized as the flight prediction/scaling tool. Numerical simulations can increasingly include the influences of the various scaling issues. This wind tunnel role change has been occurring for decades as computational capability improves in all aspects. Additional issues driving this trend are the increasing cost (and time) disparity between physical experiments and computations, and increasingly stringent accuracy requirements.

  6. The Adolescent Drug Involvement Scale.

    ERIC Educational Resources Information Center

    Moberg, D. Paul; Hahn, Lori

    1991-01-01

    Developed Adolescent Drug Involvement Scale (ADIS) to measure level of drug involvement, considered as continuum ranging from no use to severe dependency, in adolescents. Administered ADIS to 453 adolescents referred for treatment. Results indicated acceptable internal consistency and provide preliminary evidence of validity. Scores correlated…

  7. Hydrodynamic aspects of shark scales

    NASA Astrophysics Data System (ADS)

    Raschi, W. G.; Musick, J. A.

    1986-03-01

    Ridge morphometrics on placoid scales from 12 galeoid shark species were examined in order to evaluate their potential value for frictional drag reduction. The geometry of the shark scales is similar to longitudinally grooved surfaces (riblets) that have previously been shown to give 8 percent skin-friction reduction for turbulent boundary layers. The present study of the shark scales was undertaken to determine if the physical dimensions of the ridges on the shark scales are of the right magnitude to be used by the sharks for drag reduction, based on previous riblet work. The results indicate that the ridge heights and spacings are normally maintained between the predicted optimal values proposed for voluntary and burst swimming speeds throughout the individual's ontogeny. Moreover, the species which might be considered the faster swimmers possess smaller and more closely spaced ridges which, based on the riblet work, would suggest a greater frictional drag reduction at high swimming speeds, as compared to their more sluggish counterparts.

  8. Scale invariance and superfluid turbulence

    NASA Astrophysics Data System (ADS)

    Sen, Siddhartha; Ray, Koushik

    2013-11-01

    We construct a Schroedinger field theory invariant under local spatial scaling. It is shown to provide an effective theory of superfluid turbulence by deriving, analytically, the observed Kolmogorov 5/3 law and to lead to a Biot-Savart interaction between the observed filament excitations of the system as well.

  9. Citizen Science Data and Scaling

    NASA Astrophysics Data System (ADS)

    Henderson, S.; Wasser, L. A.

    2013-12-01

    There is rapid growth in the collection of environmental data by non-experts. So-called 'citizen scientists' are collecting data on plant phenology, precipitation patterns, bird migration and winter feeding, mating calls of frogs in the spring, and numerous other topics and phenomena related to environmental science. This data is generally submitted to online programs (e.g., Project BudBurst, CoCoRaHS, Project FeederWatch, FrogWatch USA, etc.) and is freely available to scientists, educators, land managers, and decision makers. While the data is often used to address specific science questions, it also provides the opportunity to explore its utility in the context of ecosystem scaling. Citizen science data is being collected and submitted at an unprecedented rate and is of a spatial and temporal scale previously not possible. The amount of citizen science data vastly exceeds what scientists or land managers can collect on their own. As such, it provides opportunities to address scaling in the environmental sciences. This presentation will explore data from several citizen science programs in the context of scaling.

  11. Newspaper Scale and Newspaper Expenditures.

    ERIC Educational Resources Information Center

    Blankenburg, William B.

    Employing data from the 1986 Inland Daily Newspaper Association Cost and Revenue study, the effects of scale on the costs of various factors in newspaper production were examined. Despite several limitations of the Inland survey, the following tentative conclusions were reached: total expenses rise faster than circulation, and total revenues rise…

  12. Animal coloration: sexy spider scales.

    PubMed

    Taylor, Lisa A; McGraw, Kevin J

    2007-08-01

    Many male jumping spiders display vibrant colors that are used in visual communication. A recent microscopic study on a jumping spider from Singapore shows that three-layered 'scale sandwiches' of chitin and air are responsible for producing their brilliant iridescent body coloration. PMID:17686428

  13. Secondary School Burnout Scale (SSBS)

    ERIC Educational Resources Information Center

    Aypay, Ayse

    2012-01-01

    The purpose of this study is to develop "Secondary School Burnout Scale." Study group included 728 students out of 14 schools in four cities in Turkey. Both Exploratory Factor Analysis and Confirmatory Factor Analysis were conducted on the data. A seven-factor solution emerged. The seven factors explained 61% of the total variance. The model…

  14. Scaling up of renewable chemicals.

    PubMed

    Sanford, Karl; Chotani, Gopal; Danielson, Nathan; Zahn, James A

    2016-04-01

    The transition of promising technologies for production of renewable chemicals from a laboratory scale to commercial scale is often difficult and expensive. As a result, the timeframe estimated for commercialization is typically underestimated, resulting in much slower penetration of these promising new methods and products into the chemical industries. The theme of 'sugar is the next oil' connects biological, chemical, and thermochemical conversions of renewable feedstocks to products that are drop-in replacements for petroleum-derived chemicals or are new-to-market chemicals/materials. The latter typically offer a functionality advantage and can command higher prices that result in less severe scale-up challenges. However, for drop-in replacements, price is of paramount importance and competitive capital and operating expenditures are a prerequisite for success. Hence, scale-up of relevant technologies must be interfaced with effective and efficient management of both cell and steel factories. Details involved in all aspects of manufacturing, such as utilities, sterility, product recovery and purification, regulatory requirements, and emissions, must be managed successfully. PMID:26874264

  15. Multi-Scale Infrastructure Assessment

    EPA Science Inventory

    The U.S. Environmental Protection Agency’s (EPA) multi-scale infrastructure assessment project supports both water resource adaptation to climate change and the rehabilitation of the nation’s aging water infrastructure by providing tools, scientific data and information to progra...

  16. Hydrodynamic aspects of shark scales

    NASA Technical Reports Server (NTRS)

    Raschi, W. G.; Musick, J. A.

    1986-01-01

    Ridge morphometrics on placoid scales from 12 galeoid shark species were examined in order to evaluate their potential value for frictional drag reduction. The geometry of the shark scales is similar to longitudinally grooved surfaces (riblets) that have previously been shown to give 8 percent skin-friction reduction for turbulent boundary layers. The present study of the shark scales was undertaken to determine if the physical dimensions of the ridges on the shark scales are of the right magnitude to be used by the sharks for drag reduction, based on previous riblet work. The results indicate that the ridge heights and spacings are normally maintained between the predicted optimal values proposed for voluntary and burst swimming speeds throughout the individual's ontogeny. Moreover, the species which might be considered the faster swimmers possess smaller and more closely spaced ridges which, based on the riblet work, would suggest a greater frictional drag reduction at high swimming speeds, as compared to their more sluggish counterparts.

  18. Time scales of Magmatic Processes

    NASA Astrophysics Data System (ADS)

    Hawkesworth, C. J.

    2002-05-01

    Knowledge of the rates of natural processes is critical to the development of physically realistic models. For magmatic processes, rates are increasingly well determined from short-lived isotopes, and from diffusion-modified element profiles, on time scales that vary from tens of thousands of years to a few years. Our understanding of the melting processes beneath MOR has been revolutionised by the application of U-series isotopes, because they include isotopes with half lives similar to the time scales of melt generation and extraction. For island arcs there is much discussion of how to incorporate suggestions that Ra and Ba are transferred from the slab in a few thousand years, and yet significantly more time is required to generate the excess Pa isotopes. Once in the crust, crystallisation and differentiation may be driven by cooling, degassing and decompression, and these should be characterised by different time scales. Crystals preserve rich high-resolution records of changing magma compositions, but the time scales of those changes are difficult to establish. Isotope studies have shown that more evolved rock types tend to contain more old crystals that may be tens of thousands of years old at the time of eruption. Whether these are xenocrysts, or evidence for long-term crystallisation histories, remains controversial. Moreover, diffusion-modified element profiles, and crystal size distributions, suggest that crystals are often less than 100 years old. An alternative approach is to consider U-series isotope ratios in the magma, and how these may change with degree of magma evolution. These suggest that differentiation time scales may be up to 200 ky for magmas at the base of the crust, but for magmas that crystallise at shallower levels the time scales are much shorter. In some cases these are weeks and months, and crystallisation is likely to be due to decompression and degassing. One consequence of the short crystallisation times is that there may be insufficient

  19. Optimal scaling in ductile fracture

    NASA Astrophysics Data System (ADS)

    Fokoua Djodom, Landry

    This work is concerned with the derivation of optimal scaling laws, in the sense of matching lower and upper bounds on the energy, for a solid undergoing ductile fracture. The specific problem considered concerns a material sample in the form of an infinite slab of finite thickness subjected to prescribed opening displacements on its two surfaces. The solid is assumed to obey deformation-theory of plasticity and, in order to further simplify the analysis, we assume isotropic rigid-plastic deformations with zero plastic spin. When hardening exponents are given values consistent with observation, the energy is found to exhibit sublinear growth. We regularize the energy through the addition of nonlocal energy terms of the strain-gradient plasticity type. This nonlocal regularization has the effect of introducing an intrinsic length scale into the energy. We also put forth a physical argument that identifies the intrinsic length and suggests a linear growth of the nonlocal energy. Under these assumptions, ductile fracture emerges as the net result of two competing effects: whereas the sublinear growth of the local energy promotes localization of deformation to failure planes, the nonlocal regularization stabilizes this process, thus resulting in an orderly progression towards failure and a well-defined specific fracture energy. The optimal scaling laws derived here show that ductile fracture results from localization of deformations to void sheets, and that it requires a well-defined energy per unit fracture area. In particular, fractal modes of fracture are ruled out under the assumptions of the analysis. The optimal scaling laws additionally show that ductile fracture is cohesive in nature, i.e., it obeys a well-defined relation between tractions and opening displacements. Finally, the scaling laws supply a link between micromechanical properties and macroscopic fracture properties. In particular, they reveal the relative roles that surface energy and microplasticity

  20. Structural Similitude and Scaling Laws

    NASA Technical Reports Server (NTRS)

    Simitses, George J.

    1998-01-01

    Aircraft and spacecraft comprise the class of aerospace structures that require efficiency and wisdom in design, sophistication and accuracy in analysis and numerous and careful experimental evaluations of components and prototype, in order to achieve the necessary system reliability, performance and safety. Preliminary and/or concept design entails the assemblage of system mission requirements, system expected performance and identification of components and their connections as well as of manufacturing and system assembly techniques. This is accomplished through experience based on previous similar designs, and through the possible use of models to simulate the entire system characteristics. Detail design is heavily dependent on information and concepts derived from the previous steps. This information identifies critical design areas which need sophisticated analyses, and design and redesign procedures to achieve the expected component performance. This step may require several independent analysis models, which, in many instances, require component testing. The last step in the design process, before going to production, is the verification of the design. This step necessitates the production of large components and prototypes in order to test component and system analytical predictions and verify strength and performance requirements under the worst loading conditions that the system is expected to encounter in service. Clearly then, full-scale testing is in many cases necessary and always very expensive. In the aircraft industry, in addition to full-scale tests, certification and safety necessitate large component static and dynamic testing. Such tests are extremely difficult, time consuming and definitely absolutely necessary. Clearly, one should not expect that prototype testing will be totally eliminated in the aircraft industry. It is hoped, though, that we can reduce full-scale testing to a minimum. Full-scale large component testing is necessary in

  1. Time scales in cognitive neuroscience

    PubMed Central

    Papo, David

    2013-01-01

    Cognitive neuroscience boils down to describing the ways in which cognitive function results from brain activity. In turn, brain activity shows complex fluctuations, with structure at many spatio-temporal scales. Exactly how cognitive function inherits the physical dimensions of neural activity, though, is highly non-trivial, and so are generally the corresponding dimensions of cognitive phenomena. As for any physical phenomenon, when studying cognitive function, the first conceptual step should be that of establishing its dimensions. Here, we provide a systematic presentation of the temporal aspects of task-related brain activity, from the smallest scale of the brain imaging technique's resolution, to the observation time of a given experiment, through the characteristic time scales of the process under study. We first review some standard assumptions on the temporal scales of cognitive function. In spite of their general use, these assumptions hold true to a high degree of approximation for many cognitive (viz. fast perceptual) processes, but have their limitations for other ones (e.g., thinking or reasoning). We define in a rigorous way the temporal quantifiers of cognition at all scales, and illustrate how they qualitatively vary as a function of the properties of the cognitive process under study. We propose that each phenomenon should be approached with its own set of theoretical, methodological and analytical tools. In particular, we show that when treating cognitive processes such as thinking or reasoning, complex properties of ongoing brain activity, which can be drastically simplified when considering fast (e.g., perceptual) processes, start playing a major role, and not only characterize the temporal properties of task-related brain activity, but also determine the conditions for proper observation of the phenomena. Finally, some implications on the design of experiments, data analyses, and the choice of recording parameters are discussed. PMID:23626578

  2. Earthquake Scaling, Simulation and Forecasting

    NASA Astrophysics Data System (ADS)

    Sachs, Michael Karl

    Earthquakes are among the most devastating natural events faced by society. In 2011, just two events, the magnitude 6.3 earthquake in Christchurch, New Zealand, on February 22, and the magnitude 9.0 Tohoku earthquake off the coast of Japan on March 11, caused a combined total of $226 billion in economic losses. Over the last decade, 791,721 deaths were caused by earthquakes. Yet, despite their impact, our ability to accurately predict when earthquakes will occur is limited. This is due, in large part, to the fact that the fault systems that produce earthquakes are non-linear. The result is that very small differences in the systems now result in very large differences in the future, making forecasting difficult. In spite of this, there are patterns that exist in earthquake data. These patterns are often in the form of frequency-magnitude scaling relations that relate the number of smaller events observed to the number of larger events observed. In many cases these scaling relations show consistent behavior over a wide range of scales. This consistency forms the basis of most forecasting techniques. However, the utility of these scaling relations is limited by the size of the earthquake catalogs which, especially in the case of large events, are fairly small and limited to a few hundred years of events. In this dissertation I discuss three areas of earthquake science. The first is an overview of scaling behavior in a variety of complex systems, both models and natural systems. The focus of this area is to understand how this scaling behavior breaks down. The second is a description of the development and testing of an earthquake simulator called Virtual California designed to extend the observed catalog of earthquakes in California. This simulator uses novel techniques borrowed from statistical physics to enable the modeling of large fault systems over long periods of time. The third is an evaluation of existing earthquake forecasts, which focuses on the Regional
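
    As an illustration of the frequency-magnitude scaling relations mentioned here, a minimal sketch of the textbook Gutenberg-Richter b-value estimate using Aki's maximum-likelihood formula; the catalog values are hypothetical and the method is standard, not taken from the dissertation itself:

      import numpy as np

      # Hypothetical catalog of magnitudes at or above the completeness magnitude Mc
      mags = np.array([3.1, 3.4, 3.2, 4.0, 3.6, 5.1, 3.3, 3.8, 4.4, 3.5])
      Mc = 3.0

      # Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter b-value
      b = np.log10(np.e) / (mags.mean() - Mc)
      print(f"estimated b-value ~ {b:.2f}")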

  3. Evaluating the impact of farm scale innovation at catchment scale

    NASA Astrophysics Data System (ADS)

    van Breda, Phelia; De Clercq, Willem; Vlok, Pieter; Querner, Erik

    2014-05-01

    Hydrological modelling lends itself to other disciplines very well, normally as a process based system that acts as a catalogue of events taking place. These hydrological models are spatial-temporal in their design and are generally well suited for what-if situations in other disciplines. Scaling should therefore be a function of the purpose of the modelling. Process is always linked with scale or support but the temporal resolution can affect the results if the spatial scale is not suitable. The use of hydrological response units tends to lump area around physical features but disregards farm boundaries. Farm boundaries are often the more crucial uppermost resolution needed to gain more value from hydrological modelling. In the Letaba Catchment of South Africa, we find a generous portion of landuses, different models of ownership, different farming systems ranging from large commercial farms to small subsistence farming. All of these have the same basic right to water but water distribution in the catchment is somewhat of a problem. Since water quantity is also a problem, the water supply systems need to take into account that valuable production areas not be left without water. Clearly hydrological modelling should therefore be sensitive to specific landuse. As a measure of productivity, a system of small farmer production evaluation was designed. This activity presents a dynamic system outside hydrological modelling that is generally not being considered inside hydrological modelling but depends on hydrological modelling. For sustainable development, a number of important concepts needed to be aligned with activities in this region, and the regulatory actions also need to be adhered to. This study aimed at aligning the activities in a region to the vision and objectives of the regulatory authorities. South Africa's system of socio-economic development planning is complex and mostly ineffective. There are many regulatory authorities involved, often with unclear

  4. Mokken Scale Analysis Using Hierarchical Clustering Procedures

    ERIC Educational Resources Information Center

    van Abswoude, Alexandra A. H.; Vermunt, Jeroen K.; Hemker, Bas T.; van der Ark, L. Andries

    2004-01-01

    Mokken scale analysis (MSA) can be used to assess and build unidimensional scales from an item pool that is sensitive to multiple dimensions. These scales satisfy a set of scaling conditions, one of which follows from the model of monotone homogeneity. An important drawback of the MSA program is that the sequential item selection and scale…

  5. Stability of Rasch Scales over Time

    ERIC Educational Resources Information Center

    Taylor, Catherine S.; Lee, Yoonsun

    2010-01-01

    Item response theory (IRT) methods are generally used to create score scales for large-scale tests. Research has shown that IRT scales are stable across groups and over time. Most studies have focused on items that are dichotomously scored. Now Rasch and other IRT models are used to create scales for tests that include polytomously scored items.…

  6. 27 CFR 19.276 - Package scales.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    Proprietors shall ensure the accuracy of scales used for weighing packages of... Scales used to weigh packages designed to hold 10 wine gallons or less shall indicate weight in ounces or...

  7. 30 CFR 57.3202 - Scaling tools.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Mine Safety and Health Administration, Department of Labor; Safety and Health Standards - Underground Metal and Nonmetal Mines; Ground Control, Scaling and Support - Surface and Underground. § 57.3202 Scaling tools. Where manual scaling is performed, a scaling...

  8. Simple scale interpolator facilitates reading of graphs

    NASA Technical Reports Server (NTRS)

    Fazio, A.; Henry, B.; Hood, D.

    1966-01-01

    Set of cards with scale divisions and a scale finder permits accurate reading of the coordinates of points on linear or logarithmic graphs plotted on rectangular grids. The set contains 34 different scales for linear plotting and 28 single cycle scales for log plots.

  9. Emerging universe from scale invariance

    SciTech Connect

    Del Campo, Sergio; Herrera, Ramón; Guendelman, Eduardo I.; Labraña, Pedro E-mail: guendel@bgu.ac.il E-mail: plabrana@ubiobio.cl

    2010-06-01

    We consider a scale-invariant model which includes an R^2 term in the action and show that a stable "emerging universe" scenario is possible. The model belongs to a general class of theories in which an integration measure independent of the metric is introduced. To implement scale invariance (S.I.), a dilaton field is introduced. The integration of the equations of motion associated with the new measure gives rise to the spontaneous symmetry breaking (S.S.B.) of S.I. After S.S.B. of S.I. in the model with the R^2 term (with the first-order formalism applied), it is found that a nontrivial potential for the dilaton is generated. The dynamics of the scalar field becomes nonlinear, and these nonlinearities are instrumental in the stability of some of the emerging universe solutions, which exist for a parameter range of the theory.

  10. The scale of cosmic isotropy

    SciTech Connect

    Marinoni, C.; Bel, J.; Buzzi, A. E-mail: Julien.Bel@cpt.univ-mrs.fr

    2012-10-01

    The most fundamental premise to the standard model of the universe states that the large-scale properties of the universe are the same in all directions and at all comoving positions. Demonstrating this hypothesis has proven to be a formidable challenge. The cross-over scale R_iso above which the galaxy distribution becomes statistically isotropic is vaguely defined and poorly (if not at all) quantified. Here we report on a formalism that allows us to provide an unambiguous operational definition and an estimate of R_iso. We apply the method to galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7, finding that R_iso ~ 150 h^-1 Mpc. Besides providing a consistency test of the Copernican principle, this result is in agreement with predictions based on numerical simulations of the spatial distribution of galaxies in cold dark matter dominated cosmological models.

  11. SENSATION SEEKING SCALE: INDIAN ADAPTATION

    PubMed Central

    Basu, Debasish; Verma, Vijoy K.; Malhotra, Savita; Malhotra, Anil

    1993-01-01

    SUMMARY Sensation seeking refers to a biologically based personality dimension defined as the need for varied, novel and complex sensations and experiences, and the willingness to take physical and social risks for the sake of such experiences. Although researched worldwide for nearly three decades now, there is to date no published Indian study utilizing the concept of sensation seeking. This paper describes adaptation of the Sensation Seeking Scale for the Indian population. After due modification of the scale, its reliability, internal consistency and discriminant validity were established. Norms were developed for a defined segment of the general population. This study may be seen as the beginning of research in India on the subject of sensation seeking. PMID:21743627

  12. Thermodynamics from a scaling Hamiltonian

    NASA Astrophysics Data System (ADS)

    Del Pino, L. A.; Troncoso, P.; Curilef, S.

    2007-11-01

    There are problems with defining the thermodynamic limit of systems with long-range interactions; as a result, the thermodynamic behavior of these types of systems is anomalous. In the present work, we review some concepts from both extensive and nonextensive thermodynamic perspectives. We use a model, whose Hamiltonian takes into account spins ferromagnetically coupled in a chain via a power law that decays at large interparticle distance r as 1/r^alpha for alpha >= 0. Here, we review old nonextensive scaling. In addition, we propose a Hamiltonian scaled by 2((N/2)^(1-alpha) - 1)/(1-alpha) that explicitly includes the symmetry of the lattice and the dependence on the size N of the system. The approach enabled us to improve upon previous results. A numerical test is conducted through Monte Carlo simulations. In the model, periodic boundary conditions are adopted to eliminate surface effects.
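
    As a point of reference only (this is not code from the paper), the size and range dependence of the scaling factor quoted above can be tabulated numerically; the function name and the printed values are purely illustrative.

    ```python
    import numpy as np

    def kac_like_factor(N: int, alpha: float) -> float:
        """Scaling factor 2*((N/2)**(1-alpha) - 1)/(1 - alpha) quoted in the abstract.
        The alpha -> 1 limit is 2*ln(N/2)."""
        if np.isclose(alpha, 1.0):
            return 2.0 * np.log(N / 2.0)
        return 2.0 * ((N / 2.0) ** (1.0 - alpha) - 1.0) / (1.0 - alpha)

    # For alpha <= 1 the factor grows with N (long-range, non-extensive regime);
    # for alpha > 1 it saturates, recovering ordinary extensive behavior.
    for alpha in (0.0, 0.5, 1.0, 1.5, 2.0):
        print(alpha, [round(kac_like_factor(N, alpha), 3) for N in (10, 100, 1000)])
    ```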

  13. Allometric scaling laws of metabolism

    NASA Astrophysics Data System (ADS)

    da Silva, Jafferson Kamphorst Leal; Garcia, Guilherme J. M.; Barbosa, Lauro A.

    2006-12-01

    One of the most pervasive laws in biology is allometric scaling, whereby a biological variable Y is related to the mass M of the organism by a power law, Y = Y_0 M^b, where b is the so-called allometric exponent. The origin of these power laws is still a matter of dispute, mainly because biological laws, in general, do not follow from physical ones in a simple manner. In this work, we review the interspecific allometry of metabolic rates, where recent progress in the understanding of the interplay between geometrical, physical and biological constraints has been achieved. For many years, it was a universal belief that the basal metabolic rate (BMR) of all organisms is described by Kleiber's law (allometric exponent b = 3/4). A few years ago, a theoretical basis for this law was proposed, based on a resource distribution network common to all organisms. Nevertheless, the 3/4-law has been questioned recently. First, there is an ongoing debate as to whether the empirical value of b is 3/4 or 2/3, or even nonuniversal. Second, some mathematical and conceptual errors were found in these network models, weakening the proposed theoretical arguments. Another pertinent observation is that the maximal aerobically sustained metabolic rate of endotherms scales with an exponent larger than that of BMR. Here we present a critical discussion of the theoretical models proposed to explain the scaling of metabolic rates, and compare the predicted exponents with a review of the experimental literature. Our main conclusion is that although there is not a universal exponent, it should be possible to develop a unified theory for the common origin of the allometric scaling laws of metabolism.
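
    To make the power law concrete, a minimal sketch of how an allometric exponent is typically estimated: regress log Y on log M and read off the slope. The mass-rate pairs below are synthetic stand-ins, not data from the reviewed literature.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    mass = np.array([0.02, 0.5, 3.0, 70.0, 4000.0])                    # body mass (kg), hypothetical
    rate = 3.4 * mass ** 0.75 * rng.lognormal(0.0, 0.05, mass.size)    # metabolic rate (W), synthetic

    # Fit log Y = log Y0 + b log M by least squares; the slope is the allometric exponent b.
    b, log_y0 = np.polyfit(np.log(mass), np.log(rate), 1)
    print(f"estimated exponent b = {b:.2f}, prefactor Y0 = {np.exp(log_y0):.2f}")
    ```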

  14. Global scale groundwater flow model

    NASA Astrophysics Data System (ADS)

    Sutanudjaja, Edwin; de Graaf, Inge; van Beek, Ludovicus; Bierkens, Marc

    2013-04-01

    As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater sustains water flows in streams, rivers, lakes and wetlands, and thus supports ecosystem habitat and biodiversity, while its large natural storage provides a buffer against water shortages. Yet, the current generation of global-scale hydrological models does not include a groundwater flow component, which is a crucial part of the hydrological cycle and allows the simulation of groundwater head dynamics. In this study we present a steady-state MODFLOW (McDonald and Harbaugh, 1988) groundwater model on the global scale at 5 arc-minute resolution. The aquifer schematization and properties of this groundwater model were developed from available global lithological models (e.g. Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moorsdorff, in press). We force the groundwater model with the output from the large-scale hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the long-term net groundwater recharge and average surface water levels derived from routed channel discharge. We validated the calculated groundwater heads and depths with available head observations from different regions, including North and South America and Western Europe. Our results show that it is feasible to build a relatively simple global-scale groundwater model using existing information, and to estimate water table depths within acceptable accuracy in many parts of the world.
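
    For readers unfamiliar with what "steady-state groundwater model" means in practice, the toy one-dimensional sketch below solves the steady flow equation between two fixed-stage rivers with a finite-difference linear system. All parameter values are invented for illustration; the model described in the abstract is a global MODFLOW model, not this simplification.

    ```python
    import numpy as np

    # Toy 1D steady-state groundwater model: T * d2h/dx2 = -R between two rivers.
    L = 10_000.0   # aquifer width between rivers (m), assumed
    n = 101        # grid nodes
    T = 500.0      # transmissivity (m^2/day), assumed
    R = 0.0005     # net recharge (m/day), assumed
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]

    A = np.zeros((n, n))
    b = np.full(n, -R * dx**2 / T)      # interior nodes: h[i-1] - 2h[i] + h[i+1] = -R dx^2 / T
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = 20.0, 18.0            # fixed river stages (m), assumed
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0

    h = np.linalg.solve(A, b)
    print(f"maximum simulated water-table elevation: {h.max():.2f} m")
    ```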

  15. Quantum critical scaling in graphene.

    PubMed

    Sheehy, Daniel E; Schmalian, Jörg

    2007-11-30

    We show that the emergent relativistic symmetry of electrons in graphene near its quantum critical point (QCP) implies a crucial importance of the Coulomb interaction. We derive scaling laws, valid near the QCP, that dictate the nontrivial magnetic and charge response of interacting graphene. Our analysis yields numerous predictions for how the Coulomb interaction will be manifested in experimental observables such as the diamagnetic response and electronic compressibility. PMID:18233313

  16. Scaling Exponents in Financial Markets

    NASA Astrophysics Data System (ADS)

    Kim, Kyungsik; Kim, Cheol-Hyun; Kim, Soo Yong

    2007-03-01

    We study the dynamical behavior of four exchange rates in foreign exchange markets. A detrended fluctuation analysis (DFA) is applied to detect the long-range correlation embedded in the non-stationary time series. We find for our case that a persistent long-range correlation exists in the volatilities, which implies a deviation from the efficient market hypothesis. In particular, a crossover is shown to exist in the scaling behaviors of the volatilities.
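
    The DFA procedure mentioned above can be sketched in a few lines: integrate the mean-removed series into a profile, detrend it in windows of size s, and fit the slope of log F(s) against log s to obtain the scaling exponent. This is a minimal first-order implementation run on synthetic data, not the analysis code used in the study.

    ```python
    import numpy as np

    def dfa_exponent(x, scales):
        """Detrended fluctuation analysis: slope of log F(s) vs log s
        (first-order detrending, non-overlapping windows)."""
        y = np.cumsum(x - np.mean(x))                        # profile
        logF = []
        for s in scales:
            f2 = []
            t = np.arange(s)
            for w in range(len(y) // s):
                seg = y[w * s:(w + 1) * s]
                trend = np.polyval(np.polyfit(t, seg, 1), t)
                f2.append(np.mean((seg - trend) ** 2))
            logF.append(0.5 * np.log(np.mean(f2)))
        alpha, _ = np.polyfit(np.log(scales), logF, 1)
        return alpha

    # Example: absolute returns of a synthetic series as a crude volatility proxy.
    rng = np.random.default_rng(0)
    returns = rng.standard_normal(20_000)
    print(dfa_exponent(np.abs(returns), scales=[16, 32, 64, 128, 256, 512]))
    ```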

  17. Latest Developments in SLD Scaling

    NASA Technical Reports Server (NTRS)

    Tsao, Jen-Ching; Anderson, David N.

    2006-01-01

    Scaling methods have been shown previously to work well for supercooled large droplet (SLD) main ice shapes. However, feather sizes for some conditions have not been well represented by scale tests. To determine if there are fundamental differences between the development of feathers for Appendix C and SLD conditions, this study used time-sequenced photographs, viewing along the span of the model during icing sprays. An airspeed of 100 kt, cloud water drop MVDs of 30 and 140 microns, and stagnation freezing fractions of 0.30 and 0.50 were tested in the NASA Glenn Icing Research Tunnel using an unswept 91-cm-chord NACA0012 airfoil model mounted at 0 deg AOA. The photos indicated that the feathers that developed in a distinct region downstream of the leading-edge ice determined the horn location and angle. The angle at which feathers grew from the surface was also measured; results are shown for an airspeed of 150 kt, an MVD of 30 microns, and stagnation freezing fractions of 0.30 to 0.60. Feather angles were found to depend strongly on the stagnation freezing fraction, and were independent of either chordwise position on the model or time into the spray. Feather angles also correlated well with horn angles. For these tests, there did not appear to be fundamental differences between the physics of SLD and Appendix C icing; therefore, for these conditions the similarity parameters used for Appendix C scaling appear to be valid for SLD scaling as well. Further investigation into the cause of the large feather structures observed for some SLD conditions will continue.

  18. An investigation of ride quality rating scales

    NASA Technical Reports Server (NTRS)

    Dempsey, T. K.; Coates, G. D.; Leatherwood, J. D.

    1977-01-01

    An experimental investigation was conducted for the combined purposes of determining the relative merits of various category scales for the prediction of human discomfort response to vibration and of determining the mathematical relationships whereby subjective data are transformed from one scale to other scales. Sixteen category scales were analyzed, representing various parametric combinations of polarity (unipolar and bipolar), scale type, and number of scalar points. Results indicated that unipolar continuous-type scales containing either seven or nine scalar points provide the greatest reliability and discriminability. Transformations of subjective data between category scales were found to be feasible, with unipolar scales having a larger number of scalar points providing the greatest accuracy of transformation. The results contain coefficients for transformation of subjective data between the category scales investigated. A result of particular interest was that the comfort half of a bipolar scale was seldom used by subjects to describe their subjective reaction to vibration.

  19. Large scale cluster computing workshop

    SciTech Connect

    Dane Skow; Alan Silverman

    2002-12-23

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of 1000s of processors that will be used by 100s to 1000s of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and, by implication, to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP; (4) to identify and connect groups with similar interests within HENP and the larger clustering community.

  20. Mechanically reliable scales and coatings

    SciTech Connect

    Tortorelli, P.F.; Alexander, K.B.

    1995-06-01

    In many high-temperature fossil energy systems, corrosion and deleterious environmental effects arising from reactions with reactive gases and condensible products often compromise materials performance and, as a consequence, degrade operating efficiencies. Protection of materials from such reactions is best afforded by the formation of stable surface oxides (either as deposited coatings or thermally grown scales) that are slowly reacting, continuous, dense, and adherent to the substrate. However, the ability of normally brittle ceramic films and coatings to provide such protection has long been problematical, particularly for applications involving numerous or severe high-temperature thermal cycles or very aggressive (for example, sulfidizing) environments. A satisfactory understanding of how scale and coating integrity and adherence are improved by compositional, microstructural, and processing modifications is lacking. Therefore, to address this issue, the present work is intended to define the relationships between substrate characteristics (composition, microstructure, and mechanical behavior) and the structure and protective properties of deposited oxide coatings and/or thermally grown scales. Such information is crucial to the optimization of the chemical, interfacial, and mechanical properties of the protective oxides on high-temperature materials through control of processing and composition and directly supports the development of corrosion-resistant, high-temperature materials for improved energy and environmental control systems.

  1. Transition physics and scaling overview

    SciTech Connect

    Carlstrom, T.N.

    1995-12-01

    This paper presents an overview of recent experimental progress towards understanding H-mode transition physics and scaling. Terminology and techniques for studying H-mode are reviewed and discussed. The model of shear E x B flow stabilization of edge fluctuations at the L-H transition is gaining wide acceptance and is further supported by observations of edge rotation on a number of new devices. Observations of poloidal asymmetries of edge fluctuations and dephasing of density and potential fluctuations after the transition pose interesting challenges for understanding H-mode physics. Dedicated scans to determine the scaling of the power threshold have now been performed on many machines. A clear B_t dependence is universally observed, but the dependence on the line-averaged density is complicated. Other dependencies are also reported. Studies of the effect of neutrals and error fields on the power threshold are under investigation. The ITER threshold database has matured and offers guidance on the power threshold scaling issues relevant to next-step devices.

  2. A Lab-Scale CELSS

    NASA Technical Reports Server (NTRS)

    Flynn, Mark E.; Finn, Cory K.; Srinivasan, Venkatesh; Sun, Sidney; Harper, Lynn D. (Technical Monitor)

    1994-01-01

    It has been shown that prohibitive resupply costs for extended-duration manned space flight missions will demand that a high degree of recycling and in situ food production be implemented. A prime candidate for in situ food production is the growth of higher level plants. Research in the area of plant physiology is currently underway at many institutions. This research is aimed at the characterization and optimization of gas exchange, transpiration and food production of higher plants in order to support human life in space. However, there are a number of unresolved issues involved in making plant chambers an integral part of a closed life support system. For example, issues pertaining to the integration of tightly coupled, non-linear systems with small buffer volumes will need to be better understood in order to ensure successful long term operation of a Controlled Ecological Life Support System (CELSS). The Advanced Life Support Division at NASA Ames Research Center has embarked on a program to explore some of these issues and demonstrate the feasibility of the CELSS concept. The primary goal of the Laboratory Scale CELSS Project is to develop a fully-functioning integrated CELSS on a laboratory scale in order to provide insight, knowledge and experience applicable to the design of human-rated CELSS facilities. Phase I of this program involves the integration of a plant chamber with a solid waste processor. This paper will describe the requirements, design and some experimental results from Phase I of the Laboratory Scale CELSS Program.

  3. Temporal scaling in information propagation.

    PubMed

    Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi

    2014-01-01

    For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers. PMID:24939414
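
    A hedged sketch of the kind of estimate described above: bin forwarding events by the latency since the last interaction and fit the decay exponent of the empirical propagation probability on log-log axes. The variable names and the simulated power-law data are hypothetical, standing in for the social-media dataset used in the report.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    latency = rng.uniform(1.0, 1_000.0, 50_000)       # hours since last interaction (simulated)
    p_true = 0.5 * latency ** -0.8                    # assumed power-law decay of propagation probability
    forwarded = rng.random(50_000) < p_true           # True if the message propagated

    # Bin by latency (log-spaced) and estimate the propagation probability per bin.
    bins = np.logspace(0, 3, 15)
    centers = np.sqrt(bins[:-1] * bins[1:])
    idx = np.digitize(latency, bins) - 1
    p_hat = np.array([forwarded[idx == k].mean() for k in range(len(centers))])

    slope, _ = np.polyfit(np.log(centers), np.log(p_hat), 1)
    print(f"fitted decay exponent: {slope:.2f}")      # should recover roughly -0.8
    ```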

  4. Low on the London Scale

    NASA Astrophysics Data System (ADS)

    Webb, S.

    2013-09-01

    Until relatively recently, many authors have assumed that if extraterrestrial life is discovered it will be via the discovery of extraterrestrial intelligence: we can best try to detect life by adopting the SETI approach of trying to detect beacons or artefacts. The Rio Scale, proposed by Almár and Tarter in 2000, is a tool for quantifying the potential significance for society of any such reported detection. However, improvements in technology and advances in astrobiology raise the possibility that the discovery of extraterrestrial life will instead be via the detection of atmospheric biosignatures. The London Scale, proposed by Almár in 2010, attempts to quantify the potential significance of the discovery of extraterrestrial life rather than extraterrestrial intelligence. What might be the consequences of the announcement of a discovery that ranks low on the London Scale? In other words, what might be society's reaction if 'first contact' is via the remote sensing of the byproducts of unicellular organisms rather than with the products of high intelligence? Here, I examine some possible reactions to that question; in particular, I discuss how such an announcement might affect our views of life here on Earth and of humanity's place in the universe.

  5. Flavor from the electroweak scale

    SciTech Connect

    Bauer, Martin; Carena, Marcela; Gemmler, Katrin

    2015-11-04

    We discuss the possibility that flavor hierarchies arise from the electroweak scale in a two Higgs doublet model, in which the two Higgs doublets jointly act as the flavon. Quark masses and mixing angles are explained by effective Yukawa couplings, generated by higher dimensional operators involving quarks and Higgs doublets. Modified Higgs couplings yield important effects on the production cross sections and decay rates of the light Standard Model like Higgs. In addition, flavor changing neutral currents arise at tree-level and lead to strong constraints from meson-antimeson mixing. Remarkably, flavor constraints turn out to prefer a region in parameter space that is in excellent agreement with the one preferred by recent Higgs precision measurements at the Large Hadron Collider (LHC). Direct searches for extra scalars at the LHC lead to further constraints. Precise predictions for the production and decay modes of the additional Higgs bosons are derived, and we present benchmark scenarios for searches at the LHC Run II. As a result, flavor breaking at the electroweak scale as well as strong coupling effects demand a UV completion at the scale of a few TeV, possibly within the reach of the LHC.

  6. Non-relativistic scale anomalies

    NASA Astrophysics Data System (ADS)

    Arav, Igal; Chapman, Shira; Oz, Yaron

    2016-06-01

    We extend the cohomological analysis in arXiv:1410.5831 of anisotropic Lifshitz scale anomalies. We consider non-relativistic theories with a dynamical critical exponent z = 2 with or without non-relativistic boosts and a particle number symmetry. We distinguish between cases depending on whether the time direction does or does not induce a foliation structure. We analyse both 1 + 1 and 2 + 1 spacetime dimensions. In 1 + 1 dimensions we find no scale anomalies with Galilean boost symmetries. The anomalies in 2 + 1 dimensions with Galilean boosts and a foliation structure are all B-type and are identical to the Lifshitz case in the purely spatial sector. With Galilean boosts and without a foliation structure we find also an A-type scale anomaly. There is an infinite ladder of B-type anomalies in the absence of a foliation structure with or without Galilean boosts. We discuss the relation between the existence of a foliation structure and the causality of the field theory.

  7. Flavor from the electroweak scale

    DOE PAGES

    Bauer, Martin; Carena, Marcela; Gemmler, Katrin

    2015-11-04

    We discuss the possibility that flavor hierarchies arise from the electroweak scale in a two Higgs doublet model, in which the two Higgs doublets jointly act as the flavon. Quark masses and mixing angles are explained by effective Yukawa couplings, generated by higher dimensional operators involving quarks and Higgs doublets. Modified Higgs couplings yield important effects on the production cross sections and decay rates of the light Standard Model like Higgs. In addition, flavor changing neutral currents arise at tree-level and lead to strong constraints from meson-antimeson mixing. Remarkably, flavor constraints turn out to prefer a region in parameter space that is in excellent agreement with the one preferred by recent Higgs precision measurements at the Large Hadron Collider (LHC). Direct searches for extra scalars at the LHC lead to further constraints. Precise predictions for the production and decay modes of the additional Higgs bosons are derived, and we present benchmark scenarios for searches at the LHC Run II. As a result, flavor breaking at the electroweak scale as well as strong coupling effects demand a UV completion at the scale of a few TeV, possibly within the reach of the LHC.

  8. Temporal scaling in information propagation

    NASA Astrophysics Data System (ADS)

    Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi

    2014-06-01

    For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.

  9. Analytic theories of allometric scaling.

    PubMed

    Agutter, Paul S; Tuszynski, Jack A

    2011-04-01

    During the 13 years since it was first advanced, the fractal network theory (FNT), an analytic theory of allometric scaling, has been subjected to a wide range of methodological, mathematical and empirical criticisms, not all of which have been answered satisfactorily. FNT presumes a two-variable power-law relationship between metabolic rate and body mass. This assumption has been widely accepted in the past, but a growing body of evidence during the past quarter century has raised questions about its general validity. There is now a need for alternative theories of metabolic scaling that are consistent with empirical observations over a broad range of biological applications. In this article, we briefly review the limitations of FNT, examine the evidence that the two-variable power-law assumption is invalid, and outline alternative perspectives. In particular, we discuss quantum metabolism (QM), an analytic theory based on molecular-cellular processes. QM predicts the large variations in scaling exponent that are found empirically and also predicts the temperature dependence of the proportionality constant, issues that have eluded models such as FNT that are based on macroscopic and network properties of organisms.

  10. Dystonia rating scales: critique and recommendations

    PubMed Central

    Albanese, Alberto; Sorbo, Francesca Del; Comella, Cynthia; Jinnah, H.A.; Mink, Jonathan W.; Post, Bart; Vidailhet, Marie; Volkmann, Jens; Warner, Thomas T.; Leentjens, Albert F.G.; Martinez-Martin, Pablo; Stebbins, Glenn T.; Goetz, Christopher G.; Schrag, Anette

    2014-01-01

    Background: Many rating scales have been applied to the evaluation of dystonia, but only a few have been assessed for clinimetric properties. The Movement Disorders Society commissioned this task force to critique existing dystonia rating scales and place them in the clinical and clinimetric context. Methods: A systematic literature review was conducted to identify rating scales that have either been validated or used in dystonia. Results: Thirty-six potential scales were identified. Eight were excluded because they did not meet review criteria, leaving twenty-eight scales that were critiqued and rated by the task force. Seven scales were found to meet criteria to be "recommended": the Blepharospasm Disability Index is recommended for rating blepharospasm; the Cervical Dystonia Impact Scale and the Toronto Western Spasmodic Torticollis Rating Scale for rating cervical dystonia; the Craniocervical Dystonia Questionnaire for blepharospasm and cervical dystonia; the Voice Handicap Index (VHI) and the Vocal Performance Questionnaire (VPQ) for laryngeal dystonia; and the Fahn-Marsden Dystonia Rating Scale for rating generalized dystonia. Two "recommended" scales (VHI and VPQ) are generic scales validated on few patients with laryngeal dystonia, whereas the others are disease-specific scales. Twelve scales met criteria for "suggested" and seven scales met criteria for "listed". All the scales are individually reviewed in the online appendix. Conclusion: The task force recommends five specific dystonia scales and suggests further validation in dystonia of the two recommended generic voice-disorder scales. Existing scales for oromandibular, arm, and task-specific dystonia should be refined and fully assessed. Scales should be developed for body regions where no scales are available, such as the lower limbs and trunk. PMID:23893443

  11. Coupled length scales in eroding landscapes

    SciTech Connect

    Chan, Kelvin K.; Rothman, Daniel H.

    2001-05-01

    We report results from an empirical study of the anisotropic structure of eroding landscapes. By constructing a novel correlation function, we show quantitatively that small-scale channel-like features of landscapes are coupled to the large-scale structure of drainage basins. We show additionally that this two-scale interaction is scale-dependent. The latter observation suggests that a commonly applied effective equation for erosive transport may itself depend on scale.

  12. Apolipoprotein E genotype predicts 24-month Bayley Scales infant development score.

    PubMed

    Wright, Robert O; Hu, Howard; Silverman, Edwin K; Tsaih, Shirng W; Schwartz, Joel; Bellinger, David; Palazuelos, Eduardo; Weiss, Scott T; Hernandez-Avila, Mauricio

    2003-12-01

    Apolipoprotein E (APOE) regulates cholesterol and fatty acid metabolism, and may mediate synaptogenesis during neurodevelopment. To our knowledge, the effects of APOE4 isoforms on infant development have not been studied. This study was nested within a cohort of mother-infant pairs living in and around Mexico City. A multiple linear regression model was constructed using the 24-mo Mental Development Index (MDI) of the Bayley Scale as the primary outcome and infant APOE genotype as the primary risk factor of interest. Regression models stratified on APOE genotype were constructed to explore effect modification. Of 311 subjects, 53 (17%) carried at least one copy of the APOE4 allele. Mean (SD) MDI scores were 94.1 (14.3) among carriers of at least one copy of APOE4 and 91.2 (14.0) among E3/E2 carriers. After adjustment for covariates, APOE4 carrier status was associated with a 4.4-point (95% confidence interval: 0.1-8.7; p = 0.04) higher 24-mo MDI. In the stratified regression models, the negative effect of umbilical cord blood lead level on 24-mo MDI score was approximately 4-fold greater among APOE3/APOE2 carriers than among APOE4 carriers. These results suggest that subjects with the E4 isoform of APOE may have advantages over those with the E2 or E3 isoforms with respect to early-life neuronal/brain development.
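
    A minimal sketch of the stratified regression described above, using simulated numbers in place of the cohort data: fit MDI on cord-blood lead separately within each APOE stratum and compare the lead coefficients. Variable names and effect sizes are assumptions for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 311
    apoe4 = rng.random(n) < 0.17                     # carrier of at least one APOE4 allele (simulated)
    lead = rng.lognormal(1.5, 0.5, n)                # umbilical cord blood lead (simulated)
    slope = np.where(apoe4, -0.2, -0.8)              # assumed ~4-fold stronger lead effect in E2/E3
    mdi = 95 + slope * lead + rng.normal(0, 12, n)   # simulated 24-month MDI

    for label, mask in (("APOE4 carriers", apoe4), ("E2/E3 carriers", ~apoe4)):
        X = np.column_stack([np.ones(mask.sum()), lead[mask]])
        beta, *_ = np.linalg.lstsq(X, mdi[mask], rcond=None)
        print(f"{label}: lead coefficient = {beta[1]:.2f} MDI points per unit lead")
    ```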

  13. Evaluating the impact of farm scale innovation at catchment scale

    NASA Astrophysics Data System (ADS)

    van Breda, Phelia; De Clercq, Willem; Vlok, Pieter; Querner, Erik

    2014-05-01

    Hydrological modelling lends itself to other disciplines very well, normally as a process-based system that acts as a catalogue of events taking place. These hydrological models are spatial-temporal in their design and are generally well suited for what-if situations in other disciplines. Scaling should therefore be a function of the purpose of the modelling. Process is always linked with scale or support, but the temporal resolution can affect the results if the spatial scale is not suitable. The use of hydrological response units tends to lump the area around physical features but disregards farm boundaries. Farm boundaries are often the more crucial uppermost resolution needed to gain more value from hydrological modelling. In the Letaba Catchment of South Africa, we find a wide range of land uses, different models of ownership, and different farming systems ranging from large commercial farms to small subsistence farming. All of these have the same basic right to water, but water distribution in the catchment is somewhat of a problem. Since water quantity is also limited, the water supply systems need to ensure that valuable production areas are not left without water. Clearly, hydrological modelling should therefore be sensitive to specific land use. As a measure of productivity, a system of small-farmer production evaluation was designed. This activity presents a dynamic system outside hydrological modelling that is generally not considered inside hydrological modelling but depends on it. For sustainable development, a number of important concepts need to be aligned with activities in this region, and the regulatory actions also need to be adhered to. This study aimed at aligning the activities in a region with the vision and objectives of the regulatory authorities. South Africa's system of socio-economic development planning is complex and mostly ineffective. There are many regulatory authorities involved, often with unclear

  14. Scale-up considerations: Pilot to commercial scale

    NASA Astrophysics Data System (ADS)

    Weisiger, Dan

    1996-01-01

    The success of photovoltaic (PV) technology as a viable business enterprise depends largely on its ability to provide a competitive advantage over other current energy technologies in meeting the customers' needs. Successful commercialization of PV technology therefore requires, in part, an efficient and effective manufacturing strategy in order to ensure a superior quality, low cost product. Several key design considerations for process scale-up, associated with GPI's PV module manufacturing expansion project completed in spring 1994, were examined. Particular emphasis was given to product specification, process specification, process engineering design, site location selection, environmental/health/safety (EHS) factors, and plant maintenance.

  15. Derivation of physically motivated wind speed scales

    NASA Astrophysics Data System (ADS)

    Dotzek, Nikolai

    A class of new wind speed scales is proposed in which the relevant scaling factors are derived from physical quantities like mass flux density, energy density (pressure), or energy flux density. Hence, they are called Energy- or E-scales, and can be applied to wind speeds of any intensity. It is shown that the Mach scale is a special case of an E-scale. Aside from its foundation in physical quantities which allow for a calibration of the scales, the E-scale concept can help to overcome the present plethora of scales for winds in the range from gale to hurricane intensity. A procedure to convert existing data based on the Fujita-scale or other scales (Saffir-Simpson, TORRO, Beaufort) to their corresponding E-scales is outlined. Even for the large US tornado record, the workload of conversion in case of an adoption of the E-scale would in principle remain manageable (if the necessary metadata to do so were available), as primarily the F5 events would have to be re-rated. Compared to damage scales like the "Enhanced Fujita" or EF-scale concept recently implemented in the USA, the E-scales are based on first principles. They can consistently be applied all over the world for the purpose of climatological homogeneity. To account for international variations in building characteristics, one should not adapt wind speed scale thresholds to certain national building characteristics. Instead, one worldwide applicable wind speed scale based on physical principles should rather be complemented by nationally-adapted damage descriptions. The E-scale concept can provide the basis for such a standardised wind speed scale.
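
    As a rough illustration of anchoring a wind scale in physical quantities, the sketch below converts a wind speed into energy density (dynamic pressure) and energy flux density, and derives a logarithmic rating from an assumed reference speed. The reference speed, the base-2 logarithm, and the constant air density are assumptions of this sketch, not the calibration derived in the paper.

    ```python
    import numpy as np

    RHO = 1.225     # air density (kg/m^3), assumed constant
    V_REF = 25.0    # assumed reference wind speed (m/s); not the paper's calibration

    def wind_energetics(v_ms: float) -> dict:
        """Physically based quantities for a wind speed, plus an assumed log rating."""
        return {
            "dynamic_pressure_Pa": 0.5 * RHO * v_ms ** 2,    # energy density
            "energy_flux_W_per_m2": 0.5 * RHO * v_ms ** 3,   # energy flux density
            "log_rating": np.log2(v_ms / V_REF),             # illustrative logarithmic scale number
        }

    for v in (20, 33, 50, 70, 100):
        print(v, wind_energetics(v))
    ```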

  16. Preliminary Scaling Estimate for Select Small Scale Mixing Demonstration Tests

    SciTech Connect

    Wells, Beric E.; Fort, James A.; Gauglitz, Phillip A.; Rector, David R.; Schonewill, Philip P.

    2013-09-12

    The Hanford Site double-shell tank (DST) system provides the staging location for waste that will be transferred to the Hanford Tank Waste Treatment and Immobilization Plant (WTP). Specific WTP acceptance criteria for waste feed delivery describe the physical and chemical characteristics of the waste that must be met before the waste is transferred from the DSTs to the WTP. One of the more challenging requirements relates to the sampling and characterization of the undissolved solids (UDS) in a waste feed DST because the waste contains solid particles that settle and their concentration and relative proportion can change during the transfer of the waste in individual batches. A key uncertainty in the waste feed delivery system is the potential variation in UDS transferred in individual batches in comparison to an initial sample used for evaluating the acceptance criteria. To address this uncertainty, a number of small-scale mixing tests have been conducted as part of Washington River Protection Solutions’ Small Scale Mixing Demonstration (SSMD) project to determine the performance of the DST mixing and sampling systems.

  17. Dimensional Review of Scales for Forensic Photography.

    PubMed

    Ferrucci, Massimiliano; Doiron, Theodore D; Thompson, Robert M; Jones, John P; Freeman, Adam J; Neiman, Janice A

    2016-03-01

    Scales for photography provide a geometrical reference in the photographic documentation of a crime scene, pattern, or item of evidence. The ABFO No. 2 Standard Reference Scale (1) is used by the forensic science community as an accurate reference scale. We investigated the overall accuracy of the major centimeter graduations, internal/external diameters of the circles, error in placement of the circle centers, and leg perpendicularity. Four vendors were selected for the scales, and the features were measured on a vision-based coordinate measurement system. The scales were well within the specified tolerance for the length graduations. After 4 years, the same scales were measured to determine what change could be measured. The scales demonstrated acceptable stability in the scale length and center-to-center measurements; however, the perpendicularity exhibited change. The study results indicate that scale quality checks using certified metal rulers are good practice. PMID:27404626

  18. Proposing a tornado watch scale

    NASA Astrophysics Data System (ADS)

    Mason, Jonathan Brock

    This thesis provides an overview of language used in tornado safety recommendations from various sources, along with developing a rubric for scaled tornado safety recommendations, and subsequent development and testing of a tornado watch scale. The rubric is used to evaluate tornado refuge/shelter adequacy responses of Tuscaloosa residents gathered following the April 27, 2011 Tuscaloosa, Alabama EF4 tornado. There was a significant difference in the counts of refuge adequacy for Tuscaloosa residents when holding the locations during the April 27th tornado constant and comparing adequacy ratings for weak (EF0-EF1), strong (EF2-EF3) and violent (EF4-EF5) tornadoes. There was also a significant difference when comparing future tornado refuge plans of those same participants to the adequacy ratings for weak, strong and violent tornadoes. The tornado refuge rubric is then revised into a six-class, hierarchical Tornado Watch Scale (TWS) from Level 0 to Level 5 based on the likelihood of high-impact or low-impact severe weather events containing weak, strong or violent tornadoes. These levels represent maximum expected tornado intensity and include tornado safety recommendations from the tornado refuge rubric. Audio recordings similar to those used in current National Oceanic and Atmospheric Administration (NOAA) weather radio communications were developed to correspond to three levels of the TWS, a current Storm Prediction Center (SPC) tornado watch and a particularly dangerous situation (PDS) tornado watch. These were then used in interviews of Alabama residents to determine how changes to the information contained in the watch statements would affect each participant's tornado safety actions and perception of event danger. Results from interview participants (n=38) indicate a strong preference (97.37%) for the TWS when compared to current tornado watch and PDS tornado watch statements. Results also show the TWS elicits more adequate safety decisions from participants

  19. Scaling on a limestone flooring

    NASA Astrophysics Data System (ADS)

    Carmona-Quiroga, P. M.; Blanco-Varela, M. T.; Martínez-Ramírez, S.

    2012-04-01

    Natural stone can be used on nearly every surface, inside and outside buildings, but decay is more commonly reported from stones exposed to aggressive outdoor conditions. This study, instead, is an example of limestone weathering of uncertain origin in the interior of a residential building. The stone, used as flooring, started to exhibit loss of material in the form of scaling. These damages were observed before the building, located in the south of Spain (Málaga), was inhabited. Moreover, according to the company, the limestone satisfies the European standards for floorings UNE-EN 1341: 2002, UNE-EN 1343: 2003, and UNE-EN 12058: 2004. Under these circumstances, the main objective of this study was to assess the causes of this phenomenon. For this reason the composition of the mortar was determined and the stone was characterized from a mineralogical and petrological point of view. The latter material, a fossiliferous limestone from Egypt with natural fissure lines, is mainly composed of calcite, with quartz, kaolinite and apatite as minor phases. Moreover, using different spectroscopic and microscopic techniques (FTIR, micro-Raman, SEM-EDX, etc.), samples of weathered tiles, taken directly from the building, and of unweathered limestone tiles were examined, and a new mineralogical phase, trona, was identified at the scaled areas, which are connected with the natural veins of the stone. In fact, through BSE mapping the presence of sodium has been detected in these veins. This soluble sodium carbonate would have been dissolved in the natural waters from which the limestone was precipitated, would migrate with the rising capillary moisture, and would crystallize near the surface of the stone, starting the scaling phenomenon, which in historic masonry could be very damaging. Therefore, the weathering of the limestone would be related to the hygroscopic behaviour of this salt, but not to the construction methods used. This makes the limestone unable to be used on restoration

  20. Adopted: A practical salinity scale

    NASA Astrophysics Data System (ADS)

    The Unesco/ICES/SCOR/IAPSO Joint Panel on Oceanographic Tables and Standards has recommended the adoption of a Practical Salinity Scale, 1978, and a corresponding new International Equation of State of Seawater, 1980. A full account of the research leading to their recommendation is available in the series Unesco Technical Papers in Marine Science.The parent organizations have accepted the panel's recommendations and have set January 1, 1982, as the date when the new procedures, formulae, and tables should replace those now in use.

  1. Scale-free convection theory

    NASA Astrophysics Data System (ADS)

    Pasetto, Stefano; Chiosi, Cesare; Cropper, Mark; Grebel, Eva K.

    2015-08-01

    Convection is one of the fundamental mechanisms of energy transport, e.g., in planetology and oceanography as well as in astrophysics, where stellar structure is customarily described by the mixing-length theory, which makes use of the mixing-length scale parameter to express the convective flux, velocity, and temperature gradients of the convective elements and stellar medium. The mixing-length scale is taken to be proportional to the local pressure scale height of the star, and the proportionality factor (the mixing-length parameter) must be determined by comparing the stellar models to some calibrator, usually the Sun. No strong arguments exist to claim that the mixing-length parameter is the same in all stars and all evolutionary phases. Because of this, all stellar models in the literature are hampered by this basic uncertainty. In a recent paper (Pasetto et al. 2014) we presented the first fully analytical scale-free theory of convection that does not require the mixing-length parameter. Our self-consistent analytical formulation of convection determines all the properties of convection as a function of the physical behaviour of the convective elements themselves and the surrounding medium (be it a star, an ocean, or a primordial planet). The new theory of convection is formulated starting from a conventional solution of the Navier-Stokes/Euler equations, i.e. the Bernoulli equation for a perfect fluid, but expressed in a non-inertial reference frame co-moving with the convective elements. In our formalism, the motion of convective cells inside convective-unstable layers is fully determined by a new system of equations for convection in a non-local and time-dependent formalism. We obtained an analytical, non-local, time-dependent solution for the convective energy transport that does not depend on any free parameter. The predictions of the new theory in an astrophysical environment are compared with those from the standard mixing-length paradigm in stars with

  2. The NIST Length Scale Interferometer

    PubMed Central

    Beers, John S.; Penzes, William B.

    1999-01-01

    The National Institute of Standards and Technology (NIST) interferometer for measuring graduated length scales has been in use since 1965. It was developed in response to the redefinition of the meter in 1960 from the prototype platinum-iridium bar to the wavelength of light. The history of the interferometer is recalled, and its design and operation described. A continuous program of modernization by making physical modifications, measurement procedure changes and computational revisions is described, and the effects of these changes are evaluated. Results of a long-term measurement assurance program, the primary control on the measurement process, are presented, and improvements in measurement uncertainty are documented.

  3. Scale problem in wormhole physics

    SciTech Connect

    Kim, J. E.; Lee, K.

    1989-07-03

    Wormhole physics from the quantum theory of gravity coupled to second-rank antisymmetric tensor or Goldstone-boson fields leads to an effective potential for these fields. The cosmological energy-density bound is shown to put an upper bound on the cosmological constant which wormhole physics can make zero. This upper bound, of order 10^11 GeV, is far smaller than the Planck scale and barely compatible with the possible cosmological constant arising from grand unified theories. In addition, the effect of wormholes on the axion for the strong CP problem is discussed.

  4. Supergroups and economies of scale.

    PubMed

    Schlossberg, Steven

    2009-02-01

    With the changing environment for medical practice, physician practice models will continue to evolve. These "supergroups" create economies of scale, but their advantage is not only in the traditional economic sense. Practices with enough size are better able to meet the challenges of medical practice with increasing regulatory demands, the explosion of clinical knowledge, quality and information technology initiatives, and an increasingly tight labor market. Smaller practices can adapt some of these strategies selectively. Depending on the topic, smaller practices should think differently about how to approach the challenges of practice.

  5. New Scalings in Nuclear Fragmentation

    SciTech Connect

    Bonnet, E.; Bougault, R.; Galichet, E.; Gagnon-Moisan, F.; Guinet, D.; Lautesse, P.; Marini, P.; Parlog, M.

    2010-10-01

    Fragment partitions of fragmenting hot nuclei produced in central and semiperipheral collisions have been compared in the excitation energy region 4-10 MeV per nucleon where radial collective expansion takes place. It is shown that, for a given total excitation energy per nucleon, the amount of radial collective energy fixes the mean fragment multiplicity. It is also shown that, at a given total excitation energy per nucleon, the different properties of fragment partitions are completely determined by the reduced fragment multiplicity (i.e., normalized to the source size). Freeze-out volumes seem to play a role in the scalings observed.

  6. Drift-Scale Radionuclide Transport

    SciTech Connect

    J. Houseworth

    2004-09-22

    The purpose of this model report is to document the drift scale radionuclide transport model, taking into account the effects of emplacement drifts on flow and transport in the vicinity of the drift, which are not captured in the mountain-scale unsaturated zone (UZ) flow and transport models ''UZ Flow Models and Submodels'' (BSC 2004 [DIRS 169861]), ''Radionuclide Transport Models Under Ambient Conditions'' (BSC 2004 [DIRS 164500]), and ''Particle Tracking Model and Abstraction of Transport Process'' (BSC 2004 [DIRS 170041]). The drift scale radionuclide transport model is intended to be used as an alternative model for comparison with the engineered barrier system (EBS) radionuclide transport model ''EBS Radionuclide Transport Abstraction'' (BSC 2004 [DIRS 169868]). For that purpose, two alternative models have been developed for drift-scale radionuclide transport. One of the alternative models is a dual continuum flow and transport model called the drift shadow model. The effects of variations in the flow field and fracture-matrix interaction in the vicinity of a waste emplacement drift are investigated through sensitivity studies using the drift shadow model (Houseworth et al. 2003 [DIRS 164394]). In this model, the flow is significantly perturbed (reduced) beneath the waste emplacement drifts. However, comparisons of transport in this perturbed flow field with transport in an unperturbed flow field show similar results if the transport is initiated in the rock matrix. This has led to a second alternative model, called the fracture-matrix partitioning model, that focuses on the partitioning of radionuclide transport between the fractures and matrix upon exiting the waste emplacement drift. The fracture-matrix partitioning model computes the partitioning, between fractures and matrix, of diffusive radionuclide transport from the invert (for drifts without seepage) into the rock water. The invert is the structure constructed in a drift to provide the floor of the

  7. Full-Scale Tunnel (FST)

    NASA Technical Reports Server (NTRS)

    1929-01-01

    Modified propeller and spinner in Full-Scale Tunnel (FST) model. On June 26, 1929, Elton W. Miller wrote to George W. Lewis proposing the construction of a model of the full-scale tunnel. 'The excellent energy ratio obtained in the new wind tunnel of the California Institute of Technology suggests that before proceeding with our full scale tunnel design, we ought to investigate the effect on energy ratio of such factors as: 1. small included angle for the exit cone; 2. carefully designed return passages of circular section as far as possible, without sudden changes in cross sections; 3. tightness of walls. It is believed that much useful information can be obtained by building a model of about 1/16 scale, that is, having a closed throat of 2 ft. by 4 ft. The outside dimensions would be about 12 ft. by 25 ft. in plan and the height 4 ft. Two propellers will be required about 28 in. in diameter, each to be driven by direct current motor at a maximum speed of 4500 R.P.M. Provision can be made for altering the length of certain portions, particularly the exit cone, and possibly for the application of boundary layer control in order to effect satisfactory air flow. This model can be constructed in a comparatively short time, using 2 by 4 framing with matched sheathing inside, and where circular sections are desired they can be obtained by nailing sheet metal to wooden ribs, which can be cut on the band saw. It is estimated that three months will be required for the construction and testing of such a model and that the cost will be approximately three thousand dollars, one thousand dollars of which will be for the motors. No suitable location appears to exist in any of our present buildings, and it may be necessary to build it outside and cover it with a roof.' George Lewis responded immediately (June 27) granting the authority to proceed. He urged Langley to expedite construction and to employ extra carpenters if necessary. Funds for the model came from the FST project

  8. Multi-scale Shock Technique

    2009-08-01

    The code to be released is a new addition to the LAMMPS molecular dynamics code. LAMMPS is developed and maintained by Sandia, is publicly available, and is used widely by both national laboratories and academics. The new addition enables LAMMPS to perform molecular dynamics simulations of shock waves using the Multi-scale Shock Simulation Technique (MSST), which we have developed and which has been previously published. This technique enables molecular dynamics simulations of shock waves in materials for orders of magnitude longer timescales than the direct, commonly employed approach.

  9. Hydrological Modeling of Continental-Scale Basins

    NASA Astrophysics Data System (ADS)

    Wood, Eric F.; Lettenmaier, Dennis; Liang, Xu; Nijssen, Bart; Wetzel, Suzanne W.

    Hydrological models at continental scales are traditionally used for water resources planning. However, continental-scale hydrological models may be useful in assessing the impacts from future climate change on catchment hydrology and water resources or from human activity on hydrology and biogeochemical cycles at large scales. Development of regional-scale terrestrial hydrological models will further our understanding of the Earth's water cycle. Continental scales allow for better understanding of the geographic distribution of land-atmospheric moisture fluxes, improved water management at continental scales, better quantification of the impact of human activity and climate change on the water cycle, and improved simulation of weather and climate.

  10. Identifying characteristic scales in the human genome

    NASA Astrophysics Data System (ADS)

    Carpena, P.; Bernaola-Galván, P.; Coronado, A. V.; Hackenberg, M.; Oliver, J. L.

    2007-03-01

    The scale-free, long-range correlations detected in DNA sequences contrast with characteristic lengths of genomic elements, being particularly incompatible with the isochores (long, homogeneous DNA segments). By computing the local behavior of the scaling exponent α of detrended fluctuation analysis (DFA), we discriminate between sequences with and without true scaling, and we find that no single scaling exists in the human genome. Instead, human chromosomes show a common compositional structure with two characteristic scales, the large one corresponding to the isochores and the other to small and medium scale genomic elements.
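
    The "local behavior of the scaling exponent" can be illustrated as follows: instead of one global slope of log F(s) versus log s, compute the slope between neighbouring scales; a flat profile of local α indicates true scaling, while systematic drift marks characteristic scales. The fluctuation function below is synthetic with an artificial crossover; in practice F(s) would come from DFA of the DNA sequence itself.

    ```python
    import numpy as np

    # Synthetic DFA fluctuation function with a bump near s ~ 10^3 (toy characteristic scale).
    scales = np.logspace(1, 4, 20)
    F = scales ** 0.6 * (1.0 + 0.3 * np.exp(-((np.log10(scales) - 3.0) ** 2)))

    # Local scaling exponent: numerical slope of log F(s) vs log s between adjacent scales.
    log_s, log_F = np.log(scales), np.log(F)
    local_alpha = np.diff(log_F) / np.diff(log_s)
    for s_mid, a in zip(np.sqrt(scales[:-1] * scales[1:]), local_alpha):
        print(f"scale ~ {s_mid:9.1f}   local alpha = {a:.2f}")
    ```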

  11. How resilient are resilience scales? The Big Five scales outperform resilience scales in predicting adjustment in adolescents.

    PubMed

    Waaktaar, Trine; Torgersen, Svenn

    2010-04-01

    This study's aim was to determine whether resilience scales could predict adjustment over and above that predicted by the five-factor model (FFM). A sample of 1,345 adolescents completed paper-and-pencil scales on FFM personality (Hierarchical Personality Inventory for Children), resilience (Ego-Resiliency Scale [ER89] by Block & Kremen, the Resilience Scale [RS] by Wagnild & Young) and adaptive behaviors (California Healthy Kids Survey, UCLA Loneliness Scale and three measures of school adaptation). The results showed that the FFM scales accounted for the highest proportion of variance in disturbance. For adaptation, the resilience scales contributed as much as the FFM. In no case did the resilience scales outperform the FFM by increasing the explained variance. The results challenge the validity of the resilience concept as an indicator of human adaptation and avoidance of disturbance, although the concept may have heuristic value in combining favorable aspects of a person's personality endowment.
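
    The "over and above" question is usually answered with a hierarchical regression: compare the variance explained by the FFM scales alone with that of the FFM plus the resilience scales. The sketch below does this on simulated scores; the names, dimensions, and effect sizes are invented and only illustrate the incremental-R² logic, not the study's data.

    ```python
    import numpy as np

    def r_squared(X, y):
        """R^2 of an ordinary least-squares fit with intercept."""
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return 1.0 - resid.var() / y.var()

    # Hypothetical data standing in for the adolescent sample: five FFM scores,
    # two resilience scores largely derivable from the FFM, and one adjustment outcome.
    rng = np.random.default_rng(3)
    n = 1_345
    ffm = rng.standard_normal((n, 5))
    resilience = 0.6 * ffm[:, :2] + 0.4 * rng.standard_normal((n, 2))
    outcome = ffm @ np.array([0.3, -0.2, 0.25, 0.1, -0.15]) + rng.standard_normal(n)

    r2_ffm = r_squared(ffm, outcome)
    r2_full = r_squared(np.column_stack([ffm, resilience]), outcome)
    print(f"FFM only: R^2 = {r2_ffm:.3f}; FFM + resilience: R^2 = {r2_full:.3f}; "
          f"increment = {r2_full - r2_ffm:.3f}")
    ```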

  12. Scaling analysis of stock markets.

    PubMed

    Bu, Luping; Shang, Pengjian

    2014-06-01

    In this paper, we apply detrended fluctuation analysis (DFA), local scaling detrended fluctuation analysis (LSDFA), and detrended cross-correlation analysis (DCCA) to investigate correlations in several stock markets. DFA detects long-range correlations in time series. LSDFA reveals more local properties by using local scale exponents. DCCA quantifies the cross-correlation of two non-stationary time series. We report auto-correlation and cross-correlation behaviors for three Western and three Chinese stock markets in the periods 2004-2006 (before the global financial crisis), 2007-2009 (during the crisis), and 2010-2012 (after the crisis). The findings show that stock correlations are influenced by the economic systems of different countries and by the financial crisis. Auto-correlations are stronger in Chinese stocks than in Western stocks in every period, and stronger after the global financial crisis for every stock except Shen Cheng. LSDFA reveals more comprehensive and detailed features than traditional DFA and indicates the economic integration of China with the world after the global financial crisis. Cross-correlations differ across the six stock markets, and the three Chinese stocks reach their weakest cross-correlations during the global financial crisis.
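
    A bare-bones DCCA cross-correlation coefficient of the kind used here can be sketched as follows; the window size, synthetic series, and function names are assumptions for illustration, not the authors' implementation.

    ```python
    # Minimal DCCA sketch (illustrative; not the authors' implementation).
    # Computes the detrended covariance of two series at one window size n and
    # the DCCA cross-correlation coefficient rho = F2_xy / (F_xx * F_yy).
    import numpy as np

    def detrended_covariance(x, y, n):
        X = np.cumsum(x - np.mean(x))
        Y = np.cumsum(y - np.mean(y))
        t = np.arange(n)
        cov_xy, var_x, var_y = [], [], []
        for i in range(len(X) // n):
            xs, ys = X[i * n:(i + 1) * n], Y[i * n:(i + 1) * n]
            rx = xs - np.polyval(np.polyfit(t, xs, 1), t)   # detrended residuals
            ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
            cov_xy.append(np.mean(rx * ry))
            var_x.append(np.mean(rx ** 2))
            var_y.append(np.mean(ry ** 2))
        return np.mean(cov_xy), np.mean(var_x), np.mean(var_y)

    def dcca_coefficient(x, y, n):
        c, vx, vy = detrended_covariance(x, y, n)
        return c / np.sqrt(vx * vy)

    # Two noisy series sharing a common component should give rho well above 0.
    rng = np.random.default_rng(1)
    common = rng.normal(size=5000)
    x = common + 0.5 * rng.normal(size=5000)
    y = common + 0.5 * rng.normal(size=5000)
    print(f"rho_DCCA(n=100) = {dcca_coefficient(x, y, 100):.2f}")
    ```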

  13. Impedance Scaling and Impedance Control

    NASA Astrophysics Data System (ADS)

    Chou, W.; Griffin, J.

    1997-05-01

    When a machine becomes really large, such as the Really Large Hadron Collider (RLHC),(G. W. Foster and E. Malamud, Fermilab-TM-1976 (June, 1996).) of which the circumference could reach the order of megameters, beam instability could be an essential bottleneck. This paper studies the scaling of the instability threshold vs. machine size when the coupling impedance scales in a ``normal'' way. It is shown that the beam would be intrinsically unstable for the RLHC. As a possible solution to this problem, it is proposed to introduce local impedance inserts for controlling the machine impedance. In the longitudinal plane, this could be done by using a heavily detuned rf cavity (e.g., a biconical structure), which could provide large imaginary impedance with the right sign (i.e., inductive or capacitive) while keeping the real part small. In the transverse direction, a carefully designed variation of the cross section of a beam pipe could generate negative impedance that would partially compensate the transverse impedance in one plane.

  14. Engineering scale electrostatic enclosure demonstration

    SciTech Connect

    Meyer, L.C.

    1993-09-01

    This report presents results from an engineering scale electrostatic enclosure demonstration test. The electrostatic enclosure is part of an overall in-depth contamination control strategy for transuranic (TRU) waste recovery operations. TRU contaminants include small particles of plutonium compounds associated with defense-related waste recovery operations. Demonstration test items consisted of an outer Perma-con enclosure, an inner tent enclosure, and a ventilation system test section for testing electrostatic curtain devices. Three interchangeable test fixtures that could remove plutonium from the contaminated dust were tested in the test section. These were an electret filter, a CRT as an electrostatic field source, and an electrically charged parallel plate separator. Enclosure materials tested included polyethylene, anti-static construction fabric, and stainless steel. The soil size distribution was determined using an eight stage cascade impactor. Photographs of particles containing plutonium were obtained with a scanning electron microscope (SEM). The SEM also provided a second method of getting the size distribution. The amount of plutonium removed from the aerosol by the electrostatic devices was determined by radiochemistry from input and output aerosol samplers. The inner and outer enclosures performed adequately for plutonium handling operations and could be used for full scale operations.

  15. The Kirby-Desai Scale

    PubMed Central

    Desai, Alpesh; Desai, Tejas; Kartono, Francisca; Geeta, Patel

    2009-01-01

    Background: As tattoos have become increasingly popular in the Western world, tattoo-removal requests have also increased, as patients’ personal identities advance. Laser tattoo removal is the current treatment of choice given its safety and efficacy. However, due to varying types of tattoos, it has been difficult to quantify the number of laser treatments required with certainty when discussing laser tattoo removal with our patients. Objective: To propose a practical numerical scale to assess the number of laser tattoo-removal treatments necessary to achieve satisfactory results. Methods and materials: A retrospective chart review was performed on 100 clinic patients who presented for laser tattoo removal. An algorithm was proposed to assign a numerical score to each tattoo across six different categories (skin type, location, color, amount of ink, scarring, and layering). The cumulative score (Kirby-Desai score) is proposed to correlate with the number of treatment sessions required for satisfactory tattoo removal. Results: A correlation coefficient of 0.757 was achieved, with satisfactory tattoo removal in all subjects (N=100, p<0.001). Conclusion: We propose the Kirby-Desai scale as a practical tool to assess the number of laser tattoo-removal sessions required, which will translate into a more certain cost calculation for the patient. PMID:20729941
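
    The cumulative-score idea can be sketched in a few lines. The point values and helper name below are hypothetical placeholders; the published Kirby-Desai scale defines its own point assignments for each of the six categories.

    ```python
    # Hypothetical sketch of a cumulative six-category tattoo score.
    # The point values are placeholders; the published Kirby-Desai scale
    # specifies its own assignments for each category.

    def cumulative_removal_score(skin_type, location, color, ink_amount, scarring, layering):
        """Sum six category scores to obtain a cumulative removal-difficulty score."""
        return skin_type + location + color + ink_amount + scarring + layering

    # Example with placeholder category scores only:
    print(cumulative_removal_score(skin_type=3, location=2, color=2,
                                   ink_amount=2, scarring=1, layering=0))
    ```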

  16. Full-Scale Tunnel (FST)

    NASA Technical Reports Server (NTRS)

    1931-01-01

    Wing and nacelle set-up in Full-Scale Tunnel (FST). The NACA conducted drag tests in 1931 on a P3M-1 nacelle which were presented in a special report to the Navy. Smith DeFrance described this work in the report's introduction: 'Tests were conducted in the full-scale wind tunnel on a five to four geared Pratt and Whitney Wasp engine mounted in a P3M-1 nacelle. In order to simulate the flight conditions the nacelle was assembled on a 15-foot span of wing from the same airplane. The purpose of the tests was to improve the cooling of the engine and to reduce the drag of the nacelle combination. Thermocouples were installed at various points on the cylinders and temperature readings were obtained from these by the power plants division. These results will be reported in a memorandum by that division. The drag results, which are covered by this memorandum, were obtained with the original nacelle condition as received from the Navy with the tail of the nacelle modified, with the nose section of the nacelle modified, with a Curtiss anti-drag ring attached to the engine, with a Type G ring developed by the N.A.C.A., and with a Type D cowling which was also developed by the N.A.C.A.' (p. 1)

  17. Scaling analysis of stock markets.

    PubMed

    Bu, Luping; Shang, Pengjian

    2014-06-01

    In this paper, we apply detrended fluctuation analysis (DFA), local scaling detrended fluctuation analysis (LSDFA), and detrended cross-correlation analysis (DCCA) to investigate correlations in several stock markets. DFA detects long-range correlations in time series. LSDFA reveals more local properties by using local scale exponents. DCCA quantifies the cross-correlation of two non-stationary time series. We report auto-correlation and cross-correlation behaviors for three Western and three Chinese stock markets in the periods 2004-2006 (before the global financial crisis), 2007-2009 (during the crisis), and 2010-2012 (after the crisis). The findings show that stock correlations are influenced by the economic systems of different countries and by the financial crisis. Auto-correlations are stronger in Chinese stocks than in Western stocks in every period, and stronger after the global financial crisis for every stock except Shen Cheng. LSDFA reveals more comprehensive and detailed features than traditional DFA and indicates the economic integration of China with the world after the global financial crisis. Cross-correlations differ across the six stock markets, and the three Chinese stocks reach their weakest cross-correlations during the global financial crisis. PMID:24985421

  18. Scaling analysis of stock markets

    NASA Astrophysics Data System (ADS)

    Bu, Luping; Shang, Pengjian

    2014-06-01

    In this paper, we apply detrended fluctuation analysis (DFA), local scaling detrended fluctuation analysis (LSDFA), and detrended cross-correlation analysis (DCCA) to investigate correlations in several stock markets. DFA detects long-range correlations in time series. LSDFA reveals more local properties by using local scale exponents. DCCA quantifies the cross-correlation of two non-stationary time series. We report auto-correlation and cross-correlation behaviors for three Western and three Chinese stock markets in the periods 2004-2006 (before the global financial crisis), 2007-2009 (during the crisis), and 2010-2012 (after the crisis). The findings show that stock correlations are influenced by the economic systems of different countries and by the financial crisis. Auto-correlations are stronger in Chinese stocks than in Western stocks in every period, and stronger after the global financial crisis for every stock except Shen Cheng. LSDFA reveals more comprehensive and detailed features than traditional DFA and indicates the economic integration of China with the world after the global financial crisis. Cross-correlations differ across the six stock markets, and the three Chinese stocks reach their weakest cross-correlations during the global financial crisis.

  19. The weak scale from BBN

    NASA Astrophysics Data System (ADS)

    Hall, Lawrence J.; Pinner, David; Ruderman, Joshua T.

    2014-12-01

    The measured values of the weak scale, v, and the first-generation masses, m_u, m_d, m_e, are simultaneously explained in the multiverse, with all these parameters scanning independently. At the same time, several remarkable coincidences are understood. Small variations in these parameters away from their measured values lead to the instability of hydrogen, the instability of heavy nuclei, and either a hydrogen- or a helium-dominated universe from Big Bang Nucleosynthesis. In the 4d parameter space of (m_u, m_d, m_e, v), catastrophic boundaries are reached by separately increasing each parameter above its measured value by a factor of (1.4, 1.3, 2.5, ~5), respectively. The fine-tuning problem of the weak scale in the Standard Model is solved: as v is increased beyond the observed value, it is impossible to maintain a significant cosmological hydrogen abundance for any values of m_u, m_d, m_e that yield both hydrogen and heavy-nuclei stability.

  20. The Autonomy Over Smoking Scale.

    PubMed

    DiFranza, Joseph R; Wellman, Robert J; Ursprung, W W Sanouri A; Sabiston, Catherine

    2009-12-01

    Our goal was to create an instrument that can be used to study how smokers lose autonomy over smoking and regain it after quitting. The Autonomy Over Smoking Scale was produced through a process involving item generation, focus-group evaluation, testing in adults to winnow items, field testing with adults and adolescents, and head-to-head comparisons with other measures. The final 12-item scale shows excellent reliability (alphas = .91-.97), with a one-factor solution explaining 59% of the variance in adults and 61%-74% of the variance in adolescents. Concurrent validity was supported by associations with age of smoking initiation, lifetime use, smoking frequency, daily cigarette consumption, history of failed cessation, Hooked on Nicotine Checklist scores, and Diagnostic and Statistical Manual of Mental Disorder (4th ed., text rev.; American Psychiatric Association, 2000) nicotine dependence criteria. Potentially useful features of this new instrument include (a) it assesses tobacco withdrawal, cue-induced craving, and psychological dependence on cigarettes; (b) it measures symptom intensity; and (c) it asks about current symptoms only, so it could be administered to quitting smokers to track the resolution of symptoms.

  1. Scaling characteristics of topographic depressions

    NASA Astrophysics Data System (ADS)

    Le, P. V.; Kumar, P.

    2013-12-01

    Topographic depressions, areas with no lateral surface flow, are a ubiquitous characteristic of the land surface that controls many ecosystem and biogeochemical processes. Landscapes with a high density of depressions have greater surface storage capacity, whereas lower depression density increases runoff, thus influencing soil moisture states, hydrologic connectivity, and climate-soil-vegetation interactions. With the widespread availability of high-resolution LiDAR-based digital elevation model (lDEM) data, it is now possible to identify and characterize the spatial distribution of topographic depressions for incorporation in ecohydrologic and biogeochemical studies. Here we use lDEM data to document the prevalence and patterns of topographic depressions across five different landscapes in the United States and quantitatively characterize the distribution of attributes such as surface area, storage volume, and distance to the nearest neighbor. Using a depression identification algorithm, we show that these attribute distributions follow scaling laws indicative of a fractal structure in which a large fraction of the land surface can consist of a high number of topographic depressions, accounting for 4 to 200 mm of depression storage. This implies that small-scale topographic depressions in these fractal landscapes have a significant impact on the redistribution of surface energy fluxes, evaporation, and hydrologic connectivity.
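
    A generic power-law check of the kind implied by these scaling laws is sketched below; the synthetic depression areas and the fitted exponent are assumptions for illustration, not values from the study.

    ```python
    # Illustrative power-law fit to a distribution of depression surface areas.
    # The synthetic data and exponent are placeholders, not results from the study.
    import numpy as np

    rng = np.random.default_rng(2)
    # Pareto-distributed "surface areas" stand in for depression areas extracted
    # from a lidar DEM by a depression-identification algorithm.
    areas = (rng.pareto(a=1.5, size=10000) + 1.0) * 10.0

    # Empirical complementary CDF and a straight-line fit in log-log space.
    sorted_areas = np.sort(areas)
    ccdf = 1.0 - np.arange(1, len(sorted_areas) + 1) / len(sorted_areas)
    mask = ccdf > 0                       # drop the final zero before taking logs
    slope, intercept = np.polyfit(np.log(sorted_areas[mask]), np.log(ccdf[mask]), 1)
    print(f"fitted CCDF slope ~ {slope:.2f} (power-law exponent ~ {-slope:.2f})")
    ```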

  2. Scaling device for photographic images

    NASA Technical Reports Server (NTRS)

    Rivera, Jorge E. (Inventor); Youngquist, Robert C. (Inventor); Cox, Robert B. (Inventor); Haskell, William D. (Inventor); Stevenson, Charles G. (Inventor)

    2005-01-01

    A scaling device projects a known optical pattern into the field of view of a camera, which can be employed as a reference scale in a resulting photograph of a remote object, for example. The device comprises an optical beam projector that projects two or more spaced, parallel optical beams onto a surface of a remotely located object to be photographed. The resulting beam spots or lines on the object are spaced from one another by a known, predetermined distance. As a result, the size of other objects or features in the photograph can be determined through comparison of their size to the known distance between the beam spots. Preferably, the device is a small, battery-powered device that can be attached to a camera and employs one or more laser light sources and associated optics to generate the parallel light beams. In a first embodiment of the invention, a single laser light source is employed, but multiple parallel beams are generated thereby through use of beam splitting optics. In another embodiment, multiple individual laser light sources are employed that are mounted in the device parallel to one another to generate the multiple parallel beams.
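
    The measurement step that the device enables, sizing photographed features against the known spot spacing, reduces to a one-line proportion; the numeric values below are made-up examples.

    ```python
    # Sketch of the measurement step: objects in the photo are sized by comparing
    # their pixel extent to the known spacing between the projected laser spots.
    # All numeric values are illustrative assumptions.

    def object_size_mm(object_pixels: float, spot_pixels: float, spot_spacing_mm: float) -> float:
        """Return the estimated object size in mm from a known reference spacing."""
        mm_per_pixel = spot_spacing_mm / spot_pixels
        return object_pixels * mm_per_pixel

    # Example: spots 50 mm apart appear 200 px apart; a feature spans 340 px.
    print(f"estimated size: {object_size_mm(340, 200, 50.0):.1f} mm")
    ```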

  3. The SCALE-UP Project

    NASA Astrophysics Data System (ADS)

    Beichner, Robert

    2015-03-01

    The Student Centered Active Learning Environment with Upside-down Pedagogies (SCALE-UP) project was developed nearly 20 years ago as an economical way to provide collaborative, interactive instruction even for large enrollment classes. Nearly all research-based pedagogies have been designed with fairly high faculty-student ratios. The economics of introductory courses at large universities often precludes that situation, so SCALE-UP was created as a way to facilitate highly collaborative active learning with large numbers of students served by only a few faculty and assistants. It enables those students to learn and succeed not only in acquiring content, but also to practice important 21st century skills like problem solving, communication, and teamsmanship. The approach was initially targeted at undergraduate science and engineering students taking introductory physics courses in large enrollment sections. It has since expanded to multiple content areas, including chemistry, math, engineering, biology, business, nursing, and even the humanities. Class sizes range from 24 to over 600. Data collected from multiple sites around the world indicates highly successful implementation at more than 250 institutions. NSF support was critical for initial development and dissemination efforts. Generously supported by NSF (9752313, 9981107) and FIPSE (P116B971905, P116B000659).

  4. Identification of Response Options to Artisanal and Small-Scale Gold Mining (ASGM) in Ghana via the Delphi Process

    PubMed Central

    Basu, Avik; Phipps, Sean; Long, Rachel; Essegbey, George; Basu, Niladri

    2015-01-01

    The Delphi technique is a means of facilitating discussion among experts in order to develop consensus, and can be used for policy formulation. This article describes a modified Delphi approach in which 27 multi-disciplinary academics and 22 stakeholders from Ghana and North America were polled about ways to address negative effects of small-scale gold mining (ASGM) in Ghana. In early 2014, the academics, working in disciplinary groups, synthesized 17 response options based on data aggregated during an Integrated Assessment of ASGM in Ghana. The researchers participated in two rounds of Delphi polling in March and April 2014, during which 17 options were condensed into 12. Response options were rated via a 4-point Likert scale in terms of benefit (economic, environmental, and benefit to people) and feasibility (economic, social/cultural, political, and implementation). The six highest-scoring options populated a third Delphi poll, which 22 stakeholders from diverse sectors completed in April 2015. The academics and stakeholders also prioritized the response options using ranking exercises. The technique successfully gauged expert opinion on ASGM, and helped identify potential responses, policies and solutions for the sector. This is timely given that improvement to the ASGM sector is an important component within the UN Minamata Convention. PMID:26378557
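
    The rating-and-shortlisting step, scoring each response option on a 4-point Likert scale across benefit and feasibility criteria, can be sketched as a simple aggregation; the option names, criteria, and ratings below are hypothetical, not the study's data.

    ```python
    # Hypothetical aggregation of 4-point Likert ratings for Delphi response options.
    # Option names, criteria, and ratings are illustrative; the study's data differ.
    from statistics import mean

    ratings = {
        "option_A": {"economic_benefit": [3, 4, 2], "political_feasibility": [2, 3, 3]},
        "option_B": {"economic_benefit": [4, 4, 3], "political_feasibility": [1, 2, 2]},
    }

    # Mean rating per criterion and an overall mean per option, of the kind used
    # to shortlist the highest-scoring options for the next Delphi round.
    for option, criteria in ratings.items():
        per_criterion = {c: mean(v) for c, v in criteria.items()}
        overall = mean(per_criterion.values())
        print(option, per_criterion, f"overall={overall:.2f}")
    ```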

  5. Identification of Response Options to Artisanal and Small-Scale Gold Mining (ASGM) in Ghana via the Delphi Process.

    PubMed

    Basu, Avik; Phipps, Sean; Long, Rachel; Essegbey, George; Basu, Niladri

    2015-09-10

    The Delphi technique is a means of facilitating discussion among experts in order to develop consensus, and can be used for policy formulation. This article describes a modified Delphi approach in which 27 multi-disciplinary academics and 22 stakeholders from Ghana and North America were polled about ways to address negative effects of small-scale gold mining (ASGM) in Ghana. In early 2014, the academics, working in disciplinary groups, synthesized 17 response options based on data aggregated during an Integrated Assessment of ASGM in Ghana. The researchers participated in two rounds of Delphi polling in March and April 2014, during which 17 options were condensed into 12. Response options were rated via a 4-point Likert scale in terms of benefit (economic, environmental, and benefit to people) and feasibility (economic, social/cultural, political, and implementation). The six highest-scoring options populated a third Delphi poll, which 22 stakeholders from diverse sectors completed in April 2015. The academics and stakeholders also prioritized the response options using ranking exercises. The technique successfully gauged expert opinion on ASGM, and helped identify potential responses, policies and solutions for the sector. This is timely given that improvement to the ASGM sector is an important component within the UN Minamata Convention.

  6. Goethite Bench-scale and Large-scale Preparation Tests

    SciTech Connect

    Josephson, Gary B.; Westsik, Joseph H.

    2011-10-23

    The Hanford Waste Treatment and Immobilization Plant (WTP) is the keystone for cleanup of high-level radioactive waste from our nation's nuclear defense program. The WTP will process high-level waste from the Hanford tanks and produce immobilized high-level waste glass for disposal at a national repository, low activity waste (LAW) glass, and liquid effluent from the vitrification off-gas scrubbers. The liquid effluent will be stabilized into a secondary waste form (e.g., grout-like material) and disposed of on the Hanford site in the Integrated Disposal Facility (IDF) along with the low-activity waste glass. The major long-term environmental impact at Hanford results from technetium that volatilizes from the WTP melters and finally resides in the secondary waste. Laboratory studies have indicated that pertechnetate ({sup 99}TcO{sub 4}{sup -}) can be reduced and captured into a solid solution of {alpha}-FeOOH, goethite (Um 2010). Goethite is a stable mineral and can significantly retard the release of technetium to the environment from the IDF. The laboratory studies were conducted using reaction times of many days, which is typical of environmental subsurface reactions that were the genesis of this new process. This study was the first step in considering adaptation of the slow laboratory steps to a larger-scale and faster process that could be conducted either within the WTP or within the effluent treatment facility (ETF). Two levels of scale-up tests were conducted (25x and 400x). The largest scale-up produced slurries of Fe-rich precipitates that contained rhenium as a nonradioactive surrogate for {sup 99}Tc. The slurries were used in melter tests at Vitreous State Laboratory (VSL) to determine whether captured rhenium was less volatile in the vitrification process than rhenium in an unmodified feed. A critical step in the technetium immobilization process is to chemically reduce Tc(VII) in the pertechnetate (TcO{sub 4}{sup -}) to Tc(IV) by reaction with the ferrous

  7. SCALE-UP OF RAPID SMALL-SCALE ADSORPTION TESTS TO FIELD-SCALE ADSORBERS: THEORETICAL AND EXPERIMENTAL BASIS

    EPA Science Inventory

    Design of full-scale adsorption systems typically includes expensive and time-consuming pilot studies to simulate full-scale adsorber performance. Accordingly, the rapid small-scale column test (RSSCT) was developed and evaluated experimentally. The RSSCT can simulate months of f...

  8. The Adaptive Multi-scale Simulation Infrastructure

    SciTech Connect

    Tobin, William R.

    2015-09-01

    The Adaptive Multi-scale Simulation Infrastructure (AMSI) is a set of libraries and tools developed to support the development, implementation, and execution of general multimodel simulations. Using a minimal set of simulation metadata, AMSI allows minimally intrusive adaptation of existing single-scale simulations for use in multi-scale simulations. Support for dynamic runtime operations, such as single- and multi-scale adaptive properties, is a key focus of AMSI. Particular attention has been devoted to scale-sensitive load-balancing operations, which allow single-scale simulations incorporated into a multi-scale simulation through AMSI to use standard load-balancing operations without affecting the integrity of the overall multi-scale simulation.

  9. A Three Component Cancer Attitude Scale.

    ERIC Educational Resources Information Center

    Torabi, Mohammad R.; Seffrin, John R.

    1986-01-01

    A scale was developed to measure college students' attitudes toward cancer and cancer prevention. The three components of attitude were feeling (affective), belief (cognitive), and intention to act (conative). Development of the scale is discussed. (DF)

  10. Scaling ansatz for the jamming transition.

    PubMed

    Goodrich, Carl P; Liu, Andrea J; Sethna, James P

    2016-08-30

    We propose a Widom-like scaling ansatz for the critical jamming transition. Our ansatz for the elastic energy shows that the scaling of the energy, compressive strain, shear strain, system size, pressure, shear stress, bulk modulus, and shear modulus are all related to each other via scaling relations, with only three independent scaling exponents. We extract the values of these exponents from already known numerical or theoretical results, and we numerically verify the resulting predictions of the scaling theory for the energy and residual shear stress. We also derive a scaling relation between pressure and residual shear stress that yields insight into why the shear and bulk moduli scale differently. Our theory shows that the jamming transition exhibits an emergent scale invariance, setting the stage for the potential development of a renormalization group theory for jamming. PMID:27512041

  11. Scaling ansatz for the jamming transition

    NASA Astrophysics Data System (ADS)

    Goodrich, Carl P.; Liu, Andrea J.; Sethna, James P.

    2016-08-01

    We propose a Widom-like scaling ansatz for the critical jamming transition. Our ansatz for the elastic energy shows that the scaling of the energy, compressive strain, shear strain, system size, pressure, shear stress, bulk modulus, and shear modulus are all related to each other via scaling relations, with only three independent scaling exponents. We extract the values of these exponents from already known numerical or theoretical results, and we numerically verify the resulting predictions of the scaling theory for the energy and residual shear stress. We also derive a scaling relation between pressure and residual shear stress that yields insight into why the shear and bulk moduli scale differently. Our theory shows that the jamming transition exhibits an emergent scale invariance, setting the stage for the potential development of a renormalization group theory for jamming.

  12. Small-Scale Rocket Motor Test

    NASA Video Gallery

    Engineers at NASA's Marshall Space Flight Center in Huntsville, Ala. successfully tested a sub-scale solid rocket motor on May 27. Testing a sub-scale version of a rocket motor is a cost-effective ...

  13. Scaling ansatz for the jamming transition

    PubMed Central

    Goodrich, Carl P.; Liu, Andrea J.; Sethna, James P.

    2016-01-01

    We propose a Widom-like scaling ansatz for the critical jamming transition. Our ansatz for the elastic energy shows that the scaling of the energy, compressive strain, shear strain, system size, pressure, shear stress, bulk modulus, and shear modulus are all related to each other via scaling relations, with only three independent scaling exponents. We extract the values of these exponents from already known numerical or theoretical results, and we numerically verify the resulting predictions of the scaling theory for the energy and residual shear stress. We also derive a scaling relation between pressure and residual shear stress that yields insight into why the shear and bulk moduli scale differently. Our theory shows that the jamming transition exhibits an emergent scale invariance, setting the stage for the potential development of a renormalization group theory for jamming. PMID:27512041

  14. The AppScale Cloud Platform

    PubMed Central

    Krintz, Chandra

    2013-01-01

    AppScale is an open source distributed software system that implements a cloud platform as a service (PaaS). AppScale makes cloud applications easy to deploy and scale over disparate cloud fabrics, implementing a set of APIs and architecture that also makes apps portable across the services they employ. AppScale is API-compatible with Google App Engine (GAE) and thus executes GAE applications on-premise or over other cloud infrastructures, without modification. PMID:23828721

  15. Scale Interaction in a California precipitation event

    SciTech Connect

    Leach, M. J., LLNL

    1997-09-01

    Heavy rains and severe flooding frequently plague California. The heavy rains are most often associated with large-scale cyclonic and frontal systems, where large-scale dynamics and a large moisture influx from the tropical Pacific interact. However, the complex topography along the west coast also interacts with these large-scale influences, producing local areas with heavier precipitation. In this paper, we look at some of the local interactions with the large scale.

  16. Strongly scale-dependent non-Gaussianity

    SciTech Connect

    Riotto, Antonio; Sloth, Martin S.

    2011-02-15

    We discuss models of primordial density perturbations where the non-Gaussianity is strongly scale dependent. In particular, the non-Gaussianity may have a sharp cutoff and be very suppressed on large cosmological scales, but sizable on small scales. This may have an impact on probes of non-Gaussianity in the large-scale structure and in the cosmic microwave background radiation anisotropies.

  17. The Scaled Thermal Explosion Experiment

    SciTech Connect

    Wardell, J F; Maienschein, J L

    2002-07-05

    We have developed the Scaled Thermal Explosion Experiment (STEX) to provide a database of reaction violence from thermal explosion for explosives of interest. Such data are needed to develop, calibrate, and validate predictive capability for thermal explosions using simulation computer codes. A cylinder of explosive 25, 50 or 100 mm in diameter, is confined in a steel cylinder with heavy end caps, and heated under controlled conditions until reaction. Reaction violence is quantified through non-contact micropower impulse radar measurements of the cylinder wall velocity and by strain gauge data at reaction onset. Here we describe the test concept, design and diagnostic recording, and report results with HMX- and RDX-based energetic materials.

  18. Hypoallometric scaling in international collaborations

    NASA Astrophysics Data System (ADS)

    Hsiehchen, David; Espinoza, Magdalena; Hsieh, Antony

    2016-02-01

    Collaboration is a vital process and dominant theme in knowledge production, although the effectiveness of policies directed at promoting multinational research remains ambiguous. We examined approximately 24 million research articles published over four decades and demonstrated that the scaling of international publications to research productivity for each country obeys a universal and conserved sublinear power law. Inefficient mechanisms in transborder team dynamics or organization as well as increasing opportunity costs may contribute to the disproportionate growth of international collaboration rates with increasing productivity among nations. Given the constrained growth of international relationships, our findings advocate a greater emphasis on the qualitative aspects of collaborations, such as with whom partnerships are forged, particularly when assessing research and policy outcomes.

  19. Enabling department-scale supercomputing

    SciTech Connect

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

    The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  20. Bacterial Communities: Interactions to Scale.

    PubMed

    Stubbendieck, Reed M; Vargas-Bautista, Carol; Straight, Paul D

    2016-01-01

    In the environment, bacteria live in complex multispecies communities. These communities span in scale from small, multicellular aggregates to billions or trillions of cells within the gastrointestinal tract of animals. The dynamics of bacterial communities are determined by pairwise interactions that occur between different species in the community. Though interactions occur between a few cells at a time, the outcomes of these interchanges have ramifications that ripple through many orders of magnitude, and ultimately affect the macroscopic world including the health of host organisms. In this review we cover how bacterial competition influences the structures of bacterial communities. We also emphasize methods and insights garnered from culture-dependent pairwise interaction studies, metagenomic analyses, and modeling experiments. Finally, we argue that the integration of multiple approaches will be instrumental to future understanding of the underlying dynamics of bacterial communities. PMID:27551280

  1. Modeling biosilicification at subcellular scales.

    PubMed

    Javaheri, Narjes; Cronemberger, Carolina M; Kaandorp, Jaap A

    2013-01-01

    Biosilicification occurs in many organisms; sponges and diatoms are major examples. In this chapter, we introduce a modeling approach that describes several biological mechanisms controlling silicification. Modeling biosilicification is a typical multiscale problem in which processes at very different temporal and spatial scales need to be coupled: processes at the molecular level, physiological processes at the subcellular and cellular levels, and so on. In biosilicification, morphology plays a fundamental role, and a spatiotemporal model is required. For sponges, a particle simulation based on diffusion-limited aggregation is presented here. This model can describe the fractal properties of silica aggregates during the first steps of deposition on an organic template. For diatoms, a reaction-diffusion model is introduced that describes the concentrations of chemical components and can include a chain of polymerization reactions. PMID:24420712
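
    A minimal on-lattice diffusion-limited aggregation sketch, of the general kind mentioned for sponge silica deposition, is shown below; the lattice size, particle count, and seeding are illustrative assumptions rather than the published model's parameters.

    ```python
    # Minimal on-lattice DLA sketch (illustrative; not the published sponge model).
    # Random walkers stick when they reach a site adjacent to the growing aggregate.
    import numpy as np

    rng = np.random.default_rng(3)
    L = 61                                    # lattice size (assumed)
    grid = np.zeros((L, L), dtype=bool)
    grid[L // 2, L // 2] = True               # seed: a site on the organic template

    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]

    def has_occupied_neighbor(i, j):
        return any(grid[(i + di) % L, (j + dj) % L] for di, dj in steps)

    for _ in range(300):                      # number of particles (assumed)
        # launch a walker at a random edge site
        i, j = int(rng.integers(0, L)), 0 if rng.random() < 0.5 else L - 1
        if rng.random() < 0.5:
            i, j = j, i
        for _ in range(20000):                # cap on random-walk steps
            if has_occupied_neighbor(i, j):
                grid[i, j] = True             # stick to the aggregate
                break
            di, dj = steps[rng.integers(4)]
            i, j = (i + di) % L, (j + dj) % L

    print("aggregate size:", int(grid.sum()), "sites")
    ```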

  2. Small Scale High Speed Turbomachinery

    NASA Technical Reports Server (NTRS)

    London, Adam P. (Inventor); Droppers, Lloyd J. (Inventor); Lehman, Matthew K. (Inventor); Mehra, Amitav (Inventor)

    2015-01-01

    A small scale, high speed turbomachine is described, as well as a process for manufacturing the turbomachine. The turbomachine is manufactured by diffusion bonding stacked sheets of metal foil, each of which has been pre-formed to correspond to a cross section of the turbomachine structure. The turbomachines include rotating elements as well as static structures. Using this process, turbomachines may be manufactured with rotating elements that have outer diameters of less than four inches in size, and/or blading heights of less than 0.1 inches. The rotating elements of the turbomachines are capable of rotating at speeds in excess of 150 feet per second. In addition, cooling features may be added internally to blading to facilitate cooling in high temperature operations.

  3. Anisotropic scaling of magnetohydrodynamic turbulence.

    PubMed

    Horbury, Timothy S; Forman, Miriam; Oughton, Sean

    2008-10-24

    We present a quantitative estimate of the anisotropic power and scaling of magnetic field fluctuations in inertial range magnetohydrodynamic turbulence, using a novel wavelet technique applied to spacecraft measurements in the solar wind. We show for the first time that, when the local magnetic field direction is parallel to the flow, the spacecraft-frame spectrum has a spectral index near 2. This can be interpreted as the signature of a population of fluctuations in field-parallel wave numbers with a k_∥^(-2) spectrum but is also consistent with the presence of a "critical balance" style turbulent cascade. We also find, in common with previous studies, that most of the power is contained in wave vectors at large angles to the local magnetic field and that this component of the turbulence has a spectral index of 5/3.

  4. Size Scaling of Static Friction

    NASA Astrophysics Data System (ADS)

    Braun, O. M.; Manini, Nicola; Tosatti, Erio

    2013-02-01

    Sliding friction across a thin soft lubricant film typically occurs by stick slip, the lubricant fully solidifying at stick, yielding and flowing at slip. The static friction force per unit area preceding slip is known from molecular dynamics (MD) simulations to decrease with increasing contact area. That makes the large-size fate of stick slip unclear and unknown; its possible vanishing is important as it would herald smooth sliding with a dramatic drop of kinetic friction at large size. Here we formulate a scaling law of the static friction force, which for a soft lubricant is predicted to decrease as f_m + Δf/A^γ for increasing contact area A, with γ > 0. Our main finding is that the value of f_m, controlling the survival of stick slip at large size, can be evaluated by simulations of comparably small size. MD simulations of soft lubricant sliding are presented, which verify this theory.
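
    The stated size dependence f(A) = f_m + Δf/A^γ can be illustrated with a small curve fit; the synthetic data points and parameter values are assumptions, not the paper's MD results.

    ```python
    # Illustrative fit of the static-friction scaling form f(A) = f_m + df / A**gamma.
    # The synthetic data are placeholders, not the paper's MD results.
    import numpy as np
    from scipy.optimize import curve_fit

    def friction_law(A, f_m, df, gamma):
        return f_m + df / A**gamma

    # Synthetic "measurements" generated from assumed parameters plus noise.
    A = np.logspace(2, 6, 12)
    rng = np.random.default_rng(4)
    f_obs = friction_law(A, 0.10, 5.0, 0.4) * (1 + 0.02 * rng.normal(size=A.size))

    popt, _ = curve_fit(friction_law, A, f_obs, p0=[0.05, 1.0, 0.5])
    print("f_m, df, gamma =", np.round(popt, 3))
    ```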

  5. Scale control in urea solutions

    SciTech Connect

    Dubin, L.; Diep, D.V.

    1997-08-01

    Legislation targeting NO{sub x} emissions, one cause of acid rain and ozone-induced smog, has created an impetus to control these emissions. Selective Non-Catalytic Reduction (SNCR) using urea chemistry is used to control NO{sub x} emissions from boilers, municipal waste incinerators, refinery furnaces, recovery boilers, utilities, and other stationary combustion sources. Control requires injecting urea-based solutions into the flue gas at specified temperatures. Urea solutions accelerate CaCO{sub 3} precipitation in the industrial waters used for dilution and thereby interfere with proper application of the urea solution. The negative effect of urea solutions on hardness stability is discussed, as well as how CaCO{sub 3} precipitation in urea solutions can be controlled by suitable scale inhibitors.

  6. Bacterial Communities: Interactions to Scale

    PubMed Central

    Stubbendieck, Reed M.; Vargas-Bautista, Carol; Straight, Paul D.

    2016-01-01

    In the environment, bacteria live in complex multispecies communities. These communities span in scale from small, multicellular aggregates to billions or trillions of cells within the gastrointestinal tract of animals. The dynamics of bacterial communities are determined by pairwise interactions that occur between different species in the community. Though interactions occur between a few cells at a time, the outcomes of these interchanges have ramifications that ripple through many orders of magnitude, and ultimately affect the macroscopic world including the health of host organisms. In this review we cover how bacterial competition influences the structures of bacterial communities. We also emphasize methods and insights garnered from culture-dependent pairwise interaction studies, metagenomic analyses, and modeling experiments. Finally, we argue that the integration of multiple approaches will be instrumental to future understanding of the underlying dynamics of bacterial communities. PMID:27551280

  7. Quantitative Scaling of Magnetic Avalanches.

    PubMed

    Durin, G; Bohn, F; Corrêa, M A; Sommer, R L; Le Doussal, P; Wiese, K J

    2016-08-19

    We provide the first quantitative comparison between Barkhausen noise experiments and recent predictions from the theory of avalanches for pinned interfaces, both in and beyond mean field. We study different classes of soft magnetic materials, including polycrystals and amorphous samples-which are characterized by long-range and short-range elasticity, respectively-both for thick and thin samples, i.e., with and without eddy currents. The temporal avalanche shape at fixed size as well as observables related to the joint distribution of sizes and durations are analyzed in detail. Both long-range and short-range samples with no eddy currents are fitted extremely well by the theoretical predictions. In particular, the short-range samples provide the first reliable test of the theory beyond mean field. The thick samples show systematic deviations from the scaling theory, providing unambiguous signatures for the presence of eddy currents.

  8. Quantitative Scaling of Magnetic Avalanches

    NASA Astrophysics Data System (ADS)

    Durin, G.; Bohn, F.; Corrêa, M. A.; Sommer, R. L.; Le Doussal, P.; Wiese, K. J.

    2016-08-01

    We provide the first quantitative comparison between Barkhausen noise experiments and recent predictions from the theory of avalanches for pinned interfaces, both in and beyond mean field. We study different classes of soft magnetic materials, including polycrystals and amorphous samples—which are characterized by long-range and short-range elasticity, respectively—both for thick and thin samples, i.e., with and without eddy currents. The temporal avalanche shape at fixed size as well as observables related to the joint distribution of sizes and durations are analyzed in detail. Both long-range and short-range samples with no eddy currents are fitted extremely well by the theoretical predictions. In particular, the short-range samples provide the first reliable test of the theory beyond mean field. The thick samples show systematic deviations from the scaling theory, providing unambiguous signatures for the presence of eddy currents.

  9. Chip Scale Package Implementation Challenges

    NASA Technical Reports Server (NTRS)

    Ghaffarian, Reza

    1998-01-01

    The JPL-led MicrotypeBGA Consortium of enterprises representing government agencies and private companies has joined together to pool in-kind resources for developing the quality and reliability of chip scale packages (CSPs) for a variety of projects. In the process of building the Consortium CSP test vehicles, many challenges were identified regarding various aspects of technology implementation. This paper presents our experience with these technology implementation challenges, including the design and fabrication of both standard and microvia boards and the assembly of two types of test vehicles. We also discuss the most recent package isothermal aging results to 2,000 hours at 100 C and 125 C and thermal cycling test results to 1,700 cycles in the range of -30 to 100 C.

  10. Quantitative Scaling of Magnetic Avalanches.

    PubMed

    Durin, G; Bohn, F; Corrêa, M A; Sommer, R L; Le Doussal, P; Wiese, K J

    2016-08-19

    We provide the first quantitative comparison between Barkhausen noise experiments and recent predictions from the theory of avalanches for pinned interfaces, both in and beyond mean field. We study different classes of soft magnetic materials, including polycrystals and amorphous samples-which are characterized by long-range and short-range elasticity, respectively-both for thick and thin samples, i.e., with and without eddy currents. The temporal avalanche shape at fixed size as well as observables related to the joint distribution of sizes and durations are analyzed in detail. Both long-range and short-range samples with no eddy currents are fitted extremely well by the theoretical predictions. In particular, the short-range samples provide the first reliable test of the theory beyond mean field. The thick samples show systematic deviations from the scaling theory, providing unambiguous signatures for the presence of eddy currents. PMID:27588876

  11. Quasistatic scale-free networks

    NASA Astrophysics Data System (ADS)

    Mukherjee, G.; Manna, S. S.

    2003-01-01

    A network is formed using the N sites of a one-dimensional ring lattice as nodes, each node having initial degree k_in = 2. N links are then introduced to this network; each link starts from a distinct node, with the other end connected to a node of degree k selected randomly with attachment probability proportional to k^α. Tuning the control parameter α, we observe a transition where the average degree of the largest node changes its variation from N^0 to N at a specific transition point α_c. The network is scale free, i.e., the nodal degree distribution has a power-law decay for α ⩾ α_c.
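
    The growth rule described above is simple enough to sketch directly; the network size, α value, and function name are assumptions chosen for illustration.

    ```python
    # Sketch of the quasistatic network growth rule described above (illustrative).
    # N ring nodes start with degree 2; N extra links are added, one from each
    # node, the target drawn with probability proportional to k**alpha.
    import numpy as np

    def build_network(N=1000, alpha=1.0, seed=5):
        rng = np.random.default_rng(seed)
        degree = np.full(N, 2)                  # ring neighbors give k_in = 2
        for source in range(N):
            weights = degree.astype(float) ** alpha
            weights[source] = 0.0               # no self-links
            target = rng.choice(N, p=weights / weights.sum())
            degree[source] += 1
            degree[target] += 1
        return degree

    deg = build_network()
    print("largest degree:", deg.max())
    ```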

  12. L-scaling. Working Paper No. 26.

    ERIC Educational Resources Information Center

    Blankmeyer, Eric

    Given "T" joint observations on "K" variables, it is frequently useful to consider the weighted average or scaled score. L-scaling is introduced as a technique for determining the weights. The technique is so named because of its resemblance to the Leontief matrix of mathematical economics. L-scaling is compared to two widely-used procedures for…

  13. 21 CFR 880.2720 - Patient scale.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... transducers, electronic signal amplification, conditioning and display equipment. (b) Classification. Class I... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Patient scale. 880.2720 Section 880.2720 Food and... Patient scale. (a) Identification. A patient scale is a device intended for medical purposes that is...

  14. 27 CFR 19.183 - Scale tanks.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Tank Requirements § 19.183 Scale tanks. (a) Except as otherwise provided in paragraph (b) of this... quickly and accurately determined. (b) The requirement to mount tanks on scales does not apply to tanks... 27 Alcohol, Tobacco Products and Firearms 1 2011-04-01 2011-04-01 false Scale tanks....

  15. 27 CFR 19.183 - Scale tanks.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Tank Requirements § 19.183 Scale tanks. (a) Except as otherwise provided in paragraph (b) of this... quickly and accurately determined. (b) The requirement to mount tanks on scales does not apply to tanks... 27 Alcohol, Tobacco Products and Firearms 1 2012-04-01 2012-04-01 false Scale tanks....

  16. 21 CFR 880.2720 - Patient scale.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... transducers, electronic signal amplification, conditioning and display equipment. (b) Classification. Class I... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Patient scale. 880.2720 Section 880.2720 Food and... Patient scale. (a) Identification. A patient scale is a device intended for medical purposes that is...

  17. 27 CFR 19.183 - Scale tanks.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Tank Requirements § 19.183 Scale tanks. (a) Except as otherwise provided in paragraph (b) of this... quickly and accurately determined. (b) The requirement to mount tanks on scales does not apply to tanks... 27 Alcohol, Tobacco Products and Firearms 1 2013-04-01 2013-04-01 false Scale tanks....

  18. 21 CFR 880.2720 - Patient scale.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... transducers, electronic signal amplification, conditioning and display equipment. (b) Classification. Class I... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Patient scale. 880.2720 Section 880.2720 Food and... Patient scale. (a) Identification. A patient scale is a device intended for medical purposes that is...

  19. 27 CFR 19.183 - Scale tanks.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Tank Requirements § 19.183 Scale tanks. (a) Except as otherwise provided in paragraph (b) of this... quickly and accurately determined. (b) The requirement to mount tanks on scales does not apply to tanks... 27 Alcohol, Tobacco Products and Firearms 1 2014-04-01 2014-04-01 false Scale tanks....

  20. Researching Developmental Careers: The Career Conformity Scale.

    ERIC Educational Resources Information Center

    White, James M.

    1987-01-01

    Developed interval scale to measure deviation from the normative sequencing of first job, marriage, and birth of first child. Scale scores had predictive utility for marital instability and work interruptions. Use of scale score as independent variable provided more information than that provided by Hogan's (1978) temporal sequence categories as…

  1. Behavioral Observation Scales for Performance Appraisal Purposes

    ERIC Educational Resources Information Center

    Latham, Gary P.; Wexley, Kenneth N.

    1977-01-01

    This research attempts to determine whether Behavioral Observation Scales (BOS) could be improved by developing them through quantitative methods. The underlying assumption was that developing composite scales with greater internal consistency might improve their generalizability as evidenced by the cross-validation coefficients of scales based on…

  2. Automatic scale selection for medical image segmentation

    NASA Astrophysics Data System (ADS)

    Bayram, Ersin; Wyatt, Christopher L.; Ge, Yaorong

    2001-07-01

    The scale of interesting structures in medical images is space-variant because of partial volume effects, spatial dependence of resolution in many imaging modalities, and differences in tissue properties. Existing segmentation methods either apply a single scale to the entire image or attempt fine-to-coarse/coarse-to-fine tracking of structures over multiple scales. While single-scale approaches fail to fully recover the perceptually important structures, multi-scale methods struggle to provide reliable means of selecting proper scales and of integrating information across scales. A recent approach proposed by Elder and Zucker addresses the scale selection problem by computing a minimal reliable scale for each image pixel. The basic premise of this approach is that, while the scale of structures within an image varies spatially, the imaging system is fixed. Hence, sensor noise statistics can be calculated. Based on a model of the edges to be detected and the operators used for detection, one can locally compute a unique minimal reliable scale at which the likelihood of error due to sensor noise is less than or equal to a predetermined threshold. In this paper, we improve the segmentation method based on minimal reliable scale selection and evaluate its effectiveness with both simulated and actual medical data.
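
    A heavily simplified sketch of per-pixel minimal reliable scale selection is given below: for each pixel, take the smallest smoothing scale at which the gradient response exceeds a threshold tied to the sensor-noise level. The threshold form, scale set, and noise handling are assumptions for illustration, not Elder and Zucker's exact criterion.

    ```python
    # Hedged sketch of per-pixel minimal reliable scale selection.
    # For each pixel we pick the smallest Gaussian scale at which the gradient
    # magnitude exceeds a noise-dependent threshold; the threshold form and the
    # scale set are illustrative assumptions, not the published criterion.
    import numpy as np
    from scipy.ndimage import gaussian_gradient_magnitude

    def minimal_reliable_scale(image, sigma_noise, scales=(1, 2, 4, 8), k=3.0):
        """Return, per pixel, the smallest scale whose gradient response is 'reliable'."""
        chosen = np.full(image.shape, np.nan)
        for s in scales:
            grad = gaussian_gradient_magnitude(image, sigma=s)
            # crude reliability test: response above k times a scale-attenuated noise level
            reliable = (grad > k * sigma_noise / s) & np.isnan(chosen)
            chosen[reliable] = s
        return chosen

    # Example on a synthetic noisy step edge.
    rng = np.random.default_rng(6)
    img = np.zeros((64, 64))
    img[:, 32:] = 1.0
    img += 0.1 * rng.normal(size=img.shape)
    scales = minimal_reliable_scale(img, sigma_noise=0.1)
    print("fraction of pixels with a reliable scale:",
          np.round(np.mean(~np.isnan(scales)), 2))
    ```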

  3. 76 FR 18348 - Required Scale Tests

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-04

    ... Register on January 20, 2011 (76 FR 3485), defining required scale tests. That document incorrectly defined... Tests AGENCY: Grain Inspection, Packers and Stockyards Administration. ACTION: Correcting amendments... packer using such scales may use the scales within a 6-month period following each test. * * * * * Alan...

  4. The Attitudes toward Multiracial Children Scale.

    ERIC Educational Resources Information Center

    Jackman, Charmain F.; Wagner, William G.; Johnson, J. T.

    2001-01-01

    Two studies evaluated items developed for the Attitudes Toward Multiracial Children Scale. Researchers administered the scale to diverse college students, revised it, then administered it again. The scale's psychometric properties were such that the instrument could be used to research adults' attitudes regarding psychosocial development of…

  5. Scale and corrosion inhibition by thermal polyaspartates

    SciTech Connect

    Bains, D.I.; Fan, G.; Fan, J.; Ross, R.J.

    1999-11-01

    Organic polymers have found widespread use as inhibitors for the prevention of mineral scales in heat-transfer equipment. Recently, a biodegradable organic polymer has been developed that provides both scale and corrosion control. The development of the polymeric inhibitor and laboratory evaluations of scale and corrosion inhibition are discussed, together with its potential application in open recirculating cooling systems.

  6. An Aesthetic Value Scale of the Rorschach.

    ERIC Educational Resources Information Center

    Insua, Ana Maria

    1981-01-01

    An aesthetic value scale of the Rorschach cards was built by the successive interval method. This scale was compared with the ratings obtained by means of the Semantic Differential Scales and was found to successfully differentiate sexes in their judgment of card attractiveness. (Author)

  7. Development of Capstone Project Attitude Scales

    ERIC Educational Resources Information Center

    Bringula, Rex P.

    2015-01-01

    This study attempted to develop valid and reliable Capstone Project Attitude Scales (CPAS). Among the scales reviewed, the Modified Fennema-Shermann Mathematics Attitude Scales was adapted in the construction of the CPAS. Usefulness, Confidence, and Gender View were the three subscales of the CPAS. Four hundred sixty-three students answered the…

  8. Developing a Sense of Scale: Looking Backward

    ERIC Educational Resources Information Center

    Jones, M. Gail; Taylor, Amy R.

    2009-01-01

    Although scale has been identified as one of four major interdisciplinary themes that cut across the science domains by the American Association for the Advancement of Science (1989), we are only beginning to understand how students learn and apply scale concepts. Early research on learning scale tended to focus on perceptions of linear distances,…

  9. Why Online Education Will Attain Full Scale

    ERIC Educational Resources Information Center

    Sener, John

    2010-01-01

    Online higher education has attained scale and is poised to take the next step in its growth. Although significant obstacles to a full scale adoption of online education remain, we will see full scale adoption of online higher education within the next five to ten years. Practically all higher education students will experience online education in…

  10. Full-Scale Tunnel (FST)

    NASA Technical Reports Server (NTRS)

    1930-01-01

    Construction of Full-Scale Tunnel (FST). In November 1929, Smith DeFrance submitted his recommendations for the general design of the Full Scale Wind Tunnel. The last on his list concerned the division of labor required to build this unusual facility. He believed the job had five parts and described them as follows: 'It is proposed that invitations be sent out for bids on five groups of items. The first would be for one contract on the complete structure; second the same as first, including the erection of the cones but not the fabrication, since this would be more of a shipyard job; third would cover structural steel, cover, sash and doors, but not cones or foundation; fourth, foundations; and fifth, fabrication of cones.' DeFrance's memorandum prompted the NACA to solicit estimates from a large number of companies. Preliminary designs and estimates were prepared and submitted to the Bureau of the Budget and Congress appropriated funds on February 20, 1929. The main construction contract with the J.A. Jones Company of Charlotte, North Carolina was signed one year later on February 12, 1930. It was a peculiar structure as the building's steel framework is visible on the outside of the building. DeFrance described this in NACA TR No. 459: 'The entire equipment is housed in a structure, the outside walls of which serve as the outer walls of the return passages. The over-all length of the tunnel is 434 feet 6 inches, the width 222 feet, and the maximum height 97 feet. The framework is of structural steel....' (pp. 292-293).

  11. Full-Scale Tunnel (FST)

    NASA Technical Reports Server (NTRS)

    1930-01-01

    Construction of Full-Scale Tunnel (FST): 120-Foot Truss hoisting, one and two point suspension. In November 1929, Smith DeFrance submitted his recommendations for the general design of the Full Scale Wind Tunnel. The last on his list concerned the division of labor required to build this unusual facility. He believed the job had five parts and described them as follows: 'It is proposed that invitations be sent out for bids on five groups of items. The first would be for one contract on the complete structure; second the same as first, including the erection of the cones but not the fabrication, since this would be more of a shipyard job; third would cover structural steel, cover, sash and doors, but not cones or foundation; fourth, foundations; and fifth, fabrication of cones.' DeFrance's memorandum prompted the NACA to solicit estimates from a large number of companies. Preliminary designs and estimates were prepared and submitted to the Bureau of the Budget and Congress appropriated funds on February 20, 1929. The main construction contract with the J.A. Jones Company of Charlotte, North Carolina was signed one year later on February 12, 1930. It was a peculiar structure as the building's steel framework is visible on the outside of the building. DeFrance described this in NACA TR No. 459: 'The entire equipment is housed in a structure, the outside walls of which serve as the outer walls of the return passages. The over-all length of the tunnel is 434 feet 6 inches, the width 222 feet, and the maximum height 97 feet. The framework is of structural steel....' (pp. 292-293)

  12. Large Scale Nanolaminate Deformable Mirror

    SciTech Connect

    Papavasiliou, A; Olivier, S; Barbee, T; Miles, R; Chang, K

    2005-11-30

    This work concerns the development of a technology that uses Nanolaminate foils to form light-weight, deformable mirrors that are scalable over a wide range of mirror sizes. While MEMS-based deformable mirrors and spatial light modulators have considerably reduced the cost and increased the capabilities of adaptive optic systems, there has not been a way to utilize the advantages of lithography and batch-fabrication to produce large-scale deformable mirrors. This technology is made scalable by using fabrication techniques and lithography that are not limited to the sizes of conventional MEMS devices. Like many MEMS devices, these mirrors use parallel plate electrostatic actuators. This technology replicates that functionality by suspending a horizontal piece of nanolaminate foil over an electrode by electroplated nickel posts. This actuator is attached, with another post, to another nanolaminate foil that acts as the mirror surface. Most MEMS devices are produced with integrated circuit lithography techniques that are capable of very small line widths, but are not scalable to large sizes. This technology is very tolerant of lithography errors and can use coarser, printed circuit board lithography techniques that can be scaled to very large sizes. These mirrors use small, lithographically defined actuators and thin nanolaminate foils allowing them to produce deformations over a large area while minimizing weight. This paper will describe a staged program to develop this technology. First-principles models were developed to determine design parameters. Three stages of fabrication will be described starting with a 3 x 3 device using conventional metal foils and epoxy to a 10-across all-metal device with nanolaminate mirror surfaces.
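
    The actuators described above are, in essence, parallel-plate electrostatic devices, so their attractive pressure follows the standard relation P = eps0 * V^2 / (2 * g^2) for drive voltage V and gap g. The short Python sketch below evaluates that relation; the drive voltages and the 5-micron gap are illustrative assumptions, not values taken from the paper.

      # Minimal sketch of parallel-plate electrostatic actuation:
      # attractive pressure P = eps0 * V**2 / (2 * g**2).
      # The drive voltages and the 5-micron gap are assumed, illustrative values.
      EPS0 = 8.854e-12  # vacuum permittivity, F/m

      def electrostatic_pressure(voltage_v, gap_m):
          """Attractive pressure (Pa) between parallel plates separated by gap_m."""
          return EPS0 * voltage_v**2 / (2.0 * gap_m**2)

      for v in (50.0, 100.0, 200.0):
          print(f"V = {v:5.0f} V -> P = {electrostatic_pressure(v, 5e-6):8.1f} Pa")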

  13. Small-scale field experiments accurately scale up to predict density dependence in reef fish populations at large scales.

    PubMed

    Steele, Mark A; Forrester, Graham E

    2005-09-20

    Field experiments provide rigorous tests of ecological hypotheses but are usually limited to small spatial scales. It is thus unclear whether these findings extrapolate to larger scales relevant to conservation and management. We show that the results of experiments detecting density-dependent mortality of reef fish on small habitat patches scale up to have similar effects on much larger entire reefs that are the size of small marine reserves and approach the scale at which some reef fisheries operate. We suggest that accurate scaling is due to the type of species interaction causing local density dependence and the fact that localized events can be aggregated to describe larger-scale interactions with minimal distortion. Careful extrapolation from small-scale experiments identifying species interactions and their effects should improve our ability to predict the outcomes of alternative management strategies for coral reef fishes and their habitats.

  14. Scaling regions for food web properties

    PubMed Central

    Bersier, Louis-Félix; Sugihara, George

    1997-01-01

    The robustness of eight common food web properties is examined with respect to web size. We show that the current controversy concerning the scale dependence or scale invariance of these properties can be resolved by accounting for scaling constraints introduced by webs of very small size. We demonstrate statistically that the most robust way to view these properties is not to lump webs of all sizes, but to divide them into two distinct categories. For the present data set, small webs containing 12 or fewer species exhibit scale dependence, and larger webs containing more than 12 species exhibit scale invariance. PMID:11038600

  15. Fluctuation scaling, Taylor's law, and crime.

    PubMed

    Hanley, Quentin S; Khatun, Suniya; Yosef, Amal; Dyer, Rachel-May

    2014-01-01

    Fluctuation scaling relationships have been observed in a wide range of processes ranging from internet router traffic to measles cases. Taylor's law is one such scaling relationship and has been widely applied in ecology to understand communities including trees, birds, human populations, and insects. We show that monthly crime reports in the UK show complex fluctuation scaling which can be approximated by Taylor's law relationships corresponding to local policing neighborhoods and larger regional and countrywide scales. Regression models applied to local scale data from Derbyshire and Nottinghamshire found that different categories of crime exhibited different scaling exponents with no significant difference between the two regions. On this scale, violence reports were close to a Poisson distribution (α = 1.057 ± 0.026) while burglary exhibited a greater exponent (α = 1.292 ± 0.029) indicative of temporal clustering. These two regions exhibited significantly different pre-exponential factors for the categories of anti-social behavior and burglary, indicating that local variations in crime reports can be assessed using fluctuation scaling methods. At regional and countrywide scales, all categories exhibited scaling behavior indicative of temporal clustering, evidenced by Taylor's law exponents from 1.43 ± 0.12 (Drugs) to 2.094 ± 0.081 (Other Crimes). Investigating crime behavior via fluctuation scaling gives insight beyond that of raw numbers, is unique in reporting on all processes contributing to the observed variance, and is either robust to or exhibits signs of many types of data manipulation.
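
    Taylor's law states that the variance of counts scales with the mean as variance = a * mean^alpha, so exponents like those quoted above can be recovered as the slope of a log-log regression of group variances on group means. A minimal sketch follows; the counts are synthetic Poisson draws standing in for monthly reports, so the fitted exponent should come out near 1.

      # Sketch: estimating a Taylor's law exponent (variance = a * mean**alpha)
      # by ordinary least squares on log-transformed group means and variances.
      # The data below are synthetic stand-ins, not the UK crime counts.
      import numpy as np

      rng = np.random.default_rng(0)
      means, variances = [], []
      for lam in (2, 5, 10, 20, 50, 100):        # per-group mean report rates (assumed)
          counts = rng.poisson(lam, size=120)     # 120 monthly observations per group
          means.append(counts.mean())
          variances.append(counts.var(ddof=1))

      slope, intercept = np.polyfit(np.log(means), np.log(variances), 1)
      print(f"estimated alpha = {slope:.3f} (Poisson data should give ~1)")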

  16. Effects of scale on internal blast measurements

    NASA Astrophysics Data System (ADS)

    Granholm, R.; Sandusky, H.; Lee, R.

    2014-05-01

    This paper presents a comparative study between large and small-scale internal blast experiments with the goal of using the small-scale analog for energetic performance evaluation. In the small-scale experiment, highly confined explosive samples <0.5 g were subjected to the output from a PETN detonator while enclosed in a 3-liter chamber. Large-scale tests up to 23 kg were unconfined and released in a chamber with a factor of 60,000 increase in volume. The comparative metric in these experiments is peak quasi-static overpressure, with the explosive sample expressed as sample energy/chamber volume, which normalizes measured pressures across scale. Small-scale measured pressures were always lower than the large-scale measurements, because of heat-loss to the high confinement inherent in the small-scale apparatus. This heat-loss can be quantified and used to correct the small-scale pressure measurements. In some cases the heat-loss was large enough to quench reaction of lower energy samples. These results suggest that small-scale internal blast tests do correlate with their large-scale counterparts, provided that heat-loss to confinement can be measured, and that less reactive or lower energy samples are not quenched by heat-loss.
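
    The normalization used above, expressing the charge as sample energy per chamber volume, can be motivated by the simplest constant-volume estimate of quasi-static overpressure, dP ~ (gamma - 1) * E / V. The sketch below applies that textbook relation to the two scales mentioned in the abstract; the specific energy of the explosive is an assumed, illustrative value, and heat loss (the paper's main correction) is ignored.

      # Sketch: quasi-static overpressure estimate dP = (gamma - 1) * E / V,
      # illustrating why pressures normalize with sample energy per chamber volume.
      # The 4.6 MJ/kg specific energy is an assumption; heat loss is neglected.
      GAMMA = 1.4                      # ratio of specific heats for air (assumed)

      def quasi_static_overpressure(energy_j, volume_m3):
          return (GAMMA - 1.0) * energy_j / volume_m3

      small = quasi_static_overpressure(0.5e-3 * 4.6e6, 3e-3)   # ~0.5 g sample, 3-liter chamber
      large = quasi_static_overpressure(23.0 * 4.6e6, 180.0)    # ~23 kg charge, 60,000x the volume
      print(f"small-scale estimate: {small/1e3:.0f} kPa, large-scale estimate: {large/1e3:.0f} kPa")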

  17. Multi-scale modeling in cell biology

    PubMed Central

    Meier-Schellersheim, Martin; Fraser, Iain D. C.; Klauschen, Frederick

    2009-01-01

    Biomedical research frequently involves performing experiments and developing hypotheses that link different scales of biological systems such as, for instance, the scales of intracellular molecular interactions to the scale of cellular behavior and beyond to the behavior of cell populations. Computational modeling efforts that aim at exploring such multi-scale systems quantitatively with the help of simulations have to incorporate several different simulation techniques due to the different time and space scales involved. Here, we provide a non-technical overview of how different scales of experimental research can be combined with the appropriate computational modeling techniques. We also show that current modeling software permits building and simulating multi-scale models without having to become involved with the underlying technical details of computational modeling. PMID:20448808

  18. Mechanisms of scaling in pattern formation

    PubMed Central

    Umulis, David M.; Othmer, Hans G.

    2013-01-01

    Many organisms and their constituent tissues and organs vary substantially in size but differ little in morphology; they appear to be scaled versions of a common template or pattern. Such scaling involves adjusting the intrinsic scale of spatial patterns of gene expression that are set up during development to the size of the system. Identifying the mechanisms that regulate scaling of patterns at the tissue, organ and organism level during development is a longstanding challenge in biology, but recent molecular-level data and mathematical modeling have shed light on scaling mechanisms in several systems, including Drosophila and Xenopus. Here, we investigate the underlying principles needed for understanding the mechanisms that can produce scale invariance in spatial pattern formation and discuss examples of systems that scale during development. PMID:24301464

  19. The development of a forgiveness scale.

    PubMed

    Hargrave, T D; Sells, J N

    1997-01-01

    This paper reports on the development, validity, and reliability of a self-report instrument designed to assess a respondent's perspective of pain resulting from relational violations and work toward relational forgiveness based on a framework proposed by Hargrave (1994a). Presented here is the five-stage procedure used in the development of the Interpersonal Relationship Resolution Scale. Construct validity and reliability were determined from an initial sample of 164 subjects. Concurrent validity of the scale was supported by another sample of 35 respondents who took the Interpersonal Relationship Resolution Scale, the Personal Authority in the Family System Questionnaire, the Relational Ethics Scale, the Fundamental Interpersonal Relations Orientation-Behavior scale, and the Burns Depression Checklist. Finally, a predictive validity study of the scale was performed with a clinical and nonclinical sample of 98 volunteers. Data are presented that support the validity and reliability of the instrument, as well as the final version of the scale.

  20. Invariant relationships deriving from classical scaling transformations

    SciTech Connect

    Bludman, Sidney; Kennedy, Dallas C.

    2011-04-15

    Because scaling symmetries of the Euler-Lagrange equations are generally not variational symmetries of the action, they do not lead to conservation laws. Instead, an extension of Noether's theorem reduces the equations of motion to evolutionary laws that prove useful, even if the transformations are not symmetries of the equations of motion. In the case of scaling, symmetry leads to a scaling evolutionary law, a first-order equation in terms of scale invariants, linearly relating kinematic and dynamic degrees of freedom. This scaling evolutionary law appears in dynamical and in static systems. Applied to dynamical central-force systems, the scaling evolutionary equation leads to generalized virial laws, which linearly connect the kinetic and potential energies. Applied to barotropic hydrostatic spheres, the scaling evolutionary equation linearly connects the gravitational and internal energy densities. This implies well-known properties of polytropes, describing degenerate stars and chemically homogeneous nondegenerate stellar cores.

  1. Collaboration and nested environmental governance: Scale dependency, scale framing, and cross-scale interactions in collaborative conservation.

    PubMed

    Wyborn, Carina; Bixler, R Patrick

    2013-07-15

    The problem of fit between social institutions and ecological systems is an enduring challenge in natural resource management and conservation. Developments in the science of conservation biology encourage the management of landscapes at increasingly larger scales. In contrast, sociological approaches to conservation emphasize the importance of ownership, collaboration and stewardship at scales relevant to the individual or local community. Despite the proliferation of initiatives seeking to work with local communities to undertake conservation across large landscapes, there is an inherent tension between these scales of operation. Consequently, questions about the changing nature of effective conservation across scales abound. Through an analysis of three nested cases working in a semiautonomous fashion in the Northern Rocky Mountains in North America, this paper makes an empirical contribution to the literature on nested governance, collaboration and communication across scales. Despite different scales of operation, constituencies and scale frames, we demonstrate a surprising similarity in organizational structure and an implicit dependency between these initiatives. This paper examines the different capacities and capabilities of collaborative conservation from the local to regional to supra regional. We draw on the underexplored concept of 'scale-dependent comparative advantage' (Cash and Moser, 2000), to gain insight into what activities take place at which scale and what those activities contribute to nested governance and collaborative conservation. The comparison of these semiautonomous cases provides fruitful territory to draw lessons for understanding the roles and relationships of organizations operating at different scales in more connected networks of nested governance.

  2. Anticipated adaptation or scale recalibration?

    PubMed Central

    2013-01-01

    Background The aim of our study was to investigate anticipated adaptation among patients in the subacute phase of Spinal Cord Injury (SCI). Methods We used an observational longitudinal design. Patients with SCI (N = 44) rated their actual, previous and expected future Quality of Life (QoL) at three time points: within two weeks of admission to the rehabilitation center (RC), a few weeks before discharge from the RC, and at least three months after discharge. We compared the expected future rating at the second time point with the actual ratings at the third time point, using Student's t-tests. To gain insight into scale recalibration we also compared actual and previous ratings. Results At the group level, patients overpredicted their improvement on the VAS. Actual health at T3 (M = 0.65, sd = 0.20) was significantly lower than the health at T3 predicted at T1 (M = 0.76, sd = 0.1; t(43) = 3.24, p < 0.01) and predicted at T2 (M = 0.75, sd = 0.13; t(43) = 3.44, p < 0.001). Similarly, the health at T2 recalled at T3 (M = 0.59, sd = 0.18) was significantly lower than the actual health at T2 (M = 0.67, sd = 0.15; t(43) = 3.26, p < 0.01). Patients rated their future and past health inaccurately compared to their actual ratings on the VAS. In contrast, on the TTO patients gave accurate estimates of their future and previous health, and they also accurately valued their previous health. Looking at individual ratings, the numbers of respondents with accurate estimates of their future and previous health were similar between the VAS and TTO. However, the Bland-Altman plots show that the deviation from accuracy is larger for the TTO than for the VAS; that is, the accuracy of 95% of the respondents was lower on the TTO than on the VAS. Conclusions Patients at the onset of a disability were able to anticipate adaptation. Valuations given on the VAS seem to be biased by scale recalibration. PMID:24139246

  3. Advances in time-scale algorithms

    NASA Technical Reports Server (NTRS)

    Stein, S. R.

    1993-01-01

    The term clock is usually used to refer to a device that counts a nearly periodic signal. A group of clocks, called an ensemble, is often used for time keeping in mission-critical applications that cannot tolerate loss of time due to the failure of a single clock. The time generated by the ensemble of clocks is called a time scale. The question arises of how to combine the times of the individual clocks to form the time scale. One might naively be tempted to suggest the expedient of averaging the times of the individual clocks, but a simple thought experiment demonstrates the inadequacy of this approach. Suppose a time scale is composed of two noiseless clocks having equal and opposite frequencies. The mean time scale has zero frequency. However, if either clock fails, the time-scale frequency immediately changes to the frequency of the remaining clock. This performance is generally unacceptable, and simple mean time scales are not used. First, previous time-scale developments are reviewed, and then some new methods that result in enhanced performance are presented. The historical perspective is based upon several time scales: the AT1 and TA time scales of the National Institute of Standards and Technology (NIST), the A.1(MEAN) time scale of the US Naval Observatory (USNO), the TAI time scale of the Bureau International des Poids et Mesures (BIPM), and the KAS-1 time scale of the Naval Research Laboratory (NRL). The new method was incorporated in the KAS-2 time scale recently developed by Timing Solutions Corporation. The goal is to present time-scale concepts in a nonmathematical form with as few equations as possible. Many other papers and texts discuss the details of the optimal estimation techniques that may be used to implement these concepts.
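
    The two-clock thought experiment above is easy to reproduce numerically: a simple mean of two noiseless clocks with equal and opposite frequency offsets has zero frequency until one clock drops out, at which point the time-scale frequency jumps to that of the survivor. A sketch under exactly those assumptions:

      # Sketch of the two-clock thought experiment: a simple mean time scale is fine
      # until one clock fails, then its frequency jumps to the survivor's frequency.
      import numpy as np

      t = np.arange(0.0, 10.0, 1.0)        # reference time, arbitrary units
      clock_a = +1e-6 * t                   # clock A time error: +1 ppm frequency offset
      clock_b = -1e-6 * t                   # clock B time error: -1 ppm frequency offset

      alive_b = t < 5.0                     # clock B fails at t = 5
      ensemble = np.where(alive_b, (clock_a + clock_b) / 2.0, clock_a)

      # The ensemble frequency is the slope of its time error; note the jump at t = 5.
      print(np.diff(ensemble))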

  4. Scales and scaling in turbulent ocean sciences; physics-biology coupling

    NASA Astrophysics Data System (ADS)

    Schmitt, Francois

    2015-04-01

    Geophysical fields exhibit huge fluctuations over many spatial and temporal scales. In the ocean, this property at smaller scales is closely linked to marine turbulence. The velocity field varies from large scales down to the Kolmogorov scale (mm), and scalar fields down to the Batchelor scale, which is often much smaller. As a consequence, it is not always simple to determine at which scale a process should be considered. The scale question is hence fundamental in marine sciences, especially when dealing with physics-biology coupling. For example, marine dynamical models typically have a grid size of a hundred meters or more, which is more than 10^5 times larger than the smallest turbulence scale (the Kolmogorov scale). Such a scale is fine for the dynamics of a whale (around 100 m), but for a fish larva (1 cm) or a copepod (1 mm) a description at smaller scales is needed, due to the nonlinear nature of turbulence. The same holds for biogeochemical fields such as passive and active tracers (oxygen, fluorescence, nutrients, pH, turbidity, temperature, salinity...). In this framework, we will discuss the scale problem in turbulence modeling in the ocean, and the relation of the Kolmogorov and Batchelor scales of ocean turbulence to the size of marine animals. We will also consider scaling laws for organism-particle Reynolds numbers (from whales to bacteria), and possible scaling laws for organisms' accelerations.
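
    For orientation, the Kolmogorov scale is eta = (nu^3 / eps)^(1/4) and the Batchelor scale is lambda_B = eta / sqrt(Sc). The sketch below evaluates both for assumed, illustrative open-ocean values of the viscosity, dissipation rate, and Schmidt number; none of these numbers come from the abstract.

      # Sketch: Kolmogorov scale eta = (nu**3 / eps)**0.25 and
      # Batchelor scale lambda_B = eta / sqrt(Sc), with illustrative ocean values.
      nu = 1.0e-6        # kinematic viscosity of seawater, m^2/s (assumed)
      eps = 1.0e-8       # turbulent kinetic energy dissipation rate, W/kg (assumed)
      Sc = 700.0         # Schmidt number for a dissolved scalar such as salt (assumed)

      eta = (nu**3 / eps) ** 0.25
      batchelor = eta / Sc ** 0.5
      print(f"Kolmogorov scale ~ {eta*1e3:.2f} mm, Batchelor scale ~ {batchelor*1e6:.0f} micrometers")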

  5. Effect of Violating Unidimensional Item Response Theory Vertical Scaling Assumptions on Developmental Score Scales

    ERIC Educational Resources Information Center

    Topczewski, Anna Marie

    2013-01-01

    Developmental score scales represent the performance of students along a continuum, where as students learn more they move higher along that continuum. Unidimensional item response theory (UIRT) vertical scaling has become a commonly used method to create developmental score scales. Research has shown that UIRT vertical scaling methods can be…

  6. Toward Increasing Fairness in Score Scale Calibrations Employed in International Large-Scale Assessments

    ERIC Educational Resources Information Center

    Oliveri, Maria Elena; von Davier, Matthias

    2014-01-01

    In this article, we investigate the creation of comparable score scales across countries in international assessments. We examine potential improvements to current score scale calibration procedures used in international large-scale assessments. Our approach seeks to improve fairness in scoring international large-scale assessments, which often…

  7. Mirages in galaxy scaling relations

    NASA Astrophysics Data System (ADS)

    Mosenkov, A. V.; Sotnikova, N. Ya.; Reshetnikov, V. P.

    2014-06-01

    We analysed several basic correlations between structural parameters of galaxies. The data were taken from various samples in different passbands which are available in the literature. We discuss disc scaling relations as well as some debatable issues concerning the so-called Photometric Plane for bulges and elliptical galaxies in different forms and various versions of the famous Kormendy relation. We show that some of the correlations under discussion are artificial (self-correlations), while others truly reveal some new essential details of the structural properties of galaxies. Our main results are as follows: At present, we cannot conclude that faint stellar discs are, on average, thinner than discs in high surface brightness galaxies. The 'central surface brightness-thickness' correlation appears only as a consequence of using the transparent exponential disc model to describe real galaxy discs. The Photometric Plane appears to have no independent physical sense. Various forms of this plane are merely sophisticated versions of the Kormendy relation or of the self-relation involving the central surface brightness of a bulge/elliptical galaxy and the Sérsic index n. The Kormendy relation is a physical correlation presumably reflecting the difference in the origin of bright and faint ellipticals and bulges. We present arguments that involve creating artificial samples to prove our main idea.

  8. Scaling in nonstationary voltammetry representations.

    PubMed

    Anastassiou, Costas A; Parker, Kim H; O'Hare, Danny

    2007-12-20

    Despite the widespread use of voltammetry for a range of chemical, biological, environmental, and industrial applications, there is still a lack of understanding regarding the functional relationship between the applied voltage and the resulting patterns in the current response. This is due to the highly nonlinear relation between the applied voltage and the nonstationary current response, which makes a direct association nonintuitive. In this Article, we focus on large-amplitude/high-frequency ac voltammetry, a technique that has been shown to offer increased voltammetric detail compared to alternative methods, to study heterogeneous electrochemical reaction-diffusion cases using a nonstationary time-series analysis, the Hilbert transform, and symmetry considerations. We show that application of this signal processing technique minimizes the significant capacitance contribution associated with rapid voltammetric measurements. From a series of numerical simulations conducted for different voltage excitation parameters as well as kinetic, thermodynamic, and mass transport parameters, a number of scaling laws arise that are related to the underlying parameters/dynamics of the process. Under certain conditions, these observations allow the determination of all underlying parameters very rapidly, experiment duration typically

  9. Scaling laws for iceberg calving

    NASA Astrophysics Data System (ADS)

    Åström, Jan; Moore, John

    2014-05-01

    Over the next century, most additional ocean water will come from ice sheets and glaciers, primarily through calving of ice into the oceans. Calving fluxes are prone to rapid and non-linear variability and therefore have proven difficult to include in models forced by evolving climatic variables. Theoretical and first-principles simulation fracture models are applied to investigate iceberg calving. We demonstrate that calving originates from the general behaviour of unstable cracks in elastic media. Cracks in ice trigger calving events that have a striking statistical similarity to avalanches in Abelian sand-pile models. That is, both calving mass distribution and inter-event waiting times are similar to those of sand-pile models. The theoretical results are confirmed by a first-principles simulation model and field observations spanning 12 orders of magnitude in calving size. This suggests that calving termini are self-organized critical systems, hence the difficulty of parameterizing calving in large-scale models. Subtle deviation from a critical point towards higher stability will lead to subcritical calving - small and infrequent calving events associated with glacier advance, while subtle deviation towards higher instability will lead to supercritical calving - larger and more frequent events associated with rapid retreat. Such behaviour is consistent with recent worldwide observations of ice shelf disintegration and irreversible tidewater glacier retreat in response to climate warming.

  10. Scaling Agile Infrastructure to People

    NASA Astrophysics Data System (ADS)

    Jones, B.; McCance, G.; Traylen, S.; Barrientos Arias, N.

    2015-12-01

    When CERN migrated its infrastructure away from homegrown fabric management tools to emerging industry-standard open-source solutions, the immediate technical challenges and motivation were clear. The move to a multi-site Cloud Computing model meant that the tool chains that were growing around this ecosystem would be a good choice; the challenge was to leverage them. The use of open-source tools brings challenges other than merely how to deploy them. Homegrown software, for all the deficiencies identified at the outset of the project, has the benefit of growing with the organization. This paper will examine what challenges there were in adapting open-source tools to the needs of the organization, particularly in the areas of multi-group development and security. Additionally, the increase in scale of the plant required changes to how Change Management was organized and managed. Continuous Integration techniques are used in order to manage the rate of change across multiple groups, and the tools and workflow for this will be examined.

  11. Electroweak-scale resonant leptogenesis

    SciTech Connect

    Pilaftsis, Apostolos; Underwood, Thomas E.J.

    2005-12-01

    We study minimal scenarios of resonant leptogenesis near the electroweak phase transition. These models offer a number of testable phenomenological signatures for low-energy experiments and future high-energy colliders. Our study extends previous analyses of the relevant network of Boltzmann equations, consistently taking into account effects from out of equilibrium sphalerons and single lepton flavors. We show that the effects from single lepton flavors become very important in variants of resonant leptogenesis, where the observed baryon asymmetry in the Universe is created by lepton-to-baryon conversion of an individual lepton number, for example, that of the τ-lepton. The predictions of such resonant τ-leptogenesis models for the final baryon asymmetry are almost independent of the initial lepton-number and heavy neutrino abundances. These models accommodate the current neutrino data and have a number of testable phenomenological implications. They contain electroweak-scale heavy Majorana neutrinos with appreciable couplings to electrons and muons, which can be probed at future e+e- and μ+μ- high-energy colliders. In particular, resonant τ-leptogenesis models predict sizable 0νββ decay, as well as e- and μ-number-violating processes, such as μ→eγ and μ→e conversion in nuclei, with rates that are within reach of the experiments proposed by the MEG and MECO collaborations.

  12. Challenges to Scaling CIGS Photovoltaics

    NASA Astrophysics Data System (ADS)

    Stanbery, B. J.

    2011-03-01

    The challenges of scaling any photovoltaic technology to terawatts of global capacity are arguably more economic than technological or resource constraints. All commercial thin-film PV technologies are based on direct bandgap semiconductors whose absorption coefficient and bandgap alignment with the solar spectrum enable micron-thick coatings in lieu of the hundreds of microns required using indirect-bandgap c-Si. Although thin-film PV reduces semiconductor materials cost, its manufacture is more capital intensive than c-Si production, and proportional to deposition rate. Only when combined with sufficient efficiency and cost of capital does this tradeoff yield lower manufacturing cost. CIGS has the potential to become the first thin film technology to achieve the terawatt benchmark because of its superior conversion efficiency, making it the only commercial thin film technology which demonstrably delivers performance comparable to the dominant incumbent, c-Si. Since module performance leverages total systems cost, this competitive advantage bears directly on CIGS' potential to displace c-Si and attract the requisite capital to finance the tens of gigawatts of annual production capacity needed to manufacture terawatts of PV modules apace with global demand growth.

  13. Large Scale Magnetostrictive Valve Actuator

    NASA Technical Reports Server (NTRS)

    Richard, James A.; Holleman, Elizabeth; Eddleman, David

    2008-01-01

    Marshall Space Flight Center's Valves, Actuators and Ducts Design and Development Branch developed a large scale magnetostrictive valve actuator. The potential advantages of this technology are faster, more efficient valve actuators that consume less power, provide precise position control, and deliver higher flow rates than conventional solenoid valves. Magnetostrictive materials change dimensions when a magnetic field is applied; this property is referred to as magnetostriction. Magnetostriction is caused by the alignment of the magnetic domains in the material's crystalline structure with the applied magnetic field lines. Typically, the material changes shape by elongating in the axial direction and constricting in the radial direction, resulting in no net change in volume. All hardware and testing are complete. This paper will discuss the potential applications of the technology; give an overview of the as-built actuator design; describe problems that were uncovered during development testing; review test data and evaluate weaknesses of the design; and discuss areas for improvement in future work. This actuator holds promise as a low-power, high-load, proportionally controlled actuator for valves requiring 440 to 1500 newtons of load.

  14. Universal scaling in sports ranking

    NASA Astrophysics Data System (ADS)

    Deng, Weibing; Li, Wei; Cai, Xu; Bulou, Alain; Wang, Qiuping A.

    2012-09-01

    Ranking is a ubiquitous phenomenon in human society. On the web pages of Forbes, one may find all kinds of rankings, such as the world's most powerful people, the world's richest people, the highest-earning tennis players, and so on and so forth. Herewith, we study a specific kind—sports ranking systems in which players' scores and/or prize money are accrued based on their performances in different matches. By investigating 40 data samples which span 12 different sports, we find that the distributions of scores and/or prize money follow universal power laws, with exponents nearly identical for most sports. In order to understand the origin of this universal scaling we focus on the tennis ranking systems. By checking the data we find that, for any pair of players, the probability that the higher-ranked player tops the lower-ranked opponent is proportional to the rank difference between the pair. Such a dependence can be well fitted to a sigmoidal function. By using this feature, we propose a simple toy model which can simulate the competition of players in different matches. The simulations yield results consistent with the empirical findings. Extensive simulation studies indicate that the model is quite robust with respect to the modifications of some parameters.
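
    The toy model sketched in the abstract can be written in a few lines: for every match the probability that the better-ranked player wins is a sigmoidal function of the rank difference, and each winner accrues points. The sigmoid steepness, the one-point-per-win scoring rule, and the field size below are assumptions for illustration, not the paper's calibrated values.

      # Sketch of a rank-difference toy model: the win probability of the
      # better-ranked player is a sigmoid of the rank gap; winners accrue points.
      # Sigmoid steepness and the scoring rule are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(1)
      n_players, n_matches = 200, 20_000
      scores = np.zeros(n_players)

      def p_better_wins(rank_gap, k=0.05):
          """Sigmoidal win probability for the better-ranked (lower-index) player."""
          return 1.0 / (1.0 + np.exp(-k * rank_gap))

      for _ in range(n_matches):
          a, b = rng.choice(n_players, size=2, replace=False)
          hi, lo = min(a, b), max(a, b)            # lower index = better rank
          winner = hi if rng.random() < p_better_wins(lo - hi) else lo
          scores[winner] += 1.0                     # one point per win (assumed rule)

      print(np.sort(scores)[::-1][:10])             # heaviest-scoring players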

  15. Quantifying scale relationships in snow distributions

    NASA Astrophysics Data System (ADS)

    Deems, Jeffrey S.

    2007-12-01

    Spatial distributions of snow in mountain environments represent the time integration of accumulation and ablation processes, and are strongly and dynamically linked to mountain hydrologic, ecologic, and climatic systems. Accurate measurement and modeling of the spatial distribution and variability of the seasonal mountain snowpack at different scales are imperative for water supply and hydropower decision-making, for investigations of land-atmosphere interaction or biogeochemical cycling, and for accurate simulation of earth system processes and feedbacks. Assessment and prediction of snow distributions in complex terrain are heavily dependent on scale effects, as the pattern and magnitude of variability in snow distributions depends on the scale of observation. Measurement and model scales are usually different from process scales, and thereby introduce a scale bias to the estimate or prediction. To quantify this bias, or to properly design measurement schemes and model applications, the process scale must be known or estimated. Airborne Light Detection And Ranging (lidar) products provide high-resolution, broad-extent altimetry data for terrain and snowpack mapping, and allow an application of variogram fractal analysis techniques to characterize snow depth scaling properties over lag distances from 1 to 1000 meters. Snow depth patterns as measured by lidar at three Colorado mountain sites exhibit fractal (power law) scaling patterns over two distinct scale ranges, separated by a distinct break at the 15-40 m lag distance, depending on the site. Each fractal range represents a range of separation distances over which snow depth processes remain consistent. The scale break between fractal regions is a characteristic scale at which snow depth process relationships change fundamentally. Similar scale break distances in vegetation topography datasets suggest that the snow depth scale break represents a change in wind redistribution processes from wind
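
    The variogram fractal analysis described above amounts to computing the empirical semivariogram gamma(h) = 0.5 * mean[(z(x+h) - z(x))^2] over a range of lag distances h and fitting a power law gamma(h) ~ h^beta, with a break in the log-log slope marking the scale break. A sketch on a synthetic one-dimensional transect (Brownian-like noise, not lidar snow depths) follows.

      # Sketch: empirical variogram of a 1-D transect and a power-law (fractal) fit.
      # The transect is synthetic Brownian-like noise, not lidar snow depth data.
      import numpy as np

      rng = np.random.default_rng(3)
      z = np.cumsum(rng.normal(size=4000))     # Brownian-like profile; expect gamma(h) ~ h^1

      lags = np.arange(1, 200)
      gamma = np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])

      beta, log_a = np.polyfit(np.log(lags), np.log(gamma), 1)
      print(f"fitted scaling exponent beta ~ {beta:.2f} (profile fractal dimension D = 2 - beta/2)")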

  16. Meso-scale machining capabilities and issues

    SciTech Connect

    BENAVIDES,GILBERT L.; ADAMS,DAVID P.; YANG,PIN

    2000-05-15

    Meso-scale manufacturing processes are bridging the gap between silicon-based MEMS processes and conventional miniature machining. These processes can fabricate two- and three-dimensional parts having micron-size features in traditional materials such as stainless steels, rare earth magnets, ceramics, and glass. Meso-scale processes that are currently available include focused ion beam sputtering, micro-milling, micro-turning, excimer laser ablation, femto-second laser ablation, and micro electro discharge machining. These meso-scale processes employ subtractive machining technologies (i.e., material removal), unlike LIGA, which is an additive meso-scale process. Meso-scale processes have different material capabilities and machining performance specifications. Machining performance specifications of interest include minimum feature size, feature tolerance, feature location accuracy, surface finish, and material removal rate. Sandia National Laboratories is developing meso-scale electro-mechanical components, which require meso-scale parts that move relative to one another. The meso-scale parts fabricated by subtractive meso-scale manufacturing processes have unique tribology issues because of the variety of materials and the surface conditions produced by the different meso-scale manufacturing processes.

  17. Aerosols and Convection: Global scale, MJO Scale and Regional Scale Analyses

    NASA Astrophysics Data System (ADS)

    Rutledge, S. A.

    2014-12-01

    We have investigated interactions between atmospheric thermodynamics, boundary layer aerosol (CCN) concentrations, convective intensity and lightning flash rates (from the TRMM LIS and the Vaisala GLD 360 global network) on three distinct scales, including the global tropical ocean and land masses, the Madden Julian Oscillation genesis region over the central Indian Ocean (CIO), and four regions in the U.S.: Washington D.C., northern Alabama, central Oklahoma, and eastern Colorado. The U.S. locations are each supported by VHF Lightning Mapping Arrays. Total lightning density is shown to increase by a factor of 2-3 as a function of CCN concentration over tropical land and ocean regions. The greatest sensitivity in the lightning vs. aerosol relationship was found in more unstable environments and where warm-cloud depth was intermediate (deep) over land (ocean). The maximum height of 30 dBZ echo tops in lightning-producing convective features was found to be insensitive to changes in CCN concentration. However, the vertical profile of radar reflectivity (VPRR) showed a consistent increase of 2-4 dBZ for convective features that developed in more polluted environments, suggesting that aerosols may act to intensify the convection, but not necessarily make the convection deeper. These findings are consistent with the hypothesis that aerosols act to invigorate convection by influencing the evolution of a cloud's hydrometeor populations. For the regional scale analysis, storms in Colorado have such favorable thermodynamics (high cloud bases, shallow warm-cloud depths, and large CAPEs) that aerosols (CCN) appear to have little effect in a bulk sense. For the three remaining regions, storms forming in environments with CCN concentrations between 700 and 1200 cm^-3 have notably stronger VPRR and larger flash rates. For aerosol concentrations below and above this range, storms have less vigor and reduced flash rates, consistent with the Rosenfeld et al. (2008) study. Finally

  18. [Identification and analysis on the error of Guanyuan (CV 4) point in Yulong Ge (Jade Dragon Verse)].

    PubMed

    Gang, Wei-juan; Huang, Long-xiang

    2009-02-01

    After investigating the contents of Yulong Ge (Jade Dragon Verse) concerning Guanyuan (CV 4) in ancient Chinese medical works of successive dynasties, the authors of the present paper found some errors in the recording of CV 4. In fact, Guanyuan (CV 4) in the current edition of Yulong Ge should be the extra point Lanmen. The authors hold that this error mainly results from similarly written Chinese characters and repeated copying, such as [Chinese characters: see text], etc.

  19. A 4-point in-situ method to locate a discrete gamma-ray source in 3-D space.

    PubMed

    Byun, Jong-In; Choi, Hee-Yeoul; Yun, Ju-Yong

    2010-02-01

    The determination of the source position (x,y,z) of a discrete gamma-ray source using peak count rates from four measurement points was studied. We derived semi-empirical formulas to find the position under the condition to neglect attenuation effects by obstacles between the target source and the detector. To validate the methodology, we performed the locating experiments for a (137)Cs small volume source placed at 10 different positions on the floor of a laboratory using the formulas derived in this study. In this study, a portable HPGe gamma spectrometry system with a virtual point detector concept was used. The calculation results for the source positions were compared with reference values measured with a rule. The applicability of the methodology was estimated based on the differences of the results. PMID:19932029
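
    The semi-empirical formulas themselves are not given in the abstract. Purely as an illustration of the same kind of problem, the sketch below assumes that, with attenuation neglected, the peak count rate falls off as the inverse square of the source-detector distance, so four measurements determine the source position (x, y, z) and strength S, which can be recovered by nonlinear least squares. All detector positions and rates are synthetic.

      # Sketch: locating a point source from peak count rates at four detector
      # positions, assuming rate_i = S / distance_i**2 (attenuation neglected).
      # This is an inverse-square illustration, not the paper's semi-empirical formulas.
      import numpy as np
      from scipy.optimize import least_squares

      detectors = np.array([[0.0, 0.0, 0.5], [2.0, 0.0, 1.0],
                            [0.0, 2.0, 1.5], [2.0, 2.0, 1.0]])   # measurement points, m (assumed)
      true_src, true_S = np.array([0.7, 1.2, 0.0]), 500.0          # hidden source for the demo
      rates = true_S / np.sum((detectors - true_src) ** 2, axis=1)

      def residuals(p):
          x, y, z, s = p
          d2 = np.sum((detectors - np.array([x, y, z])) ** 2, axis=1)
          return s / d2 - rates

      fit = least_squares(residuals, x0=[1.0, 1.0, 0.5, 100.0])
      print("estimated (x, y, z, S):", np.round(fit.x, 3))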

  20. Technical Performance Evaluation of the MyT4 Point of Care Technology for CD4+ T Cell Enumeration

    PubMed Central

    Mwau, Matilu; Kadima, Silvia; Mwende, Joy; Adhiambo, Maureen; Akinyi, Catherine; Prescott, Marta; Lusike, Judi; Hungu, Jackson; Vojnov, Lara

    2014-01-01

    Objective Though absolute CD4+ T cell enumeration is the primary gateway to antiretroviral therapy initiation for HIV-positive patients in all developing countries, patient access to this critical diagnostic test is relatively poor. We technically evaluated the performance of a newly developed point-of-care CD4+ T cell technology, the MyT4, compared with conventional CD4+ T cell testing technologies. Design Over 250 HIV-positive patients were consecutively enrolled and their blood tested on the MyT4, BD FACSCalibur, and BD FACSCount. Results Compared with the BD FACSCount, the MyT4 had an r2 of 0.7269 and a mean bias of −23.37 cells/µl. Compared with the BD FACSCalibur, the MyT4 had an r2 of 0.5825 and a mean bias of −46.58 cells/µl. Kenya currently uses a CD4+ T cell test threshold of 350 cells/µl to determine patient eligibility for antiretroviral therapy. At this threshold, the MyT4 had a sensitivity of 95.3% (95% CI: 88.4–98.7%) and a specificity of 87.9% (95% CI: 82.3–92.3%) compared with the BD FACSCount and sensitivity and specificity of 88.2% (95% CI: 79.4–94.2%) and 84.2% (95% CI: 78.2–89.2%), respectively, compared with the BD FACSCalibur. Finally, the MyT4 had a coefficient of variation of 12.80% compared with 14.03% for the BD FACSCalibur. Conclusions We conclude that the MyT4 performed well at the current 350 cells/µl ART initiation eligibility threshold when used by lower cadres of health care facility staff in rural clinics compared to conventional CD4+ T cell technologies. PMID:25229408

  1. Hubble Space Telescope Scale Model

    NASA Technical Reports Server (NTRS)

    1983-01-01

    This is a photograph of a 1/15 scale model of the Hubble Space Telescope (HST). The HST is the product of a partnership between NASA, European Space Agency Contractors, and the international community of astronomers. It is named after Edwin P. Hubble, an American astronomer who discovered the expanding nature of the universe and was the first to realize the true nature of galaxies. The purpose of the HST, the most complex and sensitive optical telescope ever made, is to study the cosmos from a low-Earth orbit. By placing the telescope in space, astronomers are able to collect data that is free of the Earth's atmosphere. The HST detects objects 25 times fainter than the dimmest objects seen from Earth and provides astronomers with an observable universe 250 times larger than visible from ground-based telescopes, perhaps as far away as 14 billion light-years. The HST views galaxies, stars, planets, comets, possibly other solar systems, and even unusual phenomena such as quasars, with 10 times the clarity of ground-based telescopes. The major elements of the HST are the Optical Telescope Assembly (OTA), the Support System Module (SSM), and the Scientific Instruments (SI). The HST is 42.5 feet (13 meters) long and weighs about 25,000 pounds (11,600 kilograms). The HST was deployed from the Space Shuttle Discovery (STS-31 mission) into Earth orbit in April 1990. The Marshall Space Flight Center had responsibility for design, development, and construction of the HST. The Perkin-Elmer Corporation, in Danbury, Connecticut, developed the optical system and guidance sensors. The Lockheed Missile and Space Company of Sunnyvale, California produced the protective outer shroud and spacecraft systems, and assembled and tested the finished telescope.

  2. Large-Scale Information Systems

    SciTech Connect

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  3. A clinimetric overview of scar assessment scales.

    PubMed

    van der Wal, M B A; Verhaegen, P D H M; Middelkoop, E; van Zuijlen, P P M

    2012-01-01

    Standardized validated evaluation instruments are mandatory to increase the level of evidence in scar management. Scar assessment scales are potentially suitable for this purpose, but the most appropriate scale still needs to be determined. This review will elaborate on several clinically relevant scar features and critically discuss the currently available scar scales in terms of basic clinimetric requirements. Many current scales can produce reliable measurements but seem to require multiple observers to obtain these results reliably, which limits their feasibility in clinical practice. The validation process of scar scales is hindered by the lack of a "gold standard" in subjective scar assessment or other reliable objective instruments which are necessary for a good comparison. The authors conclude that there are scar scales available that can reliably measure scar quality. However, further research may lead to improvement of their clinimetric properties and enhance the level of evidence in scar research worldwide.

  4. Small scale structure on cosmic strings

    NASA Technical Reports Server (NTRS)

    Albrecht, Andreas

    1989-01-01

    The current understanding of cosmic string evolution is discussed, with the focus placed on the question of small scale structure on strings, where most of the disagreements lie. A physical picture designed to put the role of the small scale structure into more intuitive terms is presented. In this picture it can be seen how the small scale structure can feed back in a major way on the overall scaling solution. It is also argued that it is easy for small scale numerical errors to feed back in just such a way. The intuitive discussion presented here may form the basis for an analytic treatment of the small scale structure, which, it is argued, would in any case be extremely valuable in filling the gaps in the present understanding of cosmic string evolution.

  5. Mobility at the scale of meters.

    PubMed

    Surovell, Todd A; O'Brien, Matthew

    2016-05-01

    When archeologists discuss mobility, we are most often referring to a phenomenon that operates on the scale of kilometers, but much of human mobility, at least if measured in terms of frequency of movement, occurs at much smaller scales, ranging from centimeters to tens of meters. Here we refer to the movements we make within the confines of our homes or places of employment. With respect to nomadic peoples, movements at this scale would include movements within campsites. Understanding mobility at small scales is important to archeology because small-scale mobility decisions are a critical factor affecting spatial patterning observed in archeological sites. In this paper, we examine the factors affecting small-scale mobility decisions in a Mongolian reindeer herder summer camp and the implications of those decisions with regard to archeological spatial patterning. PMID:27312186

  6. Testing the barriers to healthy eating scale.

    PubMed

    Fowles, Eileen R; Feucht, Jeanette

    2004-06-01

    Clarifying barriers to dietary intake may identify factors that place pregnant women at risk for complications. This methodological study assessed the psychometric properties of the Barriers to Healthy Eating Scale. Item generation was based on constructs in Pender's health promotion model. The instrument was tested in two separate samples of pregnant women. Content validity was assessed, and construct validity testing resulted in an expected negative relationship between scores on the Barriers to Healthy Eating Scale and the Nutrition subscale of the Health Promoting Lifestyle Profile-II. Factor analysis resulted in a 5-factor scale that explained 73% of the variance. Alpha coefficients for the total scale ranged from .73 to .77, and subscales ranged from .48 to .99. Test-retest reliability for the total scale was .79. The Barriers to Healthy Eating Scale appears to be a reliable and valid instrument to assess barriers that may impede healthy eating in pregnant women.

  7. Scaling of Soil Moisture: A Hydrologic Perspective

    NASA Astrophysics Data System (ADS)

    Western, Andrew W.; Grayson, Rodger B.; Blöschl, Günter

    Soil moisture is spatially and temporally highly variable, and it influences a range of environmental processes in a nonlinear manner. This leads to scale effects that need to be understood for improved prediction of moisture dependent processes. We provide some introductory material on soil moisture, and then review results from the literature relevant to a variety of scaling techniques applicable to soil moisture. This review concentrates on spatial scaling with brief reference to results on temporal scaling. Scaling techniques are divided into behavioral techniques and process-based techniques. We discuss the statistical distribution of soil moisture, spatial correlation of soil moisture at scales from tens of meters to thousands of kilometers and related interpolation and regularization techniques, and the use of auxiliary variables such as terrain indices. Issues related to spatially distributed deterministic modeling of soil moisture are also briefly reviewed.

  8. Factors Affecting Scale Adhesion on Steel Forgings

    NASA Astrophysics Data System (ADS)

    Zitterman, J. A.; Bacco, R. P.; Boggs, W. E.

    1982-04-01

    Occasionally, undesirable "sticky" adherent scale forms on low-carbon steel during reheating for hot forging. The mechanical abrading or chemical pickling required to remove this scale adds appreciably to the fabrication cost. Characterization of the steel-scale system by metallographic examination, x-ray diffraction, and electron-probe microanalysis revealed that nickel, silicon, and/or sulfur might be involved in the mechanism of sticky-scale formation. Laboratory reheating tests were conducted on steels with varied concentrations of nickel and silicon in atmospheres simulating those resulting from burning natural gas or sulfur-bearing fuels. Subsequent characterization of the scale formed during the tests tends to confirm that the composition of the steel, especially increased nickel and silicon contents, and the presence of the sulfur in the furnace atmosphere cause the formation of this undesirable scale.

  9. Scaling crossover for the average avalanche shape

    NASA Astrophysics Data System (ADS)

    Papanikolaou, Stefanos; Bohn, Felipe; Sommer, Rubem L.; Durin, Gianfranco; Zapperi, Stefano; Sethna, James P.

    2010-03-01

    Universality and the renormalization group claim to predict all behavior on long length and time scales asymptotically close to critical points. In practice, large simulations and heroic experiments have been needed to unambiguously test and measure the critical exponents and scaling functions. We announce here the measurement and prediction of universal corrections to scaling, applied to the temporal average shape of Barkhausen noise avalanches. We bypass the confounding factors of time-retarded interactions (eddy currents) by measuring thin permalloy films, and bypass thresholding effects and amplifier distortions by applying Wiener deconvolution. We show experimental shapes that are approximately symmetric, and measure the leading corrections to scaling. We solve a mean-field theory for the magnetization dynamics and calculate the relevant demagnetizing-field correction to scaling, showing qualitative agreement with the experiment. In this way, we move toward a quantitative theory useful at smaller time and length scales and farther from the critical point.
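
    Wiener deconvolution, used above to undo amplifier distortion before averaging avalanche shapes, is the textbook estimator X_hat(f) = Y(f) * conj(H(f)) / (|H(f)|^2 + 1/SNR): it divides out the instrument response while damping frequencies where noise dominates. The sketch below applies it to a synthetic pulse; the exponential response and the signal-to-noise ratio are assumptions, not the experimental values.

      # Sketch: Wiener deconvolution of a synthetic pulse blurred by an assumed
      # exponential instrument response, X_hat(f) = Y(f) H*(f) / (|H(f)|^2 + 1/SNR).
      import numpy as np

      rng = np.random.default_rng(4)
      n = 1024
      x = np.zeros(n); x[100:160] = 1.0                  # "true" avalanche shape (boxcar)
      h = np.exp(-np.arange(n) / 20.0); h /= h.sum()     # assumed instrument response
      y = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(h), n) + rng.normal(scale=0.01, size=n)

      H, Y = np.fft.rfft(h), np.fft.rfft(y)
      snr = 1e3                                           # assumed signal-to-noise power ratio
      x_hat = np.fft.irfft(Y * np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr), n)
      print(f"peak of recovered shape: {x_hat.max():.2f} (true peak 1.00)")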

  10. Gap Test Calibrations and Their Scaling

    NASA Astrophysics Data System (ADS)

    Sandusky, Harold

    2011-06-01

    Common tests for measuring the threshold for shock initiation are the NOL large scale gap test (LSGT) with a 50.8-mm diameter donor/gap and the expanded large scale gap test (ELSGT) with a 95.3-mm diameter donor/gap. Despite the same specifications for the explosive donor and polymethyl methacrylate (PMMA) gap in both tests, calibration of shock pressure in the gap versus distance from the donor scales by a factor of 1.75, not the 1.875 difference in their sizes. Recently reported model calculations suggest that the scaling discrepancy results from the viscoelastic properties of PMMA in combination with different methods for obtaining shock pressure. This is supported by the consistent scaling of these donors when calibrated in water-filled aquariums. Calibrations with water gaps will be provided and compared with PMMA gaps. Scaling for other donor systems will also be provided. Shock initiation data with water gaps will be reviewed.

  11. Large-scale instabilities of helical flows

    NASA Astrophysics Data System (ADS)

    Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne

    2016-10-01

    Large-scale hydrodynamic instabilities of periodic helical flows of a given wave number K are investigated using three-dimensional Floquet numerical computations. In the Floquet formalism the unstable field is expanded in modes of different spatial periodicity. This allows us (i) to clearly distinguish large from small scale instabilities and (ii) to study modes of wave number q of arbitrarily large scale separation q ≪ K. Different flows are examined, including flows that exhibit small-scale turbulence. The growth rate σ of the most unstable mode is measured as a function of the scale separation q/K ≪ 1 and the Reynolds number Re. It is shown that the growth rate follows the scaling σ ∝ q if an AKA effect [Frisch et al., Physica D: Nonlinear Phenomena 28, 382 (1987), 10.1016/0167-2789(87)90026-1] is present, or a negative eddy viscosity scaling σ ∝ q^2 in its absence. This holds both for the Re ≪ 1 regime, where previously derived asymptotic results are verified, and for Re = O(1), which is beyond their range of validity. Furthermore, for values of Re above a critical value ReSc beyond which small-scale instabilities are present, the growth rate becomes independent of q and the energy of the perturbation at large scales decreases with scale separation. The behavior of these large-scale instabilities is also examined in the nonlinear regime, where the largest scales of the system are found to be the most dominant energetically. These results are interpreted by low-order models.

  12. Scaling Rules for Pre-Injector Design

    SciTech Connect

    Tom Schwarz; Dan Amidei

    2003-07-13

    Proposed designs of the prebunching system of the NLC and TESLA are based on the assumption that scaling the SLC design to NLC/TESLA requirements should provide the desired performance. A simple equation is developed to suggest a scaling rule in terms of bunch charge and duration. Detailed simulations of prebunching systems scaled from a single design have been run to investigate these issues.

  13. How to calibrate the jet energy scale?

    SciTech Connect

    Hatakeyama, K.; /Rockefeller U.

    2006-01-01

    Top quarks dominantly decay into b-quark jets and W bosons, and the W bosons often decay into jets, thus the precise determination of the jet energy scale is crucial in measurements of many top quark properties. I present the strategies used by the CDF and D0 collaborations to determine the jet energy scale. The various cross checks performed to verify the determined jet energy scale and evaluate its systematic uncertainty are also discussed.

  14. [Psychometric properties of a scale: internal consistency].

    PubMed

    Campo-Arias, Adalberto; Oviedo, Heidi C

    2008-01-01

    Internal consistency reliability is the degree of correlation between a scale's items. Internal consistency is calculated with the Kuder-Richardson formula 20 for dichotomous items and with Cronbach's alpha for polytomous items. Internal consistency between 0.70 and 0.90 is acceptable. Five to 25 participants are needed for each item when computing the internal consistency of a twenty-item scale. Internal consistency varies across populations, so it should be reported every time the scale is used. PMID:19360231
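
    For a k-item scale, Cronbach's alpha is alpha = k/(k - 1) * (1 - sum of item variances / variance of the total score); the Kuder-Richardson formula 20 is the same expression for dichotomous items, whose variances reduce to p(1 - p). A minimal sketch on synthetic Likert-type ratings (not data from the article) follows.

      # Sketch: Cronbach's alpha for polytomous (e.g., Likert-type) item scores.
      # Rows are respondents, columns are items; the ratings here are synthetic.
      import numpy as np

      def cronbach_alpha(items):
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1.0 - item_vars / total_var)

      rng = np.random.default_rng(2)
      trait = rng.normal(size=300)                        # latent attitude per respondent
      ratings = np.clip(np.round(3 + trait[:, None] + rng.normal(scale=0.8, size=(300, 5))), 1, 5)
      print(f"alpha = {cronbach_alpha(ratings):.2f}")     # typically lands in the 0.7-0.9 band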

  15. Chirp Scaling Algorithms for SAR Processing

    NASA Technical Reports Server (NTRS)

    Jin, M.; Cheng, T.; Chen, M.

    1993-01-01

    The chirp scaling SAR processing algorithm is both accurate and efficient. Successful implementation requires proper selection of the output sample interval, which is a function of the chirp interval, the signal sampling rate, and the signal bandwidth. Analysis indicates that, for both airborne and spaceborne SAR applications in the slant-range domain, a linear chirp scaling is sufficient. To perform a nonlinear interpolation, such as producing ground-range SAR images, one can use the nonlinear chirp scaling interpolator presented in this paper.

  16. Scaling and Single Event Effects (SEE) Sensitivity

    NASA Technical Reports Server (NTRS)

    Oldham, Timothy R.

    2003-01-01

    This paper begins by discussing the potential for scaling down transistors and other components to fit more of them on chips in order to increase computer processing speed. It also addresses technical challenges to further scaling. Components have been scaled down enough that single particles can affect them, an effect known as a Single Event Effect (SEE). This paper explores the relationship between scaling and the following SEEs: Single Event Upsets (SEU) in DRAMs and SRAMs, latch-up, snap-back, Single Event Burnout (SEB), Single Event Gate Rupture (SEGR), and ion-induced soft breakdown (SBD).

  17. Improving the Factor Structure of Psychological Scales

    PubMed Central

    Zhang, Xijuan; Savalei, Victoria

    2015-01-01

    Many psychological scales written in the Likert format include reverse worded (RW) items in order to control acquiescence bias. However, studies have shown that RW items often contaminate the factor structure of the scale by creating one or more method factors. The present study examines an alternative scale format, called the Expanded format, which replaces each response option in the Likert scale with a full sentence. We hypothesized that this format would result in a cleaner factor structure as compared with the Likert format. We tested this hypothesis on three popular psychological scales: the Rosenberg Self-Esteem scale, the Conscientiousness subscale of the Big Five Inventory, and the Beck Depression Inventory II. Scales in both formats showed comparable reliabilities. However, scales in the Expanded format had better (i.e., lower and more theoretically defensible) dimensionalities than scales in the Likert format, as assessed by both exploratory factor analyses and confirmatory factor analyses. We encourage further study and wider use of the Expanded format, particularly when a scale’s dimensionality is of theoretical interest. PMID:27182074
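
    The dimensionality comparison described above relies on exploratory and confirmatory factor analyses of real scale data. As a rough illustration of the exploratory side only, the Python sketch below (using scikit-learn, with simulated placeholder responses rather than the authors' data) compares factor models of increasing dimensionality; note that it treats item scores as continuous, which is a simplification for ordinal Likert-type responses.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        def loglik_by_n_factors(responses, max_factors=3):
            # Average log-likelihood of a factor model as the number of factors
            # grows; a crude gauge of how many factors the items appear to need.
            out = {}
            for k in range(1, max_factors + 1):
                fa = FactorAnalysis(n_components=k, random_state=0)
                fa.fit(responses)
                out[k] = fa.score(responses)
            return out

        # Placeholder data: 200 simulated respondents, 10 items driven by one latent trait.
        rng = np.random.default_rng(0)
        trait = rng.normal(size=(200, 1))
        responses = trait + 0.5 * rng.normal(size=(200, 10))
        print(loglik_by_n_factors(responses))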

  18. Scale criticality in estimating ecosystem carbon dynamics.

    PubMed

    Zhao, Shuqing; Liu, Shuguang

    2014-07-01

    Scaling is central to ecology and Earth system sciences. However, the importance of scale (i.e. resolution and extent) for understanding carbon dynamics across scales is poorly understood and quantified. We simulated carbon dynamics under a wide range of combinations of resolution (nine spatial resolutions of 250 m, 500 m, 1 km, 2 km, 5 km, 10 km, 20 km, 50 km, and 100 km) and extent (57 geospatial extents ranging from 108 to 1,247,034 km²) in the southeastern United States to explore the existence of scale dependence of the simulated regional carbon balance. Results clearly show the existence of a critical threshold resolution for estimating carbon sequestration within a given extent and an error limit. Furthermore, an invariant power-law scaling relationship was found between the critical resolution and the spatial extent: the critical resolution is proportional to A^n (where n is a constant and A is the extent). Scale criticality and the power-law relationship might be driven by the power-law probability distributions of land surface and ecological quantities, including disturbances, at landscape to regional scales. Current practice, which largely ignores scale criticality, might have contributed substantially to difficulties in balancing carbon budgets at regional and global scales. PMID:24323616
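
    The invariant power-law relationship reported above can be written out explicitly (the symbol r_c for the critical resolution and the constant c are notation introduced here for illustration, not values from the paper):

        r_c(A) = c\,A^{n} \quad\Longleftrightarrow\quad \log r_c = n \log A + \log c,

    so that the critical resolution plotted against the spatial extent on log-log axes falls on a straight line of slope n.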

  19. Simple scale interpolator facilitates reading of graphs

    NASA Technical Reports Server (NTRS)

    Fetterman, D. E., Jr.

    1965-01-01

    Simple transparent overlay with interpolation scale facilitates accurate, rapid reading of graph coordinate points. This device can be used for enlarging drawings and locating points on perspective drawings.

  20. Euthanasia attitude; A comparison of two scales

    PubMed Central

    Aghababaei, Naser; Farahani, Hojjatollah; Hatami, Javad

    2011-01-01

    The main purposes of the present study were to see how the term “euthanasia” influences people’s support for or opposition to euthanasia, and to see how euthanasia attitude relates to religious orientation and personality factors. In this study two different euthanasia attitude scales were compared. A total of 197 students were selected to fill out either the Euthanasia Attitude Scale (EAS) or Wasserman’s Attitude Towards Euthanasia scale (ATE scale); the former scale includes the term “euthanasia”, the latter does not. All participants also completed 50 items from the International Personality Item Pool, 16 items from the HEXACO Openness scale, and 14 items from the Religious Orientation Scale-Revised. Results indicated that even though the two groups did not differ in gender, age, education, religiosity, or personality, the mean score on the ATE scale was significantly higher than that on the EAS. Euthanasia attitude was negatively correlated with religiosity and conscientiousness, and positively correlated with psychoticism and openness. It can be concluded that assessing attitudes toward euthanasia with the EAS rather than the ATE scale results in lower levels of opposition to euthanasia. This study raises the question of whether euthanasia attitude scales should contain definitions and concepts of euthanasia or whether they should describe cases of it. PMID:23908751