Science.gov

Sample records for agitation-sedation scale rass

  1. Radar Attitude Sensing System (RASS)

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The initial design and fabrication efforts for a radar attitude sensing system (RASS) are covered. The design and fabrication of the RASS system is being undertaken in two phases, 1B1 and 1B2. The RASS system as configured under phase 1B1 contains the solid state transmitter and local oscillator, the antenna system, the receiving system, and the altitude electronics. RASS employs a pseudo-random coded cw signal and receiver correlation techniques to measure range. The antenna is a planar, phased array, monopulse type, whose beam is electronically steerable using diode phase shifters. The beam steering computer and attitude sensing circuitry are to be included in Phase 1B2 of the program.
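
The pseudo-random coded CW ranging technique mentioned above can be sketched as a toy correlation receiver. This is an illustrative example only, not from the report: the code length, delay, and noise-free echo are all assumptions. Correlating the received echo against cyclically shifted copies of the transmitted code recovers the round-trip delay, and hence the range.

```python
import numpy as np

rng = np.random.default_rng(0)

# Binary pseudo-random code transmitted as a CW phase modulation
# (length and values are illustrative, not from the RASS report).
chip_len = 1023
code = rng.choice([-1.0, 1.0], chip_len)

true_delay = 137                   # assumed round-trip delay, in chips
echo = np.roll(code, true_delay)   # idealized noise-free delayed echo

# Receiver correlation: compare the echo against every cyclic shift of
# the reference code; the correlation peak marks the round-trip delay.
corr = np.array([np.dot(echo, np.roll(code, d)) for d in range(chip_len)])
est_delay = int(np.argmax(corr))
```

With a real received signal the echo would be noisy and non-cyclic, but the peak-picking principle is the same.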

  2. High Throughput Exposure Forecasts for Environmental Chemical Risk (SOT RASS)

    EPA Science Inventory

    Email Announcement to RASS: We have rescheduled for December 11th the webinar regarding progress and advances in exposure assessment, which was cancelled due to the government shutdown in October. Dr. Elaine Hubal, Deputy Director of the Chemical Safety for Sustainability (CSS) n...

  3. The Development of a Plan for the Assessment, Improvement and Deployment of a Radar Acoustic Sounding System (RASS) for Wake Vortex Detection

    NASA Technical Reports Server (NTRS)

    Morris, Philip J.; McLaughlin, Dennis K.; Gabrielson, Thomas B.; Boluriaan, Said

    2004-01-01

    This report describes the activities completed under a grant from the NASA Langley Research Center to develop a plan for the assessment, improvement, and deployment of a Radar Acoustic Sounding System (RASS) for the detection of wake vortices. A brief review is provided of existing alternative instruments for wake vortex detection. This is followed by a review of previous implementations and assessment of a RASS. As a result of this review, it is concluded that the basic features of a RASS have several advantages over other commonly used wake vortex detection and measurement systems. Most important of these features are the good fidelity of the measurements and the potential for all weather operation. To realize the full potential of this remote sensing instrument, a plan for the development of a RASS designed specifically for wake vortex detection and measurement has been prepared. To keep costs to a minimum, this program would start with the development of an inexpensive laboratory-scale version of a RASS system. The new instrument would be developed in several stages, each allowing for a critical assessment of the instrument's potential and limitations. The instrument, in its initial stages of development, would be tested in a controlled laboratory environment. A jet vortex simulator, a prototype version of which has already been fabricated, would be interrogated by the RASS system. The details of the laboratory vortex would be measured using a Particle Image Velocimetry (PIV) system. In the early development stages, the scattered radar signal would be digitized and the signal post-processed to determine how extensively and accurately the RASS could measure properties of the wake vortex. If the initial tests prove to be successful, a real-time, digital signal processing system would be developed as a component of the RASS system. At each stage of the instrument development and testing, the implications of the scaling required for a full-scale instrument would be

  4. Evaluation of the Interpretation of Ceilometer Data with RASS and Radiosonde Data

    NASA Astrophysics Data System (ADS)

    Emeis, Stefan; Schäfer, Klaus; Münkel, Christoph; Friedl, Roman; Suppan, Peter

    2012-04-01

    Since 2006 different remote monitoring methods for determining mixing-layer height have been operated in parallel in Augsburg (Germany). One method is based on the operation of eye-safe commercial mini-lidar systems (ceilometers). The optical backscatter intensities recorded with ceilometers provide information about the range-dependent aerosol concentration; gradient minima within this profile mark the tops of mixed layers. Special software for these ceilometers provides routine retrievals of lower atmospheric layering. A second method, based on sodar observations, detects the height of a turbulent layer characterized by high acoustic backscatter intensities due to thermal fluctuations and a high variance of the vertical velocity component. This information is extended by measurements with a radio-acoustic sounding system (RASS) that directly provides the vertical temperature profile from the detection of acoustic signal propagation and thus temperature inversions that mark atmospheric layers. Ceilometer backscatter information is evaluated by comparison with parallel measurements. Data are presented from 2 years of combined ceilometer and RASS measurements at the same site and from comparison with a nearby (60 km) radiosonde for larger-scale humidity information. This evaluation is designed to make mixing-layer height monitoring from ceilometer data more reliable.
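
The RASS temperature retrieval described above rests on the dependence of sound speed on virtual temperature. A minimal sketch, assuming the standard approximation c_a ≈ 20.047·√Tv (Tv in kelvin, c_a in m/s); the function name and the sample speed profile are illustrative, not data from the study:

```python
import numpy as np

def virtual_temperature(c_a_ms):
    """Virtual temperature (K) from acoustic propagation speed (m/s),
    using the standard approximation c_a = 20.047 * sqrt(Tv)."""
    return (np.asarray(c_a_ms) / 20.047) ** 2

# Hypothetical profile of tracked acoustic speeds at successive heights.
speeds = np.array([340.0, 341.2, 339.5])   # m/s (assumed values)
tv = virtual_temperature(speeds)            # virtual temperature profile, K
```

A temperature inversion then shows up directly as Tv increasing with height in the retrieved profile.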

  5. Resource Allocation Support System (RASS): Summary report of the 1992 pilot study

    SciTech Connect

    Buehring, W.A.; Whitfield, R.G.; Wolsko, T.D.; Kier, P.H.; Absil, M.J.G.; Jusko, M.J.; Sapinski, P.F.

    1993-02-01

    The Resource Allocation Support System (RASS) is a decision-aiding system being developed to assist the US Department of Energy's (DOE's) Office of Waste Management in program and budget decision making. Four pilot studies were conducted at DOE field offices in summer 1992 to evaluate and improve the RASS design. This report summarizes the combined results of the individual field office pilot studies. Results are presented from different perspectives to illustrate the type of information that would be available from RASS. Lessons learned and directions for future RASS developments are also presented.

  6. A study of the effects of an additional sound source on RASS performance

    SciTech Connect

    Coulter, R.L.

    1998-12-31

    The Southern Great Plains (SGP) Cloud and Radiation Testbed (CART) site of the Atmospheric Radiation Measurements (ARM) Program continuously operates a nine-panel 915 MHz wind profiler with Radio Acoustic Sounding System (RASS), measuring wind profiles for 50 minutes and virtual temperature profiles for the remaining 10 minutes during each hour. It is well recognized that one of the principal limits on RASS performance is high horizontal wind speed that moves the acoustic wave front sufficiently to prevent the microwave energy produced by the radar and scattered from the acoustic wave from being reflected back to the radar antenna. With this limitation in mind, the ARM program purchased an additional, portable acoustic source that could be mounted on a small trailer and placed in strategic locations to enhance the RASS performance (when it was not being used for spare parts). A test of the resulting improvement in RASS performance was performed during the period 1995-1997.


  7. Autocorrelation Analysis of Meteorological Data from a RASS Sodar.

    NASA Astrophysics Data System (ADS)

    Pérez, Isidro A.; Ángeles García, M.; Sánchez, M. Luisa; de Torre, Beatriz

    2004-08-01

    Autocorrelation analysis is necessary in persistence studies and identification of cyclical processes. In this paper, autocorrelations of available wind speed and temperature data from a radio acoustic sounding system (RASS) sodar were calculated. This device was placed on flat terrain, and the measuring campaign extended over April 2001. Ten-minute averages were considered from 40 to 500 m in 20-m levels. The direction frequency rose indicated clear, prevailing directions along the east-northeast to west-southwest axis. Analysis of median temperatures revealed that east-northeast advections were 5°C colder than those from the west-southwest. A defined pattern was obtained for both autocorrelations, comprising deterministic and random parts. Noise became more relevant at the higher levels. The deterministic part could be considered as an initial fast-decaying term with the addition of two harmonic functions. The initial decay, linked to fast changes, increased with height for wind speed and decreased for temperature. A diurnal cycle was relevant at intermediate levels for wind speed and at lower temperature levels. The absence of surface influence, added to the horizontal movement associated with stable nighttime stratification and diurnal convection, produced a sharp daily contrast in wind speed at intermediate levels. The influence of the surface decreased with height for temperature. The second cycle was linked to changes in the synoptic pattern and had a 5- to 6-day period. It was more relevant at lower levels for wind speed, and its amplitude decreased with height. For temperature, this second cycle was less significant. Following these assumptions, a model for the autocorrelation function was proposed and its coefficients were calculated by means of a simple method—a multiple linear regression beyond the first day and a simple linear regression for the first 12-h residuals. This model proved satisfactory, especially below 300 m. A rough height parameterization has
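
The proposed autocorrelation model can be illustrated with a small synthetic fit. This sketch assumes a plausible functional form consistent with the abstract's description (a fast-decaying term plus diurnal and synoptic harmonics); the decay time, periods, and coefficients are invented for the example, and the regression here is a plain least-squares fit rather than the paper's two-stage method.

```python
import numpy as np

# Lags at 10-minute resolution over 10 days, in units of days.
lag_days = np.arange(0, 10, 1 / 144)

def design_matrix(tau, decay=0.25, p1=1.0, p2=5.5):
    """Columns: fast-decaying term, diurnal harmonic, synoptic harmonic.
    All parameter values are assumptions for this illustration."""
    return np.column_stack([
        np.exp(-tau / decay),
        np.cos(2 * np.pi * tau / p1),
        np.cos(2 * np.pi * tau / p2),
    ])

# Synthetic autocorrelation built from known coefficients plus noise.
rng = np.random.default_rng(1)
true_coef = np.array([0.6, 0.25, 0.1])
acf = design_matrix(lag_days) @ true_coef \
    + 0.01 * rng.standard_normal(lag_days.size)

# Multiple linear regression (ordinary least squares) recovers them.
coef, *_ = np.linalg.lstsq(design_matrix(lag_days), acf, rcond=None)
```

The recovered coefficients match the known ones closely, which is the sense in which such a regression "explains" the deterministic part of the measured autocorrelation.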

  8. The U.S. geological survey rass-statpac system for management and statistical reduction of geochemical data

    USGS Publications Warehouse

    VanTrump, G., Jr.; Miesch, A.T.

    1977-01-01

    RASS is an acronym for Rock Analysis Storage System and STATPAC, for Statistical Package. The RASS and STATPAC computer programs are integrated into the RASS-STATPAC system for the management and statistical reduction of geochemical data. The system, in its present form, has been in use for more than 9 yr by scores of U.S. Geological Survey geologists, geochemists, and other scientists engaged in a broad range of geologic and geochemical investigations. The principal advantage of the system is the flexibility afforded the user both in data searches and retrievals and in the manner of statistical treatment of data. The statistical programs provide for most types of statistical reduction normally used in geochemistry and petrology, but also contain bridges to other program systems for statistical processing and automatic plotting. © 1977.

  9. COMBINING A MONOSTATIC SODAR WITH A RADAR WIND PROFILER AND RASS IN A POWER PLANT POLLUTION STUDY

    EPA Science Inventory

    A single-beam monostatic sodar, radar wind profiler, radio acoustic sounding system (RASS), and in situ sensors mounted on a 100-m tower were used to acquire meteorological data in the vicinity of a coal burning power plant in a northern Thailand valley. These data were used to ex...

  10. Serial Administration of a Modified Richmond Agitation and Sedation Scale for Delirium Screening

    PubMed Central

    Chester, Jennifer Gonik; Harrington, Mary Beth; Rudolph, James

    2016-01-01

    Objectives: Because delirium is common and frequently unrecognized, this study sought to design a brief screening tool for a core feature of mental status and to validate the instrument as a serial assessment for delirium. Design: Prospective cohort. Setting: Tertiary VA Hospital in New England. Participants: 100 Veterans admitted to the medical service. Methods: A consensus panel developed a modified version of the Richmond Agitation and Sedation Scale (RASS) to capture alterations in consciousness. Upon admission and daily thereafter, patients were screened with the modified RASS and, independently, underwent a comprehensive mental status interview by a geriatric expert, who determined whether the criteria for delirium were met. The sensitivity, specificity, and positive likelihood ratio (LR) of the modified RASS for delirium are reported. Results: As a single assessment, the modified RASS had a sensitivity of 64% and a specificity of 93% for delirium (LR=9.4). When used to detect change, serial modified RASS assessments had a sensitivity of 74% and a specificity of 92% (LR=8.9) in both prevalent and incident delirium. When prevalent cases were excluded, any change in the modified RASS had a sensitivity of 85% and a specificity of 92% for incident delirium (LR=10.2). Conclusion: When administered daily, the modified RASS has good sensitivity and specificity for incident delirium. Given the brevity of the instrument (approximately 15 seconds), consideration should be given to incorporating the modified RASS as a daily screening measure for consciousness and delirium. PMID:22173963
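
The screening metrics reported in studies like this follow directly from the 2x2 confusion table, with the positive likelihood ratio defined as LR+ = sensitivity / (1 - specificity). A small illustration; the counts below are hypothetical values chosen only to reproduce roughly the single-assessment figures (64% sensitivity, 93% specificity), not the study's raw data:

```python
def screening_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and positive likelihood ratio
    from true/false positive and negative counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_pos = sensitivity / (1 - specificity)
    return sensitivity, specificity, lr_pos

# Hypothetical counts: 16 of 25 delirious patients flagged,
# 93 of 100 non-delirious patients correctly passed.
sens, spec, lr = screening_metrics(tp=16, fn=9, tn=93, fp=7)
```

With these assumed counts, LR+ comes out near 9, meaning a positive screen raises the odds of delirium roughly ninefold.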

  11. National Geochemical Database, U.S. Geological Survey RASS (Rock Analysis Storage System) geochemical data for Alaska

    USGS Publications Warehouse

    Bailey, E.A.; Smith, D.B.; Abston, C.C.; Granitto, Matthew; Burleigh, K.A.

    1999-01-01

    This dataset contains geochemical data for Alaska produced by the analytical laboratories of the Geologic Division of the U.S. Geological Survey (USGS). These data represent analyses of stream-sediment, heavy-mineral-concentrate (derived from stream sediment), soil, and organic material samples. Most of the data come from mineral resource investigations conducted in the Alaska Mineral Resource Assessment Program (AMRAP). However, some of the data were produced in support of other USGS programs. The data were originally entered into the in-house Rock Analysis Storage System (RASS) database. The RASS database, which contains over 580,000 data records, was used by the Geologic Division from the early 1970s through the late 1980s to archive geochemical data. Much of the data have been previously published in paper copy USGS Open-File Reports by the submitter or the analyst, but some of the data have never been published. Over the years, USGS scientists recognized several problems with the database. The two primary issues were location coordinates (either incorrect or lacking) and sample media (not precisely identified). This dataset represents a re-processing of the original RASS data to make the data accessible in digital format and more user-friendly. This re-processing consisted of checking the information on sample media and location against the original sample submittal forms, the original analytical reports, and published reports. As necessary, fields were added to the original data to more fully describe the sample preparation methods used and sample medium analyzed. The actual analytical data were not checked in great detail, but obvious errors were corrected.

  12. Identifying pediatric emergence delirium by using the PAED Scale: a quality improvement project.

    PubMed

    Stamper, Matthew J; Hawks, Sharon J; Taicher, Brad M; Bonta, Juliet; Brandon, Debra H

    2014-04-01

    Pediatric emergence delirium is a postoperative phenomenon characterized by aberrant cognitive and psychomotor behavior, which can place the patient and health care personnel at risk for injury. A common tool for identifying emergence delirium is the Level of Consciousness-Richmond Agitation and Sedation Scale (LOC-RASS), although it has not been validated for use in the pediatric population. The Pediatric Anesthesia Emergence Delirium Scale (PAED) is a newly validated tool to measure emergence delirium in children. We chose to implement and evaluate the effectiveness and fidelity of using the PAED Scale to identify pediatric emergence delirium in one eight-bed postanesthesia care unit in comparison with the traditional LOC-RASS. The overall incidence of pediatric emergence delirium found by using the LOC-RASS with a retrospective chart review (3%) was significantly lower than the incidence found by using the LOC-RASS (7.5%) and PAED Scale (11.5%) during the implementation period. Our findings suggest that the PAED Scale may be a more sensitive measure of pediatric emergence delirium, and, in the future, we recommend that health care personnel at our facility use the PAED Scale rather than the LOC-RASS. PMID:24674794

  13. Factor Structure of the Restricted Academic Situation Scale: Implications for ADHD

    ERIC Educational Resources Information Center

    Karama, Sherif; Amor, Leila Ben; Grizenko, Natalie; Ciampi, Antonio; Mbekou, Valentin; Ter-Stepanian, Marina; Lageix, Philippe; Baron, Chantal; Schwartz, George; Joober, Ridha

    2009-01-01

    Background: To study the factor structure of the Restricted Academic Situation Scale (RASS), a psychometric tool used to assess behavior in children with ADHD, 117 boys and 21 girls meeting "Diagnostic and Statistical Manual of Mental Disorders" (4th ed.; "DSM-IV") criteria for ADHD and aged between 6 and 12 years were recruited. Assessments were…

  14. The Effects of Guided Imagery on Patients Being Weaned from Mechanical Ventilation

    PubMed Central

    Spiva, LeeAnna; Hart, Patricia L.; Gallagher, Erin; McVay, Frank; Garcia, Melida; Malley, Karen; Kadner, Marsha; Segars, Angela; Brakovich, Betsy; Horton, Sonja Y.; Smith, Novlette

    2015-01-01

    The study purpose was to assess the effects of guided imagery on sedation levels, sedative and analgesic volume consumption, and physiological responses of patients being weaned from mechanical ventilation. Forty-two patients were selected from two community acute care hospitals. One hospital served as the comparison group and provided routine care (no intervention) while the other hospital provided the guided imagery intervention. The intervention included two sessions, each lasting 60 minutes, offered during morning weaning trials from mechanical ventilation. Measurements were recorded in groups at baseline and 30- and 60-minute intervals and included vital signs and Richmond Agitation-Sedation Scale (RASS) score. Sedative and analgesic medication volume consumption was recorded 24 hours prior to and after the intervention. The guided imagery group had significantly improved RASS scores and reduced sedative and analgesic volume consumption. During the second session, oxygen saturation levels significantly improved compared to the comparison group. The guided imagery group required mechanical ventilation for 4.88 fewer days and had a 1.4-day shorter hospital length of stay than the comparison group. Guided imagery may be a complementary and alternative medicine (CAM) intervention to provide during mechanical ventilation weaning trials. PMID:26640501

  15. Efficacy of the methoxyflurane as bridging analgesia during epidural placement in laboring parturient

    PubMed Central

    Anwari, Jamil S.; Khalil, Laith; Terkawi, Abdullah S.

    2015-01-01

    Background: Establishing an epidural in an agitated laboring woman can be challenging. The ideal pain control technique in such a situation should be effective, fast acting, and short lived. We assessed the efficacy of inhalational methoxyflurane (Penthrox™) analgesia as bridging analgesia for epidural placement. Materials and Methods: Sixty-four laboring women who requested epidural analgesia with a pain score of ≥7 enrolled in an observational study, 56 of whom completed the study. The parturients were instructed to use the device prior to the onset of uterine contraction pain and to stop at the peak of uterine contraction, repeatedly until the epidural had been successfully placed. After each (methoxyflurane inhalation-uterine contraction) cycle, pain, Richmond Agitation Sedation Scale (RASS) score, and nausea and vomiting were evaluated. Maternal and fetal hemodynamics and parturient satisfaction were recorded. Results: The mean baseline pain score was 8.2 ± 1.5, which was reduced to 6.2 ± 2.0 after the first inhalation, with a mean difference of 2.0 ± 1.1 (95% confidence interval 1.7-2.3, P < 0.0001), and continued to decrease significantly over the study period (P < 0.0001). The RASS scores continuously improved after each cycle (P < 0.0001). Only 1 parturient from the cohort became lightly sedated (RASS = −1). Two parturients vomited, and no significant changes in maternal hemodynamics or fetal heart rate were identified during treatment. Sixty-seven percent of the parturients reported very good or excellent satisfaction with treatment. Conclusion: Penthrox™ provides rapid, robust, and satisfactory therapy to control pain and restlessness during epidural placement in laboring parturients. PMID:26543451

  16. [Sedation and analgesia assessment tools in ICU patients].

    PubMed

    Thuong, M

    2008-01-01

    Sedative and analgesic treatments administered to critically ill patients need to be regularly assessed to ensure that predefined goals are achieved while the risk of complications from oversedation is minimized. In most cases, which involve lightly sedated patients, the goal is a calm, cooperative, and pain-free patient, adapted to the ventilator. Recently, eight new bedside scoring systems for monitoring sedation have been developed and tested, mainly for reliability and validity. The choice of a sedation scale measuring level of consciousness could be made between the Ramsay sedation scale, the Richmond Agitation Sedation Scale (RASS), and the Adaptation to The Intensive Care Environment (ATICE) scale. The Behavioral Pain Scale (BPS) is available for behavioral pain assessment. Two of these scales have been tested with strong evidence for their clinimetric properties: ATICE and RASS. Nurses' preference for a convenient tool could be defined by its reliability, its clarity, the variety of sedation and agitation states represented, user friendliness, and speed. Finally, the choice between a simple, easy-to-use scale and a well-defined, complex scale has to be discussed and determined in each unit. Randomized controlled studies are now needed to assess the potential superiority of one scale over others, including evaluation of reliability and compliance with the scale. The usefulness of the BIS in the ICU for lightly sedated patients is limited, mainly because of EMG artefact; subjective scales are more appropriate in this situation. On the other hand, subjective scales are insensitive for detecting oversedation in patients requiring deep sedation. The contribution of the BIS in deeply sedated patients, or in patients under neuromuscular blockade or barbiturates, remains to be proved. Pharmacoeconomic studies are lacking. PMID:18602791

  17. Sensitivity of Scales to Evaluate Change in Symptomatology with Psychostimulants in Different ADHD Subtypes

    PubMed Central

    Grizenko, Natalie; Rodrigues Pereira, Ricardo M.; Joober, Ridha

    2013-01-01

    Objective: To assess the sensitivity of scales (Conners' Global Index Parent and Teacher form [CGI-P, CGI-T], Clinical Global Impression Scale [CGI], Continuous Performance Test [CPT], and Restricted Academic Situation Scale [RASS]) in evaluating improvement in symptomatology with methylphenidate in different Attention Deficit Hyperactivity Disorder (ADHD) subtypes. Method: Four hundred and ninety children (309 with ADHD Combined/Hyperactive [ADHD-CH] and 181 with ADHD Inattentive subtype [ADHD-I]) participated in a two-week double-blind, placebo-controlled crossover methylphenidate trial. Results: CGI-P showed small effect size for ADHD-I and medium effect size for the ADHD-CH subtype. CGI-T showed medium effect size for ADHD-I and large effect size for ADHD-CH subtype. CGI and RASS showed large effect size while CPT showed medium effect size for both subtypes. Conclusion: Acute behavioural assessments by clinicians (CGI, RASS) are better at detecting improvement with medication in all subtypes than parent or teacher reports (CGI-P, CGI-T). CGI-T is better than CGI-P for ADHD-I in detecting change in symptomatology as there is a greater demand for attention at school. PMID:23667362

  18. AGN and QSOs in the eROSITA All-Sky Survey. II. The large-scale structure

    NASA Astrophysics Data System (ADS)

    Kolodzig, Alexander; Gilfanov, Marat; Hütsi, Gert; Sunyaev, Rashid

    2013-10-01

    The four-year X-ray all-sky survey (eRASS) of the eROSITA telescope aboard the Spektrum-Roentgen-Gamma satellite will detect about 3 million active galactic nuclei (AGN) with a median redshift of z ≈ 1 and a typical luminosity of L(0.5-2.0 keV) ~ 10^44 erg s^-1. We show that this unprecedented AGN sample, complemented with redshift information, will supply us with outstanding opportunities for large-scale structure research. For the first time, detailed redshift- and luminosity-resolved studies of the bias factor for X-ray selected AGN will become possible. The eRASS AGN sample will not only improve the redshift- and luminosity resolution of these studies, but will also expand their luminosity range beyond L(0.5-2.0 keV) ~ 10^44 erg s^-1, thus enabling a direct comparison of the clustering properties of luminous X-ray AGN and optical quasars. These studies will dramatically improve our understanding of the AGN environment, triggering mechanisms, the growth of supermassive black holes and their co-evolution with dark matter halos. The eRASS AGN sample will become a powerful cosmological probe. It will enable detecting baryonic acoustic oscillations (BAOs) for the first time with X-ray selected AGN. With the data from the entire extragalactic sky, BAO will be detected at a ≳10σ confidence level in the full redshift range and with ~8σ confidence in the 0.8 < z < 2.0 range, which is currently not covered by any existing BAO surveys. To exploit the full potential of the eRASS AGN sample, photometric and spectroscopic surveys of large areas and a sufficient depth will be needed.

  19. The SRG/eROSITA All-Sky Survey: A new era of large-scale structure studies with AGN

    NASA Astrophysics Data System (ADS)

    Kolodzig, Alexander; Gilfanov, Marat; Hütsi, Gert; Sunyaev, Rashid

    2015-08-01

    The four-year X-ray All-Sky Survey (eRASS) of the eROSITA telescope aboard the Spektrum-Roentgen-Gamma (SRG) satellite will detect about 3 million active galactic nuclei (AGN) with a median redshift of z ~ 1 and typical luminosity of L(0.5-2.0 keV) ~ 10^44 erg/s. We demonstrate that this unprecedented AGN sample, complemented with redshift information, will supply us with outstanding opportunities for large-scale structure (LSS) studies. We show that with this sample of X-ray selected AGN, it will become possible for the first time to perform detailed redshift- and luminosity-resolved studies of AGN clustering. This will enable us to put strong constraints on different AGN triggering/fueling models as a function of AGN environment, which will dramatically improve our understanding of supermassive black hole growth and its correlation with the co-evolving LSS. Further, the eRASS AGN sample will become a powerful cosmological probe. We demonstrate for the first time that, given the breadth and depth of eRASS, it will become possible to convincingly detect baryonic acoustic oscillations (BAOs) with ~8σ confidence in the 0.8 < z < 2.0 range, currently not covered by any existing BAO survey. Finally, we discuss the requirements for follow-up missions and demonstrate that in order to fully exploit the potential of the eRASS AGN sample, photometric and spectroscopic surveys of large areas and a sufficient depth will be needed.

  20. Scales

    MedlinePlus

    Scales are a visible peeling or flaking of outer skin layers. These layers are called the stratum ... Scales may be caused by dry skin, certain inflammatory skin conditions, or infections. Eczema , ringworm , and psoriasis ...

  1. Scale

    ERIC Educational Resources Information Center

    Schaffhauser, Dian

    2009-01-01

    The common approach to scaling, according to Christopher Dede, a professor of learning technologies at the Harvard Graduate School of Education, is to jump in and say, "Let's go out and find more money, recruit more participants, hire more people. Let's just keep doing the same thing, bigger and bigger." That, he observes, "tends to fail, and fail…

  2. Scales

    ScienceCinema

    Murray Gibson

    2010-01-08

    Musical scales involve notes that, sounded simultaneously (chords), sound good together. The result is the left brain meeting the right brain — a Pythagorean interval of overlapping notes. This synergy would suggest less difference between the working of the right brain and the left brain than common wisdom would dictate. The pleasing sound of harmony comes when two notes share a common harmonic, meaning that their frequencies are in simple integer ratios, such as 3/2 (G/C) or 5/4 (E/C).

  3. Scales

    SciTech Connect

    Murray Gibson

    2007-04-27

    Musical scales involve notes that, sounded simultaneously (chords), sound good together. The result is the left brain meeting the right brain — a Pythagorean interval of overlapping notes. This synergy would suggest less difference between the working of the right brain and the left brain than common wisdom would dictate. The pleasing sound of harmony comes when two notes share a common harmonic, meaning that their frequencies are in simple integer ratios, such as 3/2 (G/C) or 5/4 (E/C).

  4. Rethinking Critical Care: Decreasing Sedation, Increasing Delirium Monitoring, and Increasing Patient Mobility

    PubMed Central

    Bassett, Rick; Adams, Kelly McCutcheon; Danesh, Valerie; Groat, Patricia M.; Haugen, Angie; Kiewel, Angi; Small, Cora; Van-Leuven, Mark; Venus, Sam; Ely, E. Wesley

    2016-01-01

    Background/Methods: Sedation management, delirium monitoring, and mobility programs are key features of recent evidence-based critical care guidelines and care bundles, yet implementation in the intensive care unit (ICU) remains highly variable. The Institute for Healthcare Improvement's Rethinking Critical Care (IHI-RCC) program was established to reduce harm to critically ill patients by decreasing sedation, increasing monitoring and management of delirium, and increasing patient mobility. It involved one live case study and five iterations of an in-person seminar over 33 months (March 2011 to November 2013) that emphasized interdisciplinary teamwork and culture change. IHI-RCC has involved over 650 participants from 215 organizations. This report describes a convenience sample of five participating organizations chosen in advance of knowing their clinical outcomes. Results: Qualitative descriptions of the changes tested at each of the five case study sites are provided, demonstrating the necessary teamwork, improved processes, and increased reliability of daily work. These sites all worked to implement the Richmond Agitation Sedation Scale (RASS) and Confusion Assessment Method for the ICU (CAM-ICU) within the context of a bundled interventional care plan; they then tracked length of stay in the ICU and duration of mechanical ventilation, which are reported. Discussion: Changing critical care practices requires an interdisciplinary approach addressing cultural, psychological, and practical issues. The IHI-RCC program is based on testing changes on a small scale, building highly effective interdisciplinary rounds, frequent data feedback to the frontline, and use of in-person demonstrations. Key lessons are emerging about effectively caring for critically ill patients in light of data about the harm of over-sedation, unrecognized and unaddressed delirium, and immobility. PMID:25976892

  5. Pupillary reflex measurement predicts insufficient analgesia before endotracheal suctioning in critically ill patients

    PubMed Central

    2013-01-01

    Introduction: This study aimed to evaluate the pupillary dilatation reflex (PDR) during a tetanic stimulation to predict insufficient analgesia before nociceptive stimulation in the intensive care unit (ICU). Methods: In this prospective non-interventional study in a surgical ICU of a university hospital, PDR was assessed during tetanic stimulation (of 10, 20 or 40 mA) immediately before 40 endotracheal suctionings in 34 deeply sedated patients. Insufficient analgesia during endotracheal suction was defined by an increase of ≥1 point on the Behavioral Pain Scale (BPS). Results: A total of 27 (68%) patients had insufficient analgesia. PDR with 10 mA, 20 mA and 40 mA stimulation was higher in patients with insufficient analgesia (P <0.01). The threshold values of the pupil diameter variation during a 10, 20 and 40 mA tetanic stimulation to predict insufficient analgesia during endotracheal suctioning were 1, 5 and 13% respectively. The areas (95% confidence interval) under the receiver operating curve were 0.70 (0.54 to 0.85), 0.78 (0.61 to 0.91) and 0.85 (0.721 to 0.954) with 10, 20 and 40 mA tetanic stimulations respectively. A sensitivity analysis using the Richmond Agitation Sedation Scale (RASS) confirmed the results. The 40 mA stimulation was poorly tolerated. Conclusions: In deeply sedated mechanically ventilated patients, a pupil diameter variation ≥5% during a 20 mA tetanic stimulation was highly predictive of insufficient analgesia during endotracheal suction. A 40 mA tetanic stimulation is painful and should not be used. PMID:23883683

  6. Level of Agitation of Psychiatric Patients Presenting to an Emergency Department

    PubMed Central

    Zun, Leslie S.; Downey, La Vonne A.

    2008-01-01

    Objectives: The primary purpose of this study was to determine the level of agitation that psychiatric patients exhibit upon arrival to the emergency department. The secondary purpose was to determine whether the level of agitation changed over time depending upon whether the patient was restrained or unrestrained. Method: An observational study enrolling a convenience sample of 100 patients presenting with a psychiatric complaint was planned, in order to obtain 50 chemically and/or physically restrained and 50 unrestrained patients. The study was performed in summer 2004 in a community, inner-city, level 1 emergency department with 45,000 visits per year. The level of patient agitation was measured using the Agitated Behavior Scale (ABS) and the Richmond Agitation-Sedation Scale (RASS) upon arrival and every 30 minutes over a 3-hour period. The inclusion criteria allowed entry of any patient who presented to the emergency department with a psychiatric complaint thought to be unrelated to physical illness. Patients who were restrained for nonbehavioral reasons or were medically unstable were excluded. Results: 101 patients were enrolled in the study. Of that total, 53 patients were not restrained, 47 patients were restrained, and 1 had incomplete data. There were no differences in gender, race, or age between the 2 groups. Upon arrival, 2 of the 47 restrained patients were rated severely agitated on the ABS, and 13 of 47 restrained patients were rated combative on the RASS. There was a statistical difference (p = .01) between the groups on both scales from time 0 to time 90 minutes. Scores on the agitation scales decreased over time in both groups. One patient in the unrestrained group became unarousable during treatment. Conclusion: This study demonstrated that patients who were restrained were more agitated than those who were not, and that agitation levels in both groups decreased over time. Some restrained patients did not meet combativeness or severe agitation…

  7. Nearby, low-mass Planck clusters and the extension of scaling relations

    NASA Astrophysics Data System (ADS)

    Sun, Ming

    2013-10-01

    In the last several years, tremendous progress from SZ surveys such as Planck, SPT and ACT has made SZ observations an important means for studies of the ICM and cluster cosmology. While ground-based SZ telescopes are generally sensitive only to, and focused on, rich clusters, Planck can detect poor clusters and even groups at z<0.05, as shown by the 2013 Planck cluster catalog. We select a sample of 26 poor clusters and groups detected by Planck at z<0.05. Six of them have no XMM or Chandra data, and two of these six clusters are not even detected in the ROSAT All-Sky Survey (RASS). We propose to observe these six systems with XMM to obtain a complete sample of Planck low-mass systems and extend the Planck scaling relations to a mass scale 4-5 times lower than achieved before.

  8. Boundary layer dynamics in a small shallow valley near the Alps (ScaleX campaign)

    NASA Astrophysics Data System (ADS)

    Zeeman, Matthias; Adler, Bianca; Banerjee, Tirtha; Brugger, Peter; De Roo, Frederik; Emeis, Stefan; Mauder, Matthias; Schäfer, Klaus; Wolf, Benjamin; Schmid, Hans Peter

    2016-04-01

    Mountainous terrain presents a challenge for the experimental determination of exchange processes. The Alps modulate synoptic flow and introduce circulation systems that reach into the forelands. In addition, the Prealpine landscape is itself heterogeneous, dominated by patches of forestry on the slopes and agriculture on flat areas. That combined complexity is manifest in atmospheric circulations at multiple scales. We investigated the diurnal evolution of the atmospheric boundary layer with focus on the connection between surface exchange processes and atmospheric circulations at the regional to local scale. The experiment is part of an ongoing, multi-disciplinary study on scale dependencies in the distribution of energy and matter (ScaleX) at the TERENO Prealpine observatory in Germany. We observed vertical profiles of wind speed and air temperature up to 1000 m above ground during June and July 2015 in a small shallow Prealpine valley in Bavaria, Germany. Wind vectors and temperature were observed using ground-based optical, acoustic and radiometric remote sensing techniques. Spatial patterns in wind speed and direction were determined using eddy covariance systems, 3D Doppler LIDAR and acoustic sounding (SODAR/RASS). Three Doppler LIDAR units were configured to form a virtual tower at the beam intersect. Temperature profiles were observed using radio-acoustic sounding (RASS) and a microwave radiometer (HATPRO). The temporal and spatial resolutions of the resulting vertical profiles were 1-15 min and 3-100 m, respectively. The observed variability in wind vectors and stability shows evidence of the link between flow phenomena at micro- to mesoscale and local biosphere-atmosphere exchange processes. We present first results and discuss the predictability of the impact of local and regional (alpine) landscape features on flow and structures in the atmospheric boundary layer.

  9. Large-scale structure studies with the unresolved CXB - Challenges from XBOOTES

    NASA Astrophysics Data System (ADS)

    Kolodzig, Alexander; Gilfanov, Marat; Hütsi, Gert; Sunyaev, Rashid

    2015-08-01

    The scientific significance of large-scale structure (LSS) studies with X-ray surveys can be greatly enhanced by analysis of the surface brightness fluctuations of the unresolved cosmic X-ray background (CXB). It enables us to study the clustering properties of source populations that are otherwise inaccessible to clustering studies of resolved sources in current X-ray surveys, due to the lack of deep-wide surveys and to selection effects. We have conducted the most accurate measurement to date of the brightness fluctuations of the unresolved CXB in the 0.5-2.0 keV band for angular scales of ≲17′, using XBOOTES, currently the largest contiguous Chandra survey (~9 deg²). We find that on small angular scales (≲2′) the observed power spectrum of the brightness fluctuations is broadly consistent with the conventional AGN clustering model, although with a ~30% deviation. This deviation demonstrates the currently poor knowledge of the clustering properties of AGN within their dark matter halos (DMH). We provide possible explanations for this deviation. For angular scales of ≳2′ we measure a significant excess, up to an order of magnitude above the standard AGN clustering model. We demonstrate that an instrumental origin can be excluded, but also that the excess cannot be explained by any known X-ray source population, based on its clustering strength and the shape of its energy spectrum. We speculate that the excess is caused by more than one type of source and that its dominant component appears to be of extragalactic origin. Finally, we make predictions for how the eROSITA all-sky survey (eRASS) will advance studies of the unresolved CXB.
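    The central quantity in such fluctuation studies is the angular power spectrum of surface-brightness fluctuations. A toy sketch, assuming a white-noise mock image rather than real CXB data, shows the basic FFT-and-azimuthal-average recipe:

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.normal(0, 1, (128, 128))  # mock surface-brightness map (white noise)

# 2-D power spectrum, normalized so its mean equals the pixel variance
f = np.fft.fftshift(np.fft.fft2(img))
power2d = np.abs(f) ** 2 / img.size

# azimuthal average over rings of constant spatial frequency
y, x = np.indices(power2d.shape)
r = np.hypot(x - 64, y - 64).astype(int)
spectrum = np.bincount(r.ravel(), weights=power2d.ravel()) / np.bincount(r.ravel())

# for white noise the spectrum is flat at the pixel variance (~1 here)
print(round(float(spectrum[5:60].mean()), 1))
```

Real analyses additionally correct for the instrument point-spread function, exposure variations and masked sources, which are omitted here.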

  10. Communication Needs of Critical Care Patients Who Are Voiceless.

    PubMed

    Koszalinski, Rebecca S; Tappen, Ruth M; Hickman, Candice; Melhuish, Tracey

    2016-08-01

    Voice is crucial for communication in all healthcare settings. Evidence-based care highlights the need for clear communication. Clear communication methods must be applied when caring for special populations in order to assess pain effectively. Communication efforts also should be offered to patients who are in end-of-life care and would like to make independent decisions. A computer communication application was offered to patients in intensive care/critical care units in three hospitals in South Florida. Inclusion criteria included the age of 18 years or older, Richmond Agitation Sedation Scale between -1 and +1, ability to read and write English, and willingness to use the computer application. Exclusion criteria included inability to read and write English, agitation as defined by the Richmond Agitation Sedation Scale, and any patient on infection isolation protocol. Four qualitative themes were revealed, which directly relate to two published evidence-based guidelines. These are the End of Life Care and Decision Making Evidence-Based Care Guidelines and the Pain Assessment in Special Populations Guidelines. This knowledge is important for developing effective patient-healthcare provider communication. PMID:27315366

  11. Closed-loop control for cardiopulmonary management and intensive care unit sedation using digital imaging

    NASA Astrophysics Data System (ADS)

    Gholami, Behnood

    assessed by expert and non-expert human examiners. Next, we consider facial expression recognition using an unsupervised learning framework. We show that different facial expressions reside on distinct subspaces if the manifold is unfolded. In particular, semi-definite embedding is used to reduce the dimensionality and unfold the manifold of facial images. Next, generalized principal component analysis is used to fit a series of subspaces to the data points and associate each data point to a subspace. Data points that belong to the same subspace are shown to belong to the same facial expression. In clinical intensive care unit practice, sedative/analgesic agents are titrated to achieve a specific level of sedation. The level of sedation is currently based on clinical scoring systems. Examples include the motor activity assessment scale (MAAS), the Richmond agitation-sedation scale (RASS), and the modified Ramsay sedation scale (MRSS). In general, the goal of the clinician is to find the drug dose that maintains the patient at a sedation score corresponding to a moderately sedated state. In this research, we use pharmacokinetic and pharmacodynamic modeling to find an optimal drug dosing control policy to drive the patient to a desired MRSS score. Atrial fibrillation, a cardiac arrhythmia characterized by unsynchronized electrical activity in the atrial chambers of the heart, is a rapidly growing problem in modern societies. One treatment, referred to as catheter ablation, targets specific parts of the left atrium for radio frequency ablation using an intracardiac catheter. As a first step toward the general solution of the computer-assisted segmentation of the left atrial wall, we use shape learning and shape-based image segmentation to identify the endocardial wall of the left atrium in delayed-enhancement magnetic resonance images. (Abstract shortened by UMI.)
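    The dosing-control problem above rests on a pharmacokinetic model of drug concentration. A minimal sketch, assuming a one-compartment model under constant infusion (all parameter values illustrative, not tied to any particular sedative or to the dissertation's model):

```python
# One-compartment pharmacokinetic sketch with constant infusion,
# integrated by forward Euler. Parameters are illustrative only.
def simulate(ke=0.1, infusion=2.0, V=10.0, dt=0.1, t_end=60.0):
    """Concentration c obeys dc/dt = infusion/V - ke*c."""
    c, t, out = 0.0, 0.0, []
    while t <= t_end:
        out.append(c)
        c += dt * (infusion / V - ke * c)
        t += dt
    return out

traj = simulate()
# concentration approaches the steady state infusion/(V*ke) = 2.0
print(round(traj[-1], 2))
```

A dosing controller would wrap a loop like this, adjusting `infusion` until the predicted effect (via a pharmacodynamic link) matches the target sedation score.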

  12. Impaired Arousal in Older Adults is Associated with Prolonged Hospital Stay and Discharge to Skilled Nursing Facility

    PubMed Central

    Yevchak, Andrea M.; Han, Jin Ho; Doherty, Kelly; Archambault, Elizabeth G.; Kelly, Brittany; Chandrasekhar, Rameela; Ely, E. Wesley; Rudolph, James L.

    2015-01-01

    Background Poor cognitive function is associated with negative consequences across settings of care, but research instruments are arduous for routine clinical implementation. This study examined the association between impaired arousal, as measured using an ultra-brief screen, and risk of two adverse clinical outcomes: hospital length of stay and discharge to a skilled nursing facility (SNF). Design, Setting, & Participants A secondary data analysis was conducted using two separate groups of medical ward patients: patients 60 years and older at a VA medical center in the northeast (N=1,487; 2010-2012) and patients 65 years and older at a large tertiary care, university-based medical center in the southeastern United States (N=669; 2007-2013). Measurements The impact of impaired arousal, defined by the Richmond Agitation Sedation Scale (RASS) as anything other than “awake and alert,” was determined using Cox proportional hazards regression for time to hospital discharge and logistic regression for discharge to a SNF. Hazard ratios (HR) and odds ratios (OR) with their 95% confidence intervals (CI) are reported, respectively. Both models were adjusted for age, sex, and dementia. Results The 2,156 total patients included in these groups had a mean age of 76 years; 16.4% in group one and 28.5% in group two had impaired arousal. In the first group, patients with normal arousal spent an average of 5.9 days (SD 6.2) in the hospital, while those with impaired arousal spent 8.5 days (SD 9.2). On any given day, patients with impaired arousal had a 27% lower chance of being discharged (adjusted HR 0.73; 95% CI 0.63-0.84). In the second group, individuals with normal arousal spent 3.8 (SD 4.1) days in the hospital compared with 4.7 (SD 4.6) for those with impaired arousal, indicating a 21% lower chance of being discharged (adjusted HR 0.79; 95% CI 0.66-0.95). With regard to risk of discharge to SNF, those with impaired arousal in group 1 had a 65% higher risk than
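    The published HRs and ORs come from adjusted Cox and logistic models; as a simpler illustration of the effect measure, the sketch below computes an unadjusted odds ratio with a Wald confidence interval from a hypothetical 2x2 table (all counts made up, not the study's data):

```python
import math

# Hypothetical 2x2 table: rows = impaired / normal arousal,
# columns = discharged to SNF yes / no. Counts are illustrative only.
impaired_snf, impaired_no = 40, 60
normal_snf, normal_no = 100, 300

odds_ratio = (impaired_snf / impaired_no) / (normal_snf / normal_no)

# Wald 95% CI on the log-odds-ratio scale
log_se = math.sqrt(1/impaired_snf + 1/impaired_no + 1/normal_snf + 1/normal_no)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * log_se)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * log_se)

print(f"OR = {odds_ratio:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```

Adjustment for age, sex, and dementia, as in the paper, would require fitting a regression model rather than a single table.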

  13. Maslowian Scale.

    ERIC Educational Resources Information Center

    Falk, C.; And Others

    The development of the Maslowian Scale, a method of revealing a picture of one's needs and concerns based on Abraham Maslow's levels of self-actualization, is described. This paper also explains how the scale is supported by the theories of L. Kohlberg, C. Rogers, and T. Rusk. After a literature search, a list of statements was generated…

  14. Activity Scale.

    ERIC Educational Resources Information Center

    Kerpelman, Larry C.; Weiner, Michael J.

    This twenty-four item scale assesses students' actual and desired political-social activism in terms of physical participation, communication activities, and information-gathering activities. About ten minutes are required to complete the instrument. The scale is divided into two subscales. The first twelve items (ACT-A) question respondents on…

  15. Scaling Rules!

    NASA Astrophysics Data System (ADS)

    Malkinson, Dan; Wittenberg, Lea

    2015-04-01

    Scaling is a fundamental issue in any spatially or temporally hierarchical system. Defining domains and identifying the boundaries of the hierarchical levels may be a challenging task. Hierarchical systems may be broadly classified into two categories: compartmental and continuous ones. Examples of compartmental systems include governments, companies, computerized networks, biological taxonomy and others. In such systems the compartments, and hence the various levels and their constituents, are easily delineated. In contrast, in continuous systems, such as geomorphological, ecological or climatological ones, detecting the boundaries of the various levels may be difficult. We propose that in continuous hierarchical systems a transition from one functional scale to another is associated with increased system variance. Crossing from the domain of one scale to the domain of another is associated with a transition or substitution of the dominant drivers operating in the system. Accordingly, we suggest that crossing this boundary is characterized by increased variance, or a "variance leap", which stabilizes until crossing into the next domain or hierarchy level. To assess this, we compiled sediment yield data from studies conducted at various spatial scales and in different environments. The studies were partitioned into ones conducted in undisturbed environments and those conducted in disturbed environments, specifically by wildfires. The studies were conducted in plots as small as 1 m² and in watersheds larger than 555,000 ha. Regressing sediment yield against plot size, and incrementally calculating the variance in the systems, enabled us to detect domains where variance values were exceedingly high. We propose that at these domains scale-crossing occurs, and the systems transition from one hierarchical level to another. Moreover, the degree of the "variance leaps" characterizes the degree of connectivity among the scales.
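    The "variance leap" idea can be sketched numerically: simulate yields with a variance jump at a known scale, compute variance in a sliding window of increasing plot size, and locate the largest jump. The data, window size and domain boundary below are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sediment-yield data: low scatter in the small-plot domain,
# a variance leap once observations cross into the watershed domain.
area = np.sort(rng.uniform(0, 10, 200))   # log10 area, arbitrary units
yields = np.where(area < 5,
                  rng.normal(10, 1, 200),   # small-plot domain: SD = 1
                  rng.normal(10, 6, 200))   # watershed domain:  SD = 6

window = 30
var_by_window = np.array([yields[i:i + window].var()
                          for i in range(len(yields) - window)])
leap = int(np.argmax(np.diff(var_by_window)))  # biggest variance jump
print("variance leap near log-area =", round(float(area[leap + window]), 2))
```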

  16. RASS SOT Webinar - Nonmonotonic Dose Response Curves (NMDRCs) Common after Estrogen or Androgen Signaling Pathway Disruption

    EPA Science Inventory

    The presentation provides the listening and viewing audience with Dr. Gray's scientific findings on the relevance of nonmonotonic dose-response curves (NMDRCs) to the risk assessment of estrogenic and androgenic chemicals.

  17. Scaling satan.

    PubMed

    Wilson, K M; Huff, J L

    2001-05-01

    The influence on social behavior of beliefs in Satan and the nature of evil has received little empirical study. Elaine Pagels (1995) in her book, The Origin of Satan, argued that Christians' intolerance toward others is due to their belief in an active Satan. In this study, more than 200 college undergraduates completed the Manitoba Prejudice Scale and the Attitudes Toward Homosexuals Scale (B. Altemeyer, 1988), as well as the Belief in an Active Satan Scale, developed by the authors. The Belief in an Active Satan Scale demonstrated good internal consistency and temporal stability. Correlational analyses revealed that for the female participants, belief in an active Satan was directly related to intolerance toward lesbians and gay men and intolerance toward ethnic minorities. For the male participants, belief in an active Satan was directly related to intolerance toward lesbians and gay men but was not significantly related to intolerance toward ethnic minorities. Results of this research showed that it is possible to meaningfully measure belief in an active Satan and that such beliefs may encourage intolerance toward others. PMID:11577971

  18. Nuclear scales

    SciTech Connect

    Friar, J.L.

    1998-12-01

    Nuclear scales are discussed from the nuclear physics viewpoint. The conventional nuclear potential is characterized as a black box that interpolates nucleon-nucleon (NN) data, while being constrained by the best possible theoretical input. The latter consists of the longer-range parts of the NN force (e.g., OPEP, TPEP, the π-γ force), which can be calculated using chiral perturbation theory and gauged using modern phase-shift analyses. The shorter-range parts of the force are effectively parameterized by moments of the interaction that are independent of the details of the force model, in analogy to chiral perturbation theory. Results of GFMC calculations in light nuclei are interpreted in terms of fundamental scales, which are in good agreement with expectations from chiral effective field theories. Problems with spin-orbit-type observables are noted.

  19. Multidimensional scaling

    PubMed Central

    Papesh, Megan H.; Goldinger, Stephen D.

    2012-01-01

    The concept of similarity, or a sense of "sameness" among things, is pivotal to theories in the cognitive sciences and beyond. Similarity, however, is a difficult thing to measure. Multidimensional scaling (MDS) is a tool by which researchers can obtain quantitative estimates of similarity among groups of items. More formally, MDS refers to a set of statistical techniques that are used to reduce the complexity of a data set, permitting visual appreciation of the underlying relational structures contained therein. The current paper provides an overview of MDS. We discuss key aspects of performing this technique, such as methods that can be used to collect similarity estimates, analytic techniques for treating proximity data, and various concerns regarding interpretation of the MDS output. MDS analyses of two novel data sets are also included, highlighting in step-by-step fashion how MDS is performed, and key issues that may arise during analysis. PMID:23359318
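    The core MDS computation can be sketched in its classical (Torgerson) form, which recovers coordinates from a Euclidean distance matrix by double-centering and eigendecomposition. The points below are synthetic; nonmetric variants typically used for psychological proximity data are iterative and not shown:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed a distance matrix D in k dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered squared distances
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]         # largest k eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Four synthetic points forming a 3x4 rectangle in the plane
pts = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0], [3.0, 4.0]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

X = classical_mds(D)
D_hat = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(np.allclose(D, D_hat))  # True: distances recovered up to rotation
```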

  20. Scaling: An Items Module

    ERIC Educational Resources Information Center

    Tong, Ye; Kolen, Michael J.

    2010-01-01

    "Scaling" is the process of constructing a score scale that associates numbers or other ordered indicators with the performance of examinees. Scaling typically is conducted to aid users in interpreting test results. This module describes different types of raw scores and scale scores, illustrates how to incorporate various sources of information…

  1. Occupational Cohort Time Scales

    PubMed Central

    Roth, H. Daniel

    2015-01-01

    Purpose: This study explores how highly correlated time variables (occupational cohort time scales) contribute to confounding and ambiguity of interpretation. Methods: Occupational cohort time scales were identified and organized through simple equations of three time scales (relational triads) and the connections between these triads (time scale web). The behavior of the time scales was examined when constraints were imposed on variable ranges and interrelationships. Results: Constraints on a time scale in a triad create high correlations between the other two time scales. These correlations combine with the connections between relational triads to produce association paths. High correlation between time scales leads to ambiguity of interpretation. Conclusions: Understanding the properties of occupational cohort time scales, their relational triads, and the time scale web is helpful in understanding the origins of otherwise obscure confounding bias and ambiguity of interpretation. PMID:25647318
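    The paper's central point, that constraining one time scale in a relational triad induces correlation between the other two, is easy to demonstrate numerically. A sketch with a hypothetical cohort (all distributions made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Relational triad: age_now = age_at_hire + years_since_hire.
age_at_hire = rng.uniform(20, 40, 5000)
years_since = rng.uniform(0, 30, 5000)
age_now = age_at_hire + years_since

# Unconstrained, hire age and time since hire are independent:
r_all = np.corrcoef(age_at_hire, years_since)[0, 1]

# Constrain the third scale: keep only workers currently aged 45-50.
mask = (age_now >= 45) & (age_now <= 50)
r_constrained = np.corrcoef(age_at_hire[mask], years_since[mask])[0, 1]

print(round(float(r_all), 2), round(float(r_constrained), 2))
```

The second correlation is strongly negative: within the constrained band, a later hire age forces a shorter time since hire, which is the confounding mechanism the paper describes.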

  2. On Quantitative Rorschach Scales.

    ERIC Educational Resources Information Center

    Haggard, Ernest A.

    1978-01-01

    Two types of quantitative Rorschach scales are discussed: first, those based on the response categories of content, location, and the determinants, and second, global scales based on the subject's responses to all ten stimulus cards. (Author/JKS)

  3. Small Scale Organic Techniques

    ERIC Educational Resources Information Center

    Horak, V.; Crist, DeLanson R.

    1975-01-01

    Discusses the advantages of using small scale experimentation in the undergraduate organic chemistry laboratory. Describes small scale filtration techniques as an example of a semi-micro method applied to small quantities of material. (MLH)

  4. Cross-scale morphology

    USGS Publications Warehouse

    Allen, Craig R.; Holling, Crawford S.; Garmestani, Ahjond S.

    2013-01-01

    The scaling of physical, biological, ecological and social phenomena is a major focus of efforts to develop simple representations of complex systems. Much of the attention has been on discovering universal scaling laws that emerge from simple physical and geometric processes. However, there are regular patterns of departures both from those scaling laws and from continuous distributions of attributes of systems. Those departures often demonstrate the development of self-organized interactions between living systems and physical processes over narrower ranges of scale.

  5. Extreme Scale Visual Analytics

    SciTech Connect

    Wong, Pak C.; Shen, Han-Wei; Pascucci, Valerio

    2012-05-08

    Extreme-scale visual analytics (VA) is about applying VA to extreme-scale data. The articles in this special issue examine advances related to extreme-scale VA problems, their analytical and computational challenges, and their real-world applications.

  6. The Positivity Scale

    ERIC Educational Resources Information Center

    Caprara, Gian Vittorio; Alessandri, Guido; Eisenberg, Nancy; Kupfer, A.; Steca, Patrizia; Caprara, Maria Giovanna; Yamaguchi, Susumu; Fukuzawa, Ai; Abela, John

    2012-01-01

    Five studies document the validity of a new 8-item scale designed to measure "positivity," defined as the tendency to view life and experiences with a positive outlook. In the first study (N = 372), the psychometric properties of Positivity Scale (P Scale) were examined in accordance with classical test theory using a large number of college…

  7. Reading Graduated Scales.

    ERIC Educational Resources Information Center

    Hall, Lucien T., Jr.

    1982-01-01

    Ways of teaching students to read scales are presented as process instructions that are probably overlooked or taken for granted by most instructors. Scales on such devices as thermometers, rulers, spring scales, speedometers, and thirty-meter tape are discussed. (MP)

  8. Civilian PTSD Scales

    ERIC Educational Resources Information Center

    Shapinsky, Alicia C.; Rapport, Lisa J.; Henderson, Melinda J.; Axelrod, Bradley N.

    2005-01-01

    Strong associations between civilian posttraumatic stress disorder (PTSD) scales and measures of general psychological distress suggest that the scales are nonspecific to PTSD. Three common PTSD scales were administered to 122 undergraduates who had experienced an emotionally salient, nontraumatic event: a college examination. Results indicated…

  9. Manual of Scaling Methods

    NASA Technical Reports Server (NTRS)

    Bond, Thomas H. (Technical Monitor); Anderson, David N.

    2004-01-01

    This manual reviews the derivation of the similitude relationships believed to be important to ice accretion and examines ice-accretion data to evaluate their importance. Both size scaling and test-condition scaling methods employing the resulting similarity parameters are described, and experimental icing tests performed to evaluate scaling methods are reviewed with results. The material included applies primarily to unprotected, unswept geometries, but some discussion of how to approach other situations is included as well. The studies given here and scaling methods considered are applicable only to Appendix-C icing conditions. Nearly all of the experimental results presented have been obtained in sea-level tunnels. Recommendations are given regarding which scaling methods to use for both size scaling and test-condition scaling, and icing test results are described to support those recommendations. Facility limitations and size-scaling restrictions are discussed. Finally, appendices summarize the air, water and ice properties used in NASA scaling studies, give expressions for each of the similarity parameters used and provide sample calculations for the size-scaling and test-condition scaling methods advocated.

  10. Scale and scaling in agronomy and environmental sciences

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Scale is of paramount importance in environmental studies, engineering, and design. The unique course covers the following topics: scale and scaling, methods and theories, scaling in soils and other porous media, scaling in plants and crops; scaling in landscapes and watersheds, and scaling in agro...

  11. Salzburger State Reactance Scale (SSR Scale)

    PubMed Central

    2015-01-01

    Abstract. This paper describes the construction and empirical evaluation of an instrument for measuring state reactance, the Salzburger State Reactance (SSR) Scale. The results of a confirmatory factor analysis supported a hypothesized three-factor structure: experience of reactance, aggressive behavioral intentions, and negative attitudes. Correlations with divergent and convergent measures support the validity of this structure. The SSR Subscales were strongly related to the other state reactance measures. Moreover, the SSR Subscales showed modest positive correlations with trait measures of reactance. The SSR Subscales correlated only slightly or not at all with neighboring constructs (e.g., autonomy, experience of control). The only exception was fairness scales, which showed moderate correlations with the SSR Subscales. Furthermore, a retest analysis confirmed the temporal stability of the scale. Suggestions for further validation of this questionnaire are discussed. PMID:27453806

  12. The positivity scale.

    PubMed

    Caprara, Gian Vittorio; Alessandri, Guido; Eisenberg, Nancy; Kupfer, A; Steca, Patrizia; Caprara, Maria Giovanna; Yamaguchi, Susumu; Fukuzawa, Ai; Abela, John

    2012-09-01

    Five studies document the validity of a new 8-item scale designed to measure positivity, defined as the tendency to view life and experiences with a positive outlook. In the first study (N = 372), the psychometric properties of the Positivity Scale (P Scale) were examined in accordance with classical test theory using a large number of college participants. In Study 2, the unidimensionality of the P Scale was corroborated with confirmatory factor analysis in 2 independent samples (N₁ = 322; N₂ = 457). In Study 3, P Scale invariance across sexes and its relations with self-esteem, life satisfaction, optimism, positive and negative affect, depression, and the Big Five provided further evidence of the internal and construct validity of the new measure in a large community sample (N = 3,589). In Study 4, test-retest reliability of the P Scale was found in a sample of college students (N = 262) who were readministered the scale after 5 weeks. In Study 5, measurement invariance and construct validity of the P Scale were further supported across samples in different countries and cultures, including Italy (N = 689), the United States (N = 1,187), Japan (N = 281), and Spain (N = 302). Psychometric findings across diverse cultural contexts attest to the robustness of the P Scale and to positivity as a basic disposition. PMID:22250591
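    Classical-test-theory checks like those in Study 1 typically report Cronbach's alpha for internal consistency. A sketch on synthetic 8-item responses (the data-generating model is an assumption; the alpha formula is standard):

```python
import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    items = np.asarray(items, dtype=float)  # shape: (respondents, items)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(2)
trait = rng.normal(0, 1, (500, 1))                # shared latent factor
responses = trait + rng.normal(0, 0.8, (500, 8))  # 8 noisy items

alpha = cronbach_alpha(responses)
print(round(float(alpha), 2))
```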

  13. Everyday Scale Errors

    ERIC Educational Resources Information Center

    Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.

    2010-01-01

    Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…

  14. Magnetron injection gun scaling

    SciTech Connect

    Lawson, W.

    1988-04-01

    Existing analytic design equations for magnetron injection guns (MIG's) are approximated to obtain a set of scaling laws. The constraints are chosen to examine the maximum peak power capabilities of MIG's. The scaling laws are compared with exact solutions of the design equations and are supported by MIG simulations.

  15. The Family Constellation Scale.

    ERIC Educational Resources Information Center

    Lemire, David

    The Family Constellation Scale (FC Scale) is an instrument that assesses perceived birth order in families. It can be used in counseling to help initiate conversations about various traits and assumptions that tend to characterize first-born, middle-born children, youngest-born, and only children. It provides both counselors and clients insights…

  16. INL Laboratory Scale Atomizer

    SciTech Connect

    C.R. Clark; G.C. Knighton; R.S. Fielding; N.P. Hallinan

    2010-01-01

    A laboratory scale atomizer has been built at the Idaho National Laboratory. This has proven useful for laboratory scale tests and has been used to fabricate fuel used in the RERTR miniplate experiments. This instrument evolved over time with various improvements being made ‘on the fly’ in a trial and error process.

  17. A Scale for Sexism

    ERIC Educational Resources Information Center

    Pingree, Suzanne; And Others

    1976-01-01

    Defines the consciousness scale as a measurement technique which divides media portrayals of women into five conceptually-derived categories that can be placed in ordinal relationships with one another. Suggests that such a scale may be useful as a tool for analyzing mass media content. (MH)

  18. Teaching Satisfaction Scale

    ERIC Educational Resources Information Center

    Ho, Chung-Lim; Au, Wing-Tung

    2006-01-01

    The present study proposes a teaching satisfaction measure and examines the validity of its scores. The measure is based on the Life Satisfaction Scale (LSS). Scores on the five-item Teaching Satisfaction Scale (TSS) were validated on a sample of 202 primary and secondary school teachers and favorable psychometric properties were found. As…

  19. Thoughts on Scale

    ERIC Educational Resources Information Center

    Schoenfeld, Alan H.

    2015-01-01

    This essay reflects on the challenges of thinking about scale--of making sense of phenomena such as continuous professional development (CPD) at the system level, while holding on to detail at the finer grain size(s) of implementation. The stimuli for my reflections are three diverse studies of attempts at scale--an attempt to use ideas related to…

  20. The Fatherhood Scale

    ERIC Educational Resources Information Center

    Dick, Gary L.

    2004-01-01

    This article reports on the initial validation of the Fatherhood Scale (FS), a 64-item instrument designed to measure the type of relationship a male adult had with his father while growing up. The FS was validated using a convenience sample of 311 males. The assessment packet contained a demographic form, the Conflict Tactics Scale (2),…

  1. Scales and erosion

    Technology Transfer Automated Retrieval System (TEKTRAN)

    There is a need to develop scale explicit understanding of erosion to overcome existing conceptual and methodological flaws in our modelling methods currently applied to understand the process of erosion, transport and deposition at the catchment scale. These models need to be based on a sound under...

  2. New scale factor measure

    NASA Astrophysics Data System (ADS)

    Bousso, Raphael

    2012-07-01

    The computation of probabilities in an eternally inflating universe requires a regulator or “measure.” The scale factor time measure truncates the Universe when a congruence of timelike geodesics has expanded by a fixed volume factor. This definition breaks down if the generating congruence is contracting—a serious limitation that excludes from consideration gravitationally bound regions such as our own. Here we propose a closely related regulator which is well defined in the entire spacetime. The new scale factor cutoff restricts to events with a scale factor below a given value. Since the scale factor vanishes at caustics and crunches, this cutoff always includes an infinite number of disconnected future regions. We show that this does not lead to divergences. The resulting measure combines desirable features of the old scale factor cutoff and of the light-cone time cutoff, while eliminating some of the disadvantages of each.

  3. The inflationary energy scale

    NASA Astrophysics Data System (ADS)

    Liddle, Andrew R.

    1994-01-01

    The energy scale of inflation is of much interest, as it suggests the scale of grand unified physics, governs whether cosmological events such as topological defect formation can occur after inflation, and also determines the amplitude of gravitational waves which may be detectable using interferometers. The COBE results are used to limit the energy scale of inflation at the time large scale perturbations were imprinted. An exact dynamical treatment based on the Hamilton-Jacobi equations is then used to translate this into limits on the energy scale at the end of inflation. General constraints are given, and then tighter constraints based on physically motivated assumptions regarding the allowed forms of density perturbation and gravitational wave spectra. These are also compared with the values of familiar models.

  4. Parallel Computing in SCALE

    SciTech Connect

    DeHart, Mark D; Williams, Mark L; Bowman, Stephen M

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  5. Allometric Scaling in Biology

    NASA Astrophysics Data System (ADS)

    Banavar, Jayanth

    2009-03-01

The unity of life is expressed not only in the universal basis of inheritance and energetics at the molecular level, but also in the pervasive scaling of traits with body size at the whole-organism level. More than 75 years ago, Kleiber and Brody and Proctor independently showed that the metabolic rates, B, of mammals and birds scale as the three-quarter power of their mass, M. Subsequent studies showed that most biological rates and times scale as M^-1/4 and M^1/4, respectively, and that these so-called quarter-power scaling relations hold for a variety of organisms, from unicellular prokaryotes and eukaryotes to trees and mammals. The wide applicability of Kleiber's law, across the 22 orders of magnitude of body mass from minute bacteria to giant whales and sequoias, raises the hope that there is some simple general explanation that underlies the incredible diversity of form and function. We will present a general theoretical framework for understanding the relationship between metabolic rate, B, and body mass, M. We show how the pervasive quarter-power biological scaling relations arise naturally from optimal directed resource supply systems. This framework robustly predicts that: 1) whole organism power and resource supply rate, B, scale as M^3/4; 2) most other rates, such as heart rate and maximal population growth rate, scale as M^-1/4; 3) most biological times, such as blood circulation time and lifespan, scale as M^1/4; and 4) the average velocity of flow through the network, v, such as the speed of blood and oxygen delivery, scales as M^1/12. Our framework is valid even when there is no underlying network. Our theory is applicable to unicellular organisms as well as to large animals and plants. This work was carried out in collaboration with Amos Maritan along with Jim Brown, John Damuth, Melanie Moses, Andrea Rinaldo, and Geoff West.
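The quarter-power relations summarized in this abstract can be written down directly. A minimal Python sketch, in which the normalization constants (b0, r0, t0) are arbitrary placeholders rather than fitted values from the talk:

```python
# Quarter-power allometric scaling relations from the abstract above:
# B ~ M^(3/4), rates ~ M^(-1/4), times ~ M^(1/4).
# Normalization constants are illustrative placeholders.

def metabolic_rate(mass, b0=1.0):
    """Whole-organism metabolic rate: B = b0 * M^(3/4)."""
    return b0 * mass ** 0.75

def biological_rate(mass, r0=1.0):
    """Rates such as heart rate: r = r0 * M^(-1/4)."""
    return r0 * mass ** -0.25

def biological_time(mass, t0=1.0):
    """Times such as lifespan: t = t0 * M^(1/4)."""
    return t0 * mass ** 0.25

# A 10,000-fold increase in mass raises metabolic rate only 1,000-fold,
# while biological times lengthen just 10-fold.
print(metabolic_rate(1e4) / metabolic_rate(1))    # 1000.0
print(biological_time(1e4) / biological_time(1))  # 10.0
```

The sub-linear exponent is the whole point of Kleiber's law: a whale is not proportionally as expensive to run as its mass would naively suggest.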

  6. Scaling the Universe

    NASA Astrophysics Data System (ADS)

    Frankel, Norman E.

    2014-04-01

A model is presented for the origin of the large scale structures of the universe and their Mass-Radius scaling law. The physics is conventional, orthodox, but it is used to fashion a highly unorthodox model of the origin of the galaxies, their groups, clusters, super-clusters, and great walls. The scaling law fits the observational results and the model offers new suggestions and predictions. These include a largest, supreme cosmic structure, and possible implications for the recently observed pressing cosmological anomalies.

  7. Sulfate scale dissolution

    SciTech Connect

    Morris, R.L.; Paul, J.M.

    1992-01-28

This patent describes a method for removing barium sulfate scale. It comprises contacting the scale with an aqueous solution having a pH of about 8 to about 14 and consisting essentially of a chelating agent comprising a polyaminopolycarboxylic acid or salt of such an acid in a concentration of 0.1 to 1.0 M, and anions of a monocarboxylic acid selected from mercaptoacetic acid, hydroxyacetic acid, aminoacetic acid, or salicylic acid in a concentration of 0.1 to 1.0 M and which is soluble in the solution under the selected pH conditions, to dissolve the scale.

  8. Scaling in sensitivity analysis

    USGS Publications Warehouse

    Link, W.A.; Doherty, P.F., Jr.

    2002-01-01

Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
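The sensitivities and elasticities compared in this abstract have standard matrix-model forms: the sensitivity of λ to a matrix entry a_ij is v_i·w_j / ⟨v, w⟩ (with w and v the dominant right and left eigenvectors), and the elasticity is (a_ij/λ) times that. A minimal NumPy sketch using a made-up two-stage matrix (the entries are illustrative, not the paper's killer-whale data):

```python
import numpy as np

# Hypothetical 2-stage projection matrix: row 0 = fertilities,
# row 1 = survival/transition rates (values invented for illustration).
A = np.array([[0.0, 2.0],
              [0.5, 0.8]])

# lambda is the dominant eigenvalue; the right and left eigenvectors w and v
# are the stable stage distribution and the reproductive values.
evals, right = np.linalg.eig(A)
k = np.argmax(evals.real)
lam = evals.real[k]
w = right[:, k].real

evals_l, left = np.linalg.eig(A.T)
v = left[:, np.argmax(evals_l.real)].real

sensitivity = np.outer(v, w) / (v @ w)   # d(lambda)/d(a_ij)
elasticity = (A / lam) * sensitivity     # proportional sensitivities

print(round(lam, 3))
print(round(elasticity.sum(), 3))  # elasticities of a matrix sum to 1
```

The elasticity matrix summing to 1 is a general property, which is one reason elasticities are popular for cross-parameter comparison; the abstract's point is that this convenience hides scaling inconsistencies.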

  9. Digital scale converter

    DOEpatents

    Upton, Richard G.

    1978-01-01

    A digital scale converter is provided for binary coded decimal (BCD) conversion. The converter may be programmed to convert a BCD value of a first scale to the equivalent value of a second scale according to a known ratio. The value to be converted is loaded into a first BCD counter and counted down to zero while a second BCD counter registers counts from zero or an offset value depending upon the conversion. Programmable rate multipliers are used to generate pulses at selected rates to the counters for the proper conversion ratio. The value present in the second counter at the time the first counter is counted to the zero count is the equivalent value of the second scale. This value may be read out and displayed on a conventional seven-segment digital display.
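A software analog of the two-counter scheme described in this abstract makes the principle concrete: count the input value down to zero while a rate multiplier feeds pulses at the programmed ratio to a second counter that starts at an optional offset. This is a sketch of the idea, not the patented circuit:

```python
def convert_scale(value, ratio_num, ratio_den, offset=0):
    """Emulate the two-counter conversion: decrement the first counter
    from `value` to zero; on each decrement a rate multiplier delivers
    ratio_num pulses per ratio_den input counts to the second counter,
    which starts at `offset` (or zero)."""
    first, second, residue = value, offset, 0
    while first > 0:
        first -= 1
        residue += ratio_num
        while residue >= ratio_den:   # emit a pulse to the second counter
            residue -= ratio_den
            second += 1
    return second

# Example ratio: 9/5 with an offset of 32 (a Celsius-to-Fahrenheit-style
# conversion, chosen only to illustrate ratio plus offset).
print(convert_scale(100, 9, 5, offset=32))  # 212
```

The integer residue plays the role of the programmable rate multiplier: output pulses arrive at ratio_num/ratio_den of the input count rate, so when the first counter reaches zero the second holds the converted value.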

  10. Impact crater scaling laws

    NASA Technical Reports Server (NTRS)

    Holsapple, K. A.

    1987-01-01

    Impact craters are numerous on planetary bodies and furnish important information about the composition and past histories of those bodies. The interpretation of that information requires knowledge about the fundamental aspects of impact cratering mechanics. Since the typical conditions of impacts are at a size scale and velocity far in excess of experimental capabilities, direct simulations are precluded. Therefore, one must rely on extrapolation from experiments of relatively slow impacts of very small bodies, using physically based scaling laws, or must study the actual cases of interest using numerical code solutions of the fundamental physical laws that govern these processes. A progress report is presented on research on impact cratering scaling laws, on numerical studies that were designed to investigate those laws, and on various applications of the scaling laws developed by the author and his colleagues. These applications are briefly reviewed.

  11. Reconsidering earthquake scaling

    NASA Astrophysics Data System (ADS)

    Gomberg, J.; Wech, A.; Creager, K.; Obara, K.; Agnew, D.

    2016-06-01

    The relationship (scaling) between scalar moment, M0, and duration, T, potentially provides key constraints on the physics governing fault slip. The prevailing interpretation of M0-T observations proposes different scaling for fast (earthquakes) and slow (mostly aseismic) slip populations and thus fundamentally different driving mechanisms. We show that a single model of slip events within bounded slip zones may explain nearly all fast and slow slip M0-T observations, and both slip populations have a change in scaling, where the slip area growth changes from 2-D when too small to sense the boundaries to 1-D when large enough to be bounded. We present new fast and slow slip M0-T observations that sample the change in scaling in each population, which are consistent with our interpretation. We suggest that a continuous but bimodal distribution of slip modes exists and M0-T observations alone may not imply a fundamental difference between fast and slow slip.

  12. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  13. Apache Scale Model Helicopter

    NASA Technical Reports Server (NTRS)

    1995-01-01

NASA Langley Research Center's (LaRC) Electromagnetics Research Branch (ERB) performs antenna radiation pattern measurements on a communications antenna mounted on a 1/7th scale model of a US Army Apache Helicopter. The NASA LaRC ERB participates in a government, industry, and university sponsored helicopter consortium to advance computational electromagnetics (CEM) code development for antenna radiation pattern predictions. Scale model antenna measurements serve as verification tools and are an integral part of the CEM code development process.

  14. The Improbability scale

    SciTech Connect

    Ritchie, David J.; /Fermilab

    2005-03-01

    The Improbability Scale (IS) is proposed as a way of communicating to the general public the improbability (and by implication, the probability) of events predicted as the result of scientific research. Through the use of the Improbability Scale, the public will be able to evaluate more easily the relative risks of predicted events and draw proper conclusions when asked to support governmental and public policy decisions arising from that research.

  15. Evaluating the transition from dexmedetomidine to clonidine for agitation management in the intensive care unit

    PubMed Central

    Terry, Kimberly; Blum, Rachel; Szumita, Paul

    2015-01-01

    Objectives: Limited literature exists examining the use of enteral clonidine to transition patients from dexmedetomidine for management of agitation. The aim of this study was to evaluate dexmedetomidine discontinuation within 8 h of enteral clonidine administration in addition to the rates of dexmedetomidine re-initiation in patients who failed clonidine transition. Methods: A single-center, retrospective analysis evaluated critically ill adult patients from 1 February 2013 to 28 February 2014, who used dexmedetomidine and clonidine for sedation management. Patients were excluded if they received enteral clonidine for reasons other than sedation management. Secondary aims of the study observed time to dexmedetomidine discontinuation, agitation (Richmond Agitation Sedation Scale) and delirium ratings (Confusion Assessment Method for the intensive care unit), clonidine dose, and enteral clonidine discontinuation. Results: In all, 26 patients were evaluated. Demographics included a mean age of 54.4 (±16.7) years, Acute Physiology and Chronic Health Evaluation II score of 18 (interquartile range = 14–22), and 80.7% of admissions to the cardiac surgery intensive care unit. Dexmedetomidine discontinuation occurred in 17 (65.4%) patients within 8 h of receiving clonidine. The total median clonidine exposure per intensive care unit day was 0.35 mg/ICU day (interquartile range = 0.2–0.5) in patients who discontinued dexmedetomidine within 8 h and 0.5 mg/ICU day (interquartile range = 0.4–1.0) (p = 0.036) in patients who did not. We observed similar Richmond Agitation Sedation Scale and Confusion Assessment Method for the intensive care unit scores and rates of hypotension. Unintentional use of clonidine beyond ICU and hospital stay was observed in 54% and 23% of patients, respectively. Conclusion: Enteral clonidine may be an effective and safe alternative to transition patients off of dexmedetomidine for ongoing sedation management

  16. Development of scale inhibitors

    SciTech Connect

    Gill, J.S.

    1996-12-01

    During the last fifty years, scale inhibition has gone from an art to a science. Scale inhibition has changed from simple pH adjustment to the use of optimized dose of designer polymers from multiple monomers. The water-treatment industry faces many challenges due to the need to conserve water, availability of only low quality water, increasing environmental regulations of the water discharge, and concern for human safety when using acid. Natural materials such as starch, lignin, tannin, etc., have been replaced with hydrolytically stable organic phosphates and synthetic polymers. Most progress in scale inhibition has come from the use of synergistic mixtures and copolymerizing different functionalities to achieve specific goals. Development of scale inhibitors requires an understanding of the mechanism of crystal growth and its inhibition. This paper discusses the historic perspective of scale inhibition and the development of new inhibitors based on the understanding of the mechanism of crystal growth and the use of powerful tools like molecular modeling to visualize crystal-inhibitor interactions.

  17. Atomic Scale Plasmonic Switch.

    PubMed

    Emboras, Alexandros; Niegemann, Jens; Ma, Ping; Haffner, Christian; Pedersen, Andreas; Luisier, Mathieu; Hafner, Christian; Schimmel, Thomas; Leuthold, Juerg

    2016-01-13

The atom sets an ultimate scaling limit to Moore's law in the electronics industry. While electronics research already explores atomic-scale devices, photonics research still deals with devices at the micrometer scale. Here we demonstrate that photonic scaling, similar to electronics, is only limited by the atom. More precisely, we introduce an electrically controlled plasmonic switch operating at the atomic scale. The switch allows for fast and reproducible switching by means of the relocation of an individual or, at most, a few atoms in a plasmonic cavity. Depending on the location of the atom, either of two distinct plasmonic cavity resonance states is supported. Experimental results show reversible digital optical switching with an extinction ratio of 9.2 dB and operation at room temperature up to MHz with femtojoule (fJ) power consumption for a single switch operation. This demonstration of an integrated quantum device allowing control of photons at the atomic level opens intriguing perspectives for a fully integrated and highly scalable chip platform, a platform where optics, electronics, and memory may be controlled at the single-atom level. PMID:26670551

  18. Universities Scale Like Cities

    PubMed Central

    van Raan, Anthony F. J.

    2013-01-01

Recent studies of urban scaling show that important socioeconomic city characteristics such as wealth and innovation capacity exhibit a nonlinear, particularly a power law scaling with population size. These nonlinear effects are common to all cities, with similar power law exponents. These findings mean that the larger the city, the more disproportionately it is a place of wealth and innovation. Local properties of cities cause a deviation from the expected behavior as predicted by the power law scaling. In this paper we demonstrate that universities show similar behavior to cities in the distribution of the ‘gross university income’ in terms of total number of citations over ‘size’ in terms of total number of publications. Moreover, the power law exponents for university scaling are comparable to those for urban scaling. We find that deviations from the expected behavior can indeed be explained by specific local properties of universities, particularly the field-specific composition of a university, and its quality in terms of field-normalized citation impact. By studying both the set of the 500 largest universities worldwide and a specific subset of these 500 universities -the top-100 European universities- we are also able to distinguish between properties of universities both with and without selection on one specific local property, the quality of a university in terms of its average field-normalized citation impact. It also reveals an interesting observation concerning the working of a crucial property in networked systems, preferential attachment. PMID:23544062
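The power-law scaling discussed in this abstract is ordinarily estimated as a linear fit in log-log space. A minimal sketch on synthetic data with a known super-linear exponent (1.3 is an arbitrary illustrative value, not the paper's fitted exponent):

```python
import numpy as np

# Synthetic "universities": sizes (total publications) spanning two orders
# of magnitude, with output (total citations) an exact power law in size.
sizes = np.logspace(2, 4, 200)
outputs = 5.0 * sizes ** 1.3

# Fit log(output) = beta * log(size) + log(c) by least squares;
# beta > 1 indicates super-linear (disproportionate) scaling.
beta, log_c = np.polyfit(np.log(sizes), np.log(outputs), 1)
print(round(beta, 3))   # 1.3
```

On real citation data the points scatter around the fitted line, and the residuals are exactly the "deviations explained by specific local properties" the abstract refers to.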

  19. Full Scale Tunnel model

    NASA Technical Reports Server (NTRS)

    1929-01-01

Interior view of Full-Scale Tunnel (FST) model. (Small human figures have been added for scale.) On June 26, 1929, Elton W. Miller wrote to George W. Lewis proposing the construction of a model of the full-scale tunnel: 'The excellent energy ratio obtained in the new wind tunnel of the California Institute of Technology suggests that before proceeding with our full scale tunnel design, we ought to investigate the effect on energy ratio of such factors as: 1. small included angle for the exit cone; 2. carefully designed return passages of circular section as far as possible, without sudden changes in cross sections; 3. tightness of walls. It is believed that much useful information can be obtained by building a model of about 1/16 scale, that is, having a closed throat of 2 ft. by 4 ft. The outside dimensions would be about 12 ft. by 25 ft. in plan and the height 4 ft. Two propellers will be required about 28 in. in diameter, each to be driven by direct current motor at a maximum speed of 4500 R.P.M. Provision can be made for altering the length of certain portions, particularly the exit cone, and possibly for the application of boundary layer control in order to effect satisfactory air flow.'

  20. Scaling of Thermoacoustic Refrigerators

    NASA Astrophysics Data System (ADS)

    Li, Y.; Zeegers, J. C. H.; ter Brake, H. J. M.

    2008-03-01

The possibility of scaling-down thermoacoustic refrigerators is theoretically investigated. Standing-wave as well as traveling-wave systems are considered. In the former case, a reference system is taken that consists of a resonator tube (50 cm) with a closed end and a PVC stack (length 5 cm). Helium is used at a mean pressure of 10 bar and an amplitude of 1 bar. The resulting operating frequency is 1 kHz. The variation of the performance of the refrigerator when scaled down in size is computed under the prerequisite that the temperature drop over the stack, or the energy flux or its density, is fixed. The analytical results show that there is a limitation in scaling-down a standing-wave thermoacoustic refrigerator due to heat conduction. Similar scaling trends are considered in traveling-wave refrigerators. The traveling-wave reference system consists of a feedback inertance tube of 0.567 m long, inside diameter 78 mm, a compliance volume of 2830 cm^3 and a 24 cm thermal buffer tube. The regenerator is sandwiched between two heat exchangers. The system is operated at 125 Hz and filled with 30 bar helium gas. Again, the thermal conductance forms a practical limitation in down-scaling.

  1. No-scale inflation

    NASA Astrophysics Data System (ADS)

    Ellis, John; Garcia, Marcos A. G.; Nanopoulos, Dimitri V.; Olive, Keith A.

    2016-05-01

Supersymmetry is the most natural framework for physics above the TeV scale, and the corresponding framework for early-Universe cosmology, including inflation, is supergravity. No-scale supergravity emerges from generic string compactifications and yields a non-negative potential, and is therefore a plausible framework for constructing models of inflation. No-scale inflation yields naturally predictions similar to those of the Starobinsky model based on R + R^2 gravity, with a tilted spectrum of scalar perturbations: n_s ∼ 0.96, and small values of the tensor-to-scalar perturbation ratio r < 0.1, as favoured by Planck and other data on the cosmic microwave background (CMB). Detailed measurements of the CMB may provide insights into the embedding of inflation within string theory as well as its links to collider physics.

  2. Large Scale Computing

    NASA Astrophysics Data System (ADS)

    Capiluppi, Paolo

    2005-04-01

Large Scale Computing is acquiring an important role in the field of data analysis and treatment for many Sciences and also for some Social activities. The present paper discusses the characteristics of Computing when it becomes "Large Scale" and the current state of the art for particular applications needing such large distributed resources and organization. High Energy Particle Physics (HEP) Experiments are discussed in this respect; in particular the Large Hadron Collider (LHC) Experiments are analyzed. The Computing Models of LHC Experiments represent the current prototype implementation of Large Scale Computing and describe the level of maturity of the possible deployment solutions. Some of the most recent results on the measurements of the performances and functionalities of the LHC Experiments' testing are discussed.

  3. Scales of rock permeability

    NASA Astrophysics Data System (ADS)

    Guéguen, Y.; Gavrilenko, P.; Le Ravalec, M.

    1996-05-01

Permeability is a transport property which is currently measured in Darcy units. Although this unit is very convenient for most purposes, its use prevents one from recognizing that permeability has units of length squared. Physically, the square root of permeability can thus be seen as a characteristic length or a characteristic pore size. At the laboratory scale, the identification of this characteristic length is a good example of how experimental measurements and theoretical modelling can be integrated. Three distinct identifications are in current use, relying on three different techniques: image analysis of thin sections, mercury porosimetry and nitrogen adsorption. In each case, one or several theoretical models allow us to derive permeability from the experimental data (equivalent channel models, statistical models, effective media models, percolation and network models). Permeability varies with pressure and temperature and this is a decisive point for any extrapolation to crustal conditions. As far as pressure is concerned, most of the effect is due to cracks and a model which does not incorporate this fact will miss its goal. Temperature induced modifications can be the result of several processes: thermal cracking (due to thermal expansion mismatch and anisotropy, or to fluid pressure build up), and pressure solution are the two main ones. Experimental data on pressure and temperature effects are difficult to obtain but they are urgently needed. Finally, an important issue is: up to which point are these small scale data and models relevant when considering formations at the oil reservoir scale, or at the crust scale? At larger scales the identification of the characteristic scale is also a major goal which is examined.
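The abstract's point that permeability has units of length squared is easy to make quantitative: 1 darcy ≈ 9.87 × 10^-13 m², so the square root of a 1-darcy permeability is a characteristic pore scale of about one micrometre. A small sketch (the darcy-to-m² constant is the standard conversion; the sample permeabilities are illustrative):

```python
import math

DARCY_TO_M2 = 9.869233e-13  # 1 darcy expressed in m^2 (standard conversion)

def characteristic_length_m(perm_darcy):
    """Square root of permeability: a characteristic pore size in metres."""
    return math.sqrt(perm_darcy * DARCY_TO_M2)

# A 1-darcy rock has a pore scale of roughly 1 micrometre;
# a 1-microdarcy tight rock, roughly 1 nanometre.
for k in (1.0, 1e-6):
    print(f"{k:g} darcy -> {characteristic_length_m(k):.3g} m")
```

This is why the darcy unit, while convenient, hides the physical reading of permeability as a pore-size scale that the abstract emphasizes.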

  4. Angular Scaling In Jets

    SciTech Connect

    Jankowiak, Martin; Larkoski, Andrew J.; /SLAC

    2012-02-17

    We introduce a jet shape observable defined for an ensemble of jets in terms of two-particle angular correlations and a resolution parameter R. This quantity is infrared and collinear safe and can be interpreted as a scaling exponent for the angular distribution of mass inside the jet. For small R it is close to the value 2 as a consequence of the approximately scale invariant QCD dynamics. For large R it is sensitive to non-perturbative effects. We describe the use of this correlation function for tests of QCD, for studying underlying event and pile-up effects, and for tuning Monte Carlo event generators.

  5. Scale invariance in biophysics

    NASA Astrophysics Data System (ADS)

    Stanley, H. Eugene

    2000-06-01

In this general talk, we offer an overview of some problems of interest to biophysicists, medical physicists, and econophysicists. These include DNA sequences, brain plaques in Alzheimer patients, heartbeat intervals, and time series giving price fluctuations in economics. These problems have the common feature that they exhibit features that appear to be scale invariant. Particularly vexing is the problem that some of these scale invariant phenomena are not stationary: their statistical properties vary from one time interval to the next or from one position to the next. We will discuss methods, such as wavelet methods and multifractal methods, to cope with these problems.

  6. Fundamentals of Zoological Scaling.

    ERIC Educational Resources Information Center

    Lin, Herbert

    1982-01-01

    The following animal characteristics are considered to determine how properties and characteristics of various systems change with system size (scaling): skeletal weight, speed of running, height and range of jumping, food consumption, heart rate, lifetime, locomotive efficiency, frequency of wing-flapping, and maximum sizes of flying and hovering…

  7. Sensor system scaling issues

    SciTech Connect

    Canavan, G.H.

    1996-07-01

    A model for IR sensor performance is used to compare estimates of sensor cost effectiveness. Although data from aircraft sensors indicate a weaker scaling, their agreement is adequate to support the assessment of the benefits of operating up to the maximum altitude of most current UAVs.

  8. Scale, Composition, and Technology

    ERIC Educational Resources Information Center

    Victor, Peter A.

    2009-01-01

    Scale (gross domestic product), composition (goods and services), and technology (impacts per unit of goods and services) in combination are the proximate determinants in an economy of the resources used, wastes generated, and land transformed. In this article, we examine relationships among these determinants to understand better the contribution…

  9. Scaling up Education Reform

    ERIC Educational Resources Information Center

    Gaffney, Jon D. H.; Richards, Evan; Kustusch, Mary Bridget; Ding, Lin; Beichner, Robert J.

    2008-01-01

    The SCALE-UP (Student-Centered Activities for Large Enrollment for Undergraduate Programs) project was developed to implement reforms designed for small classes into large physics classes. Over 50 schools across the country, ranging from Wake Technical Community College to Massachusetts Institute of Technology (MIT), have adopted it for classes of…

  10. Scaling up Psycholinguistics

    ERIC Educational Resources Information Center

    Smith, Nathaniel J.

    2011-01-01

    This dissertation contains several projects, each addressing different questions with different techniques. In chapter 1, I argue that they are unified thematically by their goal of "scaling up psycholinguistics"; they are all aimed at analyzing large data-sets using tools that reveal patterns to propose and test mechanism-neutral hypotheses about…

  11. Student Descriptor Scale Manual.

    ERIC Educational Resources Information Center

    Goetz, Lori; And Others

    The Student Descriptor Scale (SDS) was developed as a validation measure to determine whether students described and counted by states as "severely handicapped" were, indeed, students with severe disabilities. The SDS addresses nine characteristics: intellectual disability, health impairment, need for toileting assistance, upper torso motor…

  12. Allometric scaling of countries

    NASA Astrophysics Data System (ADS)

    Zhang, Jiang; Yu, Tongkui

    2010-11-01

As huge complex systems consisting of geographic regions, natural resources, people and economic entities, countries follow the allometric scaling law which is ubiquitous in ecological, and urban systems. We systematically investigated the allometric scaling relationships between a large number of macroscopic properties and geographic (area), demographic (population) and economic (GDP, gross domestic product) sizes of countries respectively. We found that most of the economic, trade, energy consumption, communication related properties have significant super-linear (the exponent is larger than 1) or nearly linear allometric scaling relations with the GDP. Meanwhile, the geographic (arable area, natural resources, etc.), demographic (labor force, military age population, etc.) and transportation-related properties (road length, airports) have significant and sub-linear (the exponent is smaller than 1) allometric scaling relations with area. Several differences in the power law relations with respect to population between countries and cities were pointed out. First, population increases sub-linearly with area in countries. Second, the GDP increases linearly in countries but not super-linearly as in cities. Finally, electricity or oil consumption per capita increases with population faster than in cities.

  13. Small Scale Industries.

    ERIC Educational Resources Information Center

    Rural Development Network Bulletin, 1977

    1977-01-01

    Innovative programs for the promotion of small-scale enterprise are being conducted by a variety of organizations, including universities, government agencies, international research institutes, and voluntary assistance agencies. Their activities encompass basic extension services, management of cooperatives, community action programs, and…

  14. Small scale membrane mechanics

    PubMed Central

    Rangamani, Padmini; Benjamini, Ayelet; Agrawal, Ashutosh; Smit, Berend; Oster, George

    2014-01-01

    Large scale changes to lipid bilayer shapes are well represented by the Helfrich model. However, there are membrane processes that take place at smaller length scales that this model cannot address. In this work, we present a one-dimensional continuum model that captures the mechanics of the lipid bilayer membrane at the length scale of the lipids themselves. The model is developed using the Cosserat theory of surfaces with lipid orientation, or ‘tilt’, as the fundamental degree of freedom. The Helfrich model can be recovered as a special case when the curvatures are small and the lipid tilt is everywhere zero. We use the tilt model to study local membrane deformations in response to a protein inclusion. Parameter estimates and boundary conditions are obtained from a coarse-grained molecular model using dissipative particle dynamics (DPD) to capture the same phenomenon. The continuum model is able to reproduce the membrane bending, stretch and lipid tilt as seen in the DPD model. The lipid tilt angle relaxes to the bulk tilt angle within 5–6 nm from the protein inclusion. Importantly, for large tilt gradients induced by the proteins, the tilt energy contribution is larger than the bending energy contribution. Thus, the continuum model of tilt accurately captures behaviors at length scales shorter than the membrane thickness. PMID:24081650

  15. Bracken Basic Concept Scale.

    ERIC Educational Resources Information Center

    Naglieri, Jack A.; Bardos, Achilles N.

    1990-01-01

    The Bracken Basic Concept Scale, for use with preschool and primary-aged children, determines a child's school readiness and knowledge of English-language verbal concepts. The instrument measures 258 basic concepts in such categories as comparisons, time, quantity, and letter identification. This paper describes test administration, scoring and…

  16. Scaling Applications in hydrology

    NASA Astrophysics Data System (ADS)

    Gebremichael, Mekonnen

    2010-05-01

    Besides downscaling applications, scaling properties of hydrological fields can be used to address a variety of research questions. In this presentation, we will use scaling properties to address questions related to satellite evapotranspiration algorithms, precipitation-streamflow relationships, and hydrological model calibration. Most of the existing satellite-based evapotranspiration (ET) algorithms have been developed using fine-resolution Landsat TM and ASTER data. However, these algorithms are often applied to coarse-resolution MODIS data. Our results show that applying the satellite-based algorithms, which are developed at ASTER resolution, to MODIS resolution leads to ET estimates that (1) preserve the overall spatial pattern (spatial correlation in excess of 0.90), (2) increase the spatial standard deviation and maximum value, and (3) have modest conditional bias: they underestimate low ET rates (< 1 mm/day) and overestimate high ET rates, the overestimation being within 20%. The results emphasize the need to explore alternatives for estimating ET from MODIS. Understanding the relationship between the scaling properties of precipitation and streamflow is important in a number of applications. We present the results of a detailed river flow fluctuation analysis of daily records from 14 stations in the Flint River basin in Georgia in the United States, with a focus on the effect of watershed area on the long memory of river flow fluctuations. The areas of the watersheds draining to the stations range from 22 km2 to 19,606 km2. Results show that large watersheds have more persistent flow fluctuations and stronger long-term (time greater than the scale break point) memory than small watersheds, while the precipitation time series shows weak long-term correlation. We conclude that a watershed acts as a 'filter' for 'white noise' precipitation, with more significant filtering in the case of large watersheds. Finally, we compare the scaling properties of simulated and observed spatial soil

  17. ELECTRONIC PULSE SCALING CIRCUITS

    DOEpatents

    Cooke-Yarborough, E.H.

    1958-11-18

    Electronic pulse scaling circuits of the kind comprising a series of bi-stable elements connected in sequence, usually in the form of a ring so as to be cyclically repetitive at the highest scaling factor, are described. The scaling circuit comprises a ring system of bi-stable elements, each arranged on turn-off to cause a succeeding element of the ring to be turned on, and one being arranged on turn-off to cause a further element of the ring to be turned on. In addition, separate means are provided for applying a turn-off pulse to all the elements simultaneously, and for resetting the elements to a starting condition at the end of each cycle.
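The behavior described, a ring of bi-stable elements in which exactly one is on, each input pulse advances the on state by one element, and each complete cycle produces one output, is what is now called a ring counter or scale-of-N (divide-by-N) counter. A minimal behavioral sketch (the function name and structure are illustrative, not from the patent):

```python
# Behavioral model of a scale-of-N ring counter: one element is "on" at a
# time; each pulse advances it; a return to the starting element is one
# complete cycle (one output "carry"), so the ring divides the pulse rate by N.
def ring_scaler(n_stages: int, pulses: int) -> int:
    """Return the number of complete cycles after `pulses` input pulses."""
    position, carries = 0, 0
    for _ in range(pulses):
        position = (position + 1) % n_stages   # turn-off cascades to next stage
        if position == 0:                      # back at starting condition
            carries += 1
    return carries

print(ring_scaler(10, 37))  # scale-of-10 ring after 37 pulses -> 3 carries
```

Chaining such rings (e.g. two scale-of-10 stages) yields the higher scaling factors the patent alludes to.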

  18. [COMPREHENSIVE GERIATRIC ASSESSMENT SCALES].

    PubMed

    Casado Verdejo, Inés; Postigo Mota, Salvador; Muñoz Bermejo, Laura; Vallejo Villalobos, José Ramón; Arrabal Léon, Nazaret; Pinto Montealegre, Jose Eduardo

    2016-01-01

    Comprehensive geriatric assessment is one of the key elements of geriatric care management aimed at the older population. It includes evaluating the clinical, functional, mental and social aspects of aging and/or of the pathological processes that appear at this stage of the life cycle. To carry it out, professionals have, alongside other tools, a large number of validated rating scales specifically designed for the assessment of the different areas or domains. Their use can be very helpful, especially for making evaluation results objective. Future research in this area involves deepening the adaptation of the scales to the characteristics and needs of older people at each care level or place of care. PMID:26996044

  19. An elastica arm scale.

    PubMed

    Bosi, F; Misseroni, D; Dal Corso, F; Bigoni, D

    2014-09-01

    The concept of a 'deformable arm scale' (completely different from a traditional rigid arm balance) is theoretically introduced and experimentally validated. The idea is not intuitive, but is the result of nonlinear equilibrium kinematics of rods inducing configurational forces, so that deflection of the arms becomes necessary for equilibrium, which would be impossible for a rigid system. In particular, the rigid arms of usual scales are replaced by a flexible elastic lamina, free to slide in a frictionless and inclined sliding sleeve, which can reach a unique equilibrium configuration when two vertical dead loads are applied. Prototypes designed to demonstrate the feasibility of the system show a high accuracy in the measurement of load within a certain range of use. Finally, we show that the presented results are strongly related to snaking of confined beams, with implications for locomotion of serpents, plumbing and smart oil drilling. PMID:25197248

  20. Fundamentals of zoological scaling

    NASA Astrophysics Data System (ADS)

    Lin, Herbert

    1982-01-01

    Most introductory physics courses emphasize highly idealized problems with unique well-defined answers. Though many textbooks complement these problems with estimation problems, few books present anything more than an elementary discussion of scaling. This paper presents some fundamentals of scaling in the zoological domain—a domain complex by any standard, but one also well suited to illustrate the power of very simple physical ideas. We consider the following animal characteristics: skeletal weight, speed of running, height and range of jumping, food consumption, heart rate, lifetime, locomotive efficiency, frequency of wing flapping, and maximum sizes of animals that fly and hover. These relationships are compared to zoological data and everyday experience, and match reasonably well.

  1. Large-Scale Disasters

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  2. Scaling macroscopic aquatic locomotion

    NASA Astrophysics Data System (ADS)

    Gazzola, Mattia; Argentina, Mederic; Mahadevan, Lakshminarayanan

    2014-11-01

    Inertial aquatic swimmers that use undulatory gaits range in length L from a few millimeters to 30 meters, across a wide array of biological taxa. Using elementary hydrodynamic arguments, we uncover a unifying mechanistic principle characterizing their locomotion by deriving a scaling relation that links swimming speed U to body kinematics (tail beat amplitude A and frequency ω) and fluid properties (kinematic viscosity ν). This principle can be simply couched as the power law Re ~ Sw^α, where Re = UL/ν >> 1 and Sw = ωAL/ν, with α = 4/3 for laminar flows, and α = 1 for turbulent flows. Existing data from over 1000 measurements on fish, amphibians, larvae, reptiles, mammals and birds, as well as direct numerical simulations, are consistent with our scaling. We interpret our results as the consequence of the convergence of aquatic gaits to the performance limits imposed by hydrodynamics.
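The dimensionless relation Re ~ Sw^α can be exercised numerically. A sketch with assumed parameter values for a small swimmer; the proportionality constant in the power law is set to 1 here, so the implied speed only illustrates the scaling, not the actual magnitude:

```python
# Swimming number Sw = omega * A * L / nu and Reynolds number Re = U * L / nu,
# related by Re ~ Sw^alpha.  All parameter values below are assumed for
# illustration; the order-one prefactor of the power law is taken as 1.
nu = 1.0e-6      # kinematic viscosity of water, m^2/s
L = 0.1          # body length, m
A = 0.02         # tail-beat amplitude, m
omega = 10.0     # tail-beat frequency, 1/s

Sw = omega * A * L / nu          # swimming number
alpha = 4.0 / 3.0                # laminar regime
Re = Sw ** alpha                 # up to the omitted prefactor
U = Re * nu / L                  # implied swimming speed, m/s
print(f"Sw = {Sw:.3g}, Re = {Re:.3g}, U = {U:.3g} m/s")
```

Since α > 1 in the laminar regime, Re grows faster than Sw; switching to α = 1 reproduces the turbulent branch.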

  3. Scaling macroscopic aquatic locomotion

    NASA Astrophysics Data System (ADS)

    Gazzola, Mattia; Argentina, Médéric; Mahadevan, L.

    2014-10-01

    Inertial aquatic swimmers that use undulatory gaits range in length L from a few millimetres to 30 metres, across a wide array of biological taxa. Using elementary hydrodynamic arguments, we uncover a unifying mechanistic principle characterizing their locomotion by deriving a scaling relation that links swimming speed U to body kinematics (tail beat amplitude A and frequency ω) and fluid properties (kinematic viscosity ν). This principle can be simply couched as the power law Re ~ Sw^α, where Re = UL/ν >> 1 and Sw = ωAL/ν, with α = 4/3 for laminar flows, and α = 1 for turbulent flows. Existing data from over 1,000 measurements on fish, amphibians, larvae, reptiles, mammals and birds, as well as direct numerical simulations, are consistent with our scaling. We interpret our results as the consequence of the convergence of aquatic gaits to the performance limits imposed by hydrodynamics.

  4. Extreme Scale Visual Analytics

    SciTech Connect

    Steed, Chad A; Potok, Thomas E; Pullum, Laura L; Ramanathan, Arvind; Shipman, Galen M; Thornton, Peter E

    2013-01-01

    Given the scale and complexity of today's data, visual analytics is rapidly becoming a necessity rather than an option for comprehensive exploratory analysis. In this paper, we provide an overview of three applications of visual analytics for addressing the challenges of analyzing climate, text streams, and biosurveillance data. These systems feature varying levels of interaction and high-performance computing technology integration to permit exploratory analysis of large and complex data of global significance.

  5. Beyond the Planck Scale

    SciTech Connect

    Giddings, Steven B.

    2009-12-15

    I outline motivations for believing that important quantum gravity effects lie beyond the Planck scale at both higher energies and longer distances and times. These motivations arise in part from the study of ultra-high energy scattering, and also from considerations in cosmology. I briefly summarize some inferences about such ultra-planckian physics, and clues we might pursue towards the principles of a more fundamental theory addressing the known puzzles and paradoxes of quantum gravity.

  6. Smov FOS Plate Scale

    NASA Astrophysics Data System (ADS)

    Kinney, Anne

    1994-01-01

    The goal is to measure the precise plate scale and orientation. This will be achieved by performing a raster step-and-dwell sequence in the 4.3 arcsec aperture. The edges of the aperture should be avoided to prevent vignetting effects. An aperture map is required at each step of the dwell sequence. This test has to be conducted for both the RED and BLUE detectors. We will also determine the offset between the two detectors.

  7. The Extragalactic Distance Scale

    NASA Astrophysics Data System (ADS)

    Livio, Mario; Donahue, Megan; Panagia, Nino

    1997-07-01

    Participants; Preface; Foreword; Early history of the distance scale problem, S. van den Bergh; Cosmology: From Hubble to HST, M. S. Turner; Age constraints from nucleocosmochronology, J. Truran; The ages of globular clusters, P. Demarque; The linearity of the Hubble flow, M. Postman; Gravitational lensing and the extragalactic distance scale, R. D. Blandford and T. Kundic; Using the cosmic microwave background to constrain the Hubble constant, A. Lasenby and M. Jones; Cepheids as distance indicators, N. R. Tanvir; The I-band Tully-Fisher relation and the Hubble constant, R. Giovanelli; The calibration of type Ia supernovae as standard candles, A. Saha; Focusing in on the Hubble constant, G. A. Tammann and M. Federspiel; Interim report on the calibration of the Tully-Fisher relation in the HST Key Project to measure the Hubble constant, J. Mould et al.; Hubble Space Telescope Key Project on the extragalactic distance scale, W. L. Freedman, B. F. Madore and R. C. Kennicutt; Novae as distance indicators, M. Livio; Verifying the planetary nebula luminosity function method, G. H. Jacoby; On the possible use of radio supernovae for distance determinations, K. W. Weiler et al.; Post-AGB stars as standard candles, H. Bond; Helium core flash at the tip of the red giant branch: a population II distance indicator, B. F. Madore, W. L. Freedman and S. Sakai; Globular clusters as distance indicators, B. C. Whitmore; Detached eclipsing binaries as primary distance and age indicators, B. Paczynski; Light echoes: geometric measurement of galaxy distances, W. B. Sparks; The SBF survey of galaxy distances, J. L. Tonry; Extragalactic distance scales: the long and short of it, V. Trimble.

  8. Earthquake Apparent Stress Scaling

    NASA Astrophysics Data System (ADS)

    Walter, W. R.; Mayeda, K.; Ruppert, S.

    2002-12-01

    There is currently a disagreement within the geophysical community on the way earthquake energy scales with magnitude. One set of recent papers finds evidence that energy release per seismic moment (apparent stress) is constant (e.g. Choy and Boatwright, 1995; McGarr, 1999; Ide and Beroza, 2001). Another set of recent papers finds the apparent stress increases with magnitude (e.g. Kanamori et al., 1993; Abercrombie, 1995; Mayeda and Walter, 1996; Izutani and Kanamori, 2001). The resolution of this issue is complicated by the difficulty of accurately accounting for and determining the seismic energy radiated by earthquakes over a wide range of event sizes in a consistent manner. We have just started a project to reexamine this issue by analyzing aftershock sequences in the Western U.S. and Turkey using two different techniques. First we examine the observed regional S-wave spectra by fitting with a parametric model (Walter and Taylor, 2002) with and without variable stress drop scaling. Because the aftershock sequences have common stations and paths, we can examine the S-wave spectra of events by size to determine what type of apparent stress scaling, if any, is most consistent with the data. Second we use regional coda envelope techniques (e.g. Mayeda and Walter, 1996; Mayeda et al., 2002) on the same events to directly measure energy and moment. The coda technique corrects for path and site effects using an empirical Green function technique and independent calibration with surface-wave-derived moments. Our hope is that by carefully analyzing a very large number of events in a consistent manner using two different techniques we can start to resolve this apparent stress scaling issue. This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under Contract No. W-7405-Eng-48.

  9. Earthquake Apparent Stress Scaling

    NASA Astrophysics Data System (ADS)

    Mayeda, K.; Walter, W. R.

    2003-04-01

    There is currently a disagreement within the geophysical community on the way earthquake energy scales with magnitude. One set of recent papers finds evidence that energy release per seismic moment (apparent stress) is constant (e.g. Choy and Boatwright, 1995; McGarr, 1999; Ide and Beroza, 2001). Another set of recent papers finds the apparent stress increases with magnitude (e.g. Kanamori et al., 1993; Abercrombie, 1995; Mayeda and Walter, 1996; Izutani and Kanamori, 2001). The resolution of this issue is complicated by the difficulty of accurately accounting for and determining the seismic energy radiated by earthquakes over a wide range of event sizes in a consistent manner. We have just started a project to reexamine this issue by applying the same methodology to a series of datasets that spans roughly 10 orders in seismic moment, M0. We will summarize recent results using the coda envelope methodology of Mayeda et al. (2003), which provides the most stable source spectral estimates to date. This methodology eliminates the complicating effects of lateral path heterogeneity, source radiation pattern, directivity, and site response (e.g., amplification, f-max and kappa). We find that in tectonically active continental crustal areas the total radiated energy scales as M0^1.25, whereas in regions of relatively younger oceanic crust, the stress drop is generally lower and exhibits a 1-to-1 scaling with moment. In addition to answering a fundamental question in earthquake source dynamics, this study addresses how one would scale small earthquakes in a particular region up to a future, more damaging earthquake. This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under Contract No. W-7405-Eng-48.

  10. Urban scaling in Europe.

    PubMed

    Bettencourt, Luís M A; Lobo, José

    2016-03-01

    Over the last few decades, in disciplines as diverse as economics, geography and complex systems, a perspective has arisen proposing that many properties of cities are quantitatively predictable due to agglomeration or scaling effects. Using new harmonized definitions for functional urban areas, we examine to what extent these ideas apply to European cities. We show that while most large urban systems in Western Europe (France, Germany, Italy, Spain, UK) approximately agree with theoretical expectations, the small number of cities in each nation and their natural variability preclude drawing strong conclusions. We demonstrate how this problem can be overcome so that cities from different urban systems can be pooled together to construct larger datasets. This leads to a simple statistical procedure to identify urban scaling relations, which then clearly emerge as a property of European cities. We compare the predictions of urban scaling to Zipf's law for the size distribution of cities and show that while the former holds well the latter is a poor descriptor of European cities. We conclude with scenarios for the size and properties of future pan-European megacities and their implications for the economic productivity, technological sophistication and regional inequalities of an integrated European urban system. PMID:26984190

  11. Is this scaling nonlinear?

    PubMed

    Leitão, J C; Miotto, J M; Gerlach, M; Altmann, E G

    2016-07-01

    One of the most celebrated findings in complex systems in the last decade is that different indexes y (e.g. patents) scale nonlinearly with the population x of the cities in which they appear, i.e. y∼x^β, β≠1. More recently, the generality of this finding has been questioned in studies that used new databases and different definitions of city boundaries. In this paper, we investigate the existence of nonlinear scaling, using a probabilistic framework in which fluctuations are accounted for explicitly. In particular, we show that this allows not only to (i) estimate β and confidence intervals, but also to (ii) quantify the evidence in favour of β≠1 and (iii) test the hypothesis that the observations are compatible with the nonlinear scaling. We employ this framework to compare five different models to 15 different datasets and we find that the answers to points (i)-(iii) crucially depend on the fluctuations contained in the data, on how they are modelled, and on the fact that the city sizes are heavy-tailed distributed. PMID:27493764
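The baseline against which such probabilistic frameworks are compared is the naive log-log OLS fit of β with a confidence interval, checking whether the interval excludes 1. A sketch on synthetic data (the population range, exponent 1.1, and noise level are all assumed for illustration):

```python
import numpy as np

# Naive estimate of beta in y ~ x^beta via OLS on logs, with a rough
# 95% confidence interval; evidence for beta != 1 if the CI excludes 1.
rng = np.random.default_rng(1)
x = 10 ** rng.uniform(4, 7, 300)                     # synthetic city sizes
y = 2.0 * x ** 1.1 * np.exp(rng.normal(0, 0.3, 300)) # synthetic index

lx, ly = np.log(x), np.log(y)
X = np.column_stack([np.ones_like(lx), lx])
coef, res, *_ = np.linalg.lstsq(X, ly, rcond=None)
beta = coef[1]
sigma2 = res[0] / (len(lx) - 2)                      # residual variance
se = np.sqrt(sigma2 / np.sum((lx - lx.mean()) ** 2)) # SE of the slope
lo, hi = beta - 1.96 * se, beta + 1.96 * se
print(f"beta = {beta:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

The paper's point is that this baseline can mislead when fluctuations are misspecified or city sizes are heavy-tailed, which is why it models the fluctuations explicitly.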

  12. Scaling body size fluctuations

    PubMed Central

    Giometto, Andrea; Altermatt, Florian; Carrara, Francesco; Maritan, Amos; Rinaldo, Andrea

    2013-01-01

    The size of an organism matters for its metabolic, growth, mortality, and other vital rates. Scale-free community size spectra (i.e., size distributions regardless of species) are routinely observed in natural ecosystems and are the product of intra- and interspecies regulation of the relative abundance of organisms of different sizes. Intra- and interspecies distributions of body sizes are thus major determinants of ecosystems’ structure and function. We show experimentally that single-species mass distributions of unicellular eukaryotes covering different phyla exhibit both characteristic sizes and universal features over more than four orders of magnitude in mass. Remarkably, we find that the mean size of a species is sufficient to characterize its size distribution fully and that the latter has a universal form across all species. We show that an analytical physiological model accounts for the observed universality, which can be synthesized in a log-normal form for the intraspecies size distributions. We also propose how ecological and physiological processes should interact to produce scale-invariant community size spectra and discuss the implications of our results on allometric scaling laws involving body mass. PMID:23487793

  13. Urban scaling in Europe

    PubMed Central

    Bettencourt, Luís M. A.; Lobo, José

    2016-01-01

    Over the last few decades, in disciplines as diverse as economics, geography and complex systems, a perspective has arisen proposing that many properties of cities are quantitatively predictable due to agglomeration or scaling effects. Using new harmonized definitions for functional urban areas, we examine to what extent these ideas apply to European cities. We show that while most large urban systems in Western Europe (France, Germany, Italy, Spain, UK) approximately agree with theoretical expectations, the small number of cities in each nation and their natural variability preclude drawing strong conclusions. We demonstrate how this problem can be overcome so that cities from different urban systems can be pooled together to construct larger datasets. This leads to a simple statistical procedure to identify urban scaling relations, which then clearly emerge as a property of European cities. We compare the predictions of urban scaling to Zipf's law for the size distribution of cities and show that while the former holds well the latter is a poor descriptor of European cities. We conclude with scenarios for the size and properties of future pan-European megacities and their implications for the economic productivity, technological sophistication and regional inequalities of an integrated European urban system. PMID:26984190

  14. The cepheid temperature scale

    NASA Astrophysics Data System (ADS)

    Teays, Terry John

    The temperatures of Cepheid variable stars are determined from their energy distributions, using the technique of spectrum scanning. The observations were obtained with Kitt Peak National Observatory's intensified Reticon scanner and Cerro Tololo Inter-American Observatory's two-channel scanner. Six well-observed Cepheids in galactic open clusters were examined at various phases of their pulsation cycles. Recently determined color excesses, based on the uvby photometry of Schmidt, were used to scale the reddening curves of Nandy and thereby correct the scans for the effects of interstellar reddening. The temperature of each energy distribution was found by matching it to the emergent flux calculated from Kurucz's model atmospheres. B-V color curves, taken from the literature, were used to establish the color index of the Cepheids at the phases for which temperatures were measured. A linear least-squares fit to these data yielded the color-temperature relation log Teff = 3.904 - 0.23 (B-V)0. King et al. discussed the Cepheid mass anomaly (i.e., the discrepancy between masses derived from pulsation theory and those obtained from evolutionary theory) and concluded that a temperature scale as cool as the one above would resolve this long-standing problem. However, using this temperature scale, along with Schmidt's PLC relation and color excesses, normal solar abundances, and Faulkner's formula for the pulsation constant Q, leads to pulsation masses that are still lower than the evolutionary masses.
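Reading the fitted relation as log10(Teff) = 3.904 - 0.23 (B-V)0 (the linear-in-color form implied by the abstract's least-squares fit; the reconstruction of the garbled coefficients is an assumption), a worked numeric example:

```python
# Color-temperature relation log10(Teff) = 3.904 - 0.23 * (B-V)_0,
# where (B-V)_0 is the dereddened color index.  The coefficients are
# as read from the abstract; treat them as illustrative.
def cepheid_teff(bv0: float) -> float:
    """Effective temperature in K from dereddened B-V color."""
    return 10 ** (3.904 - 0.23 * bv0)

for bv0 in (0.4, 0.6, 0.8):
    print(f"(B-V)_0 = {bv0:.1f} -> Teff = {cepheid_teff(bv0):.0f} K")
```

Redder (larger) (B-V)0 gives cooler temperatures, consistent with the "cool" temperature scale the abstract discusses.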

  15. Is this scaling nonlinear?

    PubMed Central

    2016-01-01

    One of the most celebrated findings in complex systems in the last decade is that different indexes y (e.g. patents) scale nonlinearly with the population x of the cities in which they appear, i.e. y∼xβ,β≠1. More recently, the generality of this finding has been questioned in studies that used new databases and different definitions of city boundaries. In this paper, we investigate the existence of nonlinear scaling, using a probabilistic framework in which fluctuations are accounted for explicitly. In particular, we show that this allows not only to (i) estimate β and confidence intervals, but also to (ii) quantify the evidence in favour of β≠1 and (iii) test the hypothesis that the observations are compatible with the nonlinear scaling. We employ this framework to compare five different models to 15 different datasets and we find that the answers to points (i)–(iii) crucially depend on the fluctuations contained in the data, on how they are modelled, and on the fact that the city sizes are heavy-tailed distributed. PMID:27493764

  16. Estimation of local spatial scale

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1987-01-01

    The concept of local scale asserts that for a given class of psychophysical measurements, performance at any two visual field locations is equated by magnifying the targets by the local scale associated with each location. Local scale has been hypothesized to be equal to cortical magnification or alternatively to the linear density of receptors or ganglion cells. Here, it is shown that it is possible to estimate local scale without prior knowledge about the scale or its physiological basis.

  17. Mechanism for salt scaling

    NASA Astrophysics Data System (ADS)

    Valenza, John J., II

    Salt scaling is superficial damage caused by freezing a saline solution on the surface of a cementitious body. The damage consists of the removal of small chips or flakes of binder. The discovery of this phenomenon in the early 1950s prompted hundreds of experimental studies, which clearly elucidated the characteristics of this damage. In particular, it was shown that a pessimum salt concentration exists, where a moderate salt concentration (~3%) results in the most damage. Despite the numerous studies, the mechanism responsible for salt scaling has not been identified. In this work it is shown that salt scaling is a result of the large thermal expansion mismatch between ice and the cementitious body, and that the mechanism responsible for damage is analogous to glue-spalling. When ice forms on a cementitious body, a bi-material composite is formed. The thermal expansion coefficient of the ice is ~5 times that of the underlying body, so when the temperature of the composite is lowered below the melting point, the ice goes into tension. Once this stress exceeds the strength of the ice, cracks initiate in the ice and propagate into the surface of the cementitious body, removing a flake of material. The glue-spall mechanism accounts for all of the characteristics of salt scaling. In particular, a theoretical analysis is presented which shows that the pessimum concentration is a consequence of the effect of brine pockets on the mechanical properties of ice, and that the damage morphology is accounted for by fracture mechanics. Finally, empirical evidence is presented that proves that the glue-spall mechanism is the primary cause of salt scaling. The primary experimental tool used in this study is a novel warping experiment, where a pool of liquid is formed on top of a thin (~3 mm) plate of cement paste. Stresses in the plate, including thermal expansion mismatch, result in warping of the plate, which is easily detected. This technique revealed the existence of

  18. The Practicality of Behavioral Observation Scales, Behavioral Expectation Scales, and Trait Scales.

    ERIC Educational Resources Information Center

    Wiersma, Uco; Latham, Gary P.

    1986-01-01

    The practicality of three appraisal instruments was measured in terms of user preference, namely, behavioral observation scales (BOS), behavioral expectation scales (BES), and trait scales. In all instances, BOS were preferred to BES, and in all but two instances, BOS were viewed as superior to trait scales. (Author/ABB)

  19. Comparing the theoretical versions of the Beaufort scale, the T-Scale and the Fujita scale

    NASA Astrophysics Data System (ADS)

    Meaden, G. Terence; Kochev, S.; Kolendowicz, L.; Kosa-Kiss, A.; Marcinoniene, Izolda; Sioutas, Michalis; Tooming, Heino; Tyrrell, John

    2007-02-01

    2005 is the bicentenary of the Beaufort Scale and its wind-speed codes: the marine version in 1805 and the land version later. In the 1920s, when anemometers had come into general use, the Beaufort Scale was quantified by a formula based on experiment. In the early 1970s two tornado wind-speed scales were proposed: (1) an International T-Scale based on the Beaufort Scale; and (2) Fujita's damage scale developed for North America. The International Beaufort Scale and the T-Scale share a common root in having an integral theoretical relationship with an established scientific basis, whereas Fujita's Scale introduces criteria that make its intensities non-integral with Beaufort. Forces on the T-Scale, where T stands for Tornado force, span the range 0 to 10, which is highly useful worldwide. The shorter range of Fujita's Scale (0 to 5) is acceptable for American use but less convenient elsewhere. To illustrate the simplicity of the decimal T-Scale, the mean hurricane wind speed of Beaufort 12 is T2 on the T-Scale but F1.121 on the F-Scale, while a tornado wind speed of T9 (= B26) becomes F4.761. However, the three wind scales can be unified either by making F-Scale numbers exactly half the magnitude of T-Scale numbers [i.e. F'half = T / 2 = (B / 4) - 2] or by doubling the numbers of this revised version to give integral equivalence with the T-Scale. The result is a decimal formula F'double = T = (B / 2) - 4, named the TF-Scale, where TF stands for Tornado Force. This harmonious 10-digit scale has all the criteria needed for worldwide practical effectiveness.
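The integral relationship quoted in the abstract, T = (B / 2) - 4, can be written directly in code, with the abstract's own examples (Beaufort 12 is T2; T9 corresponds to B26) as checks. Function names are illustrative:

```python
# Conversions implied by the abstract's formula T = (B / 2) - 4,
# inverted as B = 2 * (T + 4).  The Fujita equivalents quoted in the
# text (F1.121 for B12, F4.761 for T9) come from Fujita's separate
# wind-speed formula and are not reproduced here.
def beaufort_to_t(b: float) -> float:
    """T-Scale force from Beaufort number."""
    return b / 2 - 4

def t_to_beaufort(t: float) -> float:
    """Beaufort number from T-Scale force."""
    return 2 * (t + 4)

print(beaufort_to_t(12))   # mean hurricane wind, B12 -> T2
print(t_to_beaufort(9))    # tornado T9 -> B26
```

The half-magnitude variant F'half = T / 2 and the doubled TF-Scale F'double = T follow trivially from the same function.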

  20. Soil organic carbon across scales.

    PubMed

    O'Rourke, Sharon M; Angers, Denis A; Holden, Nicholas M; McBratney, Alex B

    2015-10-01

    Mechanistic understanding of scale effects is important for interpreting the processes that control the global carbon cycle. Greater attention should be given to scale in soil organic carbon (SOC) science so that we can devise better policy to protect/enhance existing SOC stocks and ensure sustainable use of soils. Global issues such as climate change require consideration of SOC stock changes at the global and biosphere scale, but human interaction occurs at the landscape scale, with consequences at the pedon, aggregate and particle scales. This review evaluates our understanding of SOC across all these scales in the context of the processes involved in SOC cycling at each scale and with emphasis on stabilizing SOC. Current synergy between science and policy is explored at each scale to determine how well each is represented in the management of SOC. An outline of how SOC might be integrated into a framework of soil security is examined. We conclude that SOC processes at the biosphere to biome scales are not well understood. Instead, SOC has come to be viewed as a large-scale pool subject to carbon flux. Better understanding exists for SOC processes operating at the scales of the pedon, aggregate and particle. At the landscape scale, the influence of large- and small-scale processes has the greatest interaction and is exposed to the greatest modification through agricultural management. Policy implemented at regional or national scale tends to focus at the landscape scale without due consideration of the larger scale factors controlling SOC or the impacts of policy for SOC at the smaller SOC scales. What is required is a framework that can be integrated across a continuum of scales to optimize SOC management. PMID:25918852

  1. Reconsidering Fault Slip Scaling

    NASA Astrophysics Data System (ADS)

    Gomberg, J. S.; Wech, A.; Creager, K. C.; Obara, K.; Agnew, D. C.

    2015-12-01

    The scaling of fault slip events given by the relationship between the scalar moment M0, and duration T, potentially provides key constraints on the underlying physics controlling slip. Many studies have suggested that measurements of M0 and T are related as M0=KfT3 for 'fast' slip events (earthquakes) and M0=KsT for 'slow' slip events, in which Kf and Ks are proportionality constants, although some studies have inferred intermediate relations. Here 'slow' and 'fast' refer to slip front propagation velocities, either so slow that seismic radiation is too small or long period to be measurable or fast enough that dynamic processes may be important for the slip process and measurable seismic waves radiate. Numerous models have been proposed to explain the differing M0-T scaling relations. We show that a single, simple dislocation model of slip events within a bounded slip zone may explain nearly all M0-T observations. Rather than different scaling for fast and slow populations, we suggest that within each population the scaling changes from M0 proportional to T3 to T when the slipping area reaches the slip zone boundaries and transitions from unbounded, 2-dimensional to bounded, 1-dimensional growth. This transition has not been apparent previously for slow events because data have sampled only the bounded regime and may be obscured for earthquakes when observations from multiple tectonic regions are combined. We have attempted to sample the expected transition between bounded and unbounded regimes for the slow slip population, measuring tremor cluster parameters from catalogs for Japan and Cascadia and using them as proxies for small slow slip event characteristics. For fast events we employed published earthquake slip models. Observations corroborate our hypothesis, but highlight observational difficulties. We find that M0-T observations for both slow and fast slip events, spanning 12 orders of magnitude in M0, are consistent with a single model based on dislocation
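    The two end-member scalings above can be written as a single power law, M0 = K * T**n, with n = 3 for fast events and n = 1 for slow events. A toy illustration of how sharply the two regimes differ (the constant K is an arbitrary placeholder, not a fitted value):

```python
def moment(duration, k, exponent):
    """Scalar moment under a power-law scaling M0 = K * T**n."""
    return k * duration ** exponent

# With n = 3, doubling the duration multiplies the moment by 8;
# with n = 1, it merely doubles.
fast_ratio = moment(2.0, 1.0, 3) / moment(1.0, 1.0, 3)
slow_ratio = moment(2.0, 1.0, 1) / moment(1.0, 1.0, 1)
print(fast_ratio, slow_ratio)  # 8.0 2.0
```

    On a log-log plot of M0 against T these appear as straight lines of slope 3 and 1, which is how the fast and slow populations are usually distinguished.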

  2. Small-scale strength

    SciTech Connect

    Anderson, J.L.

    1995-11-01

    In the world of power project development there is a market for smaller-scale cogeneration projects in the range of 1 MW to 10 MW. In the European Union alone, this range will account for about $25 billion in value over the next 10 years. By adding the potential that exists in Eastern Europe, the numbers are even more impressive. In Europe, only about 7 percent of needed electrical power is currently produced through cogeneration installations; this is expected to change to around 15 percent by the year 2000. Less than one year ago, two equipment manufacturers formed Dutch Power Partners (DPP) to focus on the market for industrial cogeneration throughout Europe.

  3. Gravitational scaling dimensions

    SciTech Connect

    Hamber, Herbert W.

    2000-06-15

    A model for quantized gravitation based on simplicial lattice discretization is studied in detail using a comprehensive finite size scaling analysis combined with renormalization group methods. The results are consistent with a value for the universal critical exponent for gravitation, {nu}=1/3, and suggest a simple relationship between Newton's constant, the gravitational correlation length and the observable average space-time curvature. Some perhaps testable phenomenological implications of these results are discussed. To achieve a high numerical accuracy in the evaluation of the lattice path integral a dedicated parallel machine was assembled. (c) 2000 The American Physical Society.

  4. The extragalactic distance scale

    NASA Astrophysics Data System (ADS)

    Rowan-Robinson, Michael

    1988-03-01

    Recent advances in the determination of the extragalactic distance scale are discussed, reviewing the results of observational and theoretical investigations from the period 1983-1987. Consideration is given to the galactic calibration of the Cepheids, the extension of the nova method to the Virgo cluster, improvements in the supernova distance method, the reasons why the Tully-Fisher method gives distances shorter than those of other techniques, and a modified Faber-Jackson distance method for elliptical galaxies. Numerical results are compiled in extensive tables and graphs, and it is concluded that only minor corrections to the cosmological distance ladder of Rowan-Robinson (1985) are required.

  5. Static Scale Conversion (SSC)

    SciTech Connect

    2007-01-19

    The Static Scale Conversion (SSC) software is a unique enhancement to the AIMVEE system. It enables an SSC to weigh and measure vehicles and cargo dynamically (i.e., as they pass over the large scale). Included in the software is the AIMVEE computer code base. The SSC and AIMVEE computer system electronically retrieve deployment information, identify vehicles automatically, and determine total weight, individual axle weights, axle spacing, and center-of-balance for any wheeled vehicle in motion. The AIMVEE computer code system can also perform these functions statically for both wheeled vehicles and cargo. The AIMVEE computer code system incorporates digital images and applies cubing algorithms to determine the length, width, and height for cubic dimensions of both vehicle and cargo. Once all this information is stored, it electronically links to data collection and dissemination systems to provide “actual” weight and measurement information for planning, deployment, and in-transit visibility.

  6. Ultraslow scaled Brownian motion

    NASA Astrophysics Data System (ADS)

    Bodrova, Anna S.; Chechkin, Aleksei V.; Cherstvy, Andrey G.; Metzler, Ralf

    2015-06-01

    We define and study in detail ultraslow scaled Brownian motion (USBM) characterized by a time dependent diffusion coefficient of the form D(t)≃ 1/t. For unconfined motion the mean squared displacement (MSD) of USBM exhibits an ultraslow, logarithmic growth as a function of time, in contrast to the conventional scaled Brownian motion. In a harmonic potential the MSD of USBM does not saturate but asymptotically decays in inverse proportion to time, reflecting the highly non-stationary character of the process. We show that the process is weakly non-ergodic in the sense that the time averaged MSD does not converge to the regular MSD even at long times, and for unconfined motion combines a linear lag time dependence with a logarithmic term. The weakly non-ergodic behaviour is quantified in terms of the ergodicity breaking parameter. The USBM process is also shown to be ageing: observables of the system depend on the time gap between initiation of the test particle and start of the measurement of its motion. Our analytical results are shown to agree excellently with extensive computer simulations.
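    For unconfined scaled Brownian motion the MSD follows from MSD(t) = 2∫₀ᵗ D(t′) dt′. Using a regularized form D(t) = D0/(t0 + t) (the constants D0 and t0 are illustrative, not from the paper), a short sketch of the logarithmic growth described above:

```python
import math

def msd_usbm(t, d0=1.0, t0=1.0):
    """MSD(t) = 2 * integral_0^t D(t') dt' for D(t) = d0 / (t0 + t),
    which evaluates to 2 * d0 * ln(1 + t / t0): logarithmic growth."""
    return 2 * d0 * math.log(1 + t / t0)

# Multiplying the elapsed time by ~e adds only a constant increment
# to the MSD, the signature of ultraslow diffusion.
print(msd_usbm(math.e - 1))       # ≈ 2.0
print(msd_usbm(math.e ** 2 - 1))  # ≈ 4.0
```

    The same integral with the conventional SBM form D(t) ∝ t^(α-1), α > 0, gives power-law rather than logarithmic growth, which is the contrast drawn in the abstract.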

  7. Scaling in Transportation Networks

    PubMed Central

    Louf, Rémi; Roth, Camille; Barthelemy, Marc

    2014-01-01

    Subway systems span most large cities, and railway networks most countries in the world. These networks are fundamental in the development of countries and their cities, and it is therefore crucial to understand their formation and evolution. However, if the topological properties of these networks are fairly well understood, how they relate to population and socio-economical properties remains an open question. We propose here a general coarse-grained approach, based on a cost-benefit analysis that accounts for the scaling properties of the main quantities characterizing these systems (the number of stations, the total length, and the ridership) with the substrate's population, area and wealth. More precisely, we show that the length, number of stations and ridership of subways and rail networks can be estimated knowing the area, population and wealth of the underlying region. These predictions are in good agreement with data gathered for about subway systems and more than railway networks in the world. We also show that train networks and subway systems can be described within the same framework, but with a fundamental difference: while the interstation distance seems to be constant and determined by the typical walking distance for subways, the interstation distance for railways scales with the number of stations. PMID:25029528

  9. A scale of risk.

    PubMed

    Gardoni, Paolo; Murphy, Colleen

    2014-07-01

    This article proposes a conceptual framework for ranking the relative gravity of diverse risks. This framework identifies the moral considerations that should inform the evaluation and comparison of diverse risks. A common definition of risk includes two dimensions: the probability of occurrence and the associated consequences of a set of hazardous scenarios. This article first expands this definition to include a third dimension: the source of a risk. The source of a risk refers to the agents involved in the creation or maintenance of a risk and captures a central moral concern about risks. Then, a scale of risk is proposed to categorize risks along a multidimensional ranking, based on a comparative evaluation of the consequences, probability, and source of a given risk. A risk is ranked higher on the scale the larger the consequences, the greater the probability, and the more morally culpable the source. The information from the proposed comparative evaluation of risks can inform the selection of priorities for risk mitigation. PMID:24372160
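    The ordering rule stated in the abstract can be sketched in code. The following is one simple lexicographic reading of "larger consequences, greater probability, more culpable source rank higher"; the article's comparative evaluation is richer, and the class and field names here are my own illustration:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    consequences: float  # severity of outcomes; larger = graver
    probability: float   # probability of occurrence
    culpability: float   # moral culpability of the source

def rank_risks(risks):
    """Order risks from gravest to least grave along the three
    dimensions named in the abstract (lexicographic comparison)."""
    key = lambda r: (r.consequences, r.probability, r.culpability)
    return sorted(risks, key=key, reverse=True)

risks = [Risk("minor", 1.0, 0.9, 0.1), Risk("major", 5.0, 0.1, 0.9)]
print([r.name for r in rank_risks(risks)])  # ['major', 'minor']
```

    A lexicographic key is only one possible aggregation; a weighted score over the three dimensions would be another, and the choice itself is a moral judgment of the kind the article discusses.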

  10. Returns to Scale and Economies of Scale: Further Observations.

    ERIC Educational Resources Information Center

    Gelles, Gregory M.; Mitchell, Douglas W.

    1996-01-01

    Maintains that most economics textbooks continue to repeat past mistakes concerning returns to scale and economies of scale under assumptions of constant and nonconstant input prices. Provides an adaptation for a calculus-based intermediate microeconomics class that demonstrates the pointwise relationship between returns to scale and economies of…

  11. A Validity Scale for the Sharp Consumer Satisfaction Scales.

    ERIC Educational Resources Information Center

    Tanner, Barry A.; Stacy, Webb, Jr.

    1985-01-01

    A validity scale for the Sharp Consumer Satisfaction Scale was developed and used in experiments to assess patients' satisfaction with community mental health centers. The scale discriminated between clients who offered suggestions and those who did not. It also improved researcher's ability to predict true scores from obtained scores. (DWH)

  12. Scale in Education Research: Towards a Multi-Scale Methodology

    ERIC Educational Resources Information Center

    Noyes, Andrew

    2013-01-01

    This article explores some theoretical and methodological problems concerned with scale in education research through a critique of a recent mixed-method project. The project was framed by scale metaphors drawn from the physical and earth sciences and I consider how recent thinking around scale, for example, in ecosystems and human geography might…

  13. Global scale precipitation from monthly to centennial scales: empirical space-time scaling analysis, anthropogenic effects

    NASA Astrophysics Data System (ADS)

    de Lima, Isabel; Lovejoy, Shaun

    2016-04-01

    The characterization of precipitation scaling regimes represents a key contribution to the improved understanding of space-time precipitation variability, which is the focus here. We conduct space-time scaling analyses of spectra and Haar fluctuations in precipitation, using three global scale precipitation products (one instrument based, one reanalysis based, one satellite and gauge based), from monthly to centennial scales and planetary down to several hundred kilometers in spatial scale. Results show the presence - similarly to other atmospheric fields - of an intermediate "macroweather" regime between the familiar weather and climate regimes: we systematically characterize the temporal, spatial, and joint space-time statistics and variability of macroweather precipitation, and the outer scale limit of temporal scaling. These regimes qualitatively and quantitatively alternate in the way fluctuations vary with scale. In the macroweather regime, the fluctuations diminish with time scale (this is important for seasonal, annual, and decadal forecasts) while anthropogenic effects increase with time scale. Our approach determines the time scale at which the anthropogenic signal can be detected above the natural variability noise: the critical scale is about 20 - 40 yrs (depending on the product and on the spatial scale). This explains, for example, why studies that use data covering only a few decades do not easily find evidence of anthropogenic changes in precipitation as a consequence of warming: the period is too short. Overall, while showing that precipitation can be modeled with space-time scaling processes, our results clarify the different precipitation scaling regimes and further allow us to quantify the agreement (and lack of agreement) of the precipitation products as a function of space and time scales. Moreover, this work contributes to clarify a basic problem in hydro-climatology, which is to measure precipitation trends at decadal and longer scales and to

  14. A Few Problems Involving Scale.

    ERIC Educational Resources Information Center

    McKillip, William D.; Kay, Cynthia Stinnette

    1985-01-01

    Some applications of ratio and proportion to scale drawing involving geometric figures are given. The activities or problems concern the earth and space, scale speeds, and the earth-moon system. (MNS)

  15. MULTIPLE SCALES FOR SUSTAINABLE RESULTS

    EPA Science Inventory

    This session will highlight recent research that incorporates the use of multiple scales and innovative environmental accounting to better inform decisions that affect sustainability, resilience, and vulnerability at all scales. Effective decision-making involves assessment at mu...

  16. The Gains from Vertical Scaling

    ERIC Educational Resources Information Center

    Briggs, Derek C.; Domingue, Ben

    2013-01-01

    It is often assumed that a vertical scale is necessary when value-added models depend upon the gain scores of students across two or more points in time. This article examines the conditions under which the scale transformations associated with the vertical scaling process would be expected to have a significant impact on normative interpretations…

  17. Westside Test Anxiety Scale Validation

    ERIC Educational Resources Information Center

    Driscoll, Richard

    2007-01-01

    The Westside Test Anxiety Scale is a brief, ten item instrument designed to identify students with anxiety impairments who could benefit from an anxiety-reduction intervention. The scale items cover self-assessed anxiety impairment and cognitions which can impair performance. Correlations between anxiety-reduction as measured by the scale and…

  18. Indian scales and inventories.

    PubMed

    Venkatesan, S

    2010-01-01

    This conceptual, perspective and review paper on Indian scales and inventories begins with clarification on the historical and contemporary meanings of psychometry before linking itself to the burgeoning field of clinimetrics in their applications to the practice of clinical psychology and psychiatry. Clinimetrics is explained as a changing paradigm in the design, administration, and interpretation of quantitative tests, techniques or procedures applied to measurement of clinical variables, traits and processes. As an illustrative sample, this article assembles a bibliographic survey of about 105 out of 2582 research papers (4.07%) scanned through 51 back dated volumes covering 185 issues related to clinimetry as reviewed across a span of over fifty years (1958-2009) in the Indian Journal of Psychiatry. A content analysis of the contributions across distinct categories of mental measurements is explained before linkages are proposed for future directions along these lines. PMID:21836709

  19. Scaling aircraft noise perception.

    NASA Technical Reports Server (NTRS)

    Ollerhead, J. B.

    1973-01-01

    Following a brief review of the background to the study, an extensive experiment is described which was undertaken to assess the practical differences between numerous alternative methods for calculating the perceived levels of individual aircraft flyover sounds. One hundred and twenty recorded sounds, including jets, turboprops, piston aircraft and helicopters, were rated by a panel of subjects in a pair comparison test. The results were analyzed to evaluate a number of noise rating procedures, in terms of their ability to accurately estimate both relative and absolute perceived noise levels over a wider dynamic range (84-115 dB SPL) than had generally been used in previous experiments. Performances of the different scales were examined in detail for different aircraft categories, and the merits of different band level summation procedures, frequency weighting functions, duration and tone corrections were investigated.

  20. Indian scales and inventories

    PubMed Central

    Venkatesan, S.

    2010-01-01

    This conceptual, perspective and review paper on Indian scales and inventories begins with clarification on the historical and contemporary meanings of psychometry before linking itself to the burgeoning field of clinimetrics in their applications to the practice of clinical psychology and psychiatry. Clinimetrics is explained as a changing paradigm in the design, administration, and interpretation of quantitative tests, techniques or procedures applied to measurement of clinical variables, traits and processes. As an illustrative sample, this article assembles a bibliographic survey of about 105 out of 2582 research papers (4.07%) scanned through 51 back dated volumes covering 185 issues related to clinimetry as reviewed across a span of over fifty years (1958-2009) in the Indian Journal of Psychiatry. A content analysis of the contributions across distinct categories of mental measurements is explained before linkages are proposed for future directions along these lines. PMID:21836709

  1. Galactic-scale civilization

    NASA Technical Reports Server (NTRS)

    Kuiper, T. B. H.

    1980-01-01

    Evolutionary arguments are presented in favor of the existence of civilization on a galactic scale. Patterns of physical, chemical, biological, social and cultural evolution leading to increasing levels of complexity are pointed out and explained thermodynamically in terms of the maximization of free energy dissipation in the environment of the organized system. The possibility of the evolution of a global and then a galactic human civilization is considered, and probabilities that the galaxy is presently in its colonization state and that life could have evolved to its present state on earth are discussed. Fermi's paradox of the absence of extraterrestrials in light of the probability of their existence is noted, and a variety of possible explanations is indicated. Finally, it is argued that although mankind may be the first occurrence of intelligence in the galaxy, it is unjustified to presume that this is so.

  2. Extreme Scale Computational Astrophysics

    NASA Astrophysics Data System (ADS)

    Shoemaker, Deirdre

    2009-11-01

    We live in extraordinary times. With increasingly sophisticated observatories opening up new vistas on the universe, astrophysics is becoming more complex and data-driven. Success in understanding astrophysical systems that are inherently multi-physical and nonlinear demands realism in our models of the phenomena. We cannot hope to advance the realism of these models to match the expected sophistication of future observations without extreme-scale computation. Just one example is the advent of gravitational wave astronomy. Detectors like LIGO are about to make the first ever detection of gravitational waves. The gravitational waves are produced during violent events such as the merger of two black holes. The detection of these waves or ripples in the fabric of spacetime is a formidable undertaking, requiring innovative engineering, powerful data analysis tools and careful theoretical modeling. I will discuss the computational and theoretical challenges ahead in our new understanding of physics and astronomy where gravity exhibits its strongest grip on our spacetime.

  3. An elastica arm scale

    PubMed Central

    Bosi, F.; Misseroni, D.; Dal Corso, F.; Bigoni, D.

    2014-01-01

    The concept of a ‘deformable arm scale’ (completely different from a traditional rigid arm balance) is theoretically introduced and experimentally validated. The idea is not intuitive, but is the result of nonlinear equilibrium kinematics of rods inducing configurational forces, so that deflection of the arms becomes necessary for equilibrium, which would be impossible for a rigid system. In particular, the rigid arms of usual scales are replaced by a flexible elastic lamina, free to slide in a frictionless and inclined sliding sleeve, which can reach a unique equilibrium configuration when two vertical dead loads are applied. Prototypes designed to demonstrate the feasibility of the system show a high accuracy in the measurement of load within a certain range of use. Finally, we show that the presented results are strongly related to snaking of confined beams, with implications for locomotion of serpents, plumbing and smart oil drilling. PMID:25197248

  4. The Outcome of Agitation in Poisoned Patients in an Iranian Tertiary Care University Hospital

    PubMed Central

    Sabzghabaee, Ali Mohammad; Yaraghi, Ahmad; Khalilidehkordi, Elham; Mirhosseini, Seyyed Mohammad Mahdy; Beheshtian, Elham; Eizadi-Mood, Nastaran

    2014-01-01

    Introduction. This study was conducted to evaluate and document the frequency and causes of agitation, the symptoms accompanying this condition in intoxications, the relationship between agitation score on admission and different variables, and the outcome of therapy in a tertiary care referral poisoning center in Iran. Methods. In this prospective observational study, which was done in 2012, 3010 patients were screened for agitation at the time of admission using the Richmond Agitation Sedation Scale. Demographic data including age, gender, and the drug ingested were also recorded. The patients' outcome was categorized as recovery without complications, recovery with complications (hyperthermia, renal failure, and other causes), and death. Results. Agitation was observed in 56 patients (males, n = 41), mostly aged 19–40 years (n = 38) and more frequently in illegal substance (stimulants, opioids and also alcohol) abusers. Agitation score was not significantly related to the age, gender, and previous history of psychiatric disorders. Forty-nine patients had recovery without any complication. The need for mechanical ventilation was the most frequent complication. None of the patients died. Conclusion. Drug abuse should be considered as an etiology in patients presenting with acute agitation, and morbidity and mortality can be low in agitated poisoning cases when prompt supportive care is provided. PMID:25548668

  5. Fine-scale Textures

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Released 19 May 2003

    This image shows fine-scale textures around a crater southwest of Athabasca Vallis. These fine-scale ridges are most likely the remnants of older, flood-eroded layered rocks and not longitudinal grooves carved out of the landscape by flooding; note that the features are ridges, not grooves. Also note the layers visible on the southeast side of the island.

    Image information: VIS instrument. Latitude 9.6, Longitude 155.9 East (204.1). 19 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  6. Full Scale Tunnel (FST)

    NASA Technical Reports Server (NTRS)

    1930-01-01

    Construction of Full Scale Tunnel (FST). In November 1929, Smith DeFrance submitted his recommendations for the general design of the Full Scale Wind Tunnel. The last on his list concerned the division of labor required to build this unusual facility. He believed the job had five parts and described them as follows: 'It is proposed that invitations be sent out for bids on five groups of items. The first would be for one contract on the complete structure; second the same as first, including the erection of the cones but not the fabrication, since this would be more of a shipyard job; third would cover structural steel, cover, sash and doors, but not cones or foundation; fourth, foundations; and fifth, fabrication of cones.' DeFrance's memorandum prompted the NACA to solicit estimates from a large number of companies. Preliminary designs and estimates were prepared and submitted to the Bureau of the Budget and Congress appropriated funds on February 20, 1929. The main construction contract with the J.A. Jones Company of Charlotte, North Carolina was signed one year later on February 12, 1930. It was a peculiar structure as the building's steel framework is visible on the outside of the building. DeFrance described this in NACA TR No. 459: 'The entire equipment is housed in a structure, the outside walls of which serve as the outer walls of the return passages. The over-all length of the tunnel is 434 feet 6 inches, the width 222 feet, and the maximum height 97 feet. The framework is of structural steel....' (pp. 292-293)

  8. Scaling up: Assessing social impacts at the macro-scale

    SciTech Connect

    Schirmer, Jacki

    2011-04-15

    Social impacts occur at various scales, from the micro-scale of the individual to the macro-scale of the community. Identifying the macro-scale social changes that result from an impacting event is a common goal of social impact assessment (SIA), but is challenging as multiple factors simultaneously influence social trends at any given time, and there are usually only a small number of cases available for examination. While some methods have been proposed for establishing the contribution of an impacting event to macro-scale social change, they remain relatively untested. This paper critically reviews methods recommended to assess macro-scale social impacts, and proposes and demonstrates a new approach. The 'scaling up' method involves developing a chain of logic linking change at the individual/site scale to the community scale. It enables a more problematised assessment of the likely contribution of an impacting event to macro-scale social change than previous approaches. The use of this approach in a recent study of change in dairy farming in south east Australia is described.

  9. The Wilderness Novelty Seeking Scale.

    PubMed

    Próchniak, Piotr

    2014-10-01

    The purpose was to present a new scale of novelty seeking in the context of wilderness. A study of the psychometric properties of the Wilderness Novelty Seeking Scale was conducted, with an exploratory and a confirmatory factor analysis being carried out and the coefficients of the scale's reliability and stability over time being tested. The convergent validity of the WNSS scale was indicated by positive correlations with sensation seeking, openness to experience, and need for cognition. The divergent validity of the WNSS scale was indicated by non-significant correlations with state-trait anxiety and depression. The correlation between the Wilderness Novelty Seeking Scale and psychological well-being was analyzed. The Wilderness Novelty Seeking Scale seems to be a reliable and valid tool. PMID:25202995

  10. Solar system to scale

    NASA Astrophysics Data System (ADS)

    Gerwig López, Susanne

    2016-04-01

    One of the most important successes in astronomical observations has been to determine the limits of the Solar System. The first man able to measure the Earth-Sun distance with only a slight error, in the second century BC, is said to have been the Greek astronomer Aristarchus of Samos. Thanks to Newton's law of universal gravitation, it was possible to measure, with a small margin of error, the distances between the Sun and the planets. Twelve-year-old students are very interested in everything related to the universe. However, it seems too difficult for them to imagine and understand the real distances among the different celestial bodies. To teach the differences between the inner and outer planets and how far away the outer ones are, I decided to have my pupils work on the sizes and distances in our solar system by constructing it to scale. The purpose is to reproduce our solar system to scale on a cardboard. The procedure is very easy and simple. Students in the first year of ESO (12 years old) receive the instructions on a sheet of paper (things they need: a black cardboard, a pair of scissors, colored pencils, a ruler, adhesive tape, glue, the photocopies of the planets and satellites, the measurements they have to use). In another photocopy they get the pictures of the edge of the sun, the planets, dwarf planets and some satellites, which they have to color, cut and stick on the cardboard. This activity is planned for both Spanish and bilingual learning students as a science project. Depending on the group, they receive these instructions in Spanish or in English. When the time is over, the students bring their work on the cardboard to class. They obtain a final mark: passing, good or excellent, depending on the accuracy of the measurements, the position of all the celestial bodies, the asteroid belts, personal contributions, etc. If any of the students has not followed the instructions they get the chance to remake it again properly, in order not
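The scale arithmetic behind such a cardboard model is simple enough to script. A minimal sketch (the 100 cm board length is an assumed size; the semi-major axes in AU are standard rounded values):

```python
# Scaled distances for a classroom solar-system model. Semi-major
# axes in astronomical units are standard rounded values; the 100 cm
# board length is an assumed size for the cardboard.
AU_DISTANCES = {
    "Mercury": 0.39, "Venus": 0.72, "Earth": 1.00, "Mars": 1.52,
    "Jupiter": 5.20, "Saturn": 9.58, "Uranus": 19.2, "Neptune": 30.1,
}

BOARD_LENGTH_CM = 100.0
scale = BOARD_LENGTH_CM / max(AU_DISTANCES.values())  # cm per AU

for planet, au in AU_DISTANCES.items():
    print(f"{planet:8s} {au:6.2f} AU -> {au * scale:6.1f} cm from the Sun")
```

With this scale Neptune sits at the far edge of the board and Earth at roughly 3.3 cm from the Sun, which makes the inner/outer-planet contrast immediately visible.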

  11. UltraScale Computing

    NASA Astrophysics Data System (ADS)

    Maynard, Jr.

    1997-08-01

    The Defense Advanced Research Projects Agency Information Technology Office (DARPA/ITO) supports research in technology for defense-critical applications. Defense applications are insatiable consumers of computing. Futuristic applications such as automated image interpretation, whole-vehicle radar cross-section computation, real-time prototyping, and faster-than-real-time simulation will require computing capabilities orders of magnitude beyond the best performance that can be projected from contemporary scalable parallel processors. To reach beyond the silicon digital paradigm, DARPA has initiated a program in UltraScale Computing to explore the domain of innovative computational models, methods, and mechanisms. The objective is to encourage a complete re-thinking of computing. Novel architectures, program synthesis, and execution environments are needed, as well as alternative underlying physical mechanisms including molecular, biological, optical and quantum mechanical processes. Development of these advanced computing technologies will offer spectacular performance and cost improvements beyond the threshold of traditional materials and processes. The talk will focus on novel approaches for employing vastly more computational units than shrinking transistors will enable, and on exploration of the biological options for solving computationally difficult problems.

  12. SPACE BASED INTERCEPTOR SCALING

    SciTech Connect

    G. CANAVAN

    2001-02-01

    Space Based Interceptors (SBI) have ranges that are adequate to address rogue ICBMs. They are not overly sensitive to 30-60 s delay times. Current technologies would support boost phase intercept with about 150 interceptors. Higher acceleration and velocity could reduce that number by about a factor of 3 at the cost of heavier and more expensive Kinetic Kill Vehicles (KKVs). 6g SBI would reduce optimal constellation costs by about 35%; 8g SBI would reduce them another 20%. Interceptor ranges fall rapidly with theater missile range. Constellations increase significantly for ranges under 3,000 km, even with advanced interceptor technology. For distributed launches, these estimates recover earlier strategic scalings, which demonstrate the improved absentee ratio for larger or multiple launch areas. Constellations increase with the number of missiles and the number of interceptors launched at each. The economic estimates above suggest that two SBI per missile with a modest midcourse underlay is appropriate. The SBI KKV technology would appear to be common for space- and surface-based boost phase systems, and could have synergisms with improved midcourse intercept and discrimination systems. While advanced technology could be helpful in reducing costs, particularly for short range theater missiles, current technology appears adequate for pressing rogue ICBM, accidental, and unauthorized launches.

  13. Loops: Twisting and Scaling

    NASA Astrophysics Data System (ADS)

    Walsh, R. W.

    2004-01-01

    Loop-like structures are the fundamental magnetic building blocks of the solar atmosphere. Recent space-based EUV and X-ray satellite observations (from Yohkoh, SOHO, and TRACE) have challenged the view that these features are simply static, gravitationally stratified plasma pipes. Rather, it is now surmised that each loop may consist of a bundle of fine plasma threads that are twisted around one another and can brighten independently. This invited review will outline the latest developments in "untangling" the topology of these features through three-dimensional magnetohydrodynamic modelling, and how their properties are being deduced through spectroscopic observations coupled to theoretical scaling laws. In particular, recent interest has centred on how the observed thermal profile along loops can be employed as a tool to diagnose any localised energy input to the structure and hence constrain the presence of a particular coronal heating mechanism. The dynamic nature of loops will be highlighted and the implications of superior-resolution plasma thread observations (whether spatial, temporal, or spectral) from future space missions (SolarB, STEREO, SDO, and Solar Orbiter) will be discussed.

  14. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
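To make the combinatorial-explosion point concrete: the cheapest alternative to a multi-hypothesis tracker is greedy nearest-neighbour association, which never enumerates hypotheses but easily confuses closely spaced targets. A toy sketch (not one of the algorithms evaluated in the report; the gate value is illustrative):

```python
import math

def greedy_associate(tracks, detections, gate=5.0):
    """Greedily match existing track positions to new detections.

    Each track claims the closest unclaimed detection within the
    gating distance; unmatched tracks get no entry. Returns a dict
    {track_index: detection_index}.
    """
    pairs = sorted(
        ((math.dist(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        key=lambda p: p[0],
    )
    assigned, used_t, used_d = {}, set(), set()
    for dist, ti, di in pairs:
        if dist > gate:
            break  # remaining pairs are even farther apart
        if ti not in used_t and di not in used_d:
            assigned[ti] = di
            used_t.add(ti)
            used_d.add(di)
    return assigned

tracks = [(0.0, 0.0), (10.0, 10.0)]
detections = [(9.5, 10.2), (0.3, -0.1)]
print(greedy_associate(tracks, detections))  # {0: 1, 1: 0}
```

Production trackers typically replace the greedy loop with an optimal assignment (e.g. the Hungarian algorithm) and gate on predicted, not last-seen, positions.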

  15. Scaling and Urban Growth

    NASA Astrophysics Data System (ADS)

    Benguigui, L.; Czamanski, D.; Marinov, M.

    This paper presents an analysis of the growth of towns in the Tel Aviv metropolis. It indicates a similarity in the variation of populations, so that the population functions can be scaled and superposed one onto the other. This is a strong indication that the growth mechanism for all these towns is the same. Two different models are presented to interpret the population growth: one is an analytic model while the other is a computer simulation. In the dynamic analytic model, we introduce the concept of a characteristic time. The growth has two parts: in the first, the derivative is an increasing function, the town is very attractive, and there is a short delay between the decision to build and complete realization of the process. At this time, there is no shortage of land. However, around a specific time, the delay begins to increase and there is a lack of available land. The rate of population variation then decreases until saturation. The two models give a good quantitative description.
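The two-phase behaviour described here, accelerating growth up to a characteristic time followed by slowing toward saturation, can be illustrated with a logistic curve (an assumed functional form for illustration, not the paper's analytic model; all parameter values are hypothetical):

```python
import math

def population(t, K=100_000, t_c=30.0, tau=8.0):
    """Logistic growth curve illustrating the two-phase behaviour:
    the growth rate rises until the characteristic time t_c, then
    falls as the town saturates at population K. tau sets the time
    scale. Illustrative stand-in, not the paper's model."""
    return K / (1.0 + math.exp(-(t - t_c) / tau))

# Finite-difference growth rates just after t_c vs much later:
rate_early = population(31.0) - population(30.0)
rate_late = population(61.0) - population(60.0)
print(rate_early > rate_late)  # True: growth slows toward saturation
```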

  16. Large scale traffic simulations

    SciTech Connect

    Nagel, K.; Barrett, C.L.; Rickert, M.

    1997-04-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual persons' behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed much higher than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches depend both on the specific questions and on the prospective user community. The approaches range from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for Statistical Physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.
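The per-vehicle update at the heart of such microsimulations can be sketched with the single-lane Nagel-Schreckenberg cellular-automaton rules (parameter values here are illustrative, and this toy version makes no attempt at the single-bit or parallel optimizations the paper discusses):

```python
import random

def nasch_step(cars, L=100, vmax=5, p_slow=0.3, rng=random.Random(42)):
    """One parallel update of the Nagel-Schreckenberg single-lane
    cellular automaton on a ring of L cells. `cars` is a list of
    (position, velocity) pairs sorted by position; vmax and p_slow
    are illustrative parameter choices."""
    n = len(cars)
    updated = []
    for i, (x, v) in enumerate(cars):
        v = min(v + 1, vmax)                      # 1. accelerate
        gap = (cars[(i + 1) % n][0] - x - 1) % L  # empty cells ahead
        v = min(v, gap)                           # 2. brake to avoid collision
        if v > 0 and rng.random() < p_slow:
            v -= 1                                # 3. random slowdown
        updated.append(((x + v) % L, v))          # 4. move
    return sorted(updated)

cars = [(0, 0), (10, 0), (20, 0), (50, 0)]  # 4 cars, initially at rest
for _ in range(20):
    cars = nasch_step(cars)
print(len(cars))  # vehicle count is conserved: 4
```

Because step 2 caps each velocity at the gap computed from the old positions, the parallel update can never place two cars in the same cell.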

  17. Scaling of structural failure

    SciTech Connect

    Bazant, Z.P.; Chen, Er-Ping

    1997-01-01

    This article attempts to review the progress achieved in the understanding of scaling and size effect in the failure of structures. Particular emphasis is placed on quasibrittle materials for which the size effect is complicated. Attention is focused on three main types of size effects, namely the statistical size effect due to randomness of strength, the energy release size effect, and the possible size effect due to fractality of fracture or microcracks. Definitive conclusions on the applicability of these theories are drawn. Subsequently, the article discusses the application of the known size effect law for the measurement of material fracture properties, and the modeling of the size effect by the cohesive crack model, nonlocal finite element models and discrete element models. Extensions to compression failure and to the rate-dependent material behavior are also outlined. The damage constitutive law needed for describing a microcracked material in the fracture process zone is discussed. Various applications to quasibrittle materials, including concrete, sea ice, fiber composites, rocks and ceramics are presented.
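The energy-release size effect admits a compact closed form, Bazant's classical size effect law sigma_N = B*ft / sqrt(1 + D/D0). A minimal sketch (the numerical values of B*ft and D0 below are illustrative; in practice both are fitted to test data for a given material and geometry):

```python
import math

def nominal_strength(D, B_ft=4.0, D0=100.0):
    """Bazant size effect law sigma_N = B*ft / sqrt(1 + D/D0).
    D is the characteristic structure size (mm); B*ft (MPa) and
    D0 (mm) are illustrative values, normally fitted to test data
    for a given material and geometry."""
    return B_ft / math.sqrt(1.0 + D / D0)

# Small structures approach the plastic strength limit; large ones
# approach the LEFM asymptote sigma_N ~ D**-0.5.
for D in (10, 100, 1000, 10000):
    print(f"D = {D:6d} mm  sigma_N = {nominal_strength(D):.3f} MPa")
```

The law interpolates between the two asymptotes the review contrasts: strength-controlled failure for D much smaller than D0 and the fracture-mechanics scaling for D much larger than D0.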

  18. Observation of scaling violations in scaled momentum distributions at HERA

    NASA Astrophysics Data System (ADS)

    ZEUS Collaboration; Breitweg, J.; Derrick, M.; Krakauer, D.; Magill, S.; Mikunas, D.; Musgrave, B.; Repond, J.; Stanek, R.; Talaga, R. L.; Yoshida, R.; Zhang, H.; Mattingly, M. C. K.; Anselmo, F.; Antonioli, P.; Bari, G.; Basile, M.; Bellagamba, L.; Boscherini, D.; Bruni, A.; Bruni, G.; Cara Romeo, G.; Castellini, G.; Cifarelli, L.; Cindolo, F.; Contin, A.; Corradi, M.; de Pasquale, S.; Gialas, I.; Giusti, P.; Iacobucci, G.; Laurenti, G.; Levi, G.; Margotti, A.; Massam, T.; Nania, R.; Palmonari, F.; Pesci, A.; Polini, A.; Ricci, F.; Sartorelli, G.; Zamora Garcia, Y.; Zichichi, A.; Amelung, C.; Bornheim, A.; Brock, I.; Coböken, K.; Crittenden, J.; Deffner, R.; Eckert, M.; Grothe, M.; Hartmann, H.; Heinloth, K.; Heinz, L.; Hilger, E.; Jakob, H.-P.; Katz, U. F.; Kerger, R.; Paul, E.; Pfeiffer, M.; Rembser, Ch.; Stamm, J.; Wedemeyer, R.; Wieber, H.; Bailey, D. S.; Campbell-Robson, S.; Cottingham, W. N.; Foster, B.; Hall-Wilton, R.; Hayes, M. E.; Heath, G. P.; Heath, H. F.; McFall, J. D.; Piccioni, D.; Roff, D. G.; Tapper, R. J.; Arneodo, M.; Ayad, R.; Capua, M.; Garfagnini, A.; Iannotti, L.; Schioppa, M.; Susinno, G.; Kim, J. Y.; Lee, J. H.; Lim, I. T.; Pac, M. Y.; Caldwell, A.; Cartiglia, N.; Jing, Z.; Liu, W.; Mellado, B.; Parsons, J. A.; Ritz, S.; Sampson, S.; Sciulli, F.; Straub, P. B.; Zhu, Q.; Borzemski, P.; Chwastowski, J.; Eskreys, A.; Figiel, J.; Klimek, K.; Przybycień , M. B.; Zawiejski, L.; Adamczyk, L.; Bednarek, B.; Bukowy, M.; Jeleń , K.; Kisielewska, D.; Kowalski, T.; Przybycień , M.; Rulikowska-Zarȩ Bska, E.; Suszycki, L.; Zaja C, J.; Duliń Ski, Z.; Kotań Ski, A.; Abbiendi, G.; Bauerdick, L. A. T.; Behrens, U.; Beier, H.; Bienlein, J. K.; Cases, G.; Deppe, O.; Desler, K.; Drews, G.; Fricke, U.; Gilkinson, D. J.; Glasman, C.; Göttlicher, P.; Haas, T.; Hain, W.; Hasell, D.; Johnson, K. F.; Kasemann, M.; Koch, W.; Kötz, U.; Kowalski, H.; Labs, J.; Lindemann, L.; Löhr, B.; Löwe, M.; Mań Czak, O.; Milewski, J.; Monteiro, T.; Ng, J. S. 
T.; Notz, D.; Ohrenberg, K.; Park, I. H.; Pellegrino, A.; Pelucchi, F.; Piotrzkowski, K.; Roco, M.; Rohde, M.; Roldán, J.; Ryan, J. J.; Savin, A. A.; Schneekloth, U.; Selonke, F.; Surrow, B.; Tassi, E.; Voß, T.; Westphal, D.; Wolf, G.; Wollmer, U.; Youngman, C.; Zsolararnecki, A. F.; Zeuner, W.; Burow, B. D.; Grabosch, H. J.; Meyer, A.; Schlenstedt, S.; Barbagli, G.; Gallo, E.; Pelfer, P.; Maccarrone, G.; Votano, L.; Bamberger, A.; Eisenhardt, S.; Markun, P.; Trefzger, T.; Wölfle, S.; Bromley, J. T.; Brook, N. H.; Bussey, P. J.; Doyle, A. T.; MacDonald, N.; Saxon, D. H.; Sinclair, L. E.; Strickland, E.; Waugh, R.; Bohnet, I.; Gendner, N.; Holm, U.; Meyer-Larsen, A.; Salehi, H.; Wick, K.; Gladilin, L. K.; Horstmann, D.; Kçira, D.; Klanner, R.; Lohrmann, E.; Poelz, G.; Schott, W.; Zetsche, F.; Bacon, T. C.; Butterworth, I.; Cole, J. E.; Howell, G.; Hung, B. H. Y.; Lamberti, L.; Long, K. R.; Miller, D. B.; Pavel, N.; Prinias, A.; Sedgbeer, J. K.; Sideris, D.; Mallik, U.; Wang, S. M.; Wu, J. T.; Cloth, P.; Filges, D.; Fleck, J. I.; Ishii, T.; Kuze, M.; Suzuki, I.; Tokushuku, K.; Yamada, S.; Yamauchi, K.; Yamazaki, Y.; Hong, S. J.; Lee, S. B.; Nam, S. W.; Park, S. K.; Barreiro, F.; Fernández, J. P.; García, G.; Graciani, R.; Hernández, J. M.; Hervás, L.; Labarga, L.; Martínez, M.; del Peso, J.; Puga, J.; Terrón, J.; de Trocóniz, J. F.; Corriveau, F.; Hanna, D. S.; Hartmann, J.; Hung, L. W.; Murray, W. N.; Ochs, A.; Riveline, M.; Stairs, D. G.; St-Laurent, M.; Ullmann, R.; Tsurugai, T.; Bashkirov, V.; Dolgoshein, B. A.; Stifutkin, A.; Bashindzhagyan, G. L.; Ermolov, P. F.; Golubkov, Yu. A.; Khein, L. A.; Korotkova, N. A.; Korzhavina, I. A.; Kuzmin, V. A.; Lukina, O. Yu.; Proskuryakov, A. S.; Shcheglova, L. M.; Solomin, A. N.; Zotkin, S. A.; Bokel, C.; Botje, M.; Brümmer, N.; Chlebana, F.; Engelen, J.; Koffeman, E.; Kooijman, P.; van Sighem, A.; Tiecke, H.; Tuning, N.; Verkerke, W.; Vossebeld, J.; Vreeswijk, M.; Wiggers, L.; de Wolf, E.; Acosta, D.; Bylsma, B.; Durkin, L. 
S.; Gilmore, J.; Ginsburg, C. M.; Kim, C. L.; Ling, T. Y.; Nylander, P.; Romanowski, T. A.; Blaikley, H. E.; Cashmore, R. J.; Cooper-Sarkar, A. M.; Devenish, R. C. E.; Edmonds, J. K.; Große-Knetter, J.; Harnew, N.; Lancaster, M.; Nath, C.; Noyes, V. A.; Quadt, A.; Ruske, O.; Tickner, J. R.; Uijterwaal, H.; Walczak, R.; Waters, D. S.; Bertolin, A.; Brugnera, R.; Carlin, R.; dal Corso, F.; Dosselli, U.; Limentani, S.; Morandin, M.; Posocco, M.; Stanco, L.; Stroili, R.; Voci, C.; Bulmahn, J.; Oh, B. Y.; Okrasiń Ski, J. R.; Toothacker, W. S.; Whitmore, J. J.; Iga, Y.; D'Agostini, G.; Marini, G.; Nigro, A.; Raso, M.; Hart, J. C.; McCubbin, N. A.; Shah, T. P.; Epperson, D.; Heusch, C.; Rahn, J. T.; Sadrozinski, H. F.-W.; Seiden, A.; Wichmann, R.; Williams, D. C.; Schwarzer, O.; Walenta, A. H.; Abramowicz, H.; Briskin, G.; Dagan, S.; Kananov, S.; Levy, A.; Abe, T.; Fusayasu, T.; Inuzuka, M.; Nagano, K.; Umemori, K.; Yamashita, T.; Hamatsu, R.; Hirose, T.; Homma, K.; Kitamura, S.; Matsushita, T.; Cirio, R.; Costa, M.; Ferrero, M. I.; Maselli, S.; Monaco, V.; Peroni, C.; Petrucci, M. C.; Ruspa, M.; Sacchi, R.; Solano, A.; Staiano, A.; Dardo, M.; Bailey, D. C.; Fagerstroem, C.-P.; Galea, R.; Hartner, G. F.; Joo, K. K.; Levman, G. M.; Martin, J. F.; Orr, R. S.; Polenz, S.; Sabetfakhri, A.; Simmons, D.; Teuscher, R. J.; Butterworth, J. M.; Catterall, C. D.; Jones, T. W.; Lane, J. B.; Saunders, R. L.; Shulman, J.; Sutton, M. R.; Wing, M.; Ciborowski, J.; Grzelak, G.; Kasprzak, M.; Muchorowski, K.; Nowak, R. J.; Pawlak, J. M.; Pawlak, R.; Tymieniecka, T.; Wróblewski, A. K.; Zakrzewski, J. A.; Adamus, M.; Coldewey, C.; Eisenberg, Y.; Hochman, D.; Karshon, U.; Badgett, W. F.; Chapin, D.; Cross, R.; Dasu, S.; Foudas, C.; Loveless, R. J.; Mattingly, S.; Reeder, D. D.; Smith, W. H.; Vaiciulis, A.; Wodarczyk, M.; Bhadra, S.; Frisken, W. R.; Khakzad, M.; Schmidke, W. B.

    1997-11-01

    Charged particle production has been measured in deep inelastic scattering (DIS) events over a large range of x and Q2 using the ZEUS detector. The evolution of the scaled momentum, xp, with Q2, in the range 10 to 1280 GeV2, has been investigated in the current fragmentation region of the Breit frame. The results show clear evidence, in a single experiment, for scaling violations in scaled momenta as a function of Q2.

  19. Environmental complexity across scales: mechanism, scaling and the phenomenological fallacy

    NASA Astrophysics Data System (ADS)

    Lovejoy, Shaun

    2015-04-01

    Ever since Van Leeuwenhoek used a microscope to discover "new worlds in a drop of water" we have become used to the idea that "zooming in" - whether in space or in time - will reveal new processes, new phenomena. Yet in the natural environment - geosystems - this is often wrong. For example, in the temporal domain, a recent publication has shown that from hours to hundreds of millions of years the conventional scale-bound view of atmospheric variability was wrong by a factor of over a quadrillion (10^15). Mandelbrot challenged the "scale bound" ideology and proposed that many natural systems - including many geosystems - were instead better treated as fractal systems in which the same basic mechanism acts over potentially huge ranges of scale. However, in its original form Mandelbrot's isotropic scaling (self-similar) idea turned out to be too naïve: geosystems are typically anisotropic, so that shapes and morphologies (e.g. of clouds or landmasses) are not the same at different resolutions. However, it turns out that the scaling idea often still applies on condition that the notion of scale is generalized appropriately (using the framework of Generalized Scale Invariance). The overall result is that unique processes, unique dynamical mechanisms, may act over huge ranges of scale even though the morphologies systematically change with scale. Therefore the common practice of inferring mechanism from shapes, forms, and morphologies is unjustified: the "phenomenological fallacy". We give examples of the phenomenological fallacy drawn from diverse areas of geoscience.

  20. Full Scale Tunnel (FST)

    NASA Technical Reports Server (NTRS)

    1930-01-01

    Installation of Full Scale Tunnel (FST) power plant. Virginia Public Service Company could not supply adequate electricity to run the wind tunnels being built at Langley. (The Propeller Research Tunnel was powered by two submarine diesel engines.) This led to the consideration of a number of different ideas for generating electric power to drive the fan motors in the FST. The main proposition involved two 3000 hp and two 1000 hp diesel engines with directly connected generators. Another proposition suggested 30 Liberty motors driving 600 hp DC generators in pairs. For a month, engineers at Langley were hopeful they could secure additional diesel engines from decommissioned Navy T-boats, but the Navy could not offer a firm commitment regarding the future status of the submarines. By mid-December 1929, Virginia Public Service Company had agreed to supply service to the field at the north end of the King Street Bridge connecting Hampton and Langley Field. Thus, new plans for the FST power plant and motors were made. Smith DeFrance described the motors in NACA TR No. 459: 'The most commonly used power plant for operating a wind tunnel is a direct-current motor and motor-generator set with Ward Leonard control system. For the FST it was found that alternating current slip-ring induction motors, together with satisfactory control equipment, could be purchased for approximately 30 percent less than the direct-current equipment. Two 4000-horsepower slip-ring induction motors with 24 steps of speed between 75 and 300 r.p.m. were therefore installed.'

  1. Contact kinematics of biomimetic scales

    SciTech Connect

    Ghosh, Ranajay; Ebrahimi, Hamid; Vaziri, Ashkan

    2014-12-08

    Dermal scales, prevalent across biological groups, considerably boost survival by providing multifunctional advantages. Here, we investigate the nonlinear mechanical effects of biomimetic scale-like attachments on the behavior of an elastic substrate, brought about by the contact interaction of scales in pure bending, using qualitative experiments, analytical models, and detailed finite element (FE) analysis. Our results reveal the existence of three distinct kinematic phases of operation spanning linear, nonlinear, and rigid behavior driven by kinematic interactions of scales. The response of the modified elastic beam strongly depends on the size and spatial overlap of rigid scales. The nonlinearity is perceptible even in a relatively small strain regime and without invoking material-level complexities of either the scales or the substrate.

  2. Contact kinematics of biomimetic scales

    NASA Astrophysics Data System (ADS)

    Ghosh, Ranajay; Ebrahimi, Hamid; Vaziri, Ashkan

    2014-12-01

    Dermal scales, prevalent across biological groups, considerably boost survival by providing multifunctional advantages. Here, we investigate the nonlinear mechanical effects of biomimetic scale-like attachments on the behavior of an elastic substrate, brought about by the contact interaction of scales in pure bending, using qualitative experiments, analytical models, and detailed finite element (FE) analysis. Our results reveal the existence of three distinct kinematic phases of operation spanning linear, nonlinear, and rigid behavior driven by kinematic interactions of scales. The response of the modified elastic beam strongly depends on the size and spatial overlap of rigid scales. The nonlinearity is perceptible even in a relatively small strain regime and without invoking material-level complexities of either the scales or the substrate.

  3. The large-scale landslide risk classification in catchment scale

    NASA Astrophysics Data System (ADS)

    Liu, Che-Hsin; Wu, Tingyeh; Chen, Lien-Kuang; Lin, Sheng-Chi

    2013-04-01

    The landslide disasters during Typhoon Morakot, 2009, caused heavy casualties. This disaster is defined as a large-scale landslide because of the casualty numbers. The event also showed that surveys of large-scale landslide potential are so far insufficient, and therefore significant. Large-scale landslide potential analysis provides information about where attention should be focused, even though such areas are very difficult to distinguish. Accordingly, the authors investigated the methods used in different countries, such as Hong Kong, Italy, Japan and Switzerland, to clarify the assessment methodology. The objects include places susceptible to rock slide and dip slope, and the major landslide areas identified from historical records. Three levels of scale are necessary, from country to slopeland: basin, catchment, and slope scales. In total, ten spots were classified with high large-scale landslide potential at the basin scale. The authors therefore focus on the catchment scale and employ a risk matrix to classify the potential in this paper. Protected objects and the large-scale landslide susceptibility ratio are the two main indexes used to classify large-scale landslide risk. The protected objects are constructions and transportation facilities. The large-scale landslide susceptibility ratio is based on data from major landslide areas and from dip slope and rock slide areas. In total, 1,040 catchments are considered and classified into three levels: high, medium, and low. The proportions of the high, medium, and low levels are 11%, 51%, and 38%, respectively. The result identifies the catchments with a high proportion of protected objects or high large-scale landslide susceptibility. It will serve as base material for the slopeland authorities when considering slopeland management and further investigation.
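The risk-matrix idea, crossing an exposure index (protected objects) with a hazard index (susceptibility ratio) to reach a three-level class, can be sketched as follows; the bin thresholds and scoring are hypothetical stand-ins, not the paper's calibration:

```python
def landslide_risk(protected_objects, susceptibility_ratio):
    """Toy risk-matrix classification in the spirit of the
    catchment-scale method: crosses an exposure index (number of
    protected objects such as constructions and transport
    facilities) with a hazard index (large-scale landslide
    susceptibility ratio). Thresholds are hypothetical."""
    def bin3(x, lo, hi):
        # 0 = low band, 1 = middle band, 2 = high band
        return 0 if x < lo else (1 if x < hi else 2)

    exposure = bin3(protected_objects, 10, 50)     # facility-count bins
    hazard = bin3(susceptibility_ratio, 0.1, 0.3)  # area-fraction bins
    score = exposure + hazard                      # 0..4
    return ("low", "medium", "high")[score // 2]

print(landslide_risk(60, 0.35))  # high
print(landslide_risk(5, 0.05))   # low
```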

  4. Geometrical scaling for identified particles

    NASA Astrophysics Data System (ADS)

    Praszalowicz, Michal

    2013-12-01

    We show that recently measured transverse momentum spectra of identified particles exhibit geometrical scaling (GS) in a scaling variable τ built from the shifted transverse mass m̃_T = √(m² + p_T²) − m. We explore consequences of GS and show that both mid-rapidity multiplicity and mean transverse momenta grow as powers of the scattering energy. Furthermore, assuming a Tsallis-like parametrization of the spectra, we calculate the coefficients of this growth. We also show that the Tsallis temperature is related to the average saturation scale.
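The shifted transverse mass √(m² + p_T²) − m that replaces p_T for massive particles is straightforward to evaluate (particle masses in GeV are the standard values; the full scaling variable τ also involves the saturation scale, which is omitted here):

```python
import math

def mt_tilde(pT, m):
    """Shifted transverse mass m~_T = sqrt(m^2 + pT^2) - m (GeV),
    which reduces to pT in the massless limit."""
    return math.sqrt(m * m + pT * pT) - m

# At the same pT, heavier species sit at smaller m~_T.
for name, m in (("pion", 0.140), ("kaon", 0.494), ("proton", 0.938)):
    print(f"{name:6s} m~_T(pT = 1 GeV) = {mt_tilde(1.0, m):.3f} GeV")
```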

  5. Discrete implementations of scale transform

    NASA Astrophysics Data System (ADS)

    Djurdjanovic, Dragan; Williams, William J.; Koh, Christopher K.

    1999-11-01

    Scale as a physical quantity is a recently developed concept. The scale transform can be viewed as a special case of the more general Mellin transform, and its mathematical properties are very applicable to the analysis and interpretation of signals subject to scale changes. A number of one-dimensional applications of the scale concept have been made in speech analysis, processing of biological signals, machine vibration analysis and other areas. Recently, the scale transform was also applied in multi-dimensional signal processing and used for image filtering and denoising. Discrete implementation of the scale transform can be carried out using logarithmic sampling and the well-known fast Fourier transform. Nevertheless, in the case of uniformly sampled signals, this implementation involves resampling. An algorithm not involving resampling of the uniformly sampled signals has been derived as well. In this paper, a modification of the latter algorithm for discrete implementation of the direct scale transform is presented. In addition, a similar concept was used to improve a recently introduced discrete implementation of the inverse scale transform. Estimation of the absolute discretization errors showed that the modified algorithms have the desirable property of yielding a smaller region of possible error magnitudes. Experimental results are obtained using artificial signals as well as signals evoked from the temporomandibular joint. In addition, discrete implementations for the separable two-dimensional direct and inverse scale transforms are derived. Experiments with image restoration and scaling through the two-dimensional scale domain using the novel implementation of the separable two-dimensional scale transform pair are presented.
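The logarithmic-sampling route to the scale transform can be sketched directly from its definition. The version below uses plain summation rather than the FFT for clarity, and the grid parameters and test function are illustrative, not taken from the paper:

```python
import cmath
import math

def scale_transform(f, c_values, t_min=1e-6, t_max=20.0, n=4096):
    """Discrete scale transform D(c) = (2*pi)**-0.5 * integral over
    t > 0 of f(t) * t**(-1j*c - 0.5) dt, computed by logarithmic
    sampling: substituting t = exp(tau) turns it into a Fourier
    transform of f(exp(tau)) * exp(tau/2). Direct summation is used
    here; a fast version would apply the FFT to the log-resampled
    signal. Grid parameters are illustrative."""
    tau0, tau1 = math.log(t_min), math.log(t_max)
    dtau = (tau1 - tau0) / (n - 1)
    results = []
    for c in c_values:
        acc = 0j
        for k in range(n):
            tau = tau0 + k * dtau
            g = f(math.exp(tau)) * math.exp(tau / 2.0)
            acc += g * cmath.exp(-1j * c * tau) * dtau
        results.append(acc / math.sqrt(2.0 * math.pi))
    return results

# Scale compression appears only as a phase: |D(c)| of f(a*t)
# equals |D(c)| of f(t) divided by sqrt(a). Check with a = 2:
D_f = scale_transform(lambda t: math.exp(-t), [1.0])[0]
D_g = scale_transform(lambda t: math.exp(-2.0 * t), [1.0])[0]
print(abs(abs(D_g) * math.sqrt(2.0) - abs(D_f)) < 0.01)  # True
```

This scale-invariance of the magnitude is exactly what makes the transform attractive for signals subject to scale changes: compression moves energy into the phase, as a time shift does for the Fourier transform.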

  6. Surface diagnostics for scale analysis.

    PubMed

    Dunn, S; Impey, S; Kimpton, C; Parsons, S A; Doyle, J; Jefferson, B

    2004-01-01

    Stainless steel, polymethylmethacrylate and polytetrafluoroethylene coupons were analysed for surface topographical and adhesion force characteristics using tapping mode atomic force microscopy and force-distance microscopy techniques. The two polymer materials were surface modified by polishing with silicon carbide papers of known grade. The struvite scaling rate was determined for each coupon and related to the data gained from the surface analysis. The scaling rate correlated well with adhesion force measurements indicating that lower energy materials scale at a lower rate. The techniques outlined in the paper provide a method for the rapid screening of materials in potential scaling applications. PMID:14982180

  7. INTERIOR VIEW SHOWING BATCH SCALES. SERIES OF FIVE SCALES WITH ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    INTERIOR VIEW SHOWING BATCH SCALES. SERIES OF FIVE SCALES WITH SIX DIFFERENT MATERIALS. MIX SIFTED DOWN FROM SILOS ABOVE. INGREDIENTS: SAND, SODA ASH, DOLOMITE LIMESTONE, NEPHELINE SYENITE, SALT CAKE. - Chambers-McKee Window Glass Company, Batch Plant, Clay Avenue Extension, Jeannette, Westmoreland County, PA

  8. Validating Large Scale Networks Using Temporary Local Scale Networks

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The USDA NRCS Soil Climate Analysis Network and NOAA Climate Reference Networks are nationwide meteorological and land surface data networks with soil moisture measurements in the top layers of soil. There is considerable interest in scaling these point measurements to larger scales for validating ...

  9. The scale invariant generator technique for quantifying anisotropic scale invariance

    NASA Astrophysics Data System (ADS)

    Lewis, G. M.; Lovejoy, S.; Schertzer, D.; Pecknold, S.

    1999-11-01

    Scale invariance is rapidly becoming a new paradigm for geophysics. However, little attention has been paid to the anisotropy that is invariably present in geophysical fields in the form of differential stratification and rotation, texture and morphology. In order to account for scaling anisotropy, the formalism of generalized scale invariance (GSI) was developed. Until now there has existed only a single fairly ad hoc GSI analysis technique valid for studying differential rotation. In this paper, we use a two-dimensional representation of the linear approximation to generalized scale invariance, to obtain a much improved technique for quantifying anisotropic scale invariance called the scale invariant generator technique (SIG). The accuracy of the technique is tested using anisotropic multifractal simulations and error estimates are provided for the geophysically relevant range of parameters. It is found that the technique yields reasonable estimates for simulations with a diversity of anisotropic and statistical characteristics. The scale invariant generator technique can profitably be applied to the scale invariant study of vertical/horizontal and space/time cross-sections of geophysical fields as well as to the study of the texture/morphology of fields.

  10. Drift Scale THM Model

    SciTech Connect

    J. Rutqvist

    2004-10-07

    This model report documents the drift scale coupled thermal-hydrological-mechanical (THM) processes model development and presents simulations of the THM behavior in fractured rock close to emplacement drifts. The modeling and analyses are used to evaluate the impact of THM processes on permeability and flow in the near-field of the emplacement drifts. The results from this report are used to assess the importance of THM processes on seepage and support in the model reports ''Seepage Model for PA Including Drift Collapse'' and ''Abstraction of Drift Seepage'', and to support arguments for exclusion of features, events, and processes (FEPs) in the analysis reports ''Features, Events, and Processes in Unsaturated Zone Flow and Transport'' and ''Features, Events, and Processes: Disruptive Events''. The total system performance assessment (TSPA) calculations do not use any output from this report. Specifically, the coupled THM process model is applied to simulate the impact of THM processes on hydrologic properties (permeability and capillary strength) and flow in the near-field rock around a heat-releasing emplacement drift. The heat generated by the decay of radioactive waste results in elevated rock temperatures for thousands of years after waste emplacement. Depending on the thermal load, these temperatures are high enough to cause boiling conditions in the rock, resulting in water redistribution and altered flow paths. These temperatures will also cause thermal expansion of the rock, with the potential of opening or closing fractures and thus changing fracture permeability in the near-field. Understanding the THM coupled processes is important for the performance of the repository because the thermally induced permeability changes potentially affect the magnitude and spatial distribution of percolation flux in the vicinity of the drift, and hence the seepage of water into the drift. This is important because a sufficient amount of water must be available within a

  11. Involvement in Subject Learning Scale.

    ERIC Educational Resources Information Center

    Bujold, Neree; Saint-Pierre, Henri; Bhushan, Vidya

    1997-01-01

    The Involvement in Subject Learning Scale (ISLS) was developed and validated as an educational outcome measure to be used in assessing higher education quality. The origins and development of the scale, its factor analysis, potential applications, limitations, and pilot use in France and Quebec (Canada) are described. The instrument is appended.…

  12. A Scale of Mobbing Impacts

    ERIC Educational Resources Information Center

    Yaman, Erkan

    2012-01-01

    The aim of this research was to develop the Mobbing Impacts Scale and to examine its validity and reliability analyses. The sample of study consisted of 509 teachers from Sakarya. In this study construct validity, internal consistency, test-retest reliabilities and item analysis of the scale were examined. As a result of factor analysis for…

  13. Children's Scale Errors with Tools

    ERIC Educational Resources Information Center

    Casler, Krista; Eshleman, Angelica; Greene, Kimberly; Terziyan, Treysi

    2011-01-01

    Children sometimes make "scale errors," attempting to interact with tiny object replicas as though they were full size. Here, we demonstrate that instrumental tools provide special insight into the origins of scale errors and, moreover, into the broader nature of children's purpose-guided reasoning and behavior with objects. In Study 1, 1.5- to…

  14. Scale Shrinkage in Vertical Equating.

    ERIC Educational Resources Information Center

    Camilli, Gregory; And Others

    1993-01-01

    Three potential causes of scale shrinkage (measurement error, restriction of range, and multidimensionality) in item response theory vertical equating are discussed, and a more comprehensive model-based approach to establishing vertical scales is described. Test data from the National Assessment of Educational Progress are used to illustrate the…

  15. Rating Scale Instruments and Measurement

    ERIC Educational Resources Information Center

    Cavanagh, Robert F.; Romanoski, Joseph T.

    2006-01-01

    The article examines theoretical issues associated with measurement in the human sciences and with ensuring that data from rating scale instruments are measures. An argument is made that using raw scores from rating scale instruments for subsequent arithmetic operations and applying linear statistics is less preferable than using measures. These theoretical…

  16. Evaluation of Behavioral Expectation Scales.

    ERIC Educational Resources Information Center

    Zedeck, Sheldon; Baker, Henry T.

    Behavioral Expectation Scales developed by Smith and Kendall were evaluated. Results indicated slight interrater reliability between Head Nurses and Supervisors, moderate dependence among five performance dimensions, and correlation between two scales and tenure. Results are discussed in terms of procedural problems, critical incident problems,…

  17. Contrast Analysis for Scale Differences.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; And Others

    Research on tests for scale equality has focused exclusively on an overall test statistic and has not examined procedures for identifying specific differences in multiple group designs. The present study compares four contrast analysis procedures for scale differences in the single factor four-group design: (1) Tukey HSD; (2) Kramer-Tukey; (3)…

  18. Voice, Schooling, Inequality, and Scale

    ERIC Educational Resources Information Center

    Collins, James

    2013-01-01

    The rich studies in this collection show that the investigation of voice requires analysis of "recognition" across layered spatial-temporal and sociolinguistic scales. I argue that the concepts of voice, recognition, and scale provide insight into contemporary educational inequality and that their study benefits, in turn, from paying attention to…

  19. Spiritual Competency Scale: Further Analysis

    ERIC Educational Resources Information Center

    Dailey, Stephanie F.; Robertson, Linda A.; Gill, Carman S.

    2015-01-01

    This article describes a follow-up analysis of the Spiritual Competency Scale, which initially validated ASERVIC's (Association for Spiritual, Ethical and Religious Values in Counseling) spiritual competencies. The study examined whether the factor structure of the Spiritual Competency Scale would be supported by participants (i.e., ASERVIC…

  20. Very Large Scale Integration (VLSI).

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  1. Important Scaling Parameters for Testing Model-Scale Helicopter Rotors

    NASA Technical Reports Server (NTRS)

    Singleton, Jeffrey D.; Yeager, William T., Jr.

    1998-01-01

    An investigation into the effects of aerodynamic and aeroelastic scaling parameters on model scale helicopter rotors has been conducted in the NASA Langley Transonic Dynamics Tunnel. The effect of varying Reynolds number, blade Lock number, and structural elasticity on rotor performance has been studied and the performance results are discussed herein for two different rotor blade sets at two rotor advance ratios. One set of rotor blades was rigid and the other set was dynamically scaled to be representative of a main rotor design for a utility class helicopter. The investigation was conducted at several test-medium densities, which permits the acquisition of data for several Reynolds and Lock number combinations.
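
The Reynolds and Lock number scaling discussed in the abstract can be made concrete. Below is a minimal sketch computing both nondimensional parameters from their standard definitions (Lock number gamma = rho*a*c*R^4/I_b; Reynolds number based on blade-tip speed and chord); all numerical values are hypothetical model-scale inputs, not data from the report.

```python
def lock_number(rho, a, c, R, I_b):
    """Lock number: ratio of aerodynamic to inertial flapping forces on a blade.
    Standard definition: gamma = rho * a * c * R**4 / I_b."""
    return rho * a * c * R**4 / I_b

def tip_reynolds_number(rho, omega, R, c, mu):
    """Reynolds number based on blade-tip speed (omega * R) and blade chord."""
    return rho * (omega * R) * c / mu

# Illustrative (hypothetical) model-scale values, not taken from the report:
rho = 1.225      # test-medium density, kg/m^3
a = 5.7          # blade lift-curve slope, 1/rad
c = 0.064        # blade chord, m
R = 1.4          # rotor radius, m
I_b = 0.15       # blade flapping inertia, kg*m^2
omega = 66.0     # rotor rotational speed, rad/s
mu = 1.81e-5     # dynamic viscosity, kg/(m*s)

gamma = lock_number(rho, a, c, R, I_b)
Re_tip = tip_reynolds_number(rho, omega, R, c, mu)
print(f"Lock number: {gamma:.1f}, tip Reynolds number: {Re_tip:.2e}")
```

Because both parameters depend on the test-medium density, varying that density (as noted in the abstract) shifts Reynolds and Lock numbers together, which is why several density settings yield several Reynolds/Lock combinations.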

  2. 27 CFR 19.186 - Package scales.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2012-04-01 2012-04-01 false Package scales. 19.186... Package Scale and Pipeline Requirements § 19.186 Package scales. Proprietors must ensure that scales used.... However, if a scale is not used during a 6-month period, it is only necessary to test the scale prior...

  3. 27 CFR 19.186 - Package scales.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2014-04-01 2014-04-01 false Package scales. 19.186... Package Scale and Pipeline Requirements § 19.186 Package scales. Proprietors must ensure that scales used.... However, if a scale is not used during a 6-month period, it is only necessary to test the scale prior...

  4. 27 CFR 19.186 - Package scales.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2011-04-01 2011-04-01 false Package scales. 19.186... Package Scale and Pipeline Requirements § 19.186 Package scales. Proprietors must ensure that scales used.... However, if a scale is not used during a 6-month period, it is only necessary to test the scale prior...

  5. 27 CFR 19.186 - Package scales.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2013-04-01 2013-04-01 false Package scales. 19.186... Package Scale and Pipeline Requirements § 19.186 Package scales. Proprietors must ensure that scales used.... However, if a scale is not used during a 6-month period, it is only necessary to test the scale prior...

  6. Scale-dependent halo bias from scale-dependent growth

    SciTech Connect

    Parfrey, Kyle; Hui, Lam; Sheth, Ravi K.

    2011-03-15

    We derive a general expression for the large-scale halo bias, in theories with a scale-dependent linear growth, using the excursion set formalism. Such theories include modified-gravity models, and models in which the dark energy clustering is non-negligible. A scale dependence is imprinted in both the formation and evolved biases by the scale-dependent growth. Mergers are accounted for in our derivation, which thus extends earlier work which focused on passive evolution. There is a simple analytic form for the bias for those theories in which the nonlinear collapse of perturbations is approximately the same as in general relativity. As an illustration, we apply our results to a simple Yukawa modification of gravity, and use Sloan Digital Sky Survey measurements of the clustering of luminous red galaxies to constrain the theory's parameters.

  7. Full-Scale Wind Tunnel

    NASA Technical Reports Server (NTRS)

    1931-01-01

    Construction of Full-Scale Tunnel (FST) balance. Smith DeFrance described the 6-component type balance in NACA TR No. 459 (which also includes a schematic diagram of the balance and its various parts). 'Ball and socket fittings at the top of each of the struts hold the axles of the airplane to be tested; the tail is attached to the triangular frame. These struts are secured to the turntable, which is attached to the floating frame. This frame rests on the struts (next to the concrete piers on all four corners), which transmit the lift forces to the scales (partially visible on the left). The drag linkage is attached to the floating frame on the center line and, working against a known counterweight, transmits the drag force to the scale (center, face out). The cross-wind force linkages are attached to the floating frame on the front and rear sides at the center line. These linkages, working against known counterweights, transmit the cross-wind force to scales (two front scales, face in). In the above manner the forces in three directions are measured and by combining the forces and the proper lever arms, the pitching, rolling, and yawing moments can be computed. The scales are of the dial type and are provided with solenoid-operated printing devices. When the proper test condition is obtained, a push-button switch is momentarily closed and the readings on all seven scales are recorded simultaneously, eliminating the possibility of personal errors.'

  8. Updating the Cognitive Performance Scale.

    PubMed

    Morris, John N; Howard, Elizabeth P; Steel, Knight; Perlman, Christopher; Fries, Brant E; Garms-Homolová, Vjenka; Henrard, Jean-Claude; Hirdes, John P; Ljunggren, Gunnar; Gray, Len; Szczerbińska, Katarzyna

    2016-01-01

    This study presents the first update of the Cognitive Performance Scale (CPS) in 20 years. Its goals are 3-fold: extend category options; characterize how the new scale variant tracks with the Mini-Mental State Examination; and present a series of associative findings. Secondary analysis of data from 3733 older adults from 8 countries was completed. Examination of scale dimensions using older and new items was completed using a forward-entry stepwise regression. The revised scale was validated by examining the scale's distribution with a self-reported dementia diagnosis, functional problems, living status, and distress measures. Cognitive Performance Scale 2 extends the measurement metric from a range of 0 to 6 for the original CPS, to 0 to 8. Relating CPS2 to other measures of function, living status, and distress showed that changes in these external measures correspond with increased challenges in cognitive performance. Cognitive Performance Scale 2 enables repeated assessments, sensitive to detect changes particularly in early levels of cognitive decline. PMID:26251111

  9. Concordance among anticholinergic burden scales

    PubMed Central

    Naples, Jennifer G.; Marcum, Zachary A.; Perera, Subashan; Gray, Shelly L.; Newman, Anne B.; Simonsick, Eleanor M.; Yaffe, Kristine; Shorr, Ronald I.; Hanlon, Joseph T.

    2015-01-01

    Background There is no gold standard to assess potential anticholinergic burden of medications. Objectives To evaluate concordance among five commonly used anticholinergic scales. Design Cross-sectional secondary analysis. Setting Pittsburgh, PA, and Memphis, TN. Participants 3,055 community-dwelling older adults aged 70–79 with baseline medication data from the Health, Aging, and Body Composition study. Measurements Any use, weighted scores, and total standardized daily dosage were calculated using five anticholinergic measures (i.e., Anticholinergic Cognitive Burden [ACB] Scale, Anticholinergic Drug Scale [ADS], Anticholinergic Risk Scale [ARS], Drug Burden Index anticholinergic component [DBI-ACh], and Summated Anticholinergic Medications Scale [SAMS]). Concordance was evaluated with kappa statistics and Spearman rank correlations. Results Any anticholinergic use in rank order was 51% for the ACB, 43% for the ADS, 29% for the DBI-ACh, 23% for the ARS, and 16% for the SAMS. Kappa statistics for all pairwise use comparisons ranged from 0.33 to 0.68. Similarly, concordance as measured by weighted kappa statistics ranged from 0.54 to 0.70 among the three scales not incorporating dosage (ADS, ARS, and ACB). Spearman rank correlation between the DBI-ACh and SAMS was 0.50. Conclusions Only low to moderate concordance was found among the five anticholinergic scales. Future research is needed to examine how these differences in measurement impact their predictive validity with respect to clinically relevant outcomes, such as cognitive impairment. PMID:26480974
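
Pairwise concordance statistics like those reported (kappa for "any use" agreement between two scales) can be sketched in a few lines. The flag values below are hypothetical, not Health ABC data.

```python
def cohens_kappa(x, y):
    """Cohen's kappa for two paired binary ratings (1 = any anticholinergic use).
    kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(x) == len(y)
    n = len(x)
    po = sum(a == b for a, b in zip(x, y)) / n          # observed agreement
    p_x1 = sum(x) / n                                    # prevalence under scale x
    p_y1 = sum(y) / n                                    # prevalence under scale y
    pe = p_x1 * p_y1 + (1 - p_x1) * (1 - p_y1)           # agreement expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical "any use" flags under two anticholinergic scales for ten subjects:
scale_a = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
scale_b = [1, 0, 0, 0, 1, 0, 1, 0, 1, 1]
print(f"kappa = {cohens_kappa(scale_a, scale_b):.2f}")  # prints: kappa = 0.60
```

A kappa of 0.60 would fall at the upper end of the 0.33-0.68 range the study reports for pairwise use comparisons, i.e. moderate concordance.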

  10. SCALING PROPERTIES OF SMALL-SCALE FLUCTUATIONS IN MAGNETOHYDRODYNAMIC TURBULENCE

    SciTech Connect

    Perez, Jean Carlos; Mason, Joanne; Boldyrev, Stanislav; Cattaneo, Fausto

    2014-09-20

    Magnetohydrodynamic (MHD) turbulence in the majority of natural systems, including the interstellar medium, the solar corona, and the solar wind, has Reynolds numbers far exceeding the Reynolds numbers achievable in numerical experiments. Much attention is therefore drawn to the universal scaling properties of small-scale fluctuations, which can be reliably measured in the simulations and then extrapolated to astrophysical scales. However, in contrast with hydrodynamic turbulence, where the universal structure of the inertial and dissipation intervals is described by the Kolmogorov self-similarity, the scaling for MHD turbulence cannot be established based solely on dimensional arguments due to the presence of an intrinsic velocity scale—the Alfvén velocity. In this Letter, we demonstrate that the Kolmogorov first self-similarity hypothesis cannot be formulated for MHD turbulence in the same way it is formulated for the hydrodynamic case. Besides profound consequences for the analytical consideration, this also imposes stringent conditions on numerical studies of MHD turbulence. In contrast with the hydrodynamic case, the discretization scale in numerical simulations of MHD turbulence should decrease faster than the dissipation scale, in order for the simulations to remain resolved as the Reynolds number increases.

  11. Scale in GIS: An overview

    NASA Astrophysics Data System (ADS)

    Goodchild, Michael F.

    2011-07-01

    Scale has many meanings, but in GIS two are of greatest significance: resolution and extent. Ideally models of physical process would be defined and tested on scale-free data. In practice spatial resolution will always be limited by cost, data volume, and other factors. Raster data are shown to be preferable to vector data for scientific research because they make spatial resolution explicit. The effects of resolution are discussed for two simple GIS functions. Three theoretical frameworks for discussing spatial resolution are introduced and explored. The problems of cross-scale inference, including the modifiable areal unit problem and the ecological fallacy, are described and illustrated.

  12. Deterministic scale-free networks

    NASA Astrophysics Data System (ADS)

    Barabási, Albert-László; Ravasz, Erzsébet; Vicsek, Tamás

    2001-10-01

    Scale-free networks are abundant in nature and society, describing such diverse systems as the world wide web, the web of human sexual contacts, or the chemical network of a cell. All models used to generate a scale-free topology are stochastic, that is, they create networks in which the nodes appear to be randomly connected to each other. Here we propose a simple model that generates scale-free networks in a deterministic fashion. We solve the model exactly, showing that the tail of the degree distribution follows a power law.
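
For contrast with the deterministic construction the paper proposes, here is a minimal sketch of the stochastic preferential-attachment growth it refers to, in which each new node links to existing nodes with probability proportional to their degree. Parameters, function names, and the seed are illustrative assumptions.

```python
import random

def preferential_attachment(n, m, seed=0):
    """Grow a stochastic scale-free graph: each new node attaches m edges,
    choosing targets with probability proportional to current degree."""
    rng = random.Random(seed)
    targets = list(range(m))           # initial seed nodes
    repeated = []                      # node list weighted by degree
    edges = []
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:         # m distinct degree-weighted targets
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
        repeated.extend(chosen)        # each target gained one degree
        repeated.extend([new] * m)     # new node starts with degree m
        targets = repeated
    return edges

edges = preferential_attachment(1000, 2)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
print(f"edges: {len(edges)}, max degree: {max(degree.values())}")
```

The heavy tail shows up as a few highly connected hubs (maximum degree far above the median), whereas the deterministic model of the paper produces the power-law tail without any random choices.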

  13. Semi-scaling cosmic strings

    SciTech Connect

    Vanchurin, Vitaly

    2010-11-01

    We develop a model of string dynamics with back-reaction from both scaling and non-scaling loops taken into account. The evolution of a string network is described by the distribution functions of coherence segments and kinks. We derive two non-linear equations which govern the evolution of the two distributions and solve them analytically in the limit of late times. We also show that the correlation function is an exponential, and solve the dynamics for the corresponding spectrum of scaling loops.

  14. Trends in Analytical Scale Separations.

    ERIC Educational Resources Information Center

    Jorgenson, James W.

    1984-01-01

    Discusses recent developments in the instrumentation and practice of analytical scale operations. Emphasizes detection devices and procedures in gas chromatography, liquid chromatography, electrophoresis, supercritical fluid chromatography, and field-flow fractionation. (JN)

  15. Gribov copies and anomalous scaling

    SciTech Connect

    Holdom, B.

    2008-12-15

    Nonperturbative and lattice methods indicate that Gribov copies modify the infrared behavior of gauge theories and cause a suppression of gluon propagation. We investigate whether this can be implemented in a modified perturbation theory. The minimal modification proceeds via a nonlocal generalization of the Faddeev-Popov ghost that automatically decouples from physical states. The expected scale invariance of the physics associated with Gribov copies leads to the emergence of a nontrivial infrared fixed point. For a range of a scaling exponent the gauge bosons exhibit unparticlelike behavior in the infrared. The confining regime of interest for QCD requires a larger scaling exponent, but then the severity of ghost dominance upsets naive power counting for the infrared scaling behavior of amplitudes.

  16. Fluid dynamics: Swimming across scales

    NASA Astrophysics Data System (ADS)

    Baumgart, Johannes; Friedrich, Benjamin M.

    2014-10-01

    The myriad creatures that inhabit the waters of our planet all swim using different mechanisms. Now, a simple relation links key physical observables of underwater locomotion, on scales ranging from millimetres to tens of metres.

  17. Constructing cities, deconstructing scaling laws

    PubMed Central

    Arcaute, Elsa; Hatna, Erez; Ferguson, Peter; Youn, Hyejin; Johansson, Anders; Batty, Michael

    2015-01-01

    Cities can be characterized and modelled through different urban measures. Consistency within these observables is crucial in order to advance towards a science of cities. Bettencourt et al. have proposed that many of these urban measures can be predicted through universal scaling laws. We develop a framework to consistently define cities, using commuting to work and population density thresholds, and construct thousands of realizations of systems of cities with different boundaries for England and Wales. These serve as a laboratory for the scaling analysis of a large set of urban indicators. The analysis shows that population size alone does not provide us enough information to describe or predict the state of a city as previously proposed, indicating that the expected scaling laws are not corroborated. We found that most urban indicators scale linearly with city size, regardless of the definition of the urban boundaries. However, when nonlinear correlations are present, the exponent fluctuates considerably. PMID:25411405
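
The scaling analysis the abstract describes fits an exponent beta in Y = a * N**beta for an urban indicator Y against population size N; linear scaling corresponds to beta = 1. A minimal sketch of that fit via least squares in log-log space, run here on synthetic data with a known exponent (the values are assumptions, not the paper's data):

```python
import math

def fit_scaling_exponent(N, Y):
    """Estimate beta and a in Y = a * N**beta by ordinary least squares
    on the log-transformed data."""
    x = [math.log(n) for n in N]
    y = [math.log(v) for v in Y]
    k = len(x)
    mx, my = sum(x) / k, sum(y) / k
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    a = math.exp(my - beta * mx)
    return beta, a

# Synthetic city populations and a superlinear indicator with beta = 1.15:
pops = [10_000 * 2 ** k for k in range(10)]
indicator = [3.0 * p ** 1.15 for p in pops]
beta, a = fit_scaling_exponent(pops, indicator)
print(f"beta = {beta:.3f}, prefactor = {a:.2f}")
```

The paper's point is that on real data the fitted beta fluctuates considerably with the choice of city boundaries, so a single exponent like the one recovered here is not a robust description.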

  18. Pilot Scale Advanced Fogging Demonstration

    SciTech Connect

    Demmer, Rick L.; Fox, Don T.; Archiblad, Kip E.

    2015-01-01

    Experiments in 2006 developed a useful fog solution using three different chemical constituents. Optimization of the fog recipe and use of commercially available equipment were identified as needs that had not been addressed. During 2012 development work it was noted that low concentrations of the components hampered coverage and drying in the United Kingdom’s National Nuclear Laboratory’s testing much more so than was evident in the 2006 tests. In fiscal year 2014 the Idaho National Laboratory undertook a systematic optimization of the fogging formulation and conducted a non-radioactive, pilot scale demonstration using commercially available fogging equipment. While not as sophisticated as the equipment used in earlier testing, the new approach is much less expensive and readily available for smaller scale operations. Pilot scale testing was important to validate new equipment of an appropriate scale, optimize the chemistry of the fogging solution, and to realize the conceptual approach.

  19. Scaling behaviour of entropy estimates

    NASA Astrophysics Data System (ADS)

    Schürmann, Thomas

    2002-02-01

    Entropy estimation of information sources is highly non-trivial for symbol sequences with strong long-range correlations. The rabbit sequence, related to the symbolic dynamics of the nonlinear circle map at the critical point as well as the logistic map at the Feigenbaum point, is known to produce long memory tails. For both dynamical systems the scaling behaviour of the block entropy of order n has been shown to increase ∝log n. In contrast to such probabilistic concepts, we investigate the scaling behaviour of certain non-probabilistic entropy estimation schemes suggested by Lempel and Ziv (LZ) in the context of algorithmic complexity and data compression. These are applied in a sequential manner with the scaling variable being the length N of the sequence. We determine the scaling law for the LZ entropy estimate applied to the case of the critical circle map and the logistic map at the Feigenbaum point in a binary partition.

  20. Inflation in the scaling limit

    SciTech Connect

    Matarrese, S.; Ortolan, A.; Lucchin, F.

    1989-07-15

    We investigate the stochastic dynamics of the inflaton for a wide class of potentials leading either to chaotic or to power-law inflation. At late times the system enters a scaling regime where macroscopic order sets in: the field distribution sharply peaks around the classical slow-rollover configuration and curvature perturbations originate with non-Gaussian scale-invariant statistics.

  1. Distributional Scaling in Heterogeneous Aquifers

    NASA Astrophysics Data System (ADS)

    Polsinelli, J. F.

    2015-12-01

    An investigation is undertaken into the fractal scaling properties of the piezometric head in a heterogeneous unconfined aquifer. The governing equations for the unconfined flow are derived from conservation of mass and the Darcy law. The Dupuit approximation will be used to model the dynamics. The spatially varying nature of the tendency to conduct flow (e.g. the hydraulic conductivity) is represented as a stochastic process. Experimental studies in the literature have indicated that the conductivity belongs to a class of non-stationary stochastic fields, called H-ss fields. The uncertainty in the soil parameters is imparted onto the flow variables; in groundwater investigations the potentiometric head will be a random function. The structure of the head field will be analyzed with an emphasis on the scaling properties. The scaling scheme for the modeling equations and the simulation procedure for the saturated hydraulic conductivity process will be explained, and the method will then be validated through numerical experimentation using the USGS Modflow-2005 software. The results of the numerical simulations demonstrate that the head will exhibit multi-fractal scaling if the hydraulic conductivity exhibits multi-fractal scaling and the differential equations for the groundwater equation satisfy a particular set of scale invariance conditions.

  2. Scaling issues for biodiversity protection

    SciTech Connect

    Pearson, S.M.; Turner, M.G.; Gardner, R.H.; O'Neill, R.V.

    1992-08-01

    Environmental heterogeneity, in both space and time, has been important in the evolution and maintenance of biodiversity. Moreover, this heterogeneity is hierarchical in nature. Differences occur between biomes, between landscapes. Thus, hierarchical patterns of heterogeneity are a consequence of the complexity within ecological communities, and the maintenance of biodiversity means the preservation of this complexity. Natural landscapes are dynamic systems that exhibit temporal and spatial heterogeneity. However, the exploitative nature of human activity tends to simplify landscapes (Krummel et al. 1987). The challenge of preserving biodiversity in managed landscapes is to incorporate natural levels of spatial and temporal heterogeneity into management schemes. The concept of scale has emerged as an important topic among ecologists who recognize the role of heterogeneity in natural ecosystems. Subjects related to scale such as grain (level of detail) and extent (size of area or duration of time) are frequently used to determine the appropriate interpretation of ecological data. Likewise, scale is important when applying ecological principles to biodiversity protection and conservation. The scale of a conservation endeavor affects the strategy involved, realistic goals, and probability of success. For instance, the spatial extent of a reserve system may be determined, for better or worse, by biogeography, distribution of surviving populations, political boundaries, or fiscal constraints. Our objectives are to: emphasize the importance of natural patterns of spatial and temporal heterogeneity, encourage a broader-scale perspective for conservation efforts, and illustrate the interaction between landscape-level heterogeneity and organism-based scales of resource utilization with a simulation experiment.

  3. Scaling issues for biodiversity protection

    SciTech Connect

    Pearson, S.M.; Turner, M.G.; Gardner, R.H.; O'Neill, R.V.

    1992-01-01

    Environmental heterogeneity, in both space and time, has been important in the evolution and maintenance of biodiversity. Moreover, this heterogeneity is hierarchical in nature. Differences occur between biomes, between landscapes. Thus, hierarchical patterns of heterogeneity are a consequence of the complexity within ecological communities, and the maintenance of biodiversity means the preservation of this complexity. Natural landscapes are dynamic systems that exhibit temporal and spatial heterogeneity. However, the exploitative nature of human activity tends to simplify landscapes (Krummel et al. 1987). The challenge of preserving biodiversity in managed landscapes is to incorporate natural levels of spatial and temporal heterogeneity into management schemes. The concept of scale has emerged as an important topic among ecologists who recognize the role of heterogeneity in natural ecosystems. Subjects related to scale such as grain (level of detail) and extent (size of area or duration of time) are frequently used to determine the appropriate interpretation of ecological data. Likewise, scale is important when applying ecological principles to biodiversity protection and conservation. The scale of a conservation endeavor affects the strategy involved, realistic goals, and probability of success. For instance, the spatial extent of a reserve system may be determined, for better or worse, by biogeography, distribution of surviving populations, political boundaries, or fiscal constraints. Our objectives are to: emphasize the importance of natural patterns of spatial and temporal heterogeneity, encourage a broader-scale perspective for conservation efforts, and illustrate the interaction between landscape-level heterogeneity and organism-based scales of resource utilization with a simulation experiment.

  4. Linking scales through numerical simulations

    NASA Astrophysics Data System (ADS)

    Lunati, I.

    2012-12-01

    Field-scale models of flow through porous media rely on a continuum description, which disregards pore-scale details and focuses on macroscopic effects. As is always the case, this choice is quite effective in reducing the number of model parameters, but it comes at the expense of an inherent loss of information and generality. Models based on Darcy's law, for instance, require spatial and temporal scale separation (locality and equilibrium). Although these conditions are generally met for single-phase flow, multiphase flow is far more complex: the interaction between nonlinearity of the interface behavior and the pore structure (disorder) creates a variety of flow regimes for which scale separation does not hold. In recent years, the increased computational power has led to a revival of pore-scale modeling in order to overcome this issue and describe the flow at the scale at which it physically occurs. If appropriate techniques are chosen, it is possible to use numerical simulations to complement experimental observations and advance our understanding of multiphase flow. By means of examples, we discuss the role played by these models in contributing to solve open problems and in devising alternatives to the standard description of flow through porous media.
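
The Darcy-law continuum description contrasted above with pore-scale modeling reduces, in one dimension, to a single line. A minimal sketch with hypothetical values (not from the abstract):

```python
def darcy_flux(K, h1, h2, L):
    """Darcy's law in 1-D: specific discharge q = -K * dh/dL,
    where K is hydraulic conductivity (m/s) and dh/dL the head gradient."""
    return -K * (h2 - h1) / L

# Hypothetical column: K = 1e-4 m/s, head drops from 2.0 m to 1.5 m over 10 m.
q = darcy_flux(1e-4, 2.0, 1.5, 10.0)
print(f"q = {q:.2e} m/s")  # positive: flow in the direction of decreasing head
```

The single parameter K is exactly the kind of macroscopic lumping the abstract discusses: it stands in for all pore-scale detail, which is why the law presumes scale separation.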

  5. Scaling of extreme rainfall areas at a planetary scale.

    PubMed

    Devineni, Naresh; Lall, Upmanu; Xi, Chen; Ward, Philip

    2015-07-01

    Event magnitude and area scaling relationships for rainfall over different regions of the world have been presented in the literature for relatively short durations and over relatively small areas. In this paper, we present the first ever results on a global analysis of the scaling characteristics of extreme rainfall areas for durations ranging from 1 to 30 days. Broken power law models are fit in each case. The past work has been focused largely on the time and space scales associated with local and regional convection. The work presented here suggests that power law scaling may also apply to planetary-scale phenomena, such as frontal and monsoonal systems, and their interaction with local moisture recycling. Such features may have persistence over large areas corresponding to extreme rain and regional flood events. As a result, they lead to considerable hazard exposure. A caveat is that methods used for empirical power law identification have difficulties with edge effects due to finite domains. This leads to problems with robust model identification and interpretability of the underlying relationships. We use recent algorithms that aim to address some of these issues in a principled way. Theoretical research that could explain why such results may emerge across the world, as analyzed for the first time in this paper, is needed. PMID:26232980

  6. Scale-Dependent Dispersivity Explained Without Scale-Dependent Heterogeneity

    NASA Astrophysics Data System (ADS)

    Dhaliwal, P.; Engdahl, N. B.; Fogg, G. E.

    2011-12-01

    The observed scale-dependence of dispersivity has often been attributed to the scale-dependence of porous media heterogeneity. However, mass transfer between areas of high and low hydraulic conductivity and preferential solute migration may provide an alternative explanation for this phenomenon. To illustrate this point, we used geostatistical models representing the heterogeneity and interconnectedness of a typical aquifer system and plume modeling via a highly accurate random walk particle tracking method. The apparent dispersivity values were calculated using the statistical moments of the plumes. Apparent dispersivity was seen to grow from 0.01 m to 100 m over length scales of 0.06 m to 500 m even though heterogeneity scales and facies proportions were stationary and invariant with scale in the simulations. The results suggest that the increase in dispersivity was due solely to a stretching of the plume by two mechanisms: the first results from the diffusion of solute into areas of low conductivity, and the second from the movement of solute through well-connected, high-conductivity channels. Under such conditions, an "asymptotic dispersivity" may never be reached.
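
    The moment-based apparent dispersivity used above is the second central spatial moment of the plume divided by twice the mean travel distance. A minimal sketch with a homogeneous random walk and made-up parameters; unlike the heterogeneous simulations in the study, this homogeneous case recovers a constant dispersivity rather than scale-dependent growth.

```python
import random

def apparent_dispersivity(positions):
    """Apparent longitudinal dispersivity of a particle plume:
    (second central spatial moment) / (2 * mean travel distance)."""
    n = len(positions)
    mean = sum(positions) / n
    var = sum((p - mean) ** 2 for p in positions) / n
    return var / (2.0 * mean)

# Homogeneous advective-dispersive random walk (illustrative numbers,
# not the study's geostatistical model).
random.seed(1)
v, D, dt, steps = 1.0, 0.01, 0.1, 500   # velocity, dispersion, time step
particles = [0.0] * 2000
for _ in range(steps):
    particles = [x + v * dt + random.gauss(0.0, (2 * D * dt) ** 0.5)
                 for x in particles]
alpha = apparent_dispersivity(particles)
print(round(alpha, 3))  # ~ D / v = 0.01 in a homogeneous medium
```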

  8. Practice Variation in Spontaneous Breathing Trial Performance and Reporting

    PubMed Central

    Godard, Stephanie; Herry, Christophe; Westergaard, Paul; Scales, Nathan; Brown, Samuel M.; Burns, Karen; Mehta, Sangeeta; Jacono, Frank J.; Kubelik, Dalibor; Maziak, Donna E.; Marshall, John; Martin, Claudio; Seely, Andrew J. E.

    2016-01-01

    Background. Spontaneous breathing trials (SBTs) are standard of care in assessing extubation readiness; however, there are no universally accepted guidelines regarding their precise performance and reporting. Objective. To investigate variability in SBT practice across centres. Methods. Data from 680 patients undergoing 931 SBTs from eight North American centres in the Weaning and Variability Evaluation (WAVE) observational study were examined. SBT performance was analyzed with respect to ventilatory support, oxygen requirements, and sedation level using the Richmond Agitation-Sedation Scale (RASS). The incidence of use of clinical extubation criteria and changes in physiologic parameters during an SBT were assessed. Results. The majority (80% and 78%) of SBTs used 5 cmH2O of ventilator support, although there was variability. A significant range in oxygenation was observed. RASS scores were variable, with the proportion of SBTs at RASS 0 ranging from 29% to 86% across centres and 22% of SBTs performed in sedated patients (RASS < −2). Clinical extubation criteria were heterogeneous among centres. On average, there was no change in physiological variables during SBTs. Conclusion. The present study highlights variation in SBT performance and documentation across and within sites. Given their impact on the accuracy of outcome prediction, these results support efforts to further clarify and standardize optimal SBT technique.
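
    A sketch of how RASS scores might be bucketed in such an analysis. The "deeply sedated" cutoff (RASS < −2) follows the abstract above; the remaining labels follow the standard RASS anchors (positive scores denote agitation, 0 is alert and calm, negative scores denote sedation), and the helper name is ours.

```python
def sedation_category(rass):
    """Group a Richmond Agitation-Sedation Scale score (-5 to +4)."""
    if not -5 <= rass <= 4:
        raise ValueError("RASS is defined on -5 to +4")
    if rass < -2:
        return "deeply sedated"    # cutoff used in the abstract above
    if rass < 0:
        return "lightly sedated"
    if rass == 0:
        return "alert and calm"
    return "agitated"

print(sedation_category(-3))  # deeply sedated
print(sedation_category(0))   # alert and calm
```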

  9. The Internet Gaming Disorder Scale.

    PubMed

    Lemmens, Jeroen S; Valkenburg, Patti M; Gentile, Douglas A

    2015-06-01

    Recently, the American Psychiatric Association included Internet gaming disorder (IGD) in the appendix of the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5). The main aim of the current study was to test the reliability and validity of 4 survey instruments to measure IGD on the basis of the 9 criteria from the DSM-5: a long (27-item) and short (9-item) polytomous scale and a long (27-item) and short (9-item) dichotomous scale. The psychometric properties of these scales were tested among a representative sample of 2,444 Dutch adolescents and adults, ages 13-40 years. Confirmatory factor analyses demonstrated that the structural validity (i.e., the dimensional structure) of all scales was satisfactory. Both types of assessment (polytomous and dichotomous) were also reliable (i.e., internally consistent) and showed good criterion-related validity, as indicated by positive correlations with time spent playing games, loneliness, and aggression and negative correlations with self-esteem, prosocial behavior, and life satisfaction. The dichotomous 9-item IGD scale showed solid psychometric properties and was the most practical scale for diagnostic purposes. Latent class analysis of this dichotomous scale indicated that 3 groups could be discerned: normal gamers, risky gamers, and disordered gamers. On the basis of the number of people in this last group, the prevalence of IGD among 13- through 40-year-olds in the Netherlands is approximately 4%. If the DSM-5 threshold for diagnosis (experiencing 5 or more criteria) is applied, the prevalence of disordered gamers is more than 5%. PMID:25558970
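
    The diagnostic rule described above (endorsing 5 or more of the 9 DSM-5 criteria on the dichotomous scale) can be sketched as a hypothetical helper; item wording is omitted.

```python
def igd_diagnosis(responses):
    """Apply the DSM-5 threshold to a dichotomous 9-item IGD scale:
    'disordered' when 5 or more of the 9 criteria are endorsed."""
    if len(responses) != 9 or any(r not in (0, 1) for r in responses):
        raise ValueError("expected nine 0/1 item responses")
    return sum(responses) >= 5

print(igd_diagnosis([1, 1, 1, 1, 1, 0, 0, 0, 0]))  # True
print(igd_diagnosis([1, 1, 1, 1, 0, 0, 0, 0, 0]))  # False
```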

  10. SETI and astrobiology: The Rio Scale and the London Scale

    NASA Astrophysics Data System (ADS)

    Almár, Iván

    2011-11-01

    The public reaction to a discovery, the character of the corresponding risk communication, as well as the possible impact on science and society all depend on the character of the phenomenon discovered, on the method of discovery, on the distance to the phenomenon and, last but not least, on the reliability of the announcement itself. The Rio Scale - proposed together with Jill Tarter just a decade ago at an IAA symposium in Rio de Janeiro - attempts to quantify the relative importance of such a “low probability, high consequence event”, namely the announcement of an ETI discovery. After the publication of the book “The Eerie Silence” by Paul Davies, it is necessary to examine how the possible “technosignatures” or “technomarkers” recently suggested in this book could be evaluated on the Rio Scale. The new London Scale, proposed at the Royal Society meeting in January 2010 in London, is a similar attempt to quantify the impact of an announcement of the discovery of ET life on an analogous ordinal scale between zero and ten. Here again the new concept of a “shadow biosphere” raised in this book deserves special attention, since a “weird form of life” found on Earth would not necessarily have an extraterrestrial origin; nevertheless, it might be an important discovery in itself. Several arguments are presented that the methods, aims and targets of the “search for ET life” and the “search for ET intelligence” have recently been converging. This raises the new problem of whether a unification of these two scales is necessary as a consequence of the convergence of the two subjects. Finally, it is suggested that experts in the social sciences should take the structure of the respective scales into consideration when investigating, case by case, the possible effects of such discoveries on society.

  11. On the scaling of small-scale jet noise to large scale

    NASA Astrophysics Data System (ADS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-05-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or perceived noise level (PNL) noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10^6 based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  15. Single-scale natural SUSY

    NASA Astrophysics Data System (ADS)

    Randall, Lisa; Reece, Matthew

    2013-08-01

    We consider the prospects for natural SUSY models consistent with current data. Recent constraints make the standard paradigm unnatural so we consider what could be a minimal extension consistent with what we now know. The most promising such scenarios extend the MSSM with new tree-level Higgs interactions that can lift its mass to at least 125 GeV and also allow for flavor-dependent soft terms so that the third generation squarks are lighter than current bounds on the first and second generation squarks. We argue that a common feature of almost all such models is the need for a new scale near 10 TeV, such as a scale of Higgsing or confinement of a new gauge group. We consider the question of whether such a model can naturally derive from a single mass scale associated with supersymmetry breaking. Most such models simply postulate new scales, leaving their proximity to the scale of MSSM soft terms a mystery. This coincidence problem may be thought of as a mild tuning, analogous to the usual μ problem. We find that a single mass scale origin is challenging, but suggest that a more natural origin for such a new dynamical scale is the gravitino mass, m_{3/2}, in theories where the MSSM soft terms are a loop factor below m_{3/2}. As an example, we build a variant of the NMSSM where the singlet S is composite, and the strong dynamics leading to compositeness is triggered by masses of order m_{3/2} for some fields. Our focus is the Higgs sector, but our model is compatible with a light stop (either with the first and second generation squarks heavy, or with R-parity violation or another mechanism to hide them from current searches). All the interesting low-energy mass scales, including linear terms for S playing a key role in EWSB, arise dynamically from the single scale m_{3/2}. However, numerical coefficients from RG effects and wavefunction factors in an extra dimension complicate the otherwise simple story.

  16. Scaling Effect In Trade Network

    NASA Astrophysics Data System (ADS)

    Konar, M.; Lin, X.; Rushforth, R.; Ruddell, B. L.; Reimer, J.

    2015-12-01

    Scaling is an important issue in the physical sciences. Economic trade is increasingly of interest to the scientific community due to the natural resources (e.g. water, carbon, nutrients, etc.) embodied in traded commodities. Trade refers to the spatial and temporal redistribution of commodities, and is typically measured annually between countries. However, commodity exchange networks occur at many different scales, though data availability at finer temporal and spatial resolution is rare. Exchange networks may prove an important adaptation measure to cope with future climate and economic shocks. As such, it is essential to understand how commodity exchange networks scale, so that we can understand opportunities and roadblocks to the spatial and temporal redistribution of goods and services. To this end, we present an empirical analysis of trade systems across three spatial scales: global, sub-national in the United States, and county-scale in the United States. We compare and contrast the network properties, the self-sufficiency ratio, and performance of the gravity model of trade for these three exchange systems.

  17. Coping with Multiple Sclerosis Scale

    PubMed Central

    Parkerson, Holly A.; Kehler, Melissa D.; Sharpe, Donald

    2016-01-01

    Background: The Coping with Multiple Sclerosis Scale (CMSS) was developed to assess coping strategies specific to multiple sclerosis (MS). Despite its wide application in MS research, psychometric support for the CMSS remains limited to the initial factor analytic investigation by Pakenham in 2001. Methods: The current investigation assessed the factor structure and construct validity of the CMSS. Participants with MS (N = 453) completed the CMSS, as well as measures of disability related to MS (Multiple Sclerosis Impact Scale), quality of life (World Health Organization Quality of Life Brief Scale), and anxiety and depression (Hospital Anxiety and Depression Scale). Results: The original factor structure reported by Pakenham was a poor fit to the data. An alternate seven-factor structure was identified using exploratory factor analysis. Although there were some similarities with the existing CMSS subscales, differences in factor content and item loadings were found. Relationships between the revised CMSS subscales and additional measures were assessed, and the findings were consistent with previous research. Conclusions: Refinement of the CMSS is suggested, especially for subscales related to acceptance and avoidance strategies. Until further research is conducted on the revised CMSS, it is recommended that the original CMSS continue to be administered. Clinicians and researchers should be mindful of the lack of support for the acceptance and avoidance subscales and should seek additional scales to assess these areas. PMID:27551244

  18. Strength Scaling in Fiber Composites

    NASA Technical Reports Server (NTRS)

    Kellas, Sotiris; Morton, John

    1990-01-01

    A research program was initiated to study and isolate the factors responsible for scale effects in the tensile strength of graphite/epoxy composite laminates. Four layups were chosen with appropriate stacking sequences so as to highlight individual and interacting failure modes. Four scale sizes were selected for investigation: full scale, 3/4, 2/4, and 1/4, with n = 4, 3, 2, and 1, respectively. The full-scale specimens were 32 plies thick, compared with 24, 16, and 8 plies for the 3/4, 2/4, and 1/4 sizes, respectively. Results were obtained in the form of tensile strength, stress-strain curves, and damage development. Problems associated with strength degradation with increasing specimen size are isolated and discussed. Inconsistencies associated with strain measurements were also identified. Enhanced X-ray radiography was employed for damage evaluation following step loading. It was shown that fiber-dominated layups were less sensitive to scaling effects than matrix-dominated layups.

  20. Definition of a nucleophilicity scale.

    PubMed

    Jaramillo, Paula; Pérez, Patricia; Contreras, Renato; Tiznado, William; Fuentealba, Patricio

    2006-07-01

    This work deals with exploring some empirical scales of nucleophilicity. We begin by evaluating the experimental indices of nucleophilicity proposed by Legon and Millen on the basis of the force constants derived from vibrational frequencies using a probe dipole H-X (X = F, CN). The correlation of some theoretical parameters with this experimental scale has been evaluated. The theoretical parameters chosen are the minimum of the electrostatic potential V(min), the binding energy (BE) between the nucleophile and the H-X dipole, and the electrostatic potential measured at the position of the hydrogen atom, V(H), when the nucleophile-dipole complex is in its equilibrium geometry. All of them present good correlations with the experimental nucleophilicity scale. In addition, the BEs of the nucleophiles with two other Lewis acids (one hard, BF(3), and the other soft, BH(3)) have been evaluated. The results suggest that the Legon and Millen nucleophilicity scale and the electrostatic potential derived scales can describe, to a good approximation, the reactivity order of the nucleophiles only when the interaction with a probe electrophile is of the hard-hard type. For a covalent interaction that is orbital controlled, a new nucleophilicity index using information from the frontier orbitals of both the nucleophile and the electrophile has been proposed. PMID:16805506

  1. The scaling of attention networks

    NASA Astrophysics Data System (ADS)

    Wang, Cheng-Jun; Wu, Lingfei

    2016-04-01

    We use clicks as a proxy of collective attention and construct networks to study the temporal dynamics of attention. In particular we collect the browsing records of millions of users on 1000 Web forums in two months. In the constructed networks, nodes are threads and edges represent the switch of users between threads in an hour. The investigated network properties include the number of threads N, the number of users UV, and the number of clicks PV. We find scaling functions PV ∼ UV^θ1, PV ∼ N^θ3, and UV ∼ N^θ2, in which the scaling exponents are always greater than 1. This means that (1) the studied networks maintain a self-similar flow structure in time, i.e., large networks are simply the scale-up versions of small networks; and (2) large networks are more "productive", in the sense that an average user would generate more clicks in the larger systems. We propose a revised version of Zipf's law to quantify the time-invariant flow structure of attention networks and relate it to the observed scaling properties. We also demonstrate the applied consequences of our research: forum-classification based on scaling properties.
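
    Scaling exponents such as θ1 are conventionally estimated as the slope of an ordinary least-squares fit in log-log coordinates. A minimal sketch with synthetic, superlinear data; the exponent 1.2 is illustrative, not a result from the paper.

```python
import math

def scaling_exponent(x, y):
    """OLS slope of log y on log x, i.e. theta in y ~ x**theta."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    return (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
            / sum((a - mx) ** 2 for a in lx))

# Hypothetical forums whose clicks grow superlinearly with users.
users = [10, 100, 1000, 10000]
clicks = [u ** 1.2 for u in users]      # theta = 1.2 > 1 by construction
theta = scaling_exponent(users, clicks)
print(round(theta, 2))  # 1.2
```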

  2. Scales of Natural Flood Management

    NASA Astrophysics Data System (ADS)

    Nicholson, Alex; Quinn, Paul; Owen, Gareth; Hetherington, David; Piedra Lara, Miguel; O'Donnell, Greg

    2016-04-01

    The scientific field of Natural Flood Management (NFM) is receiving much attention and is now widely seen as a valid solution for sustainably managing flood risk whilst offering significant multiple benefits. However, few examples exist looking at NFM on a large scale (>10 km²). Well-implemented NFM has the effect of restoring more natural catchment hydrological and sedimentological processes, which in turn can have significant flood risk and WFD benefits for catchment waterbodies. These catchment-scale improvements allow more 'natural' processes to be returned to rivers and streams, creating a more resilient system. Although certain NFM interventions may appear distant and disconnected from main-stem waterbodies, they will undoubtedly be contributing to WFD objectives at the catchment waterbody scale. This paper offers examples of NFM and explains how they can be maximised through practical design across many scales (from the individual feature up to the whole catchment). New tools are presented to assist in the selection of measures and their locations: first, to appreciate the flooding benefit at the local catchment scale, and then a Flood Impact Model that can best reflect the impacts of local changes further downstream. The tools are discussed in the context of our most recent experiences on NFM projects, including river catchments in the north east of England and in Scotland. This work has encouraged a more integrated approach to flood management planning that can use both traditional and novel NFM strategies in an effective and convincing way.

  3. Visions of Atomic Scale Tomography

    SciTech Connect

    Kelly, T. F.; Miller, Michael K; Rajan, Krishna; Ringer, S. P.

    2012-01-01

    A microscope, by definition, provides structural and analytical information about objects that are too small to see with the unaided eye. From the very first microscope, efforts to improve its capabilities and push them to ever-finer length scales have been pursued. In this context, it would seem that the concept of an ultimate microscope would have received much attention by now; but has it really ever been defined? Human knowledge extends to structures on a scale much finer than atoms, so it might seem that a proton-scale microscope or a quark-scale microscope would be the ultimate. However, we argue that an atomic-scale microscope is the ultimate for the following reason: the smallest building block for either synthetic structures or natural structures is the atom. Indeed, humans and nature both engineer structures with atoms, not quarks. So far as we know, all building blocks (atoms) of a given type are identical; it is the assembly of the building blocks that makes a useful structure. Thus, would a microscope that determines the position and identity of every atom in a structure with high precision and for large volumes be the ultimate microscope? We argue, yes. In this article, we consider how it could be built, and we ponder the answer to the equally important follow-on questions: who would care if it is built, and what could be achieved with it?

  4. Featured Invention: Laser Scaling Device

    NASA Technical Reports Server (NTRS)

    Dunn, Carol Anne

    2008-01-01

    In September 2003, NASA signed a nonexclusive license agreement with Armor Forensics, a subsidiary of Armor Holdings, Inc., for the laser scaling device under the Innovative Partnerships Program. Coupled with a measuring program, also developed by NASA, the unit provides crime scene investigators with the ability to shoot photographs at scale without having to physically enter the scene, analyzing details such as bloodspatter patterns and graffiti. This ability keeps the scene's components intact and pristine for the collection of information and evidence. The laser scaling device elegantly solved a pressing problem for NASA's shuttle operations team and also provided industry with a useful tool. For NASA, the laser scaling device is still used to measure divots or damage to the shuttle's external tank and other structures around the launchpad. When the invention also met similar needs within industry, the Innovative Partnerships Program provided information to Armor Forensics for licensing and marketing the laser scaling device. Jeff Kohler, technology transfer agent at Kennedy, added, "We also invited a representative from the FBI's special photography unit to Kennedy to meet with Armor Forensics and the innovator. Eventually the FBI ended up purchasing some units. Armor Forensics is also beginning to receive interest from DoD [Department of Defense] for use in military crime scene investigations overseas."

  5. Galaxy clustering on large scales.

    PubMed

    Efstathiou, G

    1993-06-01

    I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real-space and redshift-space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳ 20 h^-1 Mpc, where the Hubble constant H0 = 100h km s^-1 Mpc^-1; 1 pc = 3.09 x 10^16 m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe. PMID:11607400

  6. Hidden scale invariance of metals

    NASA Astrophysics Data System (ADS)

    Hummel, Felix; Kresse, Georg; Dyre, Jeppe C.; Pedersen, Ulf R.

    2015-11-01

    Density functional theory (DFT) calculations of 58 liquid elements at their triple point show that most metals exhibit near proportionality between the thermal fluctuations of the virial and the potential energy in the isochoric ensemble. This demonstrates a general "hidden" scale invariance of metals, making the condensed part of the thermodynamic phase diagram effectively one dimensional with respect to structure and dynamics. DFT-computed density scaling exponents, related to the Grüneisen parameter, are in good agreement with experimental values for the 16 elements where reliable data were available. Hidden scale invariance is demonstrated in detail for magnesium by showing invariance of structure and dynamics. Computed melting curves of period-three metals follow curves of invariance (isomorphs). The experimental structure factor of magnesium is predicted by assuming scale-invariant inverse power-law (IPL) pair interactions. However, crystal packings of several transition metals (V, Cr, Mn, Fe, Nb, Mo, Ta, W, and Hg), most post-transition metals (Ga, In, Sn, and Tl), and the metalloids Si and Ge cannot be explained by the IPL assumption. The virial-energy correlation coefficients of iron and phosphorus are shown to increase at elevated pressures. Finally, we discuss how scale invariance explains the Grüneisen equation of state and a number of well-known empirical melting and freezing rules.
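
    The virial/potential-energy proportionality described above is usually quantified by a Pearson correlation coefficient over isochoric fluctuations, with values close to 1 signalling "strong correlation". A sketch with synthetic fluctuations; the slope and noise level are made up, not DFT output.

```python
import math
import random

def virial_energy_correlation(w, u):
    """Pearson R between instantaneous virial W and potential energy U."""
    n = len(w)
    mw, mu = sum(w) / n, sum(u) / n
    cov = sum((a - mw) * (b - mu) for a, b in zip(w, u)) / n
    var_w = sum((a - mw) ** 2 for a in w) / n
    var_u = sum((b - mu) ** 2 for b in u) / n
    return cov / math.sqrt(var_w * var_u)

# Synthetic fluctuations: W tracks U with slope 5 (a made-up
# density-scaling exponent) plus weak noise.
random.seed(0)
u = [random.gauss(0.0, 1.0) for _ in range(4000)]
w = [5.0 * ui + random.gauss(0.0, 0.5) for ui in u]
r = virial_energy_correlation(w, u)
print(r > 0.98)  # strongly correlating, the hidden-scale-invariance signature
```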

  7. Urban Transfer Entropy across Scales

    PubMed Central

    Murcio, Roberto

    2015-01-01

    The morphology of urban agglomeration is studied here in the context of information exchange between different spatio-temporal scales. Urban migration to and from cities is non-random and follows non-random pathways. Cities are multidimensional non-linear phenomena, so understanding the relationships and connectivity between scales is important in determining how the interplay of local/regional urban policies may affect the distribution of urban settlements. In order to quantify these relationships, we follow an information-theoretic approach using the concept of Transfer Entropy. Our analysis is based on a stochastic urban fractal model, which mimics growing urban settlements and migration waves. The results indicate how different policies could affect urban morphology in terms of the information generated across geographical scales. PMID:26207628
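Transfer entropy between two signals can be estimated, in the simplest discrete case, from joint histograms with history length one. A minimal sketch (quantile binning, no bias correction; not the paper's estimator):

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=2):
    """Discrete transfer entropy T(Y -> X) in bits with history length 1:
    T = sum p(x1, x0, y0) * log2[ p(x1 | x0, y0) / p(x1 | x0) ],
    where x1 = x_{t+1}, x0 = x_t, y0 = y_t. Values are quantile-binned."""
    x = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    y = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))
    pairs_xx = Counter(zip(x[1:], x[:-1]))
    pairs_xy = Counter(zip(x[:-1], y[:-1]))
    singles = Counter(x[:-1])
    n = len(x) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_cond_full = c / pairs_xy[(x0, y0)]            # p(x1 | x0, y0)
        p_cond_hist = pairs_xx[(x1, x0)] / singles[x0]  # p(x1 | x0)
        te += (c / n) * np.log2(p_cond_full / p_cond_hist)
    return te

# y drives x with a one-step lag, so information flows y -> x, not x -> y.
rng = np.random.default_rng(1)
y = rng.normal(size=5000)
x = np.roll(y, 1) + 0.1 * rng.normal(size=5000)
te_yx, te_xy = transfer_entropy(x, y), transfer_entropy(y, x)
print(te_yx > te_xy)  # True: the driving direction carries more information
```

The asymmetry te_yx ≫ te_xy is what makes transfer entropy a directed measure, unlike mutual information.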

  8. Softness Correlations Across Length Scales

    NASA Astrophysics Data System (ADS)

    Ivancic, Robert; Shavit, Amit; Rieser, Jennifer; Schoenholz, Samuel; Cubuk, Ekin; Durian, Douglas; Liu, Andrea; Riggleman, Robert

    In disordered systems, it is believed that mechanical failure begins with localized particle rearrangements. Recently, a machine learning method has been introduced to identify how likely a particle is to rearrange given its local structural environment, quantified by softness. We calculate the softness of particles in simulations of atomic Lennard-Jones mixtures, molecular Lennard-Jones oligomers, colloidal systems and granular systems. In each case, we find that the length scale characterizing spatial correlations of softness is approximately a particle diameter. These results provide a rationale for why localized rearrangements, whose size is presumably set by the scale of softness correlations, might occur in disordered systems across many length scales. Supported by DOE DE-FG02-05ER46199.

  9. Interactive Enhancement Of Tone Scale

    NASA Astrophysics Data System (ADS)

    Troxel, Donald E.; Schreiber, William F.; Burzinski, Nancy J.; Matson, Mark D.

    1982-10-01

    The tone scale or gradation of a continuous tone picture is the most important factor related to the quality of an image. We have developed special purpose analog and digital circuitry that enables the real-time (30 updates per second) computation of a tone scale transformation which is then applied to a digitized picture being displayed on a television monitor. In our system the tone scale transformations are controlled by knobs that are labeled in terms meaningful to photographic artisans, rather than requiring an operator to specify points on a transfer characteristic, as is common with other systems. These knobs directly specify minimum and maximum densities, brightness, and shadow, highlight, and overall contrast. These control parameters may be selectively enabled by the operator. After the appropriate aesthetic modifications have been achieved on the television display, the operator may initiate the transformation of the complete stored image prior to subsequent computer processing or hard copy output.
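The knob-to-curve mapping described above reduces, in digital form, to building a lookup table once per knob change and applying it to every displayed pixel. A sketch with simplified, hypothetical knob semantics (the paper's analog circuitry and exact control definitions are not reproduced):

```python
import numpy as np

def tone_scale_lut(d_min=0, d_max=255, brightness=0.0, contrast=1.0):
    """Build a 256-entry tone-scale lookup table for 8-bit images.
    brightness shifts mid-tones up or down; contrast steepens the curve
    about mid-grey; d_min/d_max clamp the output density range.
    These knob semantics are illustrative only."""
    levels = np.arange(256) / 255.0
    out = (levels - 0.5) * contrast + 0.5 + brightness
    out = np.clip(out, 0.0, 1.0)
    return np.round(d_min + out * (d_max - d_min)).astype(np.uint8)

def apply_tone_scale(image, lut):
    """A LUT application is a single gather per pixel, which is what makes
    real-time (30 updates per second) adjustment feasible."""
    return lut[image]

img = np.tile(np.arange(256, dtype=np.uint8), (4, 1))  # a grey ramp
lut = tone_scale_lut(d_min=16, d_max=240, contrast=1.2)
out = apply_tone_scale(img, lut)
print(out.min(), out.max())  # prints 16 240: clamped to the density range
```

Because only the 256-entry table is recomputed when a knob turns, the full stored image needs transforming just once, after the operator accepts the adjustment.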

  10. Interactive Enhancement Of Tone Scale

    NASA Astrophysics Data System (ADS)

    Troxel, Donald E.; Schreiber, William F.; Burzinski, Nancy J.; Matson, Mark D.

    1982-07-01

    The tone scale or gradation of a continuous tone picture is the most important factor related to the quality of an image. We have developed special purpose analog and digital circuitry which enables the real-time (30 updates per second) computation of a tone scale transformation which is then applied to a digitized picture being displayed on a television monitor. In our system the tone scale transformations are controlled by knobs which are labelled in terms meaningful to photographic artisans, rather than requiring an operator to specify points on a transfer characteristic as is common with other systems. These knobs directly specify minimum and maximum densities, brightness, and shadow, highlight and overall contrast. These control parameters may be selectively enabled by the operator. After the appropriate aesthetic modifications have been achieved on the television display, the operator may initiate the transformation of the complete stored image prior to subsequent computer processing or hard copy output.

  11. Flavor hierarchies from dynamical scales

    NASA Astrophysics Data System (ADS)

    Panico, Giuliano; Pomarol, Alex

    2016-07-01

    One main obstacle for any beyond the SM (BSM) scenario solving the hierarchy problem is its potentially large contributions to electric dipole moments. An elegant way to avoid this problem is to have the light SM fermions couple to the BSM sector only through bilinears, overline{f}f . This possibility can be neatly implemented in composite Higgs models. We study the implications of dynamically generating the fermion Yukawa couplings at different scales, relating larger scales to lighter SM fermions. We show that all flavor and CP-violating constraints can be easily accommodated for a BSM scale of few TeV, without requiring any extra symmetry. Contributions to B physics are mainly mediated by the top, giving a predictive pattern of deviations in Δ F = 2 and Δ F = 1 flavor observables that could be seen in future experiments.

  12. Dynamic scaling in spin glasses

    NASA Astrophysics Data System (ADS)

    Pappas, C.; Mezei, F.; Ehlers, G.; Manuel, P.; Campbell, I. A.

    2003-08-01

    We present neutron spin echo (NSE) results and a revisited analysis of historical data on spin glasses, which reveal a pure power-law time decay of the spin autocorrelation function s(Q,t)=S(Q,t)/S(Q) at the glass temperature Tg. The power law exponent is in excellent agreement with that calculated from dynamic and static critical exponents deduced from macroscopic susceptibility measurements made on a quite different time scale. This scaling relation involving exponents of different physical quantities determined by completely independent experimental methods is stringently verified experimentally in a spin glass. As spin glasses are a subgroup of the vast family of glassy systems also comprising structural glasses and other noncrystalline systems the observed strict critical scaling behavior is important. Above the phase transition the strikingly nonexponential relaxation, best fitted by the Ogielski (power-law times stretched exponential) function, appears as an intrinsic, homogeneous feature of spin glasses.

  13. Critical Multicultural Education Competencies Scale: A Scale Development Study

    ERIC Educational Resources Information Center

    Acar-Ciftci, Yasemin

    2016-01-01

    The purpose of this study is to develop a scale in order to identify the critical multicultural education competencies of teachers. For this reason, first of all, drawing on the knowledge in the literature, a new conceptual framework was created with a deductive method based on critical theory, critical race theory and critical multicultural…

  14. Fish scale development: Hair today, teeth and scales yesterday?

    PubMed

    Sharpe, P T

    2001-09-18

    A group of genes in the tumour necrosis factor signalling pathway are mutated in humans and mice with ectodermal dysplasias--a failure of hair and tooth development. A mutation has now been identified in one of these genes, ectodysplasin-A receptor, in the teleost fish Medaka, that results in a failure of scale formation. PMID:11566120

  15. Bath County Computer Attitude Scale: A Reliability and Validity Scale.

    ERIC Educational Resources Information Center

    Moroz, Pauline A.; Nash, John B.

    The Bath County Computer Attitude Scale (BCCAS) has received limited attention concerning its reliability and validity with a U.S. adult population. As developed by G. G. Bear, H. C. Richards, and P. Lancaster in 1987, the instrument assessed attitudes toward computers in areas of computer use, computer-aided instruction, programming and technical…

  16. Inflation at the electroweak scale

    NASA Technical Reports Server (NTRS)

    Knox, Lloyd; Turner, Michael S.

    1993-01-01

    We present a model for slow-rollover inflation where the vacuum energy that drives inflation is of the order of G_F⁻²; unlike most models, the conversion of vacuum energy to radiation ('reheating') is moderately efficient. The scalar field responsible for inflation is a standard-model singlet, develops a vacuum expectation value of 4 × 10⁶ GeV, has a mass of about 1 GeV, and can play a role in electroweak phenomena. We also discuss models where the energy scale of inflation is somewhat larger, but still well below the unification scale.

  17. Basalt Weathering Rates Across Scales

    NASA Astrophysics Data System (ADS)

    Navarresitchler, A.; Brantley, S.

    2006-12-01

    Weathering of silicate minerals is a known sink for atmospheric CO2. An estimated 30%-35% of the consumption of CO2 from continental silicate weathering can be attributed to basalt weathering (Dessert et al., 2003). To assess basalt weathering rates we examine weathering advance rates of basalt (w, mm/yr) reported at four scales: denudation rates from basalt watersheds (tens of kilometers), rates of soil formation from soil profiles developed on basaltic parent material of known age (meters), rates of weathering rind formation on basalt clasts (centimeters), and laboratory dissolution rates (millimeters). Basalt weathering advance rates calculated for watersheds range between 0.36 and 9.8 × 10⁻³ mm/yr. The weathering advance rate for a basalt soil profile in Hawaii is 8.0 × 10⁻³ mm/yr, while advance rates for clasts range from 5.6 × 10⁻⁶ to 2.4 × 10⁻⁴ mm/yr. Batch and mixed-flow laboratory experiments performed at circumneutral pH yield advance rates of 2.5 × 10⁻⁵ to 3.4 × 10⁻⁷ mm/yr when normalized to BET surface area. These results show increasing advance rates with both increasing scale (from laboratory to watersheds) and increasing temperature. If we assume that basalt weathers at an intrinsic rate that applies to all scales, then we conclude that variations in weathering advance rates arise from variations in surface area measurement at different scales (D); therefore, basalt weathering is a fractal system. We measure a fractal dimension (dr) of basalt weathering of 2.2. For Euclidean geometries, measured surface area does not vary with the scale at which it is measured and dr equals 2. For natural surfaces, surface area is related to the scale at which it is measured. As scale increases, the minimum size of the surface irregularities that are measurable also increases. The ratio between BET- and geometric-normalized laboratory dissolution rates has been defined as a roughness parameter, λ, which ranges from ~10-100. We extend the definition of this roughness parameter
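Treating advance rate as a power law in measurement scale makes the fractal dimension a log-log regression slope: if w ∝ s^(dr - 2), the slope gives dr - 2. A sketch with synthetic rate/scale pairs (the values are illustrative, constructed to reproduce dr = 2.2, and are not the paper's data):

```python
import numpy as np

# Synthetic (measurement scale in mm, weathering advance rate in mm/yr)
# pairs spanning laboratory grains to whole watersheds, built to follow
# w ~ s**(dr - 2) with dr = 2.2.
scale = np.array([1.0, 1.0e2, 1.0e4, 1.0e7])  # mm
rate = 1.0e-6 * scale**0.2                    # mm/yr

# The log-log slope recovers dr - 2, hence the fractal dimension dr.
slope, intercept = np.polyfit(np.log10(scale), np.log10(rate), 1)
dr = slope + 2.0
print(round(dr, 2))  # prints 2.2: a rougher-than-Euclidean (dr > 2) surface
```

A Euclidean surface (dr = 2) would plot as a horizontal line here, i.e., the same advance rate at every measurement scale.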

  18. Dynamics of convective scale interaction

    NASA Technical Reports Server (NTRS)

    Purdom, James F. W.; Sinclair, Peter C.

    1988-01-01

    Several of the mesoscale dynamic and thermodynamic aspects of convective scale interaction are examined. An explanation is given of how sounding data can be coupled with satellite-observed cumulus development in the warm sector and with the arc cloud line's time evolution to develop a short-range forecast of expected convective intensity along an arc cloud line. The formative, mature and dissipating stages of the arc cloud line life cycle are discussed. Specific properties of convective scale interaction are presented, and the relationship between arc cloud lines and tornado-producing thunderstorms is considered.

  19. Scale effects in crystal plasticity

    NASA Astrophysics Data System (ADS)

    Padubidri Janardhanachar, Guruprasad

    The goal of this research work is to further the understanding of crystal plasticity, particularly at reduced structural and material length scales. Fundamental understanding of plasticity is central to various challenges facing design and manufacturing of materials for structural and electronic device applications. The development of microstructurally tailored advanced metallic materials with enhanced mechanical properties that can withstand extremes in stress, strain, and temperature, will aid in increasing the efficiency of power generating systems by allowing them to work at higher temperatures and pressures. High specific strength materials can lead to low fuel consumption in transport vehicles. Experiments have shown that enhanced mechanical properties can be obtained in materials by constraining their size, microstructure (e.g. grain size), or both for various applications. For the successful design of these materials, it is necessary to have a thorough understanding of the influence of different length scales and evolving microstructure on the overall behavior. In this study, distinction is made between the effect of structural and material length scale on the mechanical behavior of materials. A length scale associated with an underlying physical mechanism influencing the mechanical behavior can overlap with either structural length scales or material length scales. If it overlaps with structural length scales, then the material is said to be dimensionally constrained. On the other hand, if it overlaps with material length scales, for example grain size, then the material is said to be microstructurally constrained. The objectives of this research work are: (1) to investigate scale and size effects due to dimensional constraints; (2) to investigate size effects due to microstructural constraints; and (3) to develop a size dependent hardening model through coarse graining of dislocation dynamics. A discrete dislocation dynamics (DDD) framework where the

  20. Modifiers and Perceived Stress Scale.

    ERIC Educational Resources Information Center

    Linn, Margaret W.

    1986-01-01

    The Modifiers and Perceived Stress Scale measures stressful life events by number and amount of perceived stresses and provides scores for variables such as anticipation of events, responsibility for events, and amount of social support from family and friends in coping with each event that modify the way stress is perceived. (Author)

  1. Nanotribology: Rubbing on Small Scale

    ERIC Educational Resources Information Center

    Dickinson, J. Thomas

    2005-01-01

    Nanometer-scale investigations offer the potential of providing a first-principles understanding of tribo-systems in terms of fundamental intermolecular forces. Some of the basic issues and motivations for the use of scanning probes in the area of nanotribology are presented.

  2. Hydrodynamic aspects of shark scales

    NASA Technical Reports Server (NTRS)

    Raschi, W. G.; Musick, J. A.

    1986-01-01

    Ridge morphometrics on placoid scales from 12 galeoid shark species were examined in order to evaluate their potential value for frictional drag reduction. The geometry of the shark scales is similar to longitudinally grooved surfaces (riblets) that have previously been shown to give 8 percent skin-friction reduction for turbulent boundary layers. The present study of the shark scales was undertaken to determine if the physical dimensions of the ridges on the shark scales are of the right magnitude to be used by the sharks for drag reduction, based on previous riblet work. The results indicate that the ridge heights and spacings are normally maintained between the predicted optimal values proposed for voluntary and burst swimming speeds throughout the individual's ontogeny. Moreover, the species which might be considered the fastest possess smaller and more closely spaced ridges that, based on the riblet work, would suggest a greater frictional drag reduction at high swimming speeds, as compared to their more sluggish counterparts.

  3. Measurement, Scale, and Theater Arts

    ERIC Educational Resources Information Center

    Johnston, David E.

    2004-01-01

    This article describes a middle-school project that challenged students to design scale models of three-dimensional blocks used in theater programs. Students applied skills such as measurement, proportionality, and spatial reasoning in a cooperative setting. (Contains 1 table and 9 figures.)

  4. Global scale predictability of floods

    NASA Astrophysics Data System (ADS)

    Weerts, Albrecht; Gijsbers, Peter; Sperna Weiland, Frederiek

    2016-04-01

    Flood (and storm surge) forecasting at the continental and global scale has only become possible in recent years (Emmerton et al., 2016; Verlaan et al., 2015) due to the availability of meteorological forecasts, global-scale precipitation products, and global-scale hydrologic and hydrodynamic models. Deltares has set up GLOFFIS, a research-oriented multi-model operational flood forecasting system based on Delft-FEWS, in an open experimental ICT facility called Id-Lab. In GLOFFIS both the W3RA and PCRGLOB-WB models are run in ensemble mode using GEFS and ECMWF-EPS (latency 2 days). GLOFFIS will be used for experiments into the predictability of floods (and droughts) and their dependency on initial state estimation, meteorological forcing and the hydrologic model used. Here we present initial results of verification of the ensemble flood forecasts derived with the GLOFFIS system. Emmerton, R., Stephens, L., Pappenberger, F., Pagano, T., Weerts, A., Wood, A., Salamon, P., Brown, J., Hjerdt, N., Donnelly, C., Cloke, H. Continental and Global Scale Flood Forecasting Systems, WIREs Water (accepted), 2016. Verlaan, M., De Kleermaeker, S., Buckman, L. GLOSSIS: Global storm surge forecasting and information system, Australasian Coasts & Ports Conference, 15-18 September 2015, Auckland, New Zealand.

  5. Secondary School Burnout Scale (SSBS)

    ERIC Educational Resources Information Center

    Aypay, Ayse

    2012-01-01

    The purpose of this study is to develop the "Secondary School Burnout Scale." The study group included 728 students from 14 schools in four cities in Turkey. Both Exploratory Factor Analysis and Confirmatory Factor Analysis were conducted on the data. A seven-factor solution emerged. The seven factors explained 61% of the total variance. The model…

  6. Primary Childhood School Success Scale.

    ERIC Educational Resources Information Center

    Seagraves, Margaret C.

    The purpose of this research study was to build and pilot a psychometric instrument, the Primary Childhood School Success Scale (PCSSS), to identify behaviors needed for children to be successful in first grade. Fifty-two teacher responses were collected. The instrument had a reliability coefficient (Alpha) of 0.95, a mean of 13.26, and a variance…

  7. Depression Rating Scale for Children.

    ERIC Educational Resources Information Center

    Poznanski, Elva O.; And Others

    1979-01-01

    A Children's Depression Rating Scale (CDRS) was devised and tested on 30 inpatient children (6 to 12 years old) in a medical hospital. A high correlation was found between global ratings by two psychiatrists of severity of depression and scores on the CDRS. Journal availability: American Academy of Pediatrics, P.O. Box 1034, Evanston, IL 60204.…

  8. Children's Social Relations Interview Scale.

    ERIC Educational Resources Information Center

    Volpe, Richard

    The Children's Social Relations Interview Scale (CSRIS) was developed to assess the role expectations and role behaviors associated with physical disabilities, namely low status and independence. Three traits are assessed: succorance, the seeking of help and support; restraint, physical and social limitation and circumscription by others; and…

  9. Multidimensional Scaling of Video Surrogates.

    ERIC Educational Resources Information Center

    Goodrum, Abby A.

    2001-01-01

    Four types of video surrogates were compared under two tasks. Multidimensional scaling was used to map dimensional dispersions of users' judgments of similarity between videos and surrogates. Congruence between these maps was used to evaluate representativeness of each surrogate type. Congruence was greater for image-based than for text-based…

  10. Scaling of pressurized fluidized beds

    SciTech Connect

    Guralnik, S.; Glicksman, L.R.

    1994-10-01

    The project has two primary objectives. The first is to verify a set of hydrodynamic scaling relationships for commercial pressurized fluidized bed combustors (PFBC). The second objective is to investigate solids mixing in pressurized bubbling fluidized beds. American Electric Power's (AEP) Tidd combined-cycle demonstration plant will provide time-varying pressure drop data to serve as the basis for the scaling verification. The verification will involve demonstrating that a properly scaled cold model and the Tidd PFBC exhibit hydrodynamically similar behavior. An important issue in PFBC design is the spacing of fuel feed ports. The feed spacing is dictated by the fuel distribution and the mixing characteristics within the bed. After completing the scaling verification, the cold model will be used to study the characteristics of PFBCs. A thermal tracer technique will be utilized to study mixing both near the fuel feed region and in the far field. The results will allow the coal feed and distributor to be designed for optimal heating.
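Glicksman-type hydrodynamic scaling makes "properly scaled cold model" precise: a set of dimensionless groups must match between the hot combustor and the cold model. A sketch of the simplified group set under assumed, illustrative operating values (not Tidd's actual parameters):

```python
from dataclasses import dataclass

G = 9.81  # gravitational acceleration, m/s^2

@dataclass
class Bed:
    u: float      # superficial gas velocity, m/s
    D: float      # bed diameter, m
    dp: float     # mean particle diameter, m
    rho_s: float  # solid density, kg/m^3
    rho_g: float  # gas density, kg/m^3
    Gs: float     # solids circulation flux, kg/(m^2 s)

def glicksman_groups(b: Bed) -> dict:
    """Simplified Glicksman set: Froude number, solid/gas density ratio,
    bed-to-particle length ratio, and dimensionless solids flux. Full
    similitude also matches sphericity and the particle size distribution,
    omitted here."""
    return {
        "Fr": b.u**2 / (G * b.D),
        "rho_ratio": b.rho_s / b.rho_g,
        "D_over_dp": b.D / b.dp,
        "Gs_star": b.Gs / (b.rho_s * b.u),
    }

# Illustrative hot pressurized bed vs a half-linear-scale atmospheric cold
# model: velocities scale as sqrt(length); solid density tracks the gas.
hot = Bed(u=1.0, D=2.0, dp=1.0e-3, rho_s=2600.0, rho_g=3.5, Gs=26.0)
cold = Bed(u=1.0 / 2**0.5, D=1.0, dp=0.5e-3, rho_s=890.0, rho_g=1.2, Gs=6.3)

hg, cg = glicksman_groups(hot), glicksman_groups(cold)
for k in hg:
    print(f"{k:10s} hot={hg[k]:.4f} cold={cg[k]:.4f}")  # groups nearly match
```

With the groups matched, time-varying pressure-drop statistics from the cold model and the hot bed can be compared directly, which is the substance of the verification step described above.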

  11. SCALING: Wind Tunnel to Flight

    NASA Astrophysics Data System (ADS)

    Bushnell, Dennis M.

    2006-01-01

    Wind tunnels have wide-ranging functionality, including many applications beyond aeronautics, and historically have been the major source of information for technological aerodynamics/aeronautical applications. There are a myriad of scaling issues/differences from flight to wind tunnel, and their study and impacts are uneven and a function of the particular type of extant flow phenomena. Typically, the most serious discrepancies are associated with flow separation. The tremendous ongoing increases in numerical simulation capability are changing and in many aspects have changed the function of the wind tunnel from a (scaled) "predictor" to a source of computational calibration/validation information with the computation then utilized as the flight prediction/scaling tool. Numerical simulations can increasingly include the influences of the various scaling issues. This wind tunnel role change has been occurring for decades as computational capability improves in all aspects. Additional issues driving this trend are the increasing cost (and time) disparity between physical experiments and computations, and increasingly stringent accuracy requirements.

  12. Scaling properties of marathon races

    NASA Astrophysics Data System (ADS)

    Alvarez-Ramirez, Jose; Rodriguez, Eduardo

    2006-06-01

    Some regularities in popular marathon races are identified in this paper. It is found that, for high-performance participants (i.e., racing times in the range [2:15,3:15] h), the average velocity as a function of the marathoner's ranking behaves as a power law, which may suggest the presence of critical phenomena. Elite marathoners with racing times below 2:15 h can be considered outliers with respect to this behavior. For the main marathon pack (i.e., racing times in the range [3:00,6:00] h), the average velocity as a function of the marathoner's ranking behaves linearly. For these racing times, the interpersonal velocity, defined as the difference of velocities between consecutive runners, displays a continuum of scaling behavior ranging from uncorrelated noise at small scales to correlated 1/f-noise at large scales. It is a matter of fact that 1/f-noise is characterized by correlations extended over a wide range of scales, a clear indication of some sort of cooperative effect.
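The reported power-law behavior of average velocity versus ranking can be checked with a least-squares fit in log-log coordinates; a sketch on synthetic race data (the exponent and speeds are made up, not taken from the paper):

```python
import numpy as np

def ranking_exponent(velocities):
    """Fit v(r) ~ r**(-beta) to ranked average velocities (rank 1 = fastest)
    by least squares in log-log coordinates; returns beta."""
    v = np.sort(np.asarray(velocities, dtype=float))[::-1]
    rank = np.arange(1, len(v) + 1)
    slope, _ = np.polyfit(np.log(rank), np.log(v), 1)
    return -slope

# Synthetic field of 2000 finishers obeying an exact power law, beta = 0.07.
rank = np.arange(1, 2001)
v = 5.2 * rank**-0.07  # m/s, illustrative
print(round(ranking_exponent(v), 4))  # prints 0.07
```

On real results one would fit only the high-performance window and inspect residuals, since the main pack follows the linear (not power-law) regime described above.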

  13. Multi-Scale Infrastructure Assessment

    EPA Science Inventory

    The U.S. Environmental Protection Agency’s (EPA) multi-scale infrastructure assessment project supports both water resource adaptation to climate change and the rehabilitation of the nation’s aging water infrastructure by providing tools, scientific data and information to progra...

  14. The Adaptive Behavior Rating Scale.

    ERIC Educational Resources Information Center

    Meyer, William J.

    A scale to identify important behaviors in preschool children was developed, and ratings were related to more traditional indices of development and academic readiness. Teacher interviews were used to identify 62 specific behaviors related to maximally adapted and maximally maladapted kindergarten children. These were incorporated into a…

  15. Scaling up of renewable chemicals.

    PubMed

    Sanford, Karl; Chotani, Gopal; Danielson, Nathan; Zahn, James A

    2016-04-01

    The transition of promising technologies for production of renewable chemicals from laboratory scale to commercial scale is often difficult and expensive. As a result, the timeframe estimated for commercialization is typically underestimated, resulting in much slower penetration of these promising new methods and products into the chemical industries. The theme of 'sugar is the next oil' connects biological, chemical, and thermochemical conversions of renewable feedstocks to products that are drop-in replacements for petroleum-derived chemicals or are new-to-market chemicals/materials. The latter typically offer a functionality advantage and can command higher prices that result in less severe scale-up challenges. However, for drop-in replacements, price is of paramount importance and competitive capital and operating expenditures are a prerequisite for success. Hence, scale-up of relevant technologies must be interfaced with effective and efficient management of both cell and steel factories. Details involved in all aspects of manufacturing, such as utilities, sterility, product recovery and purification, regulatory requirements, and emissions, must be managed successfully. PMID:26874264

  16. Hydrodynamic aspects of shark scales

    NASA Astrophysics Data System (ADS)

    Raschi, W. G.; Musick, J. A.

    1986-03-01

    Ridge morphometrics on placoid scales from 12 galeoid shark species were examined in order to evaluate their potential value for frictional drag reduction. The geometry of the shark scales is similar to longitudinally grooved surfaces (riblets) that have previously been shown to give 8 percent skin-friction reduction for turbulent boundary layers. The present study of the shark scales was undertaken to determine if the physical dimensions of the ridges on the shark scales are of the right magnitude to be used by the sharks for drag reduction, based on previous riblet work. The results indicate that the ridge heights and spacings are normally maintained between the predicted optimal values proposed for voluntary and burst swimming speeds throughout the individual's ontogeny. Moreover, the species which might be considered the fastest possess smaller and more closely spaced ridges that, based on the riblet work, would suggest a greater frictional drag reduction at high swimming speeds, as compared to their more sluggish counterparts.

  17. REGIONAL SCALE COMPARATIVE RISK ASSESSMENT

    EPA Science Inventory

    Regional Vulnerability Assessment (ReVA) is an approach to regional-scale ecological risk assessment that is currently under development by EPA's Office of Research and Development. The pilot assessment will be done for the mid-Atlantic region and builds on data collected for th...

  18. Structural Similitude and Scaling Laws

    NASA Technical Reports Server (NTRS)

    Simitses, George J.

    1998-01-01

    Aircraft and spacecraft comprise the class of aerospace structures that require efficiency and wisdom in design, sophistication and accuracy in analysis and numerous and careful experimental evaluations of components and prototype, in order to achieve the necessary system reliability, performance and safety. Preliminary and/or concept design entails the assemblage of system mission requirements, system expected performance and identification of components and their connections as well as of manufacturing and system assembly techniques. This is accomplished through experience based on previous similar designs, and through the possible use of models to simulate the entire system characteristics. Detail design is heavily dependent on information and concepts derived from the previous steps. This information identifies critical design areas which need sophisticated analyses, and design and redesign procedures to achieve the expected component performance. This step may require several independent analysis models, which, in many instances, require component testing. The last step in the design process, before going to production, is the verification of the design. This step necessitates the production of large components and prototypes in order to test component and system analytical predictions and verify strength and performance requirements under the worst loading conditions that the system is expected to encounter in service. Clearly then, full-scale testing is in many cases necessary and always very expensive. In the aircraft industry, in addition to full-scale tests, certification and safety necessitate large component static and dynamic testing. Such tests are extremely difficult, time-consuming and absolutely necessary. Clearly, one should not expect that prototype testing will be totally eliminated in the aircraft industry. It is hoped, though, that we can reduce full-scale testing to a minimum. Full-scale large component testing is necessary in

  19. Time scales in cognitive neuroscience

    PubMed Central

    Papo, David

    2013-01-01

    Cognitive neuroscience boils down to describing the ways in which cognitive function results from brain activity. In turn, brain activity shows complex fluctuations, with structure at many spatio-temporal scales. Exactly how cognitive function inherits the physical dimensions of neural activity, though, is highly non-trivial, and so are generally the corresponding dimensions of cognitive phenomena. As for any physical phenomenon, when studying cognitive function, the first conceptual step should be that of establishing its dimensions. Here, we provide a systematic presentation of the temporal aspects of task-related brain activity, from the smallest scale of the brain imaging technique's resolution, to the observation time of a given experiment, through the characteristic time scales of the process under study. We first review some standard assumptions on the temporal scales of cognitive function. In spite of their general use, these assumptions hold true to a high degree of approximation for many cognitive (viz. fast perceptual) processes, but have their limitations for other ones (e.g., thinking or reasoning). We define in a rigorous way the temporal quantifiers of cognition at all scales, and illustrate how they qualitatively vary as a function of the properties of the cognitive process under study. We propose that each phenomenon should be approached with its own set of theoretical, methodological and analytical tools. In particular, we show that when treating cognitive processes such as thinking or reasoning, complex properties of ongoing brain activity, which can be drastically simplified when considering fast (e.g., perceptual) processes, start playing a major role, and not only characterize the temporal properties of task-related brain activity, but also determine the conditions for proper observation of the phenomena. Finally, some implications on the design of experiments, data analyses, and the choice of recording parameters are discussed. 
PMID:23626578

  20. Infants' and adults' perception of scale structure.

    PubMed

    Trehub, S E; Schellenberg, E G; Kamenetsky, S B

    1999-08-01

    Adults and 9-month-old infants were required to detect mistuned tones in multitone sequences. When 7-tone versions of a common nursery tune were generated from the Western major scale (unequal scale steps) or from an alternative scale (equal steps), infants detected the mistuned tones more accurately in the unequal-step context than in the equal-step context (Experiment 1). Infants and adults were subsequently tested with 1 of 3 ascending-descending scales (15 tones): (a) a potentially familiar scale (major) with unequal steps, (b) an unfamiliar scale with unequal steps, and (c) an unfamiliar scale with equal steps. Infants detected mistuned tones only in the scales with unequal steps (Experiment 2). Adults performed better on the familiar (major) unequal-step scale and equally poorly on both unfamiliar scales (Experiments 3 and 4). These findings are indicative of an inherent processing bias favoring unequal-step scales. PMID:10464941
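The unequal-step/equal-step contrast is easy to make concrete in semitone arithmetic: the Western major scale mixes whole and half steps, while a seven-fold equal division of the octave has a single step size. A quick sketch (standard music-theory intervals, not the study's actual stimuli):

```python
# Scale degrees in semitones above the tonic (12-tone equal temperament).
major = [0, 2, 4, 5, 7, 9, 11, 12]        # steps: W W H W W W H
equal7 = [i * 12 / 7 for i in range(8)]   # 7 equal divisions of an octave

major_steps = [b - a for a, b in zip(major, major[1:])]
equal_steps = [round(b - a, 3) for a, b in zip(equal7, equal7[1:])]

print(major_steps)               # [2, 2, 1, 2, 2, 2, 1]: two step sizes
print(sorted(set(equal_steps)))  # [1.714]: a single step size

# Corresponding frequencies above middle C (261.63 Hz), for stimulus use:
major_freqs = [round(261.63 * 2 ** (s / 12), 2) for s in major]
```

The two distinct step sizes of the major scale give each degree a unique pattern of neighbors, which is the structural cue the unequal-step advantage is usually attributed to.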

  1. Evaluating the impact of farm scale innovation at catchment scale

    NASA Astrophysics Data System (ADS)

    van Breda, Phelia; De Clercq, Willem; Vlok, Pieter; Querner, Erik

    2014-05-01

    Hydrological modelling lends itself to other disciplines very well, normally as a process-based system that acts as a catalogue of events taking place. These hydrological models are spatio-temporal in their design and are generally well suited for what-if situations in other disciplines. Scaling should therefore be a function of the purpose of the modelling. Process is always linked with scale or support, but the temporal resolution can affect the results if the spatial scale is not suitable. The use of hydrological response units tends to lump areas around physical features but disregards farm boundaries. Farm boundaries are often the crucial uppermost resolution needed to gain more value from hydrological modelling. In the Letaba Catchment of South Africa, we find a wide range of land uses, different models of ownership, and different farming systems, ranging from large commercial farms to small-scale subsistence farming. All of these have the same basic right to water, but water distribution in the catchment is problematic. Since water quantity is also a problem, the water supply systems need to ensure that valuable production areas are not left without water. Clearly, hydrological modelling should therefore be sensitive to specific land use. As a measure of productivity, a system of small-farmer production evaluation was designed. This activity presents a dynamic system outside hydrological modelling that is generally not considered inside hydrological modelling but depends on it. For sustainable development, a number of important concepts needed to be aligned with activities in this region, and the regulatory actions also need to be adhered to. This study aimed at aligning the activities in a region to the vision and objectives of the regulatory authorities. South Africa's system of socio-economic development planning is complex and mostly ineffective.
There are many regulatory authorities involved, often with unclear

  2. 21 CFR 880.2720 - Patient scale.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Patient scale. 880.2720 Section 880.2720 Food and... Patient scale. (a) Identification. A patient scale is a device intended for medical purposes that is used to measure the weight of a patient who cannot stand on a scale. This generic device includes...

  3. 21 CFR 880.2720 - Patient scale.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Patient scale. 880.2720 Section 880.2720 Food and... Patient scale. (a) Identification. A patient scale is a device intended for medical purposes that is used to measure the weight of a patient who cannot stand on a scale. This generic device includes...

  4. 21 CFR 880.2720 - Patient scale.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Patient scale. 880.2720 Section 880.2720 Food and... Patient scale. (a) Identification. A patient scale is a device intended for medical purposes that is used to measure the weight of a patient who cannot stand on a scale. This generic device includes...

  5. 27 CFR 19.276 - Package scales.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Package scales. 19.276... Package scales. Proprietors shall ensure the accuracy of scales used for weighing packages of spirits through tests conducted at intervals of not more than 6 months or whenever scales are adjusted or...

  6. 21 CFR 880.2720 - Patient scale.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Patient scale. 880.2720 Section 880.2720 Food and... Patient scale. (a) Identification. A patient scale is a device intended for medical purposes that is used to measure the weight of a patient who cannot stand on a scale. This generic device includes...

  7. 21 CFR 880.2720 - Patient scale.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Patient scale. 880.2720 Section 880.2720 Food and... Patient scale. (a) Identification. A patient scale is a device intended for medical purposes that is used to measure the weight of a patient who cannot stand on a scale. This generic device includes...

  8. Mokken Scale Analysis Using Hierarchical Clustering Procedures

    ERIC Educational Resources Information Center

    van Abswoude, Alexandra A. H.; Vermunt, Jeroen K.; Hemker, Bas T.; van der Ark, L. Andries

    2004-01-01

    Mokken scale analysis (MSA) can be used to assess and build unidimensional scales from an item pool that is sensitive to multiple dimensions. These scales satisfy a set of scaling conditions, one of which follows from the model of monotone homogeneity. An important drawback of the MSA program is that the sequential item selection and scale…

  9. 30 CFR 56.3202 - Scaling tools.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Scaling tools. 56.3202 Section 56.3202 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND... § 56.3202 Scaling tools. Where manual scaling is performed, a scaling bar shall be provided. This...

  10. 30 CFR 57.3202 - Scaling tools.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Scaling tools. 57.3202 Section 57.3202 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND... Support-Surface and Underground § 57.3202 Scaling tools. Where manual scaling is performed, a scaling...

  11. 30 CFR 56.3202 - Scaling tools.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Scaling tools. 56.3202 Section 56.3202 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND... § 56.3202 Scaling tools. Where manual scaling is performed, a scaling bar shall be provided. This...

  12. 30 CFR 56.3202 - Scaling tools.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Scaling tools. 56.3202 Section 56.3202 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND... § 56.3202 Scaling tools. Where manual scaling is performed, a scaling bar shall be provided. This...

  13. 30 CFR 57.3202 - Scaling tools.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Scaling tools. 57.3202 Section 57.3202 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND... Support-Surface and Underground § 57.3202 Scaling tools. Where manual scaling is performed, a scaling...

  14. 30 CFR 56.3202 - Scaling tools.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Scaling tools. 56.3202 Section 56.3202 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND... § 56.3202 Scaling tools. Where manual scaling is performed, a scaling bar shall be provided. This...

  15. 30 CFR 56.3202 - Scaling tools.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Scaling tools. 56.3202 Section 56.3202 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND... § 56.3202 Scaling tools. Where manual scaling is performed, a scaling bar shall be provided. This...

  16. 30 CFR 57.3202 - Scaling tools.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Scaling tools. 57.3202 Section 57.3202 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND... Support-Surface and Underground § 57.3202 Scaling tools. Where manual scaling is performed, a scaling...

  17. 30 CFR 57.3202 - Scaling tools.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Scaling tools. 57.3202 Section 57.3202 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND... Support-Surface and Underground § 57.3202 Scaling tools. Where manual scaling is performed, a scaling...

  18. 30 CFR 57.3202 - Scaling tools.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Scaling tools. 57.3202 Section 57.3202 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND... Support-Surface and Underground § 57.3202 Scaling tools. Where manual scaling is performed, a scaling...

  19. Stability of Rasch Scales over Time

    ERIC Educational Resources Information Center

    Taylor, Catherine S.; Lee, Yoonsun

    2010-01-01

    Item response theory (IRT) methods are generally used to create score scales for large-scale tests. Research has shown that IRT scales are stable across groups and over time. Most studies have focused on items that are dichotomously scored. Now Rasch and other IRT models are used to create scales for tests that include polytomously scored items.…

  20. Simple scale interpolator facilitates reading of graphs

    NASA Technical Reports Server (NTRS)

    Fazio, A.; Henry, B.; Hood, D.

    1966-01-01

    Set of cards with scale divisions and a scale finder permits accurate reading of the coordinates of points on linear or logarithmic graphs plotted on rectangular grids. The set contains 34 different scales for linear plotting and 28 single cycle scales for log plots.

  1. Scaling properties of urban facilities

    NASA Astrophysics Data System (ADS)

    Wu, Liang; Yan, Xin; Du, Jiang

    2014-12-01

    Two measurements are employed to quantitatively investigate the scaling properties of the spatial distribution of urban facilities: the K function [whose derivative gives the radial distribution function ρ (t ) =K'(t ) /2 π t ] by number counting and the variance-mean relationship by the method of expanding bins. The K function and the variance-mean relationship are both power functions. This means that the spatial distributions of urban facilities are scaling invariant. Further analysis of more data (which includes eight types of facilities in 37 major Chinese cities) shows that the power laws broadly hold for all combinations of facilities and cities. A double stochastic process (DSP) model is proposed as a mathematical mechanism by which spatial point patterns can be generated that resemble the actual distribution of urban facilities both qualitatively and quantitatively. Simulation of the DSP yields a better agreement with the urban data than the correlated percolation model.
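    A naive version of the K-function measurement named here (pair counting without edge correction) can be sketched in a few lines. The point pattern below is synthetic, not urban-facility data, and the estimator is the textbook form, not necessarily the authors' exact implementation.

```python
import numpy as np

def ripley_k(points, t_values, area):
    """Naive Ripley's K estimate (no edge correction) for a 2D point pattern."""
    n = len(points)
    lam = n / area  # intensity (points per unit area)
    # all pairwise distances; exclude self-pairs
    d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)
    # K(t) = (1 / (lambda * n)) * number of ordered pairs closer than t
    return np.array([(d < t).sum() / (n * lam) for t in t_values])

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(500, 2))   # homogeneous random pattern in unit square
t = np.array([0.05, 0.10, 0.15])
k = ripley_k(pts, t, area=1.0)
# For complete spatial randomness K(t) is close to pi * t**2; clustered
# facility patterns would show K growing faster than this baseline.
```

    The radial distribution function ρ(t) = K'(t)/(2πt) then follows by numerically differentiating K over a fine grid of t values.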

  2. Scaling properties of urban facilities.

    PubMed

    Wu, Liang; Yan, Xin; Du, Jiang

    2014-12-01

    Two measurements are employed to quantitatively investigate the scaling properties of the spatial distribution of urban facilities: the K function [whose derivative gives the radial distribution function ρ(t)=K'(t)/2πt] by number counting and the variance-mean relationship by the method of expanding bins. The K function and the variance-mean relationship are both power functions. This means that the spatial distributions of urban facilities are scaling invariant. Further analysis of more data (which includes eight types of facilities in 37 major Chinese cities) shows that the power laws broadly hold for all combinations of facilities and cities. A double stochastic process (DSP) model is proposed as a mathematical mechanism by which spatial point patterns can be generated that resemble the actual distribution of urban facilities both qualitatively and quantitatively. Simulation of the DSP yields a better agreement with the urban data than the correlated percolation model. PMID:25615149

  3. The scale of cosmic isotropy

    SciTech Connect

    Marinoni, C.; Bel, J.; Buzzi, A. E-mail: Julien.Bel@cpt.univ-mrs.fr

    2012-10-01

    The most fundamental premise to the standard model of the universe states that the large-scale properties of the universe are the same in all directions and at all comoving positions. Demonstrating this hypothesis has proven to be a formidable challenge. The cross-over scale R_iso above which the galaxy distribution becomes statistically isotropic is vaguely defined and poorly (if not at all) quantified. Here we report on a formalism that allows us to provide an unambiguous operational definition and an estimate of R_iso. We apply the method to galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7, finding that R_iso ∼ 150 h⁻¹ Mpc. Besides providing a consistency test of the Copernican principle, this result is in agreement with predictions based on numerical simulations of the spatial distribution of galaxies in cold dark matter dominated cosmological models.

  4. Scale invariance in road networks

    NASA Astrophysics Data System (ADS)

    Kalapala, Vamsi; Sanwalani, Vishal; Clauset, Aaron; Moore, Cristopher

    2006-02-01

    We study the topological and geographic structure of the national road networks of the United States, England, and Denmark. By transforming these networks into their dual representation, where roads are vertices and an edge connects two vertices if the corresponding roads ever intersect, we show that they exhibit both topological and geographic scale invariance. That is, we show that for sufficiently large geographic areas, the dual degree distribution follows a power law with exponent 2.2⩽α⩽2.4 , and that journeys, regardless of their length, have a largely identical structure. To explain these properties, we introduce and analyze a simple fractal model of road placement that reproduces the observed structure, and suggests a testable connection between the scaling exponent α and the fractal dimensions governing the placement of roads and intersections.
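    For a power-law degree distribution like the reported 2.2 ≤ α ≤ 2.4, a standard estimator is the continuous maximum-likelihood form α = 1 + n / Σ ln(x_i / x_min). The sketch below checks it on a synthetic heavy-tailed sample; the data and the choice of x_min are illustrative, and the authors' actual fitting procedure may differ.

```python
import numpy as np

def powerlaw_mle(x, x_min):
    """Continuous maximum-likelihood estimate of a power-law exponent alpha."""
    tail = x[x >= x_min]
    return 1.0 + len(tail) / np.log(tail / x_min).sum()

# Synthetic sample with a known exponent, drawn by inverse-CDF sampling:
# P(X > x) = (x / x_min) ** -(alpha - 1)
rng = np.random.default_rng(1)
alpha_true, x_min = 2.3, 1.0
u = rng.uniform(size=50_000)
x = x_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

alpha_hat = powerlaw_mle(x, x_min)  # should recover a value near 2.3
```

    In practice x_min itself must be chosen (e.g. by minimizing a Kolmogorov-Smirnov distance between data and fit), since the power law typically holds only in the tail.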

  5. Unified scaling law for earthquakes

    PubMed Central

    Christensen, Kim; Danon, Leon; Scanlon, Tim; Bak, Per

    2002-01-01

    We propose and verify a unified scaling law that provides a framework for viewing the probability of the occurrence of earthquakes in a given region and for a given cutoff magnitude. The law shows that earthquakes occur in hierarchical correlated clusters, which overlap with other spatially separated correlated clusters for large enough time periods and areas. For a small enough region and time-scale, only a single correlated group can be sampled. The law links together the Gutenberg–Richter Law, the Omori Law of aftershocks, and the fractal dimensions of the faults. The Omori Law is shown to be the short-time limit of a general hierarchical phenomenon containing the statistics of both “main shocks” and “aftershocks,” indicating that they are created by the same mechanism. PMID:11875203

  6. Emerging universe from scale invariance

    SciTech Connect

    Del Campo, Sergio; Herrera, Ramón; Guendelman, Eduardo I.; Labraña, Pedro E-mail: guendel@bgu.ac.il E-mail: plabrana@ubiobio.cl

    2010-06-01

    We consider a scale invariant model which includes an R² term in the action and show that a stable "emerging universe" scenario is possible. The model belongs to the general class of theories where an integration measure independent of the metric is introduced. To implement scale invariance (S.I.), a dilaton field is introduced. The integration of the equations of motion associated with the new measure gives rise to the spontaneous symmetry breaking (S.S.B.) of S.I. After S.S.B. of S.I. in the model with the R² term (and first-order formalism applied), it is found that a non-trivial potential for the dilaton is generated. The dynamics of the scalar field becomes non-linear, and these non-linearities are instrumental in the stability of some of the emerging universe solutions, which exist for a parameter range of the theory.

  7. Scaling behavior of fragment shapes.

    PubMed

    Kun, F; Wittel, F K; Herrmann, H J; Kröplin, B H; Måløy, K J

    2006-01-20

    We present an experimental and theoretical study of the shape of fragments generated by explosive and impact loading of closed shells. Based on high speed imaging, we have determined the fragmentation mechanism of shells. Experiments have shown that the fragments vary from completely isotropic to highly anisotropic elongated shapes, depending on the microscopic cracking mechanism of the shell. Anisotropic fragments proved to have a self-affine character described by a scaling exponent. The distribution of fragment shapes exhibits a power-law decay. The robustness of the scaling laws is illustrated by a stochastic hierarchical model of fragmentation. Our results provide a possible improvement of the representation of fragment shapes in models of space debris. PMID:16486594

  8. Scaling Characteristics of Rill Networks

    NASA Astrophysics Data System (ADS)

    Barry, D. A.; Cheraghi, M.; Jomaa, S.; Sander, G. C.

    2015-12-01

    Sediment transport in overland flow interacts dynamically with the soil surface morphology. Often, sediment transport models assume a simplified and static morphology. This assumption, although it limits the predictive capacity of models, is reasonable since the evolution of morphology is difficult to quantify, particularly when rill networks form. Such networks evolve due to local features of the surface, which are difficult to identify even in well controlled laboratory experiments. Instead of attempting to predict details of rill networks, we hypothesize (i) that their statistical properties can be measured reliably and (ii) that under reasonable background conditions they exhibit scale invariance in space. We report initial results of laboratory experiments to test these hypotheses. An agricultural soil was placed in a 5 m × 2 m flume with a 5% slope to which a uniform rainfall was applied. Prior to the rainfall, the top 10 cm of the soil was ploughed and smoothed. Rill networks were generated in three 3-h experiments using different precipitation rates of 30, 45 and 60 mm h-1. The surface morphology was measured using laser scanning every 30 min (rainfall was halted to permit scanning). For the measured Digital Elevation Models, the exceedance probabilities and corresponding scaling factors for the drainage area and upstream length of the network were calculated. The results showed that, similar to river networks, there is a power-law relation in the exceedance probabilities for the parts of the network in which rill erosion is dominant. However, contrary to large-scale river networks, the scaling exponents were found to be dependent on rainfall intensity.

  9. Small-Scale-Field Dynamo

    SciTech Connect

    Gruzinov, A.; Cowley, S.; Sudan, R. ||

    1996-11-01

    Generation of magnetic field energy, without mean field generation, is studied. Isotropic mirror-symmetric turbulence of a conducting fluid amplifies the energy of small-scale magnetic perturbations if the magnetic Reynolds number is high and the dimensionality of space d satisfies 2.103 < d < 8.765. The result does not depend on the model of turbulence, incompressibility and isotropy being the only requirements. © 1996 The American Physical Society.

  10. Global scale groundwater flow model

    NASA Astrophysics Data System (ADS)

    Sutanudjaja, Edwin; de Graaf, Inge; van Beek, Ludovicus; Bierkens, Marc

    2013-04-01

    As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater sustains water flows in streams, rivers, lakes and wetlands, and thus supports ecosystem habitat and biodiversity, while its large natural storage provides a buffer against water shortages. Yet, the current generation of global-scale hydrological models does not include a groundwater flow component, which is a crucial part of the hydrological cycle and allows the simulation of groundwater head dynamics. In this study we present a steady-state MODFLOW (McDonald and Harbaugh, 1988) groundwater model on the global scale at 5 arc-minutes resolution. Aquifer schematization and properties of this groundwater model were developed from available global lithological models (e.g. Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moorsdorff, in press). We force the groundwater model with the output from the large-scale hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the long-term net groundwater recharge and average surface water levels derived from routed channel discharge. We validated calculated groundwater heads and depths with available head observations from different regions, including North and South America and Western Europe. Our results show that it is feasible to build a relatively simple global-scale groundwater model using existing information, and estimate water table depths within acceptable accuracy in many parts of the world.

  11. Recent developments in complex scaling

    SciTech Connect

    Rescigno, T.N.

    1980-12-15

    Some recent developments in the use of complex basis function techniques to study resonance as well as certain types of non-resonant, scattering phenomena are discussed. Complex scaling techniques and other closely related methods have continued to attract the attention of computational physicists and chemists and have now reached a point of development where meaningful calculations on many-electron atoms and molecules are beginning to appear feasible.

  12. Challenges for Large Scale Simulations

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias

    2010-03-01

    With computational approaches becoming ubiquitous, the growing impact of large scale computing on research influences both theoretical and experimental work. I will review a few examples in condensed matter physics and quantum optics, including the impact of computer simulations in the search for supersolidity, thermometry in ultracold quantum gases, and the challenging search for novel phases in strongly correlated electron systems. While only a decade ago such simulations needed the fastest supercomputers, many simulations can now be performed on small workstation clusters or even a laptop: what was previously restricted to a few experts can now potentially be used by many. Only part of the gain in computational capabilities is due to Moore's law and improvement in hardware. Equally impressive is the performance gain due to new algorithms - as I will illustrate using some recently developed algorithms. At the same time modern peta-scale supercomputers offer unprecedented computational power and allow us to tackle new problems and address questions that were impossible to solve numerically only a few years ago. While there is a roadmap for future hardware developments to exascale and beyond, the main challenges are on the algorithmic and software infrastructure side. Among the problems that face the computational physicist are: the development of new algorithms that scale to thousands of cores and beyond, a software infrastructure that lifts code development to a higher level and speeds up the development of new simulation programs for large scale computing machines, tools to analyze the large volume of data obtained from such simulations, and, as an emerging field, provenance-aware software that aims for reproducibility of the complete computational workflow from model parameters to the final figures.
Interdisciplinary collaborations and collective efforts will be required, in contrast to the cottage-industry culture currently present in many areas of computational

  13. Latest Developments in SLD Scaling

    NASA Technical Reports Server (NTRS)

    Tsao, Jen-Ching; Anderson, David N.

    2006-01-01

    Scaling methods have been shown previously to work well for supercooled large droplet (SLD) main ice shapes. However, feather sizes for some conditions have not been well represented by scale tests. To determine if there are fundamental differences between the development of feathers for appendix C and SLD conditions, this study used time-sequenced photographs, viewing along the span of the model during icing sprays. An airspeed of 100 kt, cloud water drop MVDs of 30 and 140 microns, and stagnation freezing fractions of 0.30 and 0.50 were tested in the NASA Glenn Icing Research Tunnel using an unswept 91-cm-chord NACA0012 airfoil model mounted at 0° AOA. The photos indicated that the feathers that developed in a distinct region downstream of the leading-edge ice determined the horn location and angle. The angles at which feathers grew from the surface were also measured; results are shown for an airspeed of 150 kt, an MVD of 30 microns, and stagnation freezing fractions of 0.30 to 0.60. Feather angles were found to depend strongly on the stagnation freezing fraction, and were independent of either chordwise position on the model or time into the spray. Feather angles also correlated well with horn angles. For these tests, there did not appear to be fundamental differences between the physics of SLD and appendix C icing; therefore, for these conditions similarity parameters used for appendix C scaling appear to be valid for SLD scaling as well. Further investigation into the cause for the large feather structures observed for some SLD conditions will continue.

  14. Scaling Exponents in Financial Markets

    NASA Astrophysics Data System (ADS)

    Kim, Kyungsik; Kim, Cheol-Hyun; Kim, Soo Yong

    2007-03-01

    We study the dynamical behavior of four exchange rates in foreign exchange markets. A detrended fluctuation analysis (DFA) is applied to detect the long-range correlation embedded in the non-stationary time series. For our case, we find that there exists a persistent long-range correlation in volatilities, which implies a deviation from the efficient market hypothesis. In particular, a crossover is shown to exist in the scaling behaviors of the volatilities.
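    The DFA procedure used in this abstract can be sketched in a few lines: integrate the series, detrend it in boxes of size s, and read the scaling exponent off the log-log slope of the fluctuation function F(s). The version below is a minimal illustration (non-overlapping boxes, linear detrending only), applied to synthetic white noise rather than exchange-rate data; for uncorrelated noise the exponent should come out near 0.5, while persistent long-range correlation gives values above 0.5.

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: fluctuation F(s) for each box size s."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for s in scales:
        n_boxes = len(y) // s
        segs = y[: n_boxes * s].reshape(n_boxes, s)
        t = np.arange(s)
        # least-squares linear detrend within each box
        resid = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
        F.append(np.sqrt(np.mean(np.concatenate(resid) ** 2)))
    return np.array(F)

rng = np.random.default_rng(2)
white = rng.standard_normal(10_000)          # stand-in for a volatility series
scales = np.array([16, 32, 64, 128, 256])
F = dfa(white, scales)
# Scaling exponent = slope of log F(s) vs. log s
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

    A crossover such as the one reported would show up as two distinct slopes when F(s) is fit separately over small and large scales.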

  15. Allometric scaling laws of metabolism

    NASA Astrophysics Data System (ADS)

    da Silva, Jafferson Kamphorst Leal; Garcia, Guilherme J. M.; Barbosa, Lauro A.

    2006-12-01

    One of the most pervasive laws in biology is allometric scaling, whereby a biological variable Y is related to the mass M of the organism by a power law, Y = Y_0 M^b, where b is the so-called allometric exponent. The origin of these power laws is still a matter of dispute mainly because biological laws, in general, do not follow from physical ones in a simple manner. In this work, we review the interspecific allometry of metabolic rates, where recent progress in the understanding of the interplay between geometrical, physical and biological constraints has been achieved. For many years, it was a universal belief that the basal metabolic rate (BMR) of all organisms is described by Kleiber's law (allometric exponent b=3/4). A few years ago, a theoretical basis for this law was proposed, based on a resource distribution network common to all organisms. Nevertheless, the 3/4-law has been questioned recently. First, there is an ongoing debate as to whether the empirical value of b is 3/4 or 2/3, or even nonuniversal. Second, some mathematical and conceptual errors were found in these network models, weakening the proposed theoretical arguments. Another pertinent observation is that the maximal aerobically sustained metabolic rate of endotherms scales with an exponent larger than that of BMR. Here we present a critical discussion of the theoretical models proposed to explain the scaling of metabolic rates, and compare the predicted exponents with a review of the experimental literature. Our main conclusion is that although there is not a universal exponent, it should be possible to develop a unified theory for the common origin of the allometric scaling laws of metabolism.
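    The exponent b in Y = Y_0 M^b is typically estimated by ordinary least squares on log-transformed data, since the power law becomes linear: ln Y = ln Y_0 + b ln M. The sketch below uses synthetic masses and rates with a built-in exponent of 3/4; all numbers are illustrative, not values from the reviewed literature.

```python
import numpy as np

# Synthetic "metabolic rate" data following Y = Y0 * M**b with b = 3/4,
# plus multiplicative log-normal noise (mass spans six decades).
rng = np.random.default_rng(3)
mass = 10 ** rng.uniform(0, 6, size=200)
b_true, y0 = 0.75, 4.1
rate = y0 * mass ** b_true * np.exp(rng.normal(0.0, 0.1, size=200))

# Allometric exponent and intercept from the log-log regression:
b_hat, log_y0 = np.polyfit(np.log(mass), np.log(rate), 1)
# b_hat should be close to 0.75; the empirical 2/3 vs. 3/4 debate
# hinges on exactly such fits, and on which species are included.
```

    With real data the estimate is sensitive to the mass range sampled and to phylogenetic correlations, which is one reason the empirical value of b remains contested.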

  16. Development of a Facebook Addiction Scale.

    PubMed

    Andreassen, Cecilie Schou; Torsheim, Torbjørn; Brunborg, Geir Scott; Pallesen, Ståle

    2012-04-01

    The Bergen Facebook Addiction Scale (BFAS), initially a pool of 18 items, three reflecting each of the six core elements of addiction (salience, mood modification, tolerance, withdrawal, conflict, and relapse), was constructed and administered to 423 students together with several other standardized self-report scales (Addictive Tendencies Scale, Online Sociability Scale, Facebook Attitude Scale, NEO-FFI, BIS/BAS scales, and Sleep questions). The item within each of the six addiction elements with the highest corrected item-total correlation was retained in the final scale. The factor structure of the scale was good (RMSEA = .046, CFI = .99) and coefficient alpha was .83. The 3-week test-retest reliability coefficient was .82. The scores converged with scores for other scales of Facebook activity. Also, they were positively related to Neuroticism and Extraversion, and negatively related to Conscientiousness. High scores on the new scale were associated with delayed bedtimes and rising times. PMID:22662404
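    The two psychometric quantities this abstract leans on, coefficient (Cronbach's) alpha and the corrected item-total correlation, are both short computations. The sketch below applies them to synthetic single-factor data standing in for a 6-item scale; it is not the BFAS data, and the item-retention step shown is only the general idea of keeping the best item per element.

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()    # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

def corrected_item_total(items):
    """Correlation of each item with the sum of the *remaining* items."""
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                     for j in range(items.shape[1])])

# Synthetic 6-item scale driven by one latent trait plus item noise:
rng = np.random.default_rng(4)
trait = rng.standard_normal(400)
scores = trait[:, None] + rng.normal(0.0, 1.0, size=(400, 6))

alpha = cronbach_alpha(scores)        # internal consistency of the scale
r_it = corrected_item_total(scores)   # basis for retaining the best items
```

    Subtracting the item from the total before correlating (the "corrected" part) avoids the inflation that comes from correlating an item with a sum that contains itself.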

  17. An investigation of ride quality rating scales

    NASA Technical Reports Server (NTRS)

    Dempsey, T. K.; Coates, G. D.; Leatherwood, J. D.

    1977-01-01

    An experimental investigation was conducted for the combined purposes of determining the relative merits of various category scales for the prediction of human discomfort response to vibration and of determining the mathematical relationships whereby subjective data are transformed from one scale to other scales. Sixteen category scales were analyzed, representing various parametric combinations of polarity (unipolar vs. bipolar), scale type, and number of scalar points. Results indicated that unipolar continuous-type scales containing either seven or nine scalar points provide the greatest reliability and discriminability. Transformations of subjective data between category scales were found to be feasible, with unipolar scales of a larger number of scalar points providing the greatest accuracy of transformation. The results contain coefficients for transformation of subjective data between the category scales investigated. A result of particular interest was that the comfort half of a bipolar scale was seldom used by subjects to describe their subjective reaction to vibration.

  18. Non-relativistic scale anomalies

    NASA Astrophysics Data System (ADS)

    Arav, Igal; Chapman, Shira; Oz, Yaron

    2016-06-01

    We extend the cohomological analysis in arXiv:1410.5831 of anisotropic Lifshitz scale anomalies. We consider non-relativistic theories with a dynamical critical exponent z = 2 with or without non-relativistic boosts and a particle number symmetry. We distinguish between cases depending on whether the time direction does or does not induce a foliation structure. We analyse both 1 + 1 and 2 + 1 spacetime dimensions. In 1 + 1 dimensions we find no scale anomalies with Galilean boost symmetries. The anomalies in 2 + 1 dimensions with Galilean boosts and a foliation structure are all B-type and are identical to the Lifshitz case in the purely spatial sector. With Galilean boosts and without a foliation structure we find also an A-type scale anomaly. There is an infinite ladder of B-type anomalies in the absence of a foliation structure with or without Galilean boosts. We discuss the relation between the existence of a foliation structure and the causality of the field theory.

  19. Temporal scaling in information propagation

    NASA Astrophysics Data System (ADS)

    Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi

    2014-06-01

    For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.
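
    A power-law decay of the kind reported above can be estimated from data by ordinary least squares in log-log coordinates. A minimal sketch on synthetic data (the exponent 1.2 and prefactor 0.5 are illustrative, not the paper's fitted values):

```python
import math

def fit_power_law(latencies, probabilities):
    """Least-squares fit of log p = log c - alpha * log t,
    returning (c, alpha) for p(t) ~ c * t**-alpha."""
    xs = [math.log(t) for t in latencies]
    ys = [math.log(p) for p in probabilities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return math.exp(my - slope * mx), -slope

# Synthetic data following p(t) = 0.5 * t**-1.2 exactly.
ts = [1, 2, 4, 8, 16]
ps = [0.5 * t ** -1.2 for t in ts]
c, alpha = fit_power_law(ts, ps)
print(round(c, 3), round(alpha, 3))  # -> 0.5 1.2
```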

  20. A Lab-Scale CELSS

    NASA Technical Reports Server (NTRS)

    Flynn, Mark E.; Finn, Cory K.; Srinivasan, Venkatesh; Sun, Sidney; Harper, Lynn D. (Technical Monitor)

    1994-01-01

    It has been shown that prohibitive resupply costs for extended-duration manned space flight missions will demand that a high degree of recycling and in situ food production be implemented. A prime candidate for in situ food production is the growth of higher level plants. Research in the area of plant physiology is currently underway at many institutions. This research is aimed at the characterization and optimization of gas exchange, transpiration and food production of higher plants in order to support human life in space. However, there are a number of unresolved issues involved in making plant chambers an integral part of a closed life support system. For example, issues pertaining to the integration of tightly coupled, non-linear systems with small buffer volumes will need to be better understood in order to ensure successful long term operation of a Controlled Ecological Life Support System (CELSS). The Advanced Life Support Division at NASA Ames Research Center has embarked on a program to explore some of these issues and demonstrate the feasibility of the CELSS concept. The primary goal of the Laboratory Scale CELSS Project is to develop a fully-functioning integrated CELSS on a laboratory scale in order to provide insight, knowledge and experience applicable to the design of human-rated CELSS facilities. Phase I of this program involves the integration of a plant chamber with a solid waste processor. This paper will describe the requirements, design and some experimental results from Phase I of the Laboratory Scale CELSS Program.

  1. Flavor from the electroweak scale

    SciTech Connect

    Bauer, Martin; Carena, Marcela; Gemmler, Katrin

    2015-11-04

    We discuss the possibility that flavor hierarchies arise from the electroweak scale in a two Higgs doublet model, in which the two Higgs doublets jointly act as the flavon. Quark masses and mixing angles are explained by effective Yukawa couplings, generated by higher dimensional operators involving quarks and Higgs doublets. Modified Higgs couplings yield important effects on the production cross sections and decay rates of the light Standard Model like Higgs. In addition, flavor changing neutral currents arise at tree-level and lead to strong constraints from meson-antimeson mixing. Remarkably, flavor constraints turn out to prefer a region in parameter space that is in excellent agreement with the one preferred by recent Higgs precision measurements at the Large Hadron Collider (LHC). Direct searches for extra scalars at the LHC lead to further constraints. Precise predictions for the production and decay modes of the additional Higgs bosons are derived, and we present benchmark scenarios for searches at the LHC Run II. As a result, flavor breaking at the electroweak scale as well as strong coupling effects demand a UV completion at the scale of a few TeV, possibly within the reach of the LHC.

  2. Flavor from the electroweak scale

    DOE PAGESBeta

    Bauer, Martin; Carena, Marcela; Gemmler, Katrin

    2015-11-04

    We discuss the possibility that flavor hierarchies arise from the electroweak scale in a two Higgs doublet model, in which the two Higgs doublets jointly act as the flavon. Quark masses and mixing angles are explained by effective Yukawa couplings, generated by higher dimensional operators involving quarks and Higgs doublets. Modified Higgs couplings yield important effects on the production cross sections and decay rates of the light Standard Model like Higgs. In addition, flavor changing neutral currents arise at tree-level and lead to strong constraints from meson-antimeson mixing. Remarkably, flavor constraints turn out to prefer a region in parameter space that is in excellent agreement with the one preferred by recent Higgs precision measurements at the Large Hadron Collider (LHC). Direct searches for extra scalars at the LHC lead to further constraints. Precise predictions for the production and decay modes of the additional Higgs bosons are derived, and we present benchmark scenarios for searches at the LHC Run II. As a result, flavor breaking at the electroweak scale as well as strong coupling effects demand a UV completion at the scale of a few TeV, possibly within the reach of the LHC.

  3. The scaling of secondary craters

    NASA Technical Reports Server (NTRS)

    Croft, Steven K.

    1991-01-01

    Secondary craters are common features around fresh planetary-scale primary impact craters throughout most of the Solar System. They derive from the ejection phase of crater formation, thus secondary scaling relations provide constraints on parameters affecting ejection processes. Secondary crater fields typically begin at the edge of the continuous ejecta blankets (CEB) and extend out several crater radii. Secondaries tend to have rounded rims and bilateral symmetry about an axis through the primary crater's center. Prominent secondary chains can extend inward across the CEB close to the rim. A simple method for comparing secondary crater fields was employed: averaging the diameters and ranges from the center of the primary crater of the five largest craters in a secondary crater field. While not as much information is obtained about individual crater fields by this method as in more complete secondary field mapping, it facilitates rapid comparison of many secondary fields. Also, by quantifying a few specific aspects of the secondary crater field, this method can be used to construct scaling relations for secondary craters.
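
    The comparison method described above — averaging the diameters and ranges of the five largest craters in a secondary field — can be sketched as follows (the crater field data are hypothetical):

```python
def five_largest_summary(craters):
    """craters: list of (diameter_km, range_km) pairs, range measured
    from the center of the primary crater.
    Returns the mean diameter and mean range of the five largest secondaries."""
    largest = sorted(craters, key=lambda c: c[0], reverse=True)[:5]
    mean_d = sum(c[0] for c in largest) / len(largest)
    mean_r = sum(c[1] for c in largest) / len(largest)
    return mean_d, mean_r

# Hypothetical secondary field: (diameter_km, range_km) pairs.
field = [(2.1, 95), (1.8, 120), (3.0, 80), (1.2, 150),
         (2.5, 110), (0.9, 60), (2.2, 130)]
print(five_largest_summary(field))
```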

  4. Scaling effects in theropod dinosaurs

    NASA Astrophysics Data System (ADS)

    Lee, Scott A.

    2014-03-01

    For geometrically similar animals, the length of the leg bones l would scale as the diameter of the leg bone d: d ~ l. In order to maintain the same stresses in the leg bones when standing (i.e., elastic similarity), l^3 must scale as d^2, yielding d ~ l^(3/2). Sixty-six femora from more than 30 different species of theropod dinosaurs were studied. Our results yield d ~ l^1.16, well below the prediction of elastic similarity. The maximum stresses on the leg bones would have occurred during locomotion when forces on the order of several times the body weight would have been present. Bending and torsional stresses of the femur would have been more likely to break the bone than compression. The ability of the bone to resist bending stresses is given by its section modulus Z. From our data, we find that Z ~ l^3.49. The bending torque applied to the femur is expected to scale as roughly l^4. Both results indicate that larger theropods had smaller cursorial abilities than smaller theropods, as is observed in extant animals. Assuming that all theropod bones have the same shear modulus, the ability for the femora to resist torsion is given by Q = J/l, where J is the second polar moment of the area. From our data, we find that Q ~ l^3.66.
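
    The competing scaling predictions — geometric similarity (d ~ l), elastic similarity (d ~ l^1.5), and the observed d ~ l^1.16 — can be compared directly. A small sketch with a hypothetical reference femur (the dimensions are illustrative, not measurements from the study):

```python
def predicted_diameter(l, l_ref, d_ref, exponent):
    """Scale a reference femur (l_ref, d_ref) to length l under d ~ l**exponent."""
    return d_ref * (l / l_ref) ** exponent

# Doubling femur length from a hypothetical 0.5 m long, 0.06 m thick reference:
for label, k in [("geometric", 1.0), ("elastic", 1.5), ("observed", 1.16)]:
    print(label, round(predicted_diameter(1.0, 0.5, 0.06, k), 4))
```

The observed exponent falls between the two idealized regimes, so a doubled femur is predicted to be thinner than elastic similarity would require.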

  5. Mechanically reliable scales and coatings

    SciTech Connect

    Tortorelli, P.F.; Alexander, K.B.

    1995-06-01

    In many high-temperature fossil energy systems, corrosion and deleterious environmental effects arising from reactions with reactive gases and condensible products often compromise materials performance and, as a consequence, degrade operating efficiencies. Protection of materials from such reactions is best afforded by the formation of stable surface oxides (either as deposited coatings or thermally grown scales) that are slowly reacting, continuous, dense, and adherent to the substrate. However, the ability of normally brittle ceramic films and coatings to provide such protection has long been problematical, particularly for applications involving numerous or severe high-temperature thermal cycles or very aggressive (for example, sulfidizing) environments. A satisfactory understanding of how scale and coating integrity and adherence are improved by compositional, microstructural, and processing modifications is lacking. Therefore, to address this issue, the present work is intended to define the relationships between substrate characteristics (composition, microstructure, and mechanical behavior) and the structure and protective properties of deposited oxide coatings and/or thermally grown scales. Such information is crucial to the optimization of the chemical, interfacial, and mechanical properties of the protective oxides on high-temperature materials through control of processing and composition and directly supports the development of corrosion-resistant, high-temperature materials for improved energy and environmental control systems.

  6. Development of emotional stability scale

    PubMed Central

    Chaturvedi, M.; Chander, R.

    2010-01-01

    Background: Emotional stability remains the central theme in personality studies. The concept of stable emotional behavior at any level is that which reflects the fruits of normal emotional development. The study aims at development of an emotional stability scale. Materials and Methods: Based on available literature the components of emotional stability were identified and 250 items were developed, covering each component. Two-stage elimination of items was carried out, i.e. through judges’ opinions and item analysis. Results: Fifty items with highest ‘t’ values covering 5 dimensions of emotional stability, viz. pessimism vs. optimism, anxiety vs. calm, aggression vs. tolerance, dependence vs. autonomy, and apathy vs. empathy, were retained in the final scale. Reliability as checked by Cronbach's alpha was .81 and by the split-half method it was .79. Content validity and construct validity were checked. Norms are given in the form of cumulative percentages. Conclusion: Based on psychometric principles, a 50-item, self-administered 5-point Likert-type rating scale was developed for measurement of emotional stability. PMID:21694789
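
    The reported internal-consistency reliability of .81 uses Cronbach's alpha, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch of the standard formula, applied here to hypothetical item responses rather than the study's data:

```python
def cronbach_alpha(items):
    """items: list of per-item score lists (same respondents, same order).
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)),
    using sample (n-1) variances."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical responses: 3 items, 5 respondents, 5-point ratings.
items = [[4, 3, 5, 2, 4], [3, 3, 4, 2, 5], [5, 2, 4, 3, 4]]
print(round(cronbach_alpha(items), 3))  # -> 0.795
```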

  7. Temporal scaling in information propagation.

    PubMed

    Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi

    2014-01-01

    For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers. PMID:24939414

  8. Low on the London Scale

    NASA Astrophysics Data System (ADS)

    Webb, S.

    2013-09-01

    Until relatively recently, many authors have assumed that if extraterrestrial life is discovered it will be via the discovery of extraterrestrial intelligence: we can best try to detect life by adopting the SETI approach of trying to detect beacons or artefacts. The Rio Scale, proposed by Almár and Tarter in 2000, is a tool for quantifying the potential significance for society of any such reported detection. However, improvements in technology and advances in astrobiology raise the possibility that the discovery of extraterrestrial life will instead be via the detection of atmospheric biosignatures. The London Scale, proposed by Almár in 2010, attempts to quantify the potential significance of the discovery of extraterrestrial life rather than extraterrestrial intelligence. What might be the consequences of the announcement of a discovery that ranks low on the London Scale? In other words, what might be society's reaction if 'first contact' is via the remote sensing of the byproducts of unicellular organisms rather than with the products of high intelligence? Here, I examine some possible reactions to that question; in particular, I discuss how such an announcement might affect our views of life here on Earth and of humanity's place in the universe.

  9. Delirium in the geriatric unit: proton-pump inhibitors and other risk factors

    PubMed Central

    Otremba, Iwona; Wilczyński, Krzysztof; Szewieczek, Jan

    2016-01-01

    Background Delirium remains a major nosocomial complication of hospitalized elderly. Predictive models for delirium may be useful for identification of high-risk patients for implementation of preventive strategies. Objective Evaluate specific factors for development of delirium in a geriatric ward setting. Methods Prospective cross-sectional study comprised 675 consecutive patients aged 79.2±7.7 years (66% women and 34% men), admitted to the subacute geriatric ward of a multiprofile university hospital after exclusion of 113 patients treated with antipsychotic medication because of behavioral disorders before admission. Comprehensive geriatric assessments including a structured interview, physical examination, geriatric functional assessment, blood sampling, ECG, abdominal ultrasound, chest X-ray, Confusion Assessment Method for diagnosis of delirium, Delirium-O-Meter to assess delirium severity, Richmond Agitation-Sedation Scale to assess sedation or agitation, visual analog scale and Doloplus-2 scale to assess pain level were performed. Results Multivariate logistic regression analysis revealed five independent factors associated with development of delirium in geriatric inpatients: transfer between hospital wards (odds ratio [OR] =2.78; confidence interval [CI] =1.54–5.01; P=0.001), preexisting dementia (OR =2.29; CI =1.44–3.65; P<0.001), previous delirium incidents (OR =2.23; CI =1.47–3.38; P<0.001), previous fall incidents (OR =1.76; CI =1.17–2.64; P=0.006), and use of proton-pump inhibitors (OR =1.67; CI =1.11–2.53; P=0.014). Conclusion Transfer between hospital wards, preexisting dementia, previous delirium incidents, previous fall incidents, and use of proton-pump inhibitors are predictive of development of delirium in the geriatric inpatient setting. PMID:27103793
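
    The odds ratios and 95% confidence intervals above come from multivariate logistic regression. The standard conversion from a fitted coefficient and its standard error to an OR with CI can be sketched as (the coefficient and SE shown are hypothetical, not values taken from the study):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a 95% confidence interval:
    OR = exp(beta), CI = exp(beta -/+ z * SE)."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient: beta = 1.022, SE = 0.30.
or_, lo, hi = odds_ratio_ci(1.022, 0.30)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```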

  10. Coupled length scales in eroding landscapes

    SciTech Connect

    Chan, Kelvin K.; Rothman, Daniel H.

    2001-05-01

    We report results from an empirical study of the anisotropic structure of eroding landscapes. By constructing a novel correlation function, we show quantitatively that small-scale channel-like features of landscapes are coupled to the large-scale structure of drainage basins. We show additionally that this two-scale interaction is scale-dependent. The latter observation suggests that a commonly applied effective equation for erosive transport may itself depend on scale.

  11. Dystonia rating scales: critique and recommendations

    PubMed Central

    Albanese, Alberto; Sorbo, Francesca Del; Comella, Cynthia; Jinnah, H.A.; Mink, Jonathan W.; Post, Bart; Vidailhet, Marie; Volkmann, Jens; Warner, Thomas T.; Leentjens, Albert F.G.; Martinez-Martin, Pablo; Stebbins, Glenn T.; Goetz, Christopher G.; Schrag, Anette

    2014-01-01

    Background Many rating scales have been applied to the evaluation of dystonia, but only a few have been assessed for clinimetric properties. The Movement Disorders Society commissioned this task force to critique existing dystonia rating scales and place them in the clinical and clinimetric context. Methods A systematic literature review was conducted to identify rating scales that have either been validated or used in dystonia. Results Thirty-six potential scales were identified. Eight were excluded because they did not meet review criteria, leaving twenty-eight scales that were critiqued and rated by the task force. Seven scales were found to meet criteria to be “recommended”: the Blepharospasm Disability Index is recommended for rating blepharospasm; the Cervical Dystonia Impact Scale and the Toronto Western Spasmodic Torticollis Rating Scale for rating cervical dystonia; the Craniocervical Dystonia Questionnaire for blepharospasm and cervical dystonia; the Voice Handicap Index (VHI) and the Vocal Performance Questionnaire (VPQ) for laryngeal dystonia; and the Fahn-Marsden Dystonia Rating Scale for rating generalized dystonia. Two “recommended” scales (VHI and VPQ) are generic scales validated on few patients with laryngeal dystonia, whereas the others are disease-specific scales. Twelve scales met criteria for “suggested” and seven scales met criteria for “listed”. All the scales are individually reviewed in the online appendix. Conclusion The task force recommends five specific dystonia scales and suggests further validation in dystonia of the two recommended generic voice-disorder scales. Existing scales for oromandibular, arm and task-specific dystonia should be refined and fully assessed. Scales should be developed for body regions where no scales are available, such as lower limbs and trunk. PMID:23893443

  12. Evaluating the impact of farm scale innovation at catchment scale

    NASA Astrophysics Data System (ADS)

    van Breda, Phelia; De Clercq, Willem; Vlok, Pieter; Querner, Erik

    2014-05-01

    Hydrological modelling lends itself to other disciplines very well, normally as a process-based system that acts as a catalogue of events taking place. These hydrological models are spatial-temporal in their design and are generally well suited for what-if situations in other disciplines. Scaling should therefore be a function of the purpose of the modelling. Process is always linked with scale or support, but the temporal resolution can affect the results if the spatial scale is not suitable. The use of hydrological response units tends to lump areas around physical features but disregards farm boundaries. Farm boundaries are often the more crucial uppermost resolution needed to gain more value from hydrological modelling. In the Letaba Catchment of South Africa, we find a wide range of land uses, different models of ownership, and different farming systems ranging from large commercial farms to small subsistence farming. All of these have the same basic right to water, but water distribution in the catchment is somewhat of a problem. Since water quantity is also a problem, the water supply systems need to ensure that valuable production areas are not left without water. Clearly, hydrological modelling should therefore be sensitive to specific land use. As a measure of productivity, a system of small-farmer production evaluation was designed. This activity presents a dynamic system outside hydrological modelling that is generally not considered inside hydrological modelling but depends on it. For sustainable development, a number of important concepts needed to be aligned with activities in this region, and the regulatory actions also need to be adhered to. This study aimed at aligning the activities in a region with the vision and objectives of the regulatory authorities. South Africa's system of socio-economic development planning is complex and mostly ineffective.
There are many regulatory authorities involved, often with unclear

  13. Preliminary Scaling Estimate for Select Small Scale Mixing Demonstration Tests

    SciTech Connect

    Wells, Beric E.; Fort, James A.; Gauglitz, Phillip A.; Rector, David R.; Schonewill, Philip P.

    2013-09-12

    The Hanford Site double-shell tank (DST) system provides the staging location for waste that will be transferred to the Hanford Tank Waste Treatment and Immobilization Plant (WTP). Specific WTP acceptance criteria for waste feed delivery describe the physical and chemical characteristics of the waste that must be met before the waste is transferred from the DSTs to the WTP. One of the more challenging requirements relates to the sampling and characterization of the undissolved solids (UDS) in a waste feed DST because the waste contains solid particles that settle and their concentration and relative proportion can change during the transfer of the waste in individual batches. A key uncertainty in the waste feed delivery system is the potential variation in UDS transferred in individual batches in comparison to an initial sample used for evaluating the acceptance criteria. To address this uncertainty, a number of small-scale mixing tests have been conducted as part of Washington River Protection Solutions’ Small Scale Mixing Demonstration (SSMD) project to determine the performance of the DST mixing and sampling systems.

  14. Optimal Scaling of Digital Transcriptomes

    PubMed Central

    Glusman, Gustavo; Caballero, Juan; Robinson, Max; Kutlu, Burak; Hood, Leroy

    2013-01-01

    Deep sequencing of transcriptomes has become an indispensable tool for biology, enabling expression levels for thousands of genes to be compared across multiple samples. Since transcript counts scale with sequencing depth, counts from different samples must be normalized to a common scale prior to comparison. We analyzed fifteen existing and novel algorithms for normalizing transcript counts, and evaluated the effectiveness of the resulting normalizations. For this purpose we defined two novel and mutually independent metrics: (1) the number of “uniform” genes (genes whose normalized expression levels have a sufficiently low coefficient of variation), and (2) low Spearman correlation between normalized expression profiles of gene pairs. We also define four novel algorithms, one of which explicitly maximizes the number of uniform genes, and compared the performance of all fifteen algorithms. The two most commonly used methods (scaling to a fixed total value, or equalizing the expression of certain ‘housekeeping’ genes) yielded particularly poor results, surpassed even by normalization based on randomly selected gene sets. Conversely, seven of the algorithms approached what appears to be optimal normalization. Three of these algorithms rely on the identification of “ubiquitous” genes: genes expressed in all the samples studied, but never at very high or very low levels. We demonstrate that these include a “core” of genes expressed in many tissues in a mutually consistent pattern, which is suitable for use as an internal normalization guide. The new methods yield robustly normalized expression values, which is a prerequisite for the identification of differentially expressed and tissue-specific genes as potential biomarkers. PMID:24223126
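
    Scaling to a fixed total (one of the baseline normalizations the authors evaluate) and the coefficient-of-variation criterion behind "uniform" genes can be sketched as follows (the counts are hypothetical):

```python
def normalize_to_total(counts, target=1_000_000):
    """Scale a sample's transcript counts so they sum to a fixed total
    (counts-per-million when target is one million)."""
    total = sum(counts)
    return [c * target / total for c in counts]

def coeff_variation(values):
    """Coefficient of variation (std / mean) of a gene's normalized levels;
    a sufficiently low CV is the paper's criterion for a 'uniform' gene."""
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return var ** 0.5 / m

# Two hypothetical samples sequenced at different depths.
s1 = normalize_to_total([120, 30, 50])
s2 = normalize_to_total([240, 60, 100])
# After depth normalization the per-gene levels agree, so each gene's CV is 0.
print([round(coeff_variation([a, b]), 6) for a, b in zip(s1, s2)])  # -> [0.0, 0.0, 0.0]
```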

  15. Scaling in reversible submonolayer deposition

    NASA Astrophysics Data System (ADS)

    Oliveira, T. J.; Aarão Reis, F. D. A.

    2013-06-01

    The scaling of island and monomer density, capture zone distributions (CZDs), and island size distributions (ISDs) in reversible submonolayer growth was studied using the Clarke-Vvedensky model. An approach based on rate-equation results for irreversible aggregation (IA) models is extended to predict several scaling regimes in square and triangular lattices, in agreement with simulation results. Consistently with previous works, a regime I with fractal islands is observed at low temperatures, corresponding to IA with critical island size i=1, and a crossover to a second regime appears as the temperature is increased to εR^(2/3) ~ 1, where ε is the single bond detachment probability and R is the diffusion-to-deposition ratio. In the square (triangular) lattice, a regime with scaling similar to IA with i=3 (i=2) is observed after that crossover. In the triangular lattice, a subsequent crossover to an IA regime with i=3 is observed, which is explained by the recurrence properties of random walks in two-dimensional lattices, which is beyond the mean-field approaches. At high temperatures, a crossover to a fully reversible regime is observed, characterized by a large density of small islands, a small density of very large islands, and total island and monomer densities increasing with temperature, in contrast to IA models. CZDs and ISDs with Gaussian right tails appear in all regimes for R ~ 10^7 or larger, including the fully reversible regime, where the CZDs are bimodal. This shows that the Pimpinelli-Einstein approach for IA explains the main mechanisms for the large islands to compete for free adatom aggregation in the reversible model, and may be the reason for its successful application to a variety of materials and growth conditions.

  16. Proposing a tornado watch scale

    NASA Astrophysics Data System (ADS)

    Mason, Jonathan Brock

    This thesis provides an overview of the language used in tornado safety recommendations from various sources, develops a rubric for scaled tornado safety recommendations, and describes the subsequent development and testing of a tornado watch scale. The rubric is used to evaluate tornado refuge/shelter adequacy responses of Tuscaloosa residents gathered following the April 27, 2011 Tuscaloosa, Alabama EF4 tornado. There was a significant difference in the counts of refuge adequacy for Tuscaloosa residents when holding the locations during the April 27th tornado constant and comparing adequacy ratings for weak (EF0-EF1), strong (EF2-EF3) and violent (EF4-EF5) tornadoes. There was also a significant difference when comparing future tornado refuge plans of those same participants to the adequacy ratings for weak, strong and violent tornadoes. The tornado refuge rubric is then revised into a six-class, hierarchical Tornado Watch Scale (TWS) from Level 0 to Level 5 based on the likelihood of high-impact or low-impact severe weather events containing weak, strong or violent tornadoes. These levels represent maximum expected tornado intensity and include tornado safety recommendations from the tornado refuge rubric. Audio recordings similar to those used in current National Oceanic and Atmospheric Administration (NOAA) weather radio communications were developed to correspond to three levels of the TWS, a current Storm Prediction Center (SPC) tornado watch and a particularly dangerous situation (PDS) tornado watch. These were then used in interviews of Alabama residents to determine how changes to the information contained in the watch statements would affect each participant's tornado safety actions and perception of event danger. Results from interview participants (n=38) indicate a strong preference (97.37%) for the TWS when compared to current tornado watch and PDS tornado watch statements.
Results also show the TWS elicits more adequate safety decisions from participants

  17. Scaling on a limestone flooring

    NASA Astrophysics Data System (ADS)

    Carmona-Quiroga, P. M.; Blanco-Varela, M. T.; Martínez-Ramírez, S.

    2012-04-01

    Natural stone can be used on nearly every surface, inside and outside buildings, but decay is more commonly reported for stone exposed to aggressive outdoor conditions. This study, instead, is an example of limestone weathering of uncertain origin in the interior of a residential building. The stone, used as flooring, started to exhibit loss of material in the form of scaling. These damages were observed before the building, located in the south of Spain (Málaga), was inhabited. Moreover, according to the company the limestone satisfies the following European standards for floorings: UNE-EN 1341: 2002, UNE-EN 1343: 2003, and UNE-EN 12058: 2004. Under these circumstances the main objective of this study was to assess the causes of this phenomenon. For this reason the composition of the mortar was determined and the stone was characterized from a mineralogical and petrological point of view. The latter material, which is a fossiliferous limestone from Egypt with natural fissure lines, is mainly composed of calcite, with quartz, kaolinite and apatite as minor phases. Moreover, under different spectroscopic and microscopic techniques (FTIR, micro-Raman, SEM-EDX, etc.) samples of the weathered limestone tiles, taken directly from the building, and of unweathered tiles were examined, and a new mineralogical phase, trona, was identified at scaled areas, which are connected with the natural veins of the stone. In fact, through BSE-mapping the presence of sodium has been detected in these veins. This soluble sodium carbonate would have been dissolved in the natural waters from which the limestone precipitated; it would migrate with the ascending capillary humidity and crystallize near the surface of the stone, starting the scaling phenomenon, which in historic masonry could be very damaging. Therefore, the weathering of the limestone would be related to the hygroscopic behaviour of this salt, and not to the construction methods used. 
This makes the limestone unsuitable for use in restoration

  18. Scale-free convection theory

    NASA Astrophysics Data System (ADS)

    Pasetto, Stefano; Chiosi, Cesare; Cropper, Mark; Grebel, Eva K.

    2015-08-01

    Convection is one of the fundamental mechanisms of energy transport, e.g., in planetology and oceanography as well as in astrophysics, where stellar structure is customarily described by the mixing-length theory. This theory makes use of the mixing-length scale parameter to express the convective flux, velocity, and temperature gradients of the convective elements and stellar medium. The mixing-length scale is taken to be proportional to the local pressure scale height of the star, and the proportionality factor (the mixing-length parameter) must be determined by comparing the stellar models to some calibrator, usually the Sun. No strong arguments exist to claim that the mixing-length parameter is the same in all stars and all evolutionary phases. Because of this, all stellar models in the literature are hampered by this basic uncertainty. In a recent paper (Pasetto et al. 2014) we presented the first fully analytical scale-free theory of convection that does not require the mixing-length parameter. Our self-consistent analytical formulation of convection determines all the properties of convection as a function of the physical behaviour of the convective elements themselves and the surrounding medium (be it a star, an ocean, or a primordial planet). The new theory of convection is formulated starting from a conventional solution of the Navier-Stokes/Euler equations, i.e. the Bernoulli equation for a perfect fluid, but expressed in a non-inertial reference frame co-moving with the convective elements. In our formalism, the motion of convective cells inside convective-unstable layers is fully determined by a new system of equations for convection in a non-local and time-dependent formalism. We obtained an analytical, non-local, time-dependent solution for the convective energy transport that does not depend on any free parameter. The predictions of the new theory in astrophysical environments are compared with those from the standard mixing-length paradigm in stars with

  19. Scale problem in wormhole physics

    SciTech Connect

    Kim, J. E.; Lee, K.

    1989-07-03

    Wormhole physics from the quantum theory of gravity coupled to the second-rank antisymmetric tensor or Goldstone-boson fields leads to an effective potential for these fields. The cosmological energy-density bound is shown to put an upper bound on the cosmological constant which wormhole physics can make zero. This upper bound, of order 10^11 GeV, is far smaller than the Planck scale and barely compatible with the possible cosmological constant arising from grand unified theories. In addition, the effect of wormholes on the axion for the strong CP problem is discussed.

  20. Scaling laws for bubbling bifurcations

    NASA Astrophysics Data System (ADS)

    González-Tokman, Cecilia; Hunt, Brian R.

    2009-11-01

    We establish rigorous scaling laws for the average bursting time for bubbling bifurcations of an invariant manifold, assuming the dynamics within the manifold to be uniformly hyperbolic. This type of global bifurcation appears in nearly synchronized systems, and is conjectured to be typical among those breaking the invariance of an asymptotically stable hyperbolic invariant manifold. We consider bubbling precipitated by generic bifurcations of a fixed point in both symmetric and non-symmetric systems with a codimension one invariant manifold, and discuss their extension to bifurcations of periodic points. We also discuss generalizations to invariant manifolds with higher codimension, and to systems with random noise.

  1. The NIST Length Scale Interferometer

    PubMed Central

    Beers, John S.; Penzes, William B.

    1999-01-01

    The National Institute of Standards and Technology (NIST) interferometer for measuring graduated length scales has been in use since 1965. It was developed in response to the redefinition of the meter in 1960 from the prototype platinum-iridium bar to the wavelength of light. The history of the interferometer is recalled, and its design and operation described. A continuous program of modernization by making physical modifications, measurement procedure changes and computational revisions is described, and the effects of these changes are evaluated. Results of a long-term measurement assurance program, the primary control on the measurement process, are presented, and improvements in measurement uncertainty are documented.

  2. Microfluidic large-scale integration.

    PubMed

    Thorsen, Todd; Maerkl, Sebastian J; Quake, Stephen R

    2002-10-18

    We developed high-density microfluidic chips that contain plumbing networks with thousands of micromechanical valves and hundreds of individually addressable chambers. These fluidic devices are analogous to electronic integrated circuits fabricated using large-scale integration. A key component of these networks is the fluidic multiplexor, which is a combinatorial array of binary valve patterns that exponentially increases the processing power of a network by allowing complex fluid manipulations with a minimal number of inputs. We used these integrated microfluidic networks to construct the microfluidic analog of a comparator array and a microfluidic memory storage device whose behavior resembles random-access memory. PMID:12351675
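
The key property of the fluidic multiplexor described above is combinatorial: one complementary pair of control lines per address bit means only 2·log2(n) control lines are needed to address n flow channels (e.g., 20 lines for 1024 channels). A minimal numerical sketch follows; the `valve_pattern` helper and its line-naming scheme are illustrative, not taken from the paper:

```python
from math import ceil, log2

def control_lines_needed(n_channels: int) -> int:
    """One complementary pair of control lines per binary address bit."""
    return 2 * ceil(log2(n_channels))

def valve_pattern(channel: int, n_bits: int) -> list[str]:
    """Which control line of each complementary pair to pressurize
    in order to open only the given flow channel (illustrative naming)."""
    return [f"bit{i}={'1' if (channel >> i) & 1 else '0'}"
            for i in range(n_bits)]

# 1024 flow channels require only 20 control lines.
assert control_lines_needed(1024) == 20
# Channel 5 (binary 101) on a 3-bit multiplexor:
assert valve_pattern(5, 3) == ["bit0=1", "bit1=0", "bit2=1"]
```

The exponential gain is exactly why the authors call the multiplexor the analog of large-scale integration: control complexity grows logarithmically, not linearly, with the number of addressable chambers.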

  3. New Scalings in Nuclear Fragmentation

    SciTech Connect

    Bonnet, E.; Bougault, R.; Galichet, E.; Gagnon-Moisan, F.; Guinet, D.; Lautesse, P.; Marini, P.; Parlog, M.

    2010-10-01

    Fragment partitions of fragmenting hot nuclei produced in central and semiperipheral collisions have been compared in the excitation energy region 4-10 MeV per nucleon where radial collective expansion takes place. It is shown that, for a given total excitation energy per nucleon, the amount of radial collective energy fixes the mean fragment multiplicity. It is also shown that, at a given total excitation energy per nucleon, the different properties of fragment partitions are completely determined by the reduced fragment multiplicity (i.e., normalized to the source size). Freeze-out volumes seem to play a role in the scalings observed.

  4. Drift-Scale Radionuclide Transport

    SciTech Connect

    J. Houseworth

    2004-09-22

    The purpose of this model report is to document the drift scale radionuclide transport model, taking into account the effects of emplacement drifts on flow and transport in the vicinity of the drift, which are not captured in the mountain-scale unsaturated zone (UZ) flow and transport models ''UZ Flow Models and Submodels'' (BSC 2004 [DIRS 169861]), ''Radionuclide Transport Models Under Ambient Conditions'' (BSC 2004 [DIRS 164500]), and ''Particle Tracking Model and Abstraction of Transport Process'' (BSC 2004 [DIRS 170041]). The drift scale radionuclide transport model is intended to be used as an alternative model for comparison with the engineered barrier system (EBS) radionuclide transport model ''EBS Radionuclide Transport Abstraction'' (BSC 2004 [DIRS 169868]). For that purpose, two alternative models have been developed for drift-scale radionuclide transport. One of the alternative models is a dual continuum flow and transport model called the drift shadow model. The effects of variations in the flow field and fracture-matrix interaction in the vicinity of a waste emplacement drift are investigated through sensitivity studies using the drift shadow model (Houseworth et al. 2003 [DIRS 164394]). In this model, the flow is significantly perturbed (reduced) beneath the waste emplacement drifts. However, comparisons of transport in this perturbed flow field with transport in an unperturbed flow field show similar results if the transport is initiated in the rock matrix. This has led to a second alternative model, called the fracture-matrix partitioning model, that focuses on the partitioning of radionuclide transport between the fractures and matrix upon exiting the waste emplacement drift. The fracture-matrix partitioning model computes the partitioning, between fractures and matrix, of diffusive radionuclide transport from the invert (for drifts without seepage) into the rock water. 
The invert is the structure constructed in a drift to provide the floor of the

  5. Multi-scale Shock Technique

    2009-08-01

    The code to be released is a new addition to the LAMMPS molecular dynamics code. LAMMPS is developed and maintained by Sandia, is publicly available, and is used widely by both national laboratories and academics. The new addition enables LAMMPS to perform molecular dynamics simulations of shock waves using the Multi-scale Shock Simulation Technique (MSST), which we have developed and previously published. This technique enables molecular dynamics simulations of shock waves in materials for orders of magnitude longer timescales than the direct, commonly employed approach.

  6. Full-Scale Tunnel (FST)

    NASA Technical Reports Server (NTRS)

    1929-01-01

    Modified propeller and spinner in Full-Scale Tunnel (FST) model. On June 26, 1929, Elton W. Miller wrote to George W. Lewis proposing the construction of a model of the full-scale tunnel. 'The excellent energy ratio obtained in the new wind tunnel of the California Institute of Technology suggests that before proceeding with our full scale tunnel design, we ought to investigate the effect on energy ratio of such factors as: 1. small included angle for the exit cone; 2. carefully designed return passages of circular section as far as possible, without sudden changes in cross sections; 3. tightness of walls. It is believed that much useful information can be obtained by building a model of about 1/16 scale, that is, having a closed throat of 2 ft. by 4 ft. The outside dimensions would be about 12 ft. by 25 ft. in plan and the height 4 ft. Two propellers will be required about 28 in. in diameter, each to be driven by direct current motor at a maximum speed of 4500 R.P.M. Provision can be made for altering the length of certain portions, particularly the exit cone, and possibly for the application of boundary layer control in order to effect satisfactory air flow. This model can be constructed in a comparatively short time, using 2 by 4 framing with matched sheathing inside, and where circular sections are desired they can be obtained by nailing sheet metal to wooden ribs, which can be cut on the band saw. It is estimated that three months will be required for the construction and testing of such a model and that the cost will be approximately three thousand dollars, one thousand dollars of which will be for the motors. No suitable location appears to exist in any of our present buildings, and it may be necessary to build it outside and cover it with a roof.' George Lewis responded immediately (June 27) granting the authority to proceed. He urged Langley to expedite construction and to employ extra carpenters if necessary. 
Funds for the model came from the FST project

  7. Identifying characteristic scales in the human genome

    NASA Astrophysics Data System (ADS)

    Carpena, P.; Bernaola-Galván, P.; Coronado, A. V.; Hackenberg, M.; Oliver, J. L.

    2007-03-01

    The scale-free, long-range correlations detected in DNA sequences contrast with characteristic lengths of genomic elements, being particularly incompatible with the isochores (long, homogeneous DNA segments). By computing the local behavior of the scaling exponent α of detrended fluctuation analysis (DFA), we discriminate between sequences with and without true scaling, and we find that no single scaling exists in the human genome. Instead, human chromosomes show a common compositional structure with two characteristic scales, the large one corresponding to the isochores and the other to small and medium scale genomic elements.
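
The scaling exponent α referred to above comes from detrended fluctuation analysis. A minimal NumPy sketch of standard first-order DFA (not the authors' local-α variant, which windows the exponent estimate) is:

```python
import numpy as np

def dfa_alpha(x, scales):
    """Estimate the DFA-1 scaling exponent alpha of a 1-D signal."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        segments = y[:n * s].reshape(n, s)
        t = np.arange(s)
        ms = []
        for seg in segments:                  # linear detrend per window
            coef = np.polyfit(t, seg, 1)
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(ms)))        # fluctuation function F(s)
    # alpha is the slope of log F(s) versus log s
    alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return alpha

rng = np.random.default_rng(0)
a = dfa_alpha(rng.normal(size=20000), [16, 32, 64, 128, 256])
# uncorrelated (white) noise gives alpha near 0.5; true scaling means
# this slope is the same at all scales, which is what the authors test locally
```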

  8. How resilient are resilience scales? The Big Five scales outperform resilience scales in predicting adjustment in adolescents.

    PubMed

    Waaktaar, Trine; Torgersen, Svenn

    2010-04-01

    This study's aim was to determine whether resilience scales could predict adjustment over and above that predicted by the five-factor model (FFM). A sample of 1,345 adolescents completed paper-and-pencil scales on FFM personality (Hierarchical Personality Inventory for Children), resilience (Ego-Resiliency Scale [ER89] by Block & Kremen, the Resilience Scale [RS] by Wagnild & Young) and adaptive behaviors (California Healthy Kids Survey, UCLA Loneliness Scale and three measures of school adaptation). The results showed that the FFM scales accounted for the highest proportion of variance in disturbance. For adaptation, the resilience scales contributed as much as the FFM. In no case did the resilience scales outperform the FFM by increasing the explained variance. The results challenge the validity of the resilience concept as an indicator of human adaptation and avoidance of disturbance, although the concept may have heuristic value in combining favorable aspects of a person's personality endowment. PMID:19961558

  9. Family health climate scale (FHC-scale): development and validation

    PubMed Central

    2014-01-01

    Background The family environment is important for explaining individual health behaviour. While previous research mostly focused on influences among family members and dyadic interactions (parent-child), the purpose of this study was to develop a new measure, the Family Health Climate Scale (FHC-Scale), using a family-based approach. The FHC is an attribute of the whole family and describes an aspect of the family environment that is related to health and health behaviour. Specifically, a questionnaire measuring the FHC (a) for nutrition (FHC-NU) and (b) for activity behaviour (FHC-PA) was developed and validated. Methods In Study 1 (N = 787) the FHC scales were refined and validated. The sample was randomly divided into two subsamples. With random sample I exploratory factor analyses were conducted and items were selected according to their psychometric quality. In a second step, confirmatory factor analyses were conducted using the random sample II. In Study 2 (N = 210 parental couples) the construct validity was tested by correlating the FHC to self-determined motivation of healthy eating and physical activity as well as the families’ food environment and joint physical activities. Results Exploratory factor analyses with random sample I (Study 1) revealed a four (FHC-NU) and a three (FHC-PA) factor model. These models were cross-validated with random sample II and demonstrated an acceptable fit [FHC-PA: χ2 = 222.69, df = 74, p < .01; χ2/df = 3.01; CFI = .96; SRMR = .04; RMSEA = .07, CI .06/.08; FHC-NU: χ2 = 278.30, df = 113, p < .01, χ2/df = 2.46, CFI = .96; SRMR = .04; RMSEA = .06, CI .05/.07]. The perception of FHC correlated (p < .01) with the intrinsic motivation of healthy eating (r = .42) and physical activity (r = .56). Moreover, parental perceptions of FHC-NU correlated with household soft drink availability (r = -.31) and perceptions of FHC-PA with the frequency of

  10. Full-Scale Tunnel (FST)

    NASA Technical Reports Server (NTRS)

    1931-01-01

    Wing and nacelle set-up in Full-Scale Tunnel (FST). The NACA conducted drag tests in 1931 on a P3M-1 nacelle which were presented in a special report to the Navy. Smith DeFrance described this work in the report's introduction: 'Tests were conducted in the full-scale wind tunnel on a five to four geared Pratt and Whitney Wasp engine mounted in a P3M-1 nacelle. In order to simulate the flight conditions the nacelle was assembled on a 15-foot span of wing from the same airplane. The purpose of the tests was to improve the cooling of the engine and to reduce the drag of the nacelle combination. Thermocouples were installed at various points on the cylinders and temperature readings were obtained from these by the power plants division. These results will be reported in a memorandum by that division. The drag results, which are covered by this memorandum, were obtained with the original nacelle condition as received from the Navy with the tail of the nacelle modified, with the nose section of the nacelle modified, with a Curtiss anti-drag ring attached to the engine, with a Type G ring developed by the N.A.C.A., and with a Type D cowling which was also developed by the N.A.C.A.' (p. 1)

  11. Scaling characteristics of topographic depressions

    NASA Astrophysics Data System (ADS)

    Le, P. V.; Kumar, P.

    2013-12-01

    Topographic depressions, areas of no lateral surface flow, are a ubiquitous characteristic of the land surface that controls many ecosystem and biogeochemical processes. Landscapes with a high density of depressions have greater surface storage capacity, whereas a lower depression density increases runoff, thus influencing soil moisture states, hydrologic connectivity, and climate--soil--vegetation interactions. With the widespread availability of high resolution LiDAR based digital elevation model (lDEM) data, it is now possible to identify and characterize the structure of the spatial distribution of topographic depressions for incorporation in ecohydrologic and biogeochemical studies. Here we use lDEM data to document the prevalence and patterns of topographic depressions across five different landscapes in the United States and quantitatively characterize the distribution of attributes, such as surface area, storage volume, and the distance to the nearest neighbor. Through the use of a depression identification algorithm, we show that these distribution attributes follow scaling laws indicative of a fractal structure in which a large fraction of the land surface can consist of a high number of topographic depressions, accounting for 4 to 200 mm of depression storage. This implies that the impacts of small-scale topographic depressions in these fractal landscapes on the redistribution of surface energy fluxes, evaporation, and hydrologic connectivity are quite significant.
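
A depression identification algorithm of the kind mentioned can be sketched with the standard priority-flood approach (an assumption; the abstract does not name the authors' specific algorithm). Each cell is raised to the level at which water would pond before spilling off the grid edge, and the per-cell fill depth sums to the depression storage:

```python
import heapq
import numpy as np

def depression_fill_depth(dem):
    """Priority-flood over a DEM: returns per-cell ponded water depth.
    Boundary cells drain off the edge, so flooding starts from them."""
    nr, nc = dem.shape
    filled = np.zeros_like(dem, dtype=float)
    seen = np.zeros_like(dem, dtype=bool)
    heap = []
    for r in range(nr):
        for c in range(nc):
            if r in (0, nr - 1) or c in (0, nc - 1):
                heapq.heappush(heap, (float(dem[r, c]), r, c))
                seen[r, c] = True
    while heap:
        level, r, c = heapq.heappop(heap)   # lowest current water level
        filled[r, c] = level
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < nr and 0 <= cc < nc and not seen[rr, cc]:
                seen[rr, cc] = True
                # a neighbor ponds up to at least the spill level
                heapq.heappush(heap, (max(float(dem[rr, cc]), level), rr, cc))
    return filled - dem

dem = np.array([[5, 5, 5, 5],
                [5, 1, 2, 5],
                [5, 2, 1, 5],
                [5, 5, 5, 5]], dtype=float)
depth = depression_fill_depth(dem)
# the inner 2x2 pit ponds up to the rim elevation of 5
```

From the filled surface, depression surface area, storage volume, and nearest-neighbor distances, the attributes whose distributions the study characterizes, follow directly.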

  12. The SCALE-UP Project

    NASA Astrophysics Data System (ADS)

    Beichner, Robert

    2015-03-01

    The Student Centered Active Learning Environment with Upside-down Pedagogies (SCALE-UP) project was developed nearly 20 years ago as an economical way to provide collaborative, interactive instruction even for large enrollment classes. Nearly all research-based pedagogies have been designed with fairly high faculty-student ratios. The economics of introductory courses at large universities often precludes that situation, so SCALE-UP was created as a way to facilitate highly collaborative active learning with large numbers of students served by only a few faculty and assistants. It enables those students to learn and succeed not only in acquiring content, but also to practice important 21st century skills like problem solving, communication, and teamsmanship. The approach was initially targeted at undergraduate science and engineering students taking introductory physics courses in large enrollment sections. It has since expanded to multiple content areas, including chemistry, math, engineering, biology, business, nursing, and even the humanities. Class sizes range from 24 to over 600. Data collected from multiple sites around the world indicate highly successful implementation at more than 250 institutions. NSF support was critical for initial development and dissemination efforts. Generously supported by NSF (9752313, 9981107) and FIPSE (P116B971905, P116B000659).

  13. Scaling Behavior of Firm Growth

    NASA Astrophysics Data System (ADS)

    Stanley, Michael H. R.; Nunes Amaral, Luis A.; Buldyrev, Sergey V.; Havlin, Shlomo; Leschhorn, Heiko; Maass, Philipp; Salinger, Michael A.; Stanley, H. Eugene

    1996-03-01

    The theory of the firm is of considerable interest in economics. The standard microeconomic theory of the firm is largely a static model and has thus proved unsatisfactory for addressing inherently dynamic issues such as the growth of economies. In recent years, many have attempted to develop richer models that provide a more accurate representation of firm dynamics due to learning, innovative effort, and the development of organizational infrastructure. The validity of these new, inherently dynamic theories depends on their consistency with the statistical properties of firm growth, e.g. the relationship between growth rates and firm size. Using the Compustat database over the time period 1975-1991, we find: (i) the distribution of annual growth rates for firms with approximately the same sales displays an exponential form with the logarithm of growth rate, and (ii) the fluctuations in the growth rates --- measured by the width of this distribution --- scale as a power law with the firm sales. We place these findings of scaling behavior in the context of conventional economics by considering firm growth dynamics with temporal correlations and also, by considering a hierarchical organization of the departments of a firm.
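
Findings (i) and (ii), an exponential (Laplace) distribution of log growth rates whose width scales as a power law of firm size, can be illustrated on synthetic data. The toy "firm" model below is invented purely for the demo (the study itself uses Compustat data), and the exponent value 0.15 is an assumption of the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 0.15  # assumed width-scaling exponent for this synthetic demo

# synthetic firms: log-sizes spread over several decades
log_S = rng.uniform(4.0, 10.0, size=50_000)
S = np.exp(log_S)
# log growth rates: Laplace (exponential in |g|) with size-dependent width
g = rng.laplace(scale=S ** (-beta), size=S.size)

# bin firms by size and measure the width of the growth-rate distribution
bins = np.linspace(4.0, 10.0, 7)
idx = np.digitize(log_S, bins)
centers, widths = [], []
for b in range(1, len(bins)):
    sel = idx == b
    centers.append(0.5 * (bins[b - 1] + bins[b]))
    widths.append(np.std(g[sel]))

# slope of log(width) versus log(size) recovers the power-law exponent -beta
slope, _ = np.polyfit(centers, np.log(widths), 1)
```

The same binning-and-width procedure, applied to real sales data, is what yields the power-law scaling the authors report.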

  14. The weak scale from BBN

    NASA Astrophysics Data System (ADS)

    Hall, Lawrence J.; Pinner, David; Ruderman, Joshua T.

    2014-12-01

    The measured values of the weak scale, v, and the first-generation masses, m_{u,d,e}, are simultaneously explained in the multiverse, with all these parameters scanning independently. At the same time, several remarkable coincidences are understood. Small variations in these parameters away from their measured values lead to the instability of hydrogen, the instability of heavy nuclei, and either a hydrogen or a helium dominated universe from Big Bang Nucleosynthesis. In the 4d parameter space of (m_u, m_d, m_e, v), catastrophic boundaries are reached by separately increasing each parameter above its measured value by a factor of (1.4, 1.3, 2.5, ˜ 5), respectively. The fine-tuning problem of the weak scale in the Standard Model is solved: as v is increased beyond the observed value, it is impossible to maintain a significant cosmological hydrogen abundance for any values of m_{u,d,e} that yield both hydrogen and heavy nuclei stability.

  15. TeraScale Supernova Initiative

    NASA Astrophysics Data System (ADS)

    Mezzacappa, A.; TeraScale Supernova Initiative Collaboration

    2002-05-01

    The TeraScale Supernova Initiative is a national collaboration centered at the Oak Ridge National Laboratory and involves eight universities. TSI has as its central focus to ascertain the explosion mechanism(s) for core collapse supernovae and to understand and predict their associated phenomenology, including neutrino signatures, gravitational radiation emission, and nucleosynthesis. TSI is an interdisciplinary effort of astrophysicists, nuclear physicists, applied mathematicians, and computer scientists. Multidimensional hydrodynamics, magnetohydrodynamics, and radiation hydrodynamics simulations that implement state of the art nuclear and weak interaction physics are planned in order to understand the roles of neutrino transport, stellar convection and rotation, and magnetic fields in the supernova mechanism. Scalable algorithms for the solution of the large sparse linear systems of equations that arise in radiation transport applications and a customized collaborative visualization environment will be developed also. TSI's latest results and future efforts will be discussed. The TeraScale Supernova Initiative is funded by grants from the DoE (1) High Energy and Nuclear Physics and (2) Mathematics, Information, and Computational Sciences SciDAC Programs.

  16. Engineering scale electrostatic enclosure demonstration

    SciTech Connect

    Meyer, L.C.

    1993-09-01

    This report presents results from an engineering scale electrostatic enclosure demonstration test. The electrostatic enclosure is part of an overall in-depth contamination control strategy for transuranic (TRU) waste recovery operations. TRU contaminants include small particles of plutonium compounds associated with defense-related waste recovery operations. Demonstration test items consisted of an outer Perma-con enclosure, an inner tent enclosure, and a ventilation system test section for testing electrostatic curtain devices. Three interchangeable test fixtures that could remove plutonium from the contaminated dust were tested in the test section. These were an electret filter, a CRT as an electrostatic field source, and an electrically charged parallel plate separator. Enclosure materials tested included polyethylene, anti-static construction fabric, and stainless steel. The soil size distribution was determined using an eight stage cascade impactor. Photographs of particles containing plutonium were obtained with a scanning electron microscope (SEM). The SEM also provided a second method of getting the size distribution. The amount of plutonium removed from the aerosol by the electrostatic devices was determined by radiochemistry from input and output aerosol samplers. The inner and outer enclosures performed adequately for plutonium handling operations and could be used for full scale operations.

  17. Scaling device for photographic images

    NASA Technical Reports Server (NTRS)

    Rivera, Jorge E. (Inventor); Youngquist, Robert C. (Inventor); Cox, Robert B. (Inventor); Haskell, William D. (Inventor); Stevenson, Charles G. (Inventor)

    2005-01-01

    A scaling device projects a known optical pattern into the field of view of a camera, which can be employed as a reference scale in a resulting photograph of a remote object, for example. The device comprises an optical beam projector that projects two or more spaced, parallel optical beams onto a surface of a remotely located object to be photographed. The resulting beam spots or lines on the object are spaced from one another by a known, predetermined distance. As a result, the size of other objects or features in the photograph can be determined through comparison of their size to the known distance between the beam spots. Preferably, the device is a small, battery-powered device that can be attached to a camera and employs one or more laser light sources and associated optics to generate the parallel light beams. In a first embodiment of the invention, a single laser light source is employed, but multiple parallel beams are generated thereby through use of beam splitting optics. In another embodiment, multiple individual laser light sources are employed that are mounted in the device parallel to one another to generate the multiple parallel beams.

  18. Scaling analysis of affinity propagation.

    PubMed

    Furtlehner, Cyril; Sebag, Michèle; Zhang, Xiangliang

    2010-06-01

    We analyze and exploit some scaling properties of the affinity propagation (AP) clustering algorithm proposed by Frey and Dueck [Science 315, 972 (2007)]. Following a divide and conquer strategy we set up an exact renormalization-based approach to address the question of clustering consistency, in particular, how many clusters are present in a given data set. We first observe that the divide and conquer strategy, used on a large data set, hierarchically reduces the complexity O(N^2) to O(N^((h+2)/(h+1))), for a data set of size N and a depth h of the hierarchical strategy. For a data set embedded in a d-dimensional space, we show that this is obtained without notably damaging the precision except in dimension d=2. In fact, for d larger than 2 the relative loss in precision scales as N^((2-d)/((h+1)d)). Finally, under some conditions we observe that there is a value s* of the penalty coefficient, a free parameter used to fix the number of clusters, which separates a fragmentation phase (for s < s*) from a coalescent one (for s > s*) of the underlying hidden cluster structure. At this precise point holds a self-similarity property which can be exploited by the hierarchical strategy to actually locate its position, as a result of an exact decimation procedure. From this observation, a strategy based on AP can be defined to find out how many clusters are present in a given data set. PMID:20866473
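
The quoted complexity reduction can be checked directly from the exponent formula in the abstract: a hierarchy of depth h lowers the plain O(N^2) cost of AP to O(N^((h+2)/(h+1))), approaching linear cost as h grows.

```python
def ap_cost_exponent(h: int) -> float:
    """Exponent of the divide-and-conquer AP cost, O(N ** ((h+2)/(h+1)))."""
    return (h + 2) / (h + 1)

# h = 0 (no hierarchy) recovers the plain quadratic algorithm;
# deeper hierarchies push the exponent toward 1 (linear cost).
assert ap_cost_exponent(0) == 2.0
assert abs(ap_cost_exponent(3) - 1.25) < 1e-12
assert ap_cost_exponent(9) < ap_cost_exponent(3)
```

The companion precision-loss exponent, N^((2-d)/((h+1)d)), is negative for d > 2, which is why the hierarchical speedup costs little accuracy except in two dimensions.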

  19. Scaling analysis of stock markets

    NASA Astrophysics Data System (ADS)

    Bu, Luping; Shang, Pengjian

    2014-06-01

    In this paper, we apply detrended fluctuation analysis (DFA), local scaling detrended fluctuation analysis (LSDFA), and detrended cross-correlation analysis (DCCA) to investigate correlations in several stock markets. The DFA method detects long-range correlations in time series. The LSDFA method reveals more local properties by using local scale exponents. The DCCA method is a more recent development that quantifies the cross-correlation of two non-stationary time series. We report the auto-correlation and cross-correlation behaviors of three western and three Chinese stock markets in the periods 2004-2006 (before the global financial crisis), 2007-2009 (during the global financial crisis), and 2010-2012 (after the global financial crisis) using the DFA, LSDFA, and DCCA methods. We find that stock correlations are influenced by the economic systems of the different countries and by the financial crisis. The results indicate stronger auto-correlations in Chinese stocks than in western stocks in every period, and stronger auto-correlations after the global financial crisis for every stock except Shen Cheng. The LSDFA shows more comprehensive and detailed features than the traditional DFA method, and reflects the economic integration of China with the rest of the world after the global financial crisis. The cross-correlations differ across the six stock markets, while the three Chinese stocks reach their weakest cross-correlations during the global financial crisis.

  20. Goethite Bench-scale and Large-scale Preparation Tests

    SciTech Connect

    Josephson, Gary B.; Westsik, Joseph H.

    2011-10-23

    The Hanford Waste Treatment and Immobilization Plant (WTP) is the keystone for cleanup of high-level radioactive waste from our nation's nuclear defense program. The WTP will process high-level waste from the Hanford tanks and produce immobilized high-level waste glass for disposal at a national repository, low activity waste (LAW) glass, and liquid effluent from the vitrification off-gas scrubbers. The liquid effluent will be stabilized into a secondary waste form (e.g. grout-like material) and disposed on the Hanford site in the Integrated Disposal Facility (IDF) along with the low-activity waste glass. The major long-term environmental impact at Hanford results from technetium that volatilizes from the WTP melters and finally resides in the secondary waste. Laboratory studies have indicated that pertechnetate ({sup 99}TcO{sub 4}{sup -}) can be reduced and captured into a solid solution of {alpha}-FeOOH, goethite (Um 2010). Goethite is a stable mineral and can significantly retard the release of technetium to the environment from the IDF. The laboratory studies were conducted using reaction times of many days, which is typical of environmental subsurface reactions that were the genesis of this new process. This study was the first step in considering adaptation of the slow laboratory steps to a larger-scale and faster process that could be conducted either within the WTP or within the effluent treatment facility (ETF). Two levels of scale-up tests were conducted (25x and 400x). The largest scale-up produced slurries of Fe-rich precipitates that contained rhenium as a nonradioactive surrogate for {sup 99}Tc. The slurries were used in melter tests at Vitreous State Laboratory (VSL) to determine whether captured rhenium was less volatile in the vitrification process than rhenium in an unmodified feed. 
A critical step in the technetium immobilization process is to chemically reduce Tc(VII) in the pertechnetate (TcO{sub 4}{sup -}) to Tc(IV) by reaction with the ferrous

  1. Scaling and Criticality in Large-Scale Neuronal Activity

    NASA Astrophysics Data System (ADS)

    Linkenkaer-Hansen, K.

    The human brain during wakeful rest spontaneously generates large-scale neuronal network oscillations at around 10 and 20 Hz that can be measured non-invasively using magnetoencephalography (MEG) or electroencephalography (EEG). In this chapter, spontaneous oscillations are viewed as the outcome of a self-organizing stochastic process. The aim is to introduce the general prerequisites for stochastic systems to evolve to the critical state and to explain their neurophysiological equivalents. I review the recent evidence that the theory of self-organized criticality (SOC) may provide a unifying explanation for the large variability in amplitude, duration, and recurrence of spontaneous network oscillations, as well as the high susceptibility to perturbations and the long-range power-law temporal correlations in their amplitude envelope.

  2. Scale Model Icing Research Tunnel

    NASA Technical Reports Server (NTRS)

    Canacci, Victor A.

    1997-01-01

    NASA Lewis Research Center's Icing Research Tunnel (IRT) is the world's largest refrigerated wind tunnel and one of only three icing wind tunnel facilities in the United States. The IRT was constructed in the 1940's and has been operated continually since it was built. In this facility, natural icing conditions are duplicated to test the effects of inflight icing on actual aircraft components as well as on models of airplanes and helicopters. IRT tests have been used successfully to reduce flight test hours for the certification of ice-detection instrumentation and ice protection systems. To ensure that the IRT will remain the world's premier icing facility well into the next century, Lewis is making some renovations and is planning others. These improvements include modernizing the control room, replacing the fan blades with new ones to increase the test section maximum velocity to 430 mph, installing new spray bars to increase the size and uniformity of the artificial icing cloud, and replacing the facility heat exchanger. Most of the improvements will have a first-order effect on the IRT's airflow quality. To help us understand these effects and evaluate potential improvements to the flow characteristics of the IRT, we built a modular 1/10th-scale aerodynamic model of the facility. This closed-loop scale-model pilot tunnel was fabricated onsite in the various shops of Lewis' Fabrication Support Division. The tunnel's rectangular sections are composed of acrylic walls supported by an aluminum angle framework. Its turning vanes are made of tubing machined to the contour of the IRT turning vanes. The fan leg of the tunnel, which transitions from rectangular to circular and back to rectangular cross sections, is fabricated of fiberglass sections. The contraction section of the tunnel is constructed from sheet aluminum. A 12-bladed aluminum fan is coupled to a turbine powered by high-pressure air capable of driving the maximum test section velocity to 550 ft

  3. SCALE-UP OF RAPID SMALL-SCALE ADSORPTION TESTS TO FIELD-SCALE ADSORBERS: THEORETICAL AND EXPERIMENTAL BASIS

    EPA Science Inventory

    Design of full-scale adsorption systems typically includes expensive and time-consuming pilot studies to simulate full-scale adsorber performance. Accordingly, the rapid small-scale column test (RSSCT) was developed and evaluated experimentally. The RSSCT can simulate months of f...

  4. Introduction to the time scale problem

    SciTech Connect

    Voter, A. F.

    2002-01-01

    As motivation for the symposium on extended-scale atomistic methods, I briefly discuss the time scale problem that plagues molecular dynamics simulations, some promising recent developments for circumventing the problem, and some remaining challenges.

  5. LANDSCAPE CONNECTIVITY: DIFFERENT FUNCTIONS AT DIFFERENT SCALES

    EPA Science Inventory

    Connectivity is more than corridors, and corridors are more than linear strips of habitat. Rather, connectivity involves linkages of habitats, species, communities, and ecological processes at spatial scales ranging from fencerows to biomes, and at temporal scales ranging from dai...

  6. The Adaptive Multi-scale Simulation Infrastructure

    SciTech Connect

    Tobin, William R.

    2015-09-01

    The Adaptive Multi-scale Simulation Infrastructure (AMSI) is a set of libraries and tools developed to support the development, implementation, and execution of general multimodel simulations. Using a minimal set of simulation meta-data, AMSI allows existing single-scale simulations to be adapted for use in multi-scale simulations with minimally intrusive changes. Support for dynamic runtime operations such as single- and multi-scale adaptive properties is a key focus of AMSI. Particular effort has gone into the development of scale-sensitive load-balancing operations, which allow a single-scale simulation incorporated into a multi-scale simulation through AMSI to use standard load-balancing operations without affecting the integrity of the overall multi-scale simulation.

  7. Small-Scale Rocket Motor Test

    NASA Video Gallery

    Engineers at NASA's Marshall Space Flight Center in Huntsville, Ala. successfully tested a sub-scale solid rocket motor on May 27. Testing a sub-scale version of a rocket motor is a cost-effective ...

  8. The AppScale Cloud Platform

    PubMed Central

    Krintz, Chandra

    2013-01-01

    AppScale is an open source distributed software system that implements a cloud platform as a service (PaaS). AppScale makes cloud applications easy to deploy and scale over disparate cloud fabrics, implementing a set of APIs and architecture that also makes apps portable across the services they employ. AppScale is API-compatible with Google App Engine (GAE) and thus executes GAE applications on-premise or over other cloud infrastructures, without modification. PMID:23828721

  9. Scale Interaction in a California precipitation event

    SciTech Connect

    Leach, M. J., LLNL

    1997-09-01

    Heavy rains and severe flooding frequently plague California. The heavy rains are most often associated with large-scale cyclonic and frontal systems, where large-scale dynamics and a large moisture influx from the tropical Pacific interact. However, the complex topography along the west coast also interacts with these large-scale influences, producing local areas with heavier precipitation. In this paper, we look at some of the local interactions with the large scale.

  10. Size Scaling in Visual Pattern Recognition

    ERIC Educational Resources Information Center

    Larsen, Axel; Bundesen, Claus

    1978-01-01

    Human visual recognition on the basis of shape but regardless of size was investigated by reaction time methods. Results suggested two processes of size scaling: mental-image transformation and perceptual-scale transformation. Image transformation accounted for matching performance based on visual short-term memory, whereas scale transformation…

  11. Friendship Quality Scale: Conceptualization, Development and Validation

    ERIC Educational Resources Information Center

    Thien, Lei Mee; Razak, Nordin Abd; Jamil, Hazri

    2012-01-01

    The purpose of this study is twofold: (1) to initialize a new conceptualization of positive feature based Friendship Quality (FQUA) scale on the basis of four dimensions: Closeness, Help, Acceptance, and Safety; and (2) to develop and validate FQUA scale in the form of reflective measurement model. The scale development and validation procedures…

  12. 27 CFR 19.183 - Scale tanks.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2014-04-01 2014-04-01 false Scale tanks. 19.183... Tank Requirements § 19.183 Scale tanks. (a) Except as otherwise provided in paragraph (b) of this..., the tank must be mounted on scales and the contents of the tank must be determined by weight....

  13. 27 CFR 19.183 - Scale tanks.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2011-04-01 2011-04-01 false Scale tanks. 19.183... Tank Requirements § 19.183 Scale tanks. (a) Except as otherwise provided in paragraph (b) of this..., the tank must be mounted on scales and the contents of the tank must be determined by weight....

  14. 27 CFR 19.183 - Scale tanks.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2012-04-01 2012-04-01 false Scale tanks. 19.183... Tank Requirements § 19.183 Scale tanks. (a) Except as otherwise provided in paragraph (b) of this..., the tank must be mounted on scales and the contents of the tank must be determined by weight....

  15. 27 CFR 19.183 - Scale tanks.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2013-04-01 2013-04-01 false Scale tanks. 19.183... Tank Requirements § 19.183 Scale tanks. (a) Except as otherwise provided in paragraph (b) of this..., the tank must be mounted on scales and the contents of the tank must be determined by weight....

  16. An Aesthetic Value Scale of the Rorschach.

    ERIC Educational Resources Information Center

    Insua, Ana Maria

    1981-01-01

    An aesthetic value scale of the Rorschach cards was built by the successive interval method. This scale was compared with the ratings obtained by means of the Semantic Differential Scales and was found to successfully differentiate sexes in their judgment of card attractiveness. (Author)

  17. Development of Capstone Project Attitude Scales

    ERIC Educational Resources Information Center

    Bringula, Rex P.

    2015-01-01

    This study attempted to develop valid and reliable Capstone Project Attitude Scales (CPAS). Among the scales reviewed, the Modified Fennema-Shermann Mathematics Attitude Scales was adapted in the construction of the CPAS. Usefulness, Confidence, and Gender View were the three subscales of the CPAS. Four hundred sixty-three students answered the…

  18. Lethality of Suicide Attempt Rating Scale.

    ERIC Educational Resources Information Center

    Smith, K.; And Others

    1984-01-01

    Presents an 11-point scale for measuring the degree of lethality of suicide attempts. The scale has nine example "anchors" and uses the relative lethality of an extensive table of drugs. The scale can be used reliably by nonmedical personnel with no prior training. (Author/BL)

  19. Scaling laws for laser-induced filamentation

    NASA Astrophysics Data System (ADS)

    Zhokhov, P. A.; Zheltikov, A. M.

    2014-04-01

    Despite all the complexity of the underlying nonlinear physics, the filamentation of ultrashort optical field wave forms is shown to obey a set of physically transparent scaling laws. This scaling is applicable within a remarkably broad range of laser powers, pulse widths, gas pressures, and propagation paths, suggesting specific recipes for the power scaling of filamentation-based pulse compression.

  20. Measuring Scale Invariance between and within Subjects.

    ERIC Educational Resources Information Center

    Benson, Jeri; Hocevar, Dennis

    The present paper represents a demonstration of how LISREL V can be used to investigate scale invariance (1) across time (its relationship to test-retest reliability), and (2) across groups. Five criteria were established to test scale invariance across time and four criteria were established to test scale invariance across groups. Using the…

  1. 76 FR 18348 - Required Scale Tests

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-04

    ... Register on January 20, 2011 (76 FR 3485), defining required scale tests. That document incorrectly defined... Tests AGENCY: Grain Inspection, Packers and Stockyards Administration. ACTION: Correcting amendments... packer using such scales may use the scales within a 6-month period following each test. * * * * * Alan...

  2. Automatic scale selection for medical image segmentation

    NASA Astrophysics Data System (ADS)

    Bayram, Ersin; Wyatt, Christopher L.; Ge, Yaorong

    2001-07-01

    The scale of interesting structures in medical images is space variant because of partial volume effects, spatial dependence of resolution in many imaging modalities, and differences in tissue properties. Existing segmentation methods either apply a single scale to the entire image or try fine-to-coarse/coarse-to-fine tracking of structures over multiple scales. While single-scale approaches fail to fully recover the perceptually important structures, multi-scale methods have problems in providing reliable means to select proper scales and in integrating information over multiple scales. A recent approach proposed by Elder and Zucker addresses the scale selection problem by computing a minimal reliable scale for each image pixel. The basic premise of this approach is that, while the scale of structures within an image varies spatially, the imaging system is fixed. Hence, sensor noise statistics can be calculated. Based on a model of edges to be detected, and operators to be used for detection, one can locally compute a unique minimal reliable scale at which the likelihood of error due to sensor noise is less than or equal to a predetermined threshold. In this paper, we improve the segmentation method based on minimal reliable scale selection and evaluate its effectiveness with both simulated and actual medical data.
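
    A minimal sketch of the per-pixel idea, assuming Gaussian-derivative edge operators and i.i.d. sensor noise of known standard deviation; the simple threshold criterion here is a stand-in for Elder and Zucker's likelihood test, and the noise response is propagated empirically rather than analytically:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def minimal_reliable_scale(image, sigma_noise, scales=(1, 2, 4, 8), k=3.0, rng=None):
    """For each pixel, find the smallest scale at which the gradient
    magnitude exceeds k times the noise-induced gradient level.
    Pixels that never pass keep the largest scale.  (Simplified stand-in
    for the Elder-Zucker minimal-reliable-scale criterion.)"""
    rng = np.random.default_rng(rng)
    out = np.full(image.shape, float(scales[-1]))
    done = np.zeros(image.shape, dtype=bool)
    # Propagate sensor noise through the derivative operators empirically.
    noise = rng.normal(0.0, sigma_noise, image.shape)
    for s in scales:
        gx = gaussian_filter(image, s, order=(0, 1))   # d/dx at scale s
        gy = gaussian_filter(image, s, order=(1, 0))   # d/dy at scale s
        grad = np.hypot(gx, gy)
        nx = gaussian_filter(noise, s, order=(0, 1))
        ny = gaussian_filter(noise, s, order=(1, 0))
        thresh = k * np.hypot(nx, ny).std()            # noise gradient level
        newly = (~done) & (grad > thresh)
        out[newly] = s
        done |= newly
    return out
```

    On a noisy step edge, the edge pixels pass at the finest scale while flat regions need coarser (or all) scales, giving the space-variant scale map the abstract describes.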

  3. A Scale To Assess African American Acculturation.

    ERIC Educational Resources Information Center

    Snowden, Lonnie R.; Hines, Alice M.

    1999-01-01

    Investigated an acculturation scale designed for use in the African-American population. Responses from more than 900 African Americans generally indicate an African-American orientation within the sample, although there are notable variations on all 10 scale items. Discusses evidence for scale reliability and validity. (SLD)

  4. Developing a Sense of Scale: Looking Backward

    ERIC Educational Resources Information Center

    Jones, M. Gail; Taylor, Amy R.

    2009-01-01

    Although scale has been identified as one of four major interdisciplinary themes that cut across the science domains by the American Association for the Advancement of Science (1989), we are only beginning to understand how students learn and apply scale concepts. Early research on learning scale tended to focus on perceptions of linear distances,…

  5. Behavioral Observation Scales for Performance Appraisal Purposes

    ERIC Educational Resources Information Center

    Latham, Gary P.; Wexley, Kenneth N.

    1977-01-01

    This research attempts to determine whether Behavioral Observation Scales (BOS) could be improved by developing them through quantitative methods. The underlying assumption was that developing composite scales with greater internal consistency might improve their generalizability as evidenced by the cross-validation coefficients of scales based on…

  6. Quantitative Scaling of Magnetic Avalanches.

    PubMed

    Durin, G; Bohn, F; Corrêa, M A; Sommer, R L; Le Doussal, P; Wiese, K J

    2016-08-19

    We provide the first quantitative comparison between Barkhausen noise experiments and recent predictions from the theory of avalanches for pinned interfaces, both in and beyond mean field. We study different classes of soft magnetic materials, including polycrystals and amorphous samples (which are characterized by long-range and short-range elasticity, respectively), both for thick and thin samples, i.e., with and without eddy currents. The temporal avalanche shape at fixed size as well as observables related to the joint distribution of sizes and durations are analyzed in detail. Both long-range and short-range samples with no eddy currents are fitted extremely well by the theoretical predictions. In particular, the short-range samples provide the first reliable test of the theory beyond mean field. The thick samples show systematic deviations from the scaling theory, providing unambiguous signatures for the presence of eddy currents. PMID:27588876

  7. Quantitative Scaling of Magnetic Avalanches

    NASA Astrophysics Data System (ADS)

    Durin, G.; Bohn, F.; Corrêa, M. A.; Sommer, R. L.; Le Doussal, P.; Wiese, K. J.

    2016-08-01

    We provide the first quantitative comparison between Barkhausen noise experiments and recent predictions from the theory of avalanches for pinned interfaces, both in and beyond mean field. We study different classes of soft magnetic materials, including polycrystals and amorphous samples—which are characterized by long-range and short-range elasticity, respectively—both for thick and thin samples, i.e., with and without eddy currents. The temporal avalanche shape at fixed size as well as observables related to the joint distribution of sizes and durations are analyzed in detail. Both long-range and short-range samples with no eddy currents are fitted extremely well by the theoretical predictions. In particular, the short-range samples provide the first reliable test of the theory beyond mean field. The thick samples show systematic deviations from the scaling theory, providing unambiguous signatures for the presence of eddy currents.

  8. Matter perturbations in scaling cosmology

    NASA Astrophysics Data System (ADS)

    Fuño, A. Romero; Hipólito-Ricaldi, W. S.; Zimdahl, W.

    2016-04-01

    A suitable nonlinear interaction between dark matter with an energy density ρ_M and dark energy with an energy density ρ_X is known to give rise to a non-canonical scaling ρ_M ∝ ρ_X a^(−ξ), where ξ is a parameter which generally deviates from ξ = 3. Here, we present a covariant generalization of this class of models and investigate the corresponding perturbation dynamics. The resulting matter power spectrum for the special case of a time-varying Lambda model is compared with data from the Sloan Digital Sky Survey (SDSS) DR9 catalogue (Ahn et al.). We find a best-fitting value of ξ = 3.25, which corresponds to a decay of dark matter into the cosmological term. Our results are compatible with the Lambda Cold Dark Matter model at the 2σ confidence level.
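
    A quick consistency check (our annotation, not part of the record): with no dark-sector coupling, dark matter dilutes as a^(−3) while a cosmological constant stays fixed, so the non-interacting case corresponds to ξ = 3 in the scaling ansatz:

```latex
\rho_M \propto a^{-3}, \qquad \rho_X = \mathrm{const}
\quad\Longrightarrow\quad
\frac{\rho_M}{\rho_X} \propto a^{-3},
\qquad\text{i.e. } \rho_M \propto \rho_X\, a^{-\xi} \ \text{with } \xi = 3 .
```

    A best-fitting ξ = 3.25 > 3 means ρ_M/ρ_X falls faster than in the non-interacting case, i.e. energy flows from dark matter to the cosmological term, matching the interpretation in the abstract.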

  9. Bacterial Communities: Interactions to Scale.

    PubMed

    Stubbendieck, Reed M; Vargas-Bautista, Carol; Straight, Paul D

    2016-01-01

    In the environment, bacteria live in complex multispecies communities. These communities span in scale from small, multicellular aggregates to billions or trillions of cells within the gastrointestinal tract of animals. The dynamics of bacterial communities are determined by pairwise interactions that occur between different species in the community. Though interactions occur between a few cells at a time, the outcomes of these interchanges have ramifications that ripple through many orders of magnitude, and ultimately affect the macroscopic world including the health of host organisms. In this review we cover how bacterial competition influences the structures of bacterial communities. We also emphasize methods and insights garnered from culture-dependent pairwise interaction studies, metagenomic analyses, and modeling experiments. Finally, we argue that the integration of multiple approaches will be instrumental to future understanding of the underlying dynamics of bacterial communities. PMID:27551280

  10. Significant Scales in Community Structure

    PubMed Central

    Traag, V. A.; Krings, G.; Van Dooren, P.

    2013-01-01

    Many complex networks show signs of modular structure, uncovered by community detection. Although many methods succeed in revealing various partitions, it remains difficult to detect at what scale some partition is significant. This problem shows foremost in multi-resolution methods. We here introduce an efficient method for scanning for resolutions in one such method. Additionally, we introduce the notion of “significance” of a partition, based on subgraph probabilities. Significance is independent of the exact method used, so could also be applied in other methods, and can be interpreted as the gain in encoding a graph by making use of a partition. Using significance, we can determine “good” resolution parameters, which we demonstrate on benchmark networks. Moreover, optimizing significance itself also shows excellent performance. We demonstrate our method on voting data from the European Parliament. Our analysis suggests the European Parliament has become increasingly ideologically divided and that nationality plays no role. PMID:24121597

  11. The Scaled Thermal Explosion Experiment

    SciTech Connect

    Wardell, J F; Maienschein, J L

    2002-07-05

    We have developed the Scaled Thermal Explosion Experiment (STEX) to provide a database of reaction violence from thermal explosion for explosives of interest. Such data are needed to develop, calibrate, and validate predictive capability for thermal explosions using simulation computer codes. A cylinder of explosive 25, 50 or 100 mm in diameter, is confined in a steel cylinder with heavy end caps, and heated under controlled conditions until reaction. Reaction violence is quantified through non-contact micropower impulse radar measurements of the cylinder wall velocity and by strain gauge data at reaction onset. Here we describe the test concept, design and diagnostic recording, and report results with HMX- and RDX-based energetic materials.

  12. Hypoallometric scaling in international collaborations

    NASA Astrophysics Data System (ADS)

    Hsiehchen, David; Espinoza, Magdalena; Hsieh, Antony

    2016-02-01

    Collaboration is a vital process and dominant theme in knowledge production, although the effectiveness of policies directed at promoting multinational research remains ambiguous. We examined approximately 24 million research articles published over four decades and demonstrated that the scaling of international publications to research productivity for each country obeys a universal and conserved sublinear power law. Inefficient mechanisms in transborder team dynamics or organization as well as increasing opportunity costs may contribute to the disproportionate growth of international collaboration rates with increasing productivity among nations. Given the constrained growth of international relationships, our findings advocate a greater emphasis on the qualitative aspects of collaborations, such as with whom partnerships are forged, particularly when assessing research and policy outcomes.

  13. Size Scaling of Static Friction

    NASA Astrophysics Data System (ADS)

    Braun, O. M.; Manini, Nicola; Tosatti, Erio

    2013-02-01

    Sliding friction across a thin soft lubricant film typically occurs by stick slip, the lubricant fully solidifying at stick, yielding and flowing at slip. The static friction force per unit area preceding slip is known from molecular dynamics (MD) simulations to decrease with increasing contact area. That makes the large-size fate of stick slip unclear and unknown; its possible vanishing is important as it would herald smooth sliding with a dramatic drop of kinetic friction at large size. Here we formulate a scaling law of the static friction force, which for a soft lubricant is predicted to decrease as f_m + Δf/A^γ for increasing contact area A, with γ > 0. Our main finding is that the value of f_m, controlling the survival of stick slip at large size, can be evaluated by simulations of comparably small size. MD simulations of soft lubricant sliding are presented, which verify this theory.
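
    Writing the abstract's scaling form as f(A) = f_m + Δf/A^γ, the extrapolation to the large-area limit f_m can be illustrated with a quick fit. The data below are synthetic (our own illustration, not the paper's simulations), and scipy.optimize.curve_fit is used simply as a generic least-squares fitter:

```python
import numpy as np
from scipy.optimize import curve_fit

def static_friction(A, f_m, df, gamma):
    """Scaling form from the abstract: f(A) = f_m + df / A**gamma."""
    return f_m + df * A**(-gamma)

# Synthetic "MD" data generated from known parameters plus 1% noise.
rng = np.random.default_rng(0)
A = np.logspace(1, 5, 30)                  # contact areas (arbitrary units)
true_params = (0.2, 3.0, 0.5)              # f_m, df, gamma
f = static_friction(A, *true_params) * (1 + 0.01 * rng.normal(size=A.size))

popt, _ = curve_fit(static_friction, A, f, p0=(0.1, 1.0, 0.3))
f_m_fit = popt[0]   # large-area limit controlling stick-slip survival
```

    The fit recovers f_m from comparably small areas, which is the spirit of the paper's main finding: the large-size fate of stick slip can be read off from small-size simulations.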

  14. The Vocational Adaptation Rating Scales.

    PubMed

    Malgady, R G; Barcher, P R

    1982-01-01

    The Vocational Adaptation Rating Scales (VARS) were developed to provide a comprehensive assessment of maladaptive social behavior related to vocational success, but not directly measuring job performance of mentally retarded workers. Psychometric information derived from the VARS is useful for developing individualized educational plans (IEPs) for compliance with Public Law 94-142; for program, worker or curriculum evaluations; and for predicting placement of workers in vocational training. Research indicates that VARS scores are internally consistent, moderately correlated with other vocational measures, unbiased with respect to sex and age differences, and independent of IQ. Inter-rater reliability is acceptable, and VARS profiles are accurate predictors of level of sheltered workshop placement of mentally retarded workers, independent of IQ, sex and age. Unlike other instruments, the VARS offers a profile of social behavior in a vocational context. PMID:7168570

  15. Scaling laws in magnetohydrodynamic turbulence

    SciTech Connect

    Campanelli, Leonardo

    2004-10-15

    We analyze the decay laws of the kinetic and magnetic energies and the evolution of correlation lengths in freely decaying incompressible magnetohydrodynamic (MHD) turbulence. Scale invariance of the MHD equations assures that, in the case of constant dissipation parameters (i.e., kinematic viscosity and resistivity) and null magnetic helicity, the kinetic and magnetic energies decay in time as E ∝ t^(−1), and the correlation lengths evolve as ξ ∝ t^(1/2). In the helical case, assuming that the magnetic field evolves towards a force-free state, we show that (in the limit of large magnetic Reynolds number) the magnetic helicity remains constant, and the kinetic and magnetic energies decay as E_v ∝ t^(−1) and E_B ∝ t^(−1/2) respectively, while both the kinetic and magnetic correlation lengths grow as ξ ∝ t^(1/2).
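
    A quick dimensional cross-check on the helical-case exponents (our annotation): magnetic helicity scales as the product of the magnetic correlation length and the magnetic energy, so the quoted decay laws are mutually consistent with helicity conservation:

```latex
H_B \sim \xi_B\, E_B \propto t^{1/2}\, t^{-1/2} = \mathrm{const},
```

    in agreement with the statement that the magnetic helicity remains constant while E_B and ξ evolve.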

  16. Large scale topography of Io

    NASA Technical Reports Server (NTRS)

    Gaskell, R. W.; Synnott, S. P.

    1987-01-01

    To investigate the large scale topography of the Jovian satellite Io, both limb observations and stereographic techniques applied to landmarks are used. The raw data for this study consists of Voyager 1 images of Io, 800x800 arrays of picture elements each of which can take on 256 possible brightness values. In analyzing this data it was necessary to identify and locate landmarks and limb points on the raw images, remove the image distortions caused by the camera electronics and translate the corrected locations into positions relative to a reference geoid. Minimizing the uncertainty in the corrected locations is crucial to the success of this project. In the highest resolution frames, an error of a tenth of a pixel in image space location can lead to a 300 m error in true location. In the lowest resolution frames, the same error can lead to an uncertainty of several km.

  17. Enabling department-scale supercomputing

    SciTech Connect

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

    The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  18. Approach toward Linear Scaling QMC

    NASA Astrophysics Data System (ADS)

    Clark, Bryan; Ceperley, David; de Sturler, Eric

    2007-03-01

    Quantum Monte Carlo simulations of fermions are currently done for relatively small system sizes, e.g., fewer than one thousand fermions. The most time-consuming part of the code for larger systems depends critically on the speed with which the ratio of wavefunction values for two different configurations can be evaluated. Most of the time goes into calculating the ratio of two determinants; this scales naively as O(n^3) operations. Work by Williamson et al. [2] has improved the procedure for evaluating the elements of the Slater matrix, so it can be done in linear time. Our work involves developing methods to evaluate the ratio of these Slater determinants quickly. We compare a number of methods, including iterative techniques, sparse approximate inverses, and faster matrix updating. [2] A. J. Williamson, R. Q. Hood and J. C. Grossman, Phys. Rev. Lett. 87, 246406 (2001).
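
    The cheap determinant-ratio evaluation alluded to here is standard in QMC codes: for a single-row (one-particle) update, the matrix determinant lemma gives the ratio in O(n) from the cached inverse. A sketch, with all names our own:

```python
import numpy as np

def row_update_det_ratio(A_inv, new_row, k):
    """Ratio det(A') / det(A) when row k of A is replaced by new_row.

    By the matrix determinant lemma this equals new_row @ A_inv[:, k],
    an O(n) operation, versus O(n^3) for recomputing the determinant."""
    return new_row @ A_inv[:, k]

# Check against a brute-force determinant ratio.
rng = np.random.default_rng(1)
n, k = 6, 2
A = rng.normal(size=(n, n))
u = rng.normal(size=n)
A_new = A.copy()
A_new[k] = u
ratio = row_update_det_ratio(np.linalg.inv(A), u, k)
brute = np.linalg.det(A_new) / np.linalg.det(A)
```

    In a full QMC sweep the cached inverse is itself refreshed in O(n^2) per accepted move via the Sherman-Morrison formula, so repeated single-particle moves stay cheap.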

  19. Chip Scale Package Implementation Challenges

    NASA Technical Reports Server (NTRS)

    Ghaffarian, Reza

    1998-01-01

    The JPL-led MicrotypeBGA Consortium of enterprises representing government agencies and private companies has joined together to pool in-kind resources for developing the quality and reliability of chip scale packages (CSPs) for a variety of projects. In the process of building the Consortium CSP test vehicles, many challenges were identified regarding various aspects of technology implementation. This paper will present our experience in the areas of technology implementation challenges, including designing and building both standard and microvia boards, and assembly of two types of test vehicles. We also discuss the most current package isothermal aging to 2,000 hours at 100 °C and 125 °C and thermal cycling test results to 1,700 cycles in the range of −30 to 100 °C.

  20. Strategies for power scaling VECSELs

    NASA Astrophysics Data System (ADS)

    Wang, Tsuei-Lian; Kaneda, Yushi; Hader, Jörg; Moloney, Jerome V.; Kunert, Bernardette; Stolz, Wolfgang; Koch, Stephan W.

    2012-03-01

    Strategies for power scaling VECSELs were studied, including improving thermal management, increasing the quantum-well gain/micro-cavity detuning (which raises the threshold but also raises the roll-over temperature), and double-passing the excess pump via reflection from a metalized reflector at the back of a transparent distributed Bragg reflector (DBR). The influence of the heat-spreader thickness and the pump profile on the temperature rise inside the active region was investigated using commercial finite element analysis software. Improvement in optical efficiency was observed for VECSEL devices with a transparent DBR when the pump light was double-passed. Higher dissipated power at maximum output power was found in devices with larger spectral detuning between the quantum well gain and the micro-cavity resonance.

  1. Anisotropic scaling of magnetohydrodynamic turbulence.

    PubMed

    Horbury, Timothy S; Forman, Miriam; Oughton, Sean

    2008-10-24

    We present a quantitative estimate of the anisotropic power and scaling of magnetic field fluctuations in inertial range magnetohydrodynamic turbulence, using a novel wavelet technique applied to spacecraft measurements in the solar wind. We show for the first time that, when the local magnetic field direction is parallel to the flow, the spacecraft-frame spectrum has a spectral index near 2. This can be interpreted as the signature of a population of fluctuations in field-parallel wave numbers with a k_∥^(−2) spectrum but is also consistent with the presence of a "critical balance" style turbulent cascade. We also find, in common with previous studies, that most of the power is contained in wave vectors at large angles to the local magnetic field and that this component of the turbulence has a spectral index of 5/3. PMID:18999759

  2. Modeling biosilicification at subcellular scales.

    PubMed

    Javaheri, Narjes; Cronemberger, Carolina M; Kaandorp, Jaap A

    2013-01-01

    Biosilicification occurs in many organisms. Sponges and diatoms are major examples of them. In this chapter, we introduce a modeling approach that describes several biological mechanisms controlling silicification. Modeling biosilicification is a typical multiscale problem where processes at very different temporal and spatial scales need to be coupled: processes at the molecular level, physiological processes at the subcellular and cellular level, etc. In biosilicification morphology plays a fundamental role, and a spatiotemporal model is required. In the case of sponges, a particle simulation based on diffusion-limited aggregation is presented here. This model can describe fractal properties of silica aggregates in first steps of deposition on an organic template. In the case of diatoms, a reaction-diffusion model is introduced which can describe the concentrations of chemical components and has the possibility to include polymerization chain of reactions. PMID:24420712
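
    A toy on-lattice version of the diffusion-limited aggregation process mentioned for sponge silica (our own minimal sketch; real biosilicification models add organic templates and reaction kinetics on top of this core):

```python
import numpy as np

def dla_cluster(n_particles=60, size=101, seed=0):
    """Grow a DLA cluster on a square lattice: random walkers are
    launched just outside the cluster and stick on first contact."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((size, size), dtype=bool)
    c = size // 2
    grid[c, c] = True                          # seed particle
    steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    r_max = 1                                  # current cluster radius bound
    for _ in range(n_particles):
        stuck = False
        while not stuck:
            # Launch on a circle just outside the cluster.
            theta = rng.uniform(0, 2 * np.pi)
            r = r_max + 2
            x = int(c + r * np.cos(theta))
            y = int(c + r * np.sin(theta))
            while True:
                # Stick if any 4-neighbor is occupied.
                if (grid[x + 1, y] or grid[x - 1, y]
                        or grid[x, y + 1] or grid[x, y - 1]):
                    grid[x, y] = True
                    r_max = max(r_max, int(np.hypot(x - c, y - c)) + 1)
                    stuck = True
                    break
                dx, dy = steps[rng.integers(4)]
                x += dx
                y += dy
                if (x - c) ** 2 + (y - c) ** 2 > (r_max + 10) ** 2:
                    break                      # wandered off: relaunch
    return grid
```

    The resulting aggregate shows the branched, fractal morphology that the chapter uses to describe the first steps of silica deposition on an organic template.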

  3. Gradient scaling for nonuniform meshes

    SciTech Connect

    Margolin, L.G.; Ruppel, H.M.; Demuth, R.B.

    1985-01-01

    This paper is concerned with the effect of nonuniform meshes on the accuracy of finite-difference calculations of fluid flow. In particular, when a simple shock propagates through a nonuniform mesh, one may fail to model the jump conditions across the shock even when the equations are differenced in manifestly conservative fashion. We develop an approximate dispersion analysis of the numerical equations and identify the source of the mesh dependency with the form of the artificial viscosity. We then derive an algebraic correction to the numerical equations - a scaling factor for the pressure gradient - to essentially eliminate the mesh dependency. We present several calculations to illustrate our theory. We conclude with an alternate interpretation of our results. 14 refs., 5 figs.
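
    The paper's pressure-gradient scaling factor is tied to their artificial-viscosity form, but the generic nonuniform-mesh effect is easy to demonstrate: a naive central difference on a nonuniform mesh loses an order of accuracy, while weighting by the local spacings restores it. A small illustration (our own, not the paper's correction):

```python
import numpy as np

def naive_central(f, x):
    """(f[i+1] - f[i-1]) / (x[i+1] - x[i-1]): exact for quadratics
    only when the mesh is uniform."""
    return (f[2:] - f[:-2]) / (x[2:] - x[:-2])

def weighted_central(f, x):
    """Spacing-weighted 3-point difference, exact for quadratics
    on any mesh."""
    hp = x[2:] - x[1:-1]           # forward spacing h+
    hm = x[1:-1] - x[:-2]          # backward spacing h-
    return (hm**2 * f[2:] - hp**2 * f[:-2]
            + (hp**2 - hm**2) * f[1:-1]) / (hp * hm * (hp + hm))
```

    On a quadratic, the naive formula picks up an error of (h+ − h−), which is exactly the kind of mesh-dependent term the authors' algebraic correction is designed to remove.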

  4. Scaling behavior in interference lithography

    SciTech Connect

    Agayan, R.R.; Banyai, W.C.; Fernandez, A.

    1998-02-27

    Interference lithography is an emerging technology that provides a means for achieving high resolution over large exposure areas (approximately 1 m^2) with virtually unlimited depth of field. One- and two-dimensional arrays of deep submicron structures can be created using near i-line wavelengths and standard resist processing. In this paper, we report on recent advances in the development of this technology, focusing, in particular, on how exposure latitude and resist profile scale with the interference period. We present structure width vs. dose curves for periods ranging from 200 nm to 1 µm, demonstrating that deep submicron structures can be generated with exposure latitudes exceeding 30%. Our experimental results are compared to simulations based on PROLITIV2.

  5. Small Scale High Speed Turbomachinery

    NASA Technical Reports Server (NTRS)

    London, Adam P. (Inventor); Droppers, Lloyd J. (Inventor); Lehman, Matthew K. (Inventor); Mehra, Amitav (Inventor)

    2015-01-01

    A small scale, high speed turbomachine is described, as well as a process for manufacturing the turbomachine. The turbomachine is manufactured by diffusion bonding stacked sheets of metal foil, each of which has been pre-formed to correspond to a cross section of the turbomachine structure. The turbomachines include rotating elements as well as static structures. Using this process, turbomachines may be manufactured with rotating elements that have outer diameters of less than four inches in size, and/or blading heights of less than 0.1 inches. The rotating elements of the turbomachines are capable of rotating at speeds in excess of 150 feet per second. In addition, cooling features may be added internally to blading to facilitate cooling in high temperature operations.

  6. Bacterial Communities: Interactions to Scale

    PubMed Central

    Stubbendieck, Reed M.; Vargas-Bautista, Carol; Straight, Paul D.

    2016-01-01

    In the environment, bacteria live in complex multispecies communities. These communities span in scale from small, multicellular aggregates to billions or trillions of cells within the gastrointestinal tract of animals. The dynamics of bacterial communities are determined by pairwise interactions that occur between different species in the community. Though interactions occur between a few cells at a time, the outcomes of these interchanges have ramifications that ripple through many orders of magnitude, and ultimately affect the macroscopic world including the health of host organisms. In this review we cover how bacterial competition influences the structures of bacterial communities. We also emphasize methods and insights garnered from culture-dependent pairwise interaction studies, metagenomic analyses, and modeling experiments. Finally, we argue that the integration of multiple approaches will be instrumental to future understanding of the underlying dynamics of bacterial communities. PMID:27551280

  7. Large Scale Nanolaminate Deformable Mirror

    SciTech Connect

    Papavasiliou, A; Olivier, S; Barbee, T; Miles, R; Chang, K

    2005-11-30

    This work concerns the development of a technology that uses Nanolaminate foils to form light-weight, deformable mirrors that are scalable over a wide range of mirror sizes. While MEMS-based deformable mirrors and spatial light modulators have considerably reduced the cost and increased the capabilities of adaptive optic systems, there has not been a way to utilize the advantages of lithography and batch-fabrication to produce large-scale deformable mirrors. This technology is made scalable by using fabrication techniques and lithography that are not limited to the sizes of conventional MEMS devices. Like many MEMS devices, these mirrors use parallel plate electrostatic actuators. This technology replicates that functionality by suspending a horizontal piece of nanolaminate foil over an electrode by electroplated nickel posts. This actuator is attached, with another post, to another nanolaminate foil that acts as the mirror surface. Most MEMS devices are produced with integrated circuit lithography techniques that are capable of very small line widths, but are not scalable to large sizes. This technology is very tolerant of lithography errors and can use coarser, printed circuit board lithography techniques that can be scaled to very large sizes. These mirrors use small, lithographically defined actuators and thin nanolaminate foils allowing them to produce deformations over a large area while minimizing weight. This paper will describe a staged program to develop this technology. First-principles models were developed to determine design parameters. Three stages of fabrication will be described starting with a 3 x 3 device using conventional metal foils and epoxy to a 10-across all-metal device with nanolaminate mirror surfaces.

  8. Full-Scale Tunnel (FST)

    NASA Technical Reports Server (NTRS)

    1930-01-01

Construction of Full-Scale Tunnel (FST). In November 1929, Smith DeFrance submitted his recommendations for the general design of the Full Scale Wind Tunnel. The last on his list concerned the division of labor required to build this unusual facility. He believed the job had five parts and described them as follows: 'It is proposed that invitations be sent out for bids on five groups of items. The first would be for one contract on the complete structure; second the same as first, including the erection of the cones but not the fabrication, since this would be more of a shipyard job; third would cover structural steel, cover, sash and doors, but not cones or foundation; fourth, foundations; and fifth, fabrication of cones.' DeFrance's memorandum prompted the NACA to solicit estimates from a large number of companies. Preliminary designs and estimates were prepared and submitted to the Bureau of the Budget and Congress appropriated funds on February 20, 1929. The main construction contract with the J.A. Jones Company of Charlotte, North Carolina was signed one year later on February 12, 1930. It was a peculiar structure as the building's steel framework is visible on the outside of the building. DeFrance described this in NACA TR No. 459: 'The entire equipment is housed in a structure, the outside walls of which serve as the outer walls of the return passages. The over-all length of the tunnel is 434 feet 6 inches, the width 222 feet, and the maximum height 97 feet. The framework is of structural steel....' (pp. 292-293).

  9. Full-Scale Tunnel (FST)

    NASA Technical Reports Server (NTRS)

    1930-01-01

    Construction of Full-Scale Tunnel (FST): 120-Foot Truss hoisting, one and two point suspension. In November 1929, Smith DeFrance submitted his recommendations for the general design of the Full Scale Wind Tunnel. The last on his list concerned the division of labor required to build this unusual facility. He believed the job had five parts and described them as follows: 'It is proposed that invitations be sent out for bids on five groups of items. The first would be for one contract on the complete structure; second the same as first, including the erection of the cones but not the fabrication, since this would be more of a shipyard job; third would cover structural steel, cover, sash and doors, but not cones or foundation; fourth, foundations; and fifth, fabrication of cones.' DeFrance's memorandum prompted the NACA to solicit estimates from a large number of companies. Preliminary designs and estimates were prepared and submitted to the Bureau of the Budget and Congress appropriated funds on February 20, 1929. The main construction contract with the J.A. Jones Company of Charlotte, North Carolina was signed one year later on February 12, 1930. It was a peculiar structure as the building's steel framework is visible on the outside of the building. DeFrance described this in NACA TR No. 459: 'The entire equipment is housed in a structure, the outside walls of which serve as the outer walls of the return passages. The over-all length of the tunnel is 434 feet 6 inches, the width 222 feet, and the maximum height 97 feet. The framework is of structural steel....' (pp. 292-293)

  10. Small-scale field experiments accurately scale up to predict density dependence in reef fish populations at large scales

    PubMed Central

    Steele, Mark A.; Forrester, Graham E.

    2005-01-01

    Field experiments provide rigorous tests of ecological hypotheses but are usually limited to small spatial scales. It is thus unclear whether these findings extrapolate to larger scales relevant to conservation and management. We show that the results of experiments detecting density-dependent mortality of reef fish on small habitat patches scale up to have similar effects on much larger entire reefs that are the size of small marine reserves and approach the scale at which some reef fisheries operate. We suggest that accurate scaling is due to the type of species interaction causing local density dependence and the fact that localized events can be aggregated to describe larger-scale interactions with minimal distortion. Careful extrapolation from small-scale experiments identifying species interactions and their effects should improve our ability to predict the outcomes of alternative management strategies for coral reef fishes and their habitats. PMID:16150721

  11. Small-scale field experiments accurately scale up to predict density dependence in reef fish populations at large scales.

    PubMed

    Steele, Mark A; Forrester, Graham E

    2005-09-20

    Field experiments provide rigorous tests of ecological hypotheses but are usually limited to small spatial scales. It is thus unclear whether these findings extrapolate to larger scales relevant to conservation and management. We show that the results of experiments detecting density-dependent mortality of reef fish on small habitat patches scale up to have similar effects on much larger entire reefs that are the size of small marine reserves and approach the scale at which some reef fisheries operate. We suggest that accurate scaling is due to the type of species interaction causing local density dependence and the fact that localized events can be aggregated to describe larger-scale interactions with minimal distortion. Careful extrapolation from small-scale experiments identifying species interactions and their effects should improve our ability to predict the outcomes of alternative management strategies for coral reef fishes and their habitats. PMID:16150721

  12. Scaling Properties of Shoreline Change: Process Implications

    NASA Astrophysics Data System (ADS)

    Murray, A.; Lazarus, E.; Ashton, A. D.; Tebbens, S. F.; Burroughs, S. M.

    2011-12-01

Using shoreline-change measurements of two open-ocean reaches of the North Carolina Outer Banks, U.S.A., we explore an existing premise that shoreline change on a sandy coast is a self-affine signal, wherein patterns of change are scale-invariant. Wavelet analysis confirms that the mean variance (spectral power) of shoreline change can be approximated by a power law at alongshore scales from tens of meters up to a few kilometers. In some systems, a power law reflects the presence of a unifying process or interaction that spans the scales of the power law. Classic examples include turbulent fluids, networks of interacting faults/earthquakes, and fluvially sculpted landscapes. However, an approximately linear portion of a spectrum in a log-log plot does not necessarily indicate a scale-free, dominant process, as the shoreline-change spectrum exemplifies; distinct processes dominate different scale ranges within the range of the approximate power law. Why an amalgamation of scale-dependent processes often produces an approximately linear portion of a spectrum remains an intriguing question. The shoreline-change spectra also illustrate the point that deviations from approximate power-law scaling can themselves be interesting. At scales of kilometers to tens of kilometers, the spectra exhibit a maximum of the variance (not related to finite-domain-size effects). Both the magnitude of the variance in this broad peak and the spatial scale at which that maximum occurs increase when shoreline change is measured over longer time scales (up to decadal). The scaling relationship between the time and spatial scales of this peak suggests a large-scale diffusion of coastline shape (possibly driven by gradients in alongshore sediment flux related to large-scale coastline curvature). Recent analysis of shoreline curvature, and of change in curvature for shoreline changes spanning hurricane-related wave events, shows that large-scale coastline-shape anti-diffusion can occur during extreme storms.
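The self-affinity test described above can be sketched numerically. A minimal illustration, using a structure-function estimate of the scaling exponent in place of the wavelet variance the abstract actually uses (both probe the same power-law scaling); the function name, lags, and ramp test case are invented for illustration:

```python
import math

def structure_function_exponent(signal, lags):
    """Log-log slope of the second-order structure function
    S(l) = <(x(i+l) - x(i))^2>. For a self-affine profile S(l) ~ l^(2H),
    so the returned slope equals 2H."""
    log_l, log_s = [], []
    for lag in lags:
        diffs = [(signal[i + lag] - signal[i]) ** 2
                 for i in range(len(signal) - lag)]
        log_l.append(math.log(lag))
        log_s.append(math.log(sum(diffs) / len(diffs)))
    n = len(lags)
    xbar, ybar = sum(log_l) / n, sum(log_s) / n
    # Ordinary least-squares slope in log-log space
    return (sum((x - xbar) * (y - ybar) for x, y in zip(log_l, log_s))
            / sum((x - xbar) ** 2 for x in log_l))

# Sanity check on a linear shoreline trend: increments grow linearly
# with lag, so S(l) ~ l^2 and the recovered slope is exactly 2.
ramp = [0.1 * i for i in range(200)]
slope = structure_function_exponent(ramp, [1, 2, 4, 8, 16])
```

On real shoreline-change data the interesting behavior is precisely where this single fitted slope fails, as the abstract emphasizes.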

  13. Developing a new apathy measurement scale: Dimensional Apathy Scale.

    PubMed

    Radakovic, Ratko; Abrahams, Sharon

    2014-11-30

Apathy is both a symptom and a syndrome prevalent in neurodegenerative disease, including motor system disorders, that affects motivation to perform goal-directed functions. Levy and Dubois (2006) suggested three apathetic subtypes, Cognitive, Emotional-affective and Auto-activation, each with discrete neural correlates and functional impairments. The aim of this study was to create a new apathy measure, the Dimensional Apathy Scale (DAS), which assesses apathetic subtypes and is suitable for use in patient groups with motor dysfunction. 311 healthy participants (mean = 37.4, S.D. = 15.0) completed a 45-item questionnaire. Horn's parallel analysis of principal factors and Exploratory Factor Analysis resulted in 4 factors (Executive, Emotional, Cognitive Initiation and Behavioural Initiation) that account for 28.9% of the total variance. Twenty-four items were subsequently extracted to form 3 subscales: Executive, Emotional and Behavioural/Cognitive Initiation. The subscale items show good internal consistency reliability. A weak to moderate relationship with depression was found using the Beck Depression Inventory-II. The DAS is a well-constructed method for assessing multidimensional apathy, suitable for investigating this syndrome in different disease pathologies. PMID:24972546

  14. Fluctuation scaling, Taylor's law, and crime.

    PubMed

    Hanley, Quentin S; Khatun, Suniya; Yosef, Amal; Dyer, Rachel-May

    2014-01-01

Fluctuation scaling relationships have been observed in a wide range of processes ranging from internet router traffic to measles cases. Taylor's law is one such scaling relationship and has been widely applied in ecology to understand communities including trees, birds, human populations, and insects. We show that monthly crime reports in the UK show complex fluctuation scaling which can be approximated by Taylor's law relationships corresponding to local policing neighborhoods and larger regional and countrywide scales. Regression models applied to local scale data from Derbyshire and Nottinghamshire found that different categories of crime exhibited different scaling exponents with no significant difference between the two regions. On this scale, violence reports were close to a Poisson distribution (α = 1.057 ± 0.026) while burglary exhibited a greater exponent (α = 1.292 ± 0.029) indicative of temporal clustering. These two regions exhibited significantly different pre-exponential factors for the categories of anti-social behavior and burglary, indicating that local variations in crime reports can be assessed using fluctuation scaling methods. At regional and countrywide scales, all categories exhibited scaling behavior indicative of temporal clustering, evidenced by Taylor's law exponents from 1.43 ± 0.12 (Drugs) to 2.094 ± 0.081 (Other Crimes). Investigating crime behavior via fluctuation scaling gives insight beyond that of raw numbers; it reports on all processes contributing to the observed variance and is either robust to, or exhibits signs of, many types of data manipulation. PMID:25271781
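The Taylor's law fit behind exponents like these is an ordinary least-squares regression of log variance on log mean across units (e.g. policing neighborhoods). A minimal sketch; the function name and the synthetic mean/variance pairs are invented for illustration, while the paper's α values come from real crime-report data:

```python
import math

def taylor_law_fit(means, variances):
    """Fit Taylor's law V = a * M**alpha by least squares in log-log space.
    Each (mean, variance) pair would come from one spatial unit sampled
    over time. Returns (a, alpha)."""
    xs = [math.log(m) for m in means]
    ys = [math.log(v) for v in variances]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    alpha = sxy / sxx                      # slope = Taylor exponent
    a = math.exp(ybar - alpha * xbar)      # intercept -> pre-exponential factor
    return a, alpha

# Synthetic check with a known exponent (1.3) and pre-factor (2.0):
means = [1.0, 2.0, 5.0, 10.0, 50.0, 100.0]
variances = [2.0 * m ** 1.3 for m in means]
a, alpha = taylor_law_fit(means, variances)
```

An α near 1 indicates Poisson-like counts (as found for violence), while α noticeably above 1 indicates the temporal clustering the paper reports for burglary.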

  15. Effects of scale on internal blast measurements

    NASA Astrophysics Data System (ADS)

    Granholm, R.; Sandusky, H.; Lee, R.

    2014-05-01

    This paper presents a comparative study between large and small-scale internal blast experiments with the goal of using the small-scale analog for energetic performance evaluation. In the small-scale experiment, highly confined explosive samples <0.5 g were subjected to the output from a PETN detonator while enclosed in a 3-liter chamber. Large-scale tests up to 23 kg were unconfined and released in a chamber with a factor of 60,000 increase in volume. The comparative metric in these experiments is peak quasi-static overpressure, with the explosive sample expressed as sample energy/chamber volume, which normalizes measured pressures across scale. Small-scale measured pressures were always lower than the large-scale measurements, because of heat-loss to the high confinement inherent in the small-scale apparatus. This heat-loss can be quantified and used to correct the small-scale pressure measurements. In some cases the heat-loss was large enough to quench reaction of lower energy samples. These results suggest that small-scale internal blast tests do correlate with their large-scale counterparts, provided that heat-loss to confinement can be measured, and that less reactive or lower energy samples are not quenched by heat-loss.
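The energy/volume normalization can be made concrete with the textbook ideal-gas estimate of quasi-static overpressure, ΔP = (γ − 1)E/V: equal energy per unit chamber volume predicts comparable pressure across scales. A sketch under that assumption; the sample energies are illustrative, not the paper's data, and no heat-loss correction is applied:

```python
def quasi_static_overpressure(energy_j, volume_m3, gamma=1.4):
    """Ideal-gas estimate of peak quasi-static overpressure (Pa) in a
    closed chamber: dP = (gamma - 1) * E / V. Equal E/V therefore
    predicts equal pressure, which is the normalization used to compare
    tests across scale. Heat loss to confinement is ignored here."""
    return (gamma - 1.0) * energy_j / volume_m3

# Illustrative energies (~4 MJ/kg, not the paper's data): a 0.5 g sample
# in the 3-litre chamber vs 23 kg in a chamber 60,000x larger by volume.
p_small = quasi_static_overpressure(0.0005 * 4e6, 0.003)
p_large = quasi_static_overpressure(23.0 * 4e6, 0.003 * 60000)
```

In the paper's measurements the small-scale pressures fall below this ideal-gas expectation, which is exactly the heat-loss signature that must be corrected for.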

  16. Mechanisms of scaling in pattern formation

    PubMed Central

    Umulis, David M.; Othmer, Hans G.

    2013-01-01

    Many organisms and their constituent tissues and organs vary substantially in size but differ little in morphology; they appear to be scaled versions of a common template or pattern. Such scaling involves adjusting the intrinsic scale of spatial patterns of gene expression that are set up during development to the size of the system. Identifying the mechanisms that regulate scaling of patterns at the tissue, organ and organism level during development is a longstanding challenge in biology, but recent molecular-level data and mathematical modeling have shed light on scaling mechanisms in several systems, including Drosophila and Xenopus. Here, we investigate the underlying principles needed for understanding the mechanisms that can produce scale invariance in spatial pattern formation and discuss examples of systems that scale during development. PMID:24301464

  17. Invariant relationships deriving from classical scaling transformations

    SciTech Connect

    Bludman, Sidney; Kennedy, Dallas C.

    2011-04-15

    Because scaling symmetries of the Euler-Lagrange equations are generally not variational symmetries of the action, they do not lead to conservation laws. Instead, an extension of Noether's theorem reduces the equations of motion to evolutionary laws that prove useful, even if the transformations are not symmetries of the equations of motion. In the case of scaling, symmetry leads to a scaling evolutionary law, a first-order equation in terms of scale invariants, linearly relating kinematic and dynamic degrees of freedom. This scaling evolutionary law appears in dynamical and in static systems. Applied to dynamical central-force systems, the scaling evolutionary equation leads to generalized virial laws, which linearly connect the kinetic and potential energies. Applied to barotropic hydrostatic spheres, the scaling evolutionary equation linearly connects the gravitational and internal energy densities. This implies well-known properties of polytropes, describing degenerate stars and chemically homogeneous nondegenerate stellar cores.
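The generalized virial laws mentioned above linearly connect kinetic and potential energies; for the textbook case of a central power-law potential that linear connection is explicit (a standard result, stated here for orientation rather than taken from the paper):

```latex
% Time-averaged virial theorem for a central power-law potential:
2\,\langle T \rangle \;=\; n\,\langle V \rangle,
\qquad V(r) = k\,r^{\,n}.
% Gravity (n = -1) gives 2<T> = -<V>;
% the harmonic oscillator (n = 2) gives <T> = <V>.
```

The scaling evolutionary law generalizes relations of this form beyond time-averaged, bound central-force systems, e.g. to hydrostatic spheres, where it linearly connects gravitational and internal energy densities.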

  18. Effect of Velocity in Icing Scaling Tests

    NASA Technical Reports Server (NTRS)

    Anderson, David N.; Bond, Thomas H. (Technical Monitor)

    2003-01-01

    This paper presents additional results of a study first published in 1999 to determine the effect of scale velocity on scaled icing test results. Reference tests were made with a 53.3-cm-chord NACA 0012 airfoil model in the NASA Glenn Icing Research Tunnel at an airspeed of 67 m/s, an MVD of 40 microns, and an LWC of 0.6 g/cu m. Temperature was varied to provide nominal freezing fractions of 0.8, 0.6, and 0.5. Scale tests used both 35.6- and 27.7-cm-chord 0012 models for 2/3- and 1/2-size scaling. Scale test conditions were found using the modified Ruff (AEDC) scaling method with the scale velocity determined in five ways. Four of the scale velocities were found by matching the scale and reference values of water-film thickness, velocity, Weber number, and Reynolds number. The fifth scale velocity was simply the average of those found by matching the Weber and Reynolds numbers. The resulting scale velocities ranged from 85 to 220 percent of the reference velocity. For a freezing fraction of 0.8, the value of the scale velocity had no effect on how well the scale ice shape simulated the reference shape. For nominal freezing fractions of 0.5 and 0.6, the best simulation of the reference shape was achieved when the scale velocity was the average of the constant-Weber-number and the constant-Reynolds-number velocities.
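Two of the five scale-velocity choices follow directly from similarity arguments: with fluid properties held fixed, We ∝ V²c and Re ∝ Vc, so matching each between reference and scale model fixes the scale velocity, and the fifth option is their average. A sketch under those assumptions (function name illustrative; the water-film-thickness match requires more of the icing physics and is omitted):

```python
import math

def scale_velocities(v_ref, c_ref, c_scale):
    """Scale-test velocities from matching Weber and Reynolds numbers.
    With fluid properties fixed, We ~ V^2 * c and Re ~ V * c, so
    equating each between reference and scale model gives the
    velocities below. Returns (v_weber, v_reynolds, v_average)."""
    v_weber = v_ref * math.sqrt(c_ref / c_scale)   # constant Weber number
    v_reynolds = v_ref * (c_ref / c_scale)         # constant Reynolds number
    v_average = 0.5 * (v_weber + v_reynolds)       # the paper's fifth option
    return v_weber, v_reynolds, v_average

# 2/3-size model (35.6 cm chord) from the 53.3 cm reference at 67 m/s:
v_we, v_re, v_avg = scale_velocities(67.0, 53.3, 35.6)
```

Because the smaller model demands a higher Reynolds-match velocity than Weber-match velocity, the averaged option splits the difference, consistent with the paper's finding that the average gave the best ice-shape simulation at the lower freezing fractions.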

  19. Anticipated adaptation or scale recalibration?

    PubMed Central

    2013-01-01

Background: The aim of our study was to investigate anticipated adaptation among patients in the subacute phase of Spinal Cord Injury (SCI). Methods: We used an observational longitudinal design. Patients with SCI (N = 44) rated their actual, previous and expected future Quality of Life (QoL) at three time points: within two weeks of admission to the rehabilitation center (RC), a few weeks before discharge from the RC, and at least three months after discharge. We compared the expected future rating at the second time point with the actual ratings at the third time point, using Student's t-tests. To gain insight into scale recalibration we also compared actual and previous ratings. Results: At the group level, patients overpredicted their improvement on the VAS. Actual health at T3 (M = 0.65, sd = 0.20) was significantly lower than the health predicted at T1 for T3 (M = 0.76, sd = 0.1; t(43) = 3.24, p < 0.01) and at T2 for T3 (M = 0.75, sd = 0.13; t(43) = 3.44, p < 0.001). Similarly, the health recalled at T3 for T2 (M = 0.59, sd = 0.18) was significantly lower than the actual health at T2 (M = 0.67, sd = 0.15; t(43) = 3.26, p < 0.01). Patients thus rated their future and past health inaccurately compared to their actual ratings on the VAS. In contrast, on the TTO patients gave accurate estimates of their future and previous health, and they also accurately valued their previous health. Looking at individual ratings, the numbers of respondents with accurate estimates of their future and previous health were similar between the VAS and TTO. However, the Bland-Altman plots show that the deviation in accuracy is larger for the TTO than the VAS; that is, 95% of the respondents were less accurate on the TTO than on the VAS. Conclusions: Patients at the onset of a disability were able to anticipate adaptation. Valuations given on the VAS seem to be biased by scale recalibration. PMID:24139246
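The group-level comparisons reported above (e.g. health predicted at T1 for T3 versus health actually reported at T3) reduce to Student's paired t-test. A minimal sketch with invented toy ratings, not the study's data:

```python
import math

def paired_t(x, y):
    """Paired Student's t statistic for two sets of ratings from the
    same respondents (e.g. predicted vs actual health on a 0-1 VAS).
    Returns (t, degrees of freedom). Illustrative sketch; the study
    would have used standard statistical software."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean_d = sum(d) / n
    sd_d = math.sqrt(sum((v - mean_d) ** 2 for v in d) / (n - 1))
    return mean_d / (sd_d / math.sqrt(n)), n - 1

# Toy data: predictions systematically above the later actual ratings,
# mimicking overprediction of improvement.
predicted = [0.80, 0.75, 0.70, 0.78, 0.72, 0.76]
actual    = [0.66, 0.61, 0.70, 0.60, 0.65, 0.63]
t, df = paired_t(predicted, actual)
```

A large positive t here, as in the study's t(43) values, indicates that predicted health significantly exceeded subsequently reported health.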

  20. Scales and scaling in turbulent ocean sciences; physics-biology coupling

    NASA Astrophysics Data System (ADS)

    Schmitt, Francois

    2015-04-01

Geophysical fields exhibit huge fluctuations over many spatial and temporal scales. In the ocean, this property at smaller scales is closely linked to marine turbulence. The velocity field varies from large scales down to the Kolmogorov scale (mm), and scalar fields from large scales down to the Batchelor scale, which is often much smaller. As a consequence, it is not always simple to determine at which scale a process should be considered. The scale question is hence fundamental in marine sciences, especially when dealing with physics-biology coupling. For example, marine dynamical models typically have a grid size of a hundred meters or more, which is more than 10^5 times larger than the smallest turbulence scale (the Kolmogorov scale). Such a scale is fine for the dynamics of a whale (around 100 m), but for a fish larva (1 cm) or a copepod (1 mm) a description at smaller scales is needed, owing to the nonlinear nature of turbulence. The same holds for biogeochemical fields such as passive and active tracers (oxygen, fluorescence, nutrients, pH, turbidity, temperature, salinity...). In this framework, we discuss the scale problem in turbulence modeling in the ocean, and the relation of the Kolmogorov and Batchelor scales of ocean turbulence to the sizes of marine animals. We also consider scaling laws for organism-particle Reynolds numbers (from whales to bacteria), and possible scaling laws for organisms' accelerations.

  1. Effect of Violating Unidimensional Item Response Theory Vertical Scaling Assumptions on Developmental Score Scales

    ERIC Educational Resources Information Center

    Topczewski, Anna Marie

    2013-01-01

    Developmental score scales represent the performance of students along a continuum, where as students learn more they move higher along that continuum. Unidimensional item response theory (UIRT) vertical scaling has become a commonly used method to create developmental score scales. Research has shown that UIRT vertical scaling methods can be…

  2. Toward Increasing Fairness in Score Scale Calibrations Employed in International Large-Scale Assessments

    ERIC Educational Resources Information Center

    Oliveri, Maria Elena; von Davier, Matthias

    2014-01-01

    In this article, we investigate the creation of comparable score scales across countries in international assessments. We examine potential improvements to current score scale calibration procedures used in international large-scale assessments. Our approach seeks to improve fairness in scoring international large-scale assessments, which often…

  3. Do Plot Scale Studies Yield Useful Data When Assessing Field Scale Practices?

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Plot scale data has been used to develop models used to assess field and watershed scale nutrient losses. The objective of this study was to determine if phosphorus (P) loss results from plot scale rainfall simulation studies are “directionally correct” when compared to field scale P losses. Two fie...

  4. Scaling Agile Infrastructure to People

    NASA Astrophysics Data System (ADS)

    Jones, B.; McCance, G.; Traylen, S.; Barrientos Arias, N.

    2015-12-01

When CERN migrated its infrastructure away from homegrown fabric management tools to emerging industry-standard open-source solutions, the immediate technical challenges and motivation were clear. The move to a multi-site Cloud Computing model meant that the tool chains growing around this ecosystem would be a good choice; the challenge was to leverage them. The use of open-source tools brings challenges other than merely how to deploy them. Homegrown software, for all the deficiencies identified at the outset of the project, has the benefit of growing with the organization. This paper examines the challenges of adapting open-source tools to the needs of the organization, particularly in the areas of multi-group development and security. Additionally, the increase in scale of the plant required changes to how Change Management was organized and managed. Continuous Integration techniques are used to manage the rate of change across multiple groups, and the tools and workflow for this are examined.

  5. Large Scale Homing in Honeybees

    PubMed Central

    Pahl, Mario; Zhu, Hong; Tautz, Jürgen; Zhang, Shaowu

    2011-01-01

    Honeybee foragers frequently fly several kilometres to and from vital resources, and communicate those locations to their nest mates by a symbolic dance language. Research has shown that they achieve this feat by memorizing landmarks and the skyline panorama, using the sun and polarized skylight as compasses and by integrating their outbound flight paths. In order to investigate the capacity of the honeybees' homing abilities, we artificially displaced foragers to novel release spots at various distances up to 13 km in the four cardinal directions. Returning bees were individually registered by a radio frequency identification (RFID) system at the hive entrance. We found that homing rate, homing speed and the maximum homing distance depend on the release direction. Bees released in the east were more likely to find their way back home, and returned faster than bees released in any other direction, due to the familiarity of global landmarks seen from the hive. Our findings suggest that such large scale homing is facilitated by global landmarks acting as beacons, and possibly the entire skyline panorama. PMID:21602920

  6. Electroweak-scale resonant leptogenesis

    SciTech Connect

    Pilaftsis, Apostolos; Underwood, Thomas E.J.

    2005-12-01

We study minimal scenarios of resonant leptogenesis near the electroweak phase transition. These models offer a number of testable phenomenological signatures for low-energy experiments and future high-energy colliders. Our study extends previous analyses of the relevant network of Boltzmann equations, consistently taking into account effects from out-of-equilibrium sphalerons and single lepton flavors. We show that the effects from single lepton flavors become very important in variants of resonant leptogenesis, where the observed baryon asymmetry in the Universe is created by lepton-to-baryon conversion of an individual lepton number, for example, that of the τ-lepton. The predictions of such resonant τ-leptogenesis models for the final baryon asymmetry are almost independent of the initial lepton-number and heavy-neutrino abundances. These models accommodate the current neutrino data and have a number of testable phenomenological implications. They contain electroweak-scale heavy Majorana neutrinos with appreciable couplings to electrons and muons, which can be probed at future e⁺e⁻ and μ⁺μ⁻ high-energy colliders. In particular, resonant τ-leptogenesis models predict sizable 0νββ decay, as well as e- and μ-number-violating processes, such as μ→eγ and μ→e conversion in nuclei, with rates that are within reach of the experiments proposed by the MEG and MECO collaborations.

  7. Challenges to Scaling CIGS Photovoltaics

    NASA Astrophysics Data System (ADS)

    Stanbery, B. J.

    2011-03-01

The challenges of scaling any photovoltaic technology to terawatts of global capacity are arguably more economic than technological or resource constraints. All commercial thin-film PV technologies are based on direct-bandgap semiconductors whose absorption coefficient and bandgap alignment with the solar spectrum enable micron-thick coatings, in lieu of the hundreds of microns required with indirect-bandgap c-Si. Although thin-film PV reduces semiconductor materials cost, its manufacture is more capital intensive than c-Si production and strongly dependent on deposition rate. Only when combined with sufficient efficiency and cost of capital does this tradeoff yield lower manufacturing cost. CIGS has the potential to become the first thin-film technology to achieve the terawatt benchmark because of its superior conversion efficiency, which makes it the only commercial thin-film technology that demonstrably delivers performance comparable to the dominant incumbent, c-Si. Since module performance leverages total system cost, this competitive advantage bears directly on CIGS' potential to displace c-Si and attract the requisite capital to finance the tens of gigawatts of annual production capacity needed to manufacture terawatts of PV modules apace with global demand growth.

  8. Mirages in galaxy scaling relations

    NASA Astrophysics Data System (ADS)

    Mosenkov, A. V.; Sotnikova, N. Ya.; Reshetnikov, V. P.

    2014-06-01

We analysed several basic correlations between structural parameters of galaxies. The data were taken from various samples in different passbands available in the literature. We discuss disc scaling relations, some debatable issues concerning the so-called Photometric Plane for bulges and elliptical galaxies in its different forms, and various versions of the famous Kormendy relation. We show that some of the correlations under discussion are artificial (self-correlations), while others truly reveal new essential details of the structural properties of galaxies. Our main results are as follows. At present, we cannot conclude that faint stellar discs are, on average, thinner than discs in high-surface-brightness galaxies. The `central surface brightness-thickness' correlation appears only as a consequence of using the transparent exponential disc model to describe real galaxy discs. The Photometric Plane appears to have no independent physical meaning. Various forms of this plane are merely sophisticated versions of the Kormendy relation, or of the self-relation involving the central surface brightness of a bulge/elliptical galaxy and the Sérsic index n. The Kormendy relation is a physical correlation, presumably reflecting the difference in the origin of bright and faint ellipticals and bulges. We present arguments, involving the creation of artificial samples, to support our main conclusions.

  9. Entities on a Temporal Scale.

    PubMed

    Murray, Christopher M; Crother, Brian I

    2016-03-01

Ontological understanding of biological units (i.e., what kinds of things they are) is crucial to their use in experimental design, analysis, and interpretation. Conceptualizing fundamental units in biology as individuals or classes is important for the subsequent development of discovery operations. While the criteria for diagnosing individuals are acknowledged, temporal boundedness is often misinterpreted and temporal minima are applied to units in question. This results in misdiagnosis, or abandonment of ontological interpretation altogether. Biological units such as areas of endemism in biogeography and species in evolutionary biology fall victim to such problems. Our goal here is to address the misconception that biological individuals such as species and areas of endemism have a temporal minimum. Areas of endemism can persist within small temporal boundaries in the context of metapopulation dynamics, island biogeography, and range expansion and contraction. Similarly, lineage reticulation illustrates examples of short-lived species. Here, examples of known entities are provided to illustrate their persistence on short time scales, in an attempt to rescue future interpretation of biological units from ontological misdiagnosis, elucidate the philosophical individuality of areas of endemism and species with short lifespans, and provide justification for the "snapshot in time" diagnostic approach. PMID:26342483

  10. Harmonic regression and scale stability.

    PubMed

    Lee, Yi-Hsuan; Haberman, Shelby J

    2013-10-01

    Monitoring a very frequently administered educational test with a relatively short history of stable operation imposes a number of challenges. Test scores usually vary by season, and the frequency of administration of such educational tests is also seasonal. Although it is important to react to unreasonable changes in the distributions of test scores in a timely fashion, it is not a simple matter to ascertain what sort of distribution is really unusual. Many commonly used approaches for seasonal adjustment are designed for time series with evenly spaced observations that span many years and, therefore, are inappropriate for data from such educational tests. Harmonic regression, a seasonal-adjustment method, can be useful in monitoring scale stability when the number of years available is limited and when the observations are unevenly spaced. Additional forms of adjustments can be included to account for variability in test scores due to different sources of population variations. To illustrate, real data are considered from an international language assessment. PMID:24092490
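    As a rough illustration of the approach (not the authors' model, and with invented data), harmonic regression can be fitted by ordinary least squares even when observations are unevenly spaced, because the sine and cosine regressors are simply evaluated at the actual administration dates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Unevenly spaced administration dates over ~3 years (in years).
t = np.sort(rng.uniform(0.0, 3.0, size=120))

# Simulated mean scores: linear drift plus an annual cycle plus noise.
y = 0.3 * t + 1.5 * np.sin(2 * np.pi * t) + rng.normal(0, 0.2, t.size)

# Design matrix: intercept, trend, and one annual harmonic (cos, sin).
X = np.column_stack([
    np.ones_like(t),
    t,
    np.cos(2 * np.pi * t),
    np.sin(2 * np.pi * t),
])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
amplitude = np.hypot(beta[2], beta[3])  # estimated seasonal amplitude

print(f"trend = {beta[1]:.2f}, seasonal amplitude = {amplitude:.2f}")
```

    Additional harmonics or population covariates would enter as extra columns of the design matrix.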

  11. Universal scaling in sports ranking

    NASA Astrophysics Data System (ADS)

    Deng, Weibing; Li, Wei; Cai, Xu; Bulou, Alain; Wang, Qiuping A.

    2012-09-01

    Ranking is a ubiquitous phenomenon in human society. On the web pages of Forbes, one may find all kinds of rankings, such as the world's most powerful people, the world's richest people, the highest-earning tennis players, and so on. Here we study a specific kind: sports ranking systems in which players' scores and/or prize money are accrued based on their performances in different matches. By investigating 40 data samples which span 12 different sports, we find that the distributions of scores and/or prize money follow universal power laws, with exponents nearly identical for most sports. In order to understand the origin of this universal scaling we focus on the tennis ranking systems. By checking the data we find that, for any pair of players, the probability that the higher-ranked player tops the lower-ranked opponent is proportional to the rank difference between the pair. Such a dependence can be well fitted to a sigmoidal function. By using this feature, we propose a simple toy model which can simulate the competition of players in different matches. The simulations yield results consistent with the empirical findings. Extensive simulation studies indicate that the model is quite robust with respect to the modifications of some parameters.
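    A toy model of this type can be sketched as follows; the sigmoid steepness, player count, and match count below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

N = 200        # players, ranked 1 (best) to N (worst)
M = 20_000     # random pairwise matches
alpha = 0.02   # steepness of the sigmoidal win probability (assumed)

ranks = np.arange(1, N + 1)
scores = np.zeros(N)

for _ in range(M):
    i, j = rng.choice(N, size=2, replace=False)
    # Sigmoidal dependence of the win probability on the rank difference.
    p_i_wins = 1.0 / (1.0 + np.exp(-alpha * (ranks[j] - ranks[i])))
    winner = i if rng.random() < p_i_wins else j
    scores[winner] += 1.0

top_mean = scores[:20].mean()       # best-ranked players
bottom_mean = scores[-20:].mean()   # worst-ranked players
print(f"top-20 mean score {top_mean:.1f} vs bottom-20 mean score {bottom_mean:.1f}")
```

    Better-ranked players systematically accumulate higher scores; studying the full score distribution across many such runs is how a model like this would be compared with the empirical power laws.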

  12. A small scale honey dehydrator.

    PubMed

    Gill, R S; Hans, V S; Singh, Sukhmeet; Pal Singh, Parm; Dhaliwal, S S

    2015-10-01

    A small scale honey dehydrator has been designed, developed, and tested to reduce the moisture content of honey below 17 %. Experiments have been conducted for honey dehydration by using drying air at ambient temperature, 30 and 40 °C and water at 35, 40 and 45 °C. In this dehydrator, hot water has been circulated in a water jacket around the honey container to heat the honey. The heated honey has been pumped through a sieve to form honey streams through which drying air passes for moisture removal. The honey streams help in increasing the exposed surface area of honey in contact with drying air, thus resulting in faster dehydration of honey. The maximum drying rate per square meter of honey surface exposed to drying air was found to be 197.0 g/h·m², corresponding to drying air and water temperatures of 40 and 45 °C respectively, whereas it was minimum (74.8 g/h·m²) for drying air at ambient temperature (8-17 °C) and water at 35 °C. The energy cost of reducing the honey moisture content from 25.2 to 16.4 % was Rs. 6.20 to Rs. 17.36 (US $0.10 to US $0.28; one US $ = 62.00 Indian rupees in February 2014) per kilogram of honey. PMID:26396418

  13. Meso-scale machining capabilities and issues

    SciTech Connect

    BENAVIDES,GILBERT L.; ADAMS,DAVID P.; YANG,PIN

    2000-05-15

    Meso-scale manufacturing processes are bridging the gap between silicon-based MEMS processes and conventional miniature machining. These processes can fabricate two- and three-dimensional parts having micron-size features in traditional materials such as stainless steels, rare earth magnets, ceramics, and glass. Meso-scale processes that are currently available include focused ion beam sputtering, micro-milling, micro-turning, excimer laser ablation, femtosecond laser ablation, and micro electro discharge machining. These meso-scale processes employ subtractive machining technologies (i.e., material removal), unlike LIGA, which is an additive meso-scale process. Meso-scale processes have different material capabilities and machining performance specifications. Machining performance specifications of interest include minimum feature size, feature tolerance, feature location accuracy, surface finish, and material removal rate. Sandia National Laboratories is developing meso-scale electro-mechanical components, which require meso-scale parts that move relative to one another. The meso-scale parts fabricated by subtractive meso-scale manufacturing processes have unique tribology issues because of the variety of materials and the surface conditions produced by the different meso-scale manufacturing processes.

  14. Further evaluation of traditional icing scaling methods

    NASA Technical Reports Server (NTRS)

    Anderson, David N.

    1996-01-01

    This report provides additional evaluations of two methods to scale icing test conditions; it also describes a hybrid technique for use when scaled conditions are outside the operating envelope of the test facility. The first evaluation is of the Olsen method, which can be used to scale the liquid-water content in icing tests, and the second is of the AEDC (Ruff) method, which is used when the test model is less than full size. Equations for both scaling methods are presented in the paper, and the methods were evaluated by performing icing tests in the NASA Lewis Icing Research Tunnel (IRT). The Olsen method was tested using 53 cm chord NACA 0012 airfoils. Tests covered liquid-water contents which varied by as much as a factor of 1.8. The Olsen method was generally effective in giving scale ice shapes which matched the reference shapes for these tests. The AEDC method was tested with NACA 0012 airfoils with chords from 18 cm to 53 cm. The 53 cm chord airfoils were used in reference tests, and 1/2 and 1/3 scale tests were made at conditions determined by applying the AEDC scaling method. The scale and reference airspeeds were matched in these tests. The AEDC method was found to provide fairly effective scaling for 1/2 size tests, but for 1/3 size models, scaling was generally less effective. In addition to these two scaling methods, a hybrid approach was also tested in which the Olsen method was used to adjust the LWC after size was scaled using the constant Weber number method. This approach was found to be an effective way to test when scaled conditions would otherwise be outside the capability of the test facility.

  15. Thermodynamic scaling behavior in genechips

    PubMed Central

    Ferrantini, Alessandro; Allemeersch, Joke; Van Hummelen, Paul; Carlon, Enrico

    2009-01-01

    Background Affymetrix Genechips are characterized by probe pairs, a perfect match (PM) and a mismatch (MM) probe differing by a single nucleotide. Most of the data preprocessing algorithms neglect MM signals, as it was shown that MMs cannot be used as estimators of the non-specific hybridization as originally proposed by Affymetrix. The aim of this paper is to study in detail, on a large number of experiments, the behavior of the average PM/MM ratio. This is taken as an indicator of the quality of the hybridization and, when compared between different chip series, of the quality of the chip design. Results About 250 different GeneChip hybridizations performed at the VIB Microarray Facility for Homo sapiens, Drosophila melanogaster, and Arabidopsis thaliana were analyzed. The investigation of such a large set of data from the same source minimizes systematic experimental variations that may arise from differences in protocols or from different laboratories. The PM/MM ratios are derived theoretically from thermodynamic laws and a link is made with the sequence of PM and MM probe, more specifically with their central nucleotide triplets. Conclusion The PM/MM ratios subdivided according to the different central nucleotide triplets follow qualitatively those deduced from the hybridization free energies in solution. It is shown also that the PM and MM histograms are related by a simple scale transformation, in agreement with what is to be expected from hybridization thermodynamics. Different quantitative behavior is observed on the different chip organisms analyzed, suggesting that some organism chips have superior probe design compared to others. PMID:19123958

  16. Large-Scale Information Systems

    SciTech Connect

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  17. A framework for multi-scale modelling

    PubMed Central

    Chopard, B.; Borgdorff, Joris; Hoekstra, A. G.

    2014-01-01

    We review a methodology to design, implement and execute multi-scale and multi-science numerical simulations. We identify important ingredients of multi-scale modelling and give a precise definition of them. Our framework assumes that a multi-scale model can be formulated in terms of a collection of coupled single-scale submodels. With concepts such as the scale separation map, the generic submodel execution loop (SEL) and the coupling templates, one can define a multi-scale modelling language which is a bridge between the application design and the computer implementation. Our approach has been successfully applied to an increasing number of applications from different fields of science and technology. PMID:24982249

  18. Mobility at the scale of meters.

    PubMed

    Surovell, Todd A; O'Brien, Matthew

    2016-05-01

    When archeologists discuss mobility, we are most often referring to a phenomenon that operates on the scale of kilometers, but much of human mobility, at least if measured in terms of frequency of movement, occurs at much smaller scales, ranging from centimeters to tens of meters. Here we refer to the movements we make within the confines of our homes or places of employment. With respect to nomadic peoples, movements at this scale would include movements within campsites. Understanding mobility at small scales is important to archeology because small-scale mobility decisions are a critical factor affecting spatial patterning observed in archeological sites. In this paper, we examine the factors affecting small-scale mobility decisions in a Mongolian reindeer herder summer camp and the implications of those decisions with regard to archeological spatial patterning. PMID:27312186

  19. A cumulative scale of severe sexual sadism.

    PubMed

    Nitschke, Joachim; Osterheider, Michael; Mokros, Andreas

    2009-09-01

    The article assesses the scale properties of the criterion set for severe sexual sadism in a sample of male forensic patients (N = 100). Half of the sample consists of sexual sadists; the remainder is sampled at random from the general group of nonsadistic sex offenders. Eleven of 17 criteria (plus the additional item of inserting objects into the victim's bodily orifices) of Marshall, Kennedy, Yates, and Serran's list form a cumulative scale. More specifically, this scale comprises all the 5 core criteria that Marshall and his colleagues considered particularly relevant. The resulting 11-item scale of severe sexual sadism is highly reliable (r_tt = .93) and represents a strong scale (H = .83) of the Guttman type (coefficient of reproducibility = .97). The 11-item scale distinguishes perfectly between sexual sadists and nonsadistic sex offenders in the sample. PMID:19605691
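    For readers unfamiliar with Guttman scaling, the coefficient of reproducibility quoted above is one minus the proportion of responses deviating from the ideal cumulative step pattern. A minimal sketch with invented binary data (the Goodenough-Edwards counting convention is assumed here; the paper may use a different error count):

```python
import numpy as np

# Toy response matrix: rows = respondents, columns = criteria,
# columns ordered from most to least frequently endorsed (hypothetical data).
X = np.array([
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0],   # one row deviating from the ideal step pattern
    [1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
])

def guttman_reproducibility(X):
    """Compare each response row with the ideal step pattern implied by
    that row's total score; CR = 1 - errors / (rows * columns)."""
    n, k = X.shape
    errors = 0
    for row in X:
        s = row.sum()
        ideal = np.array([1] * s + [0] * (k - s))
        errors += np.sum(row != ideal)
    return 1.0 - errors / (n * k)

print(f"coefficient of reproducibility = {guttman_reproducibility(X):.3f}")
```

    A value of .90 or above is conventionally taken to indicate an acceptable Guttman scale.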

  20. Large-scale inhomogeneities and galaxy statistics

    NASA Technical Reports Server (NTRS)

    Schaeffer, R.; Silk, J.

    1984-01-01

    The density fluctuations associated with the formation of large-scale cosmic pancake-like and filamentary structures are evaluated using the Zel'dovich approximation for the evolution of nonlinear inhomogeneities in the expanding universe. It is shown that the large-scale nonlinear density fluctuations in the galaxy distribution due to pancakes modify the standard scale-invariant correlation function xi(r) at scales comparable to the coherence length of adiabatic fluctuations. The typical contribution of pancakes and filaments to the J3 integral, and more generally to the moments of galaxy counts in a volume of approximately (15-40 h^-1 Mpc)^3, provides a statistical test for the existence of large scale inhomogeneities. An application to several recent three dimensional data sets shows that despite large observational uncertainties over the relevant scales characteristic features may be present that can be attributed to pancakes in most, but not all, of the various galaxy samples.

  1. Small scale structure on cosmic strings

    NASA Technical Reports Server (NTRS)

    Albrecht, Andreas

    1989-01-01

    The current understanding of cosmic string evolution is discussed, and the focus is placed on the question of small scale structure on strings, where most of the disagreements lie. A physical picture designed to put the role of the small scale structure into more intuitive terms is presented. In this picture it can be seen how the small scale structure can feed back in a major way on the overall scaling solution. It is also argued that it is easy for small scale numerical errors to feed back in just such a way. The intuitive discussion presented here may form the basis for an analytic treatment of the small scale structure, which, it is argued, would in any case be extremely valuable in filling the gaps in the present understanding of cosmic string evolution.

  2. Gap Test Calibrations and Their Scaling

    NASA Astrophysics Data System (ADS)

    Sandusky, Harold

    2011-06-01

    Common tests for measuring the threshold for shock initiation are the NOL large scale gap test (LSGT) with a 50.8-mm diameter donor/gap and the expanded large scale gap test (ELSGT) with a 95.3-mm diameter donor/gap. Despite the same specifications for the explosive donor and polymethyl methacrylate (PMMA) gap in both tests, calibration of shock pressure in the gap versus distance from the donor scales by a factor of 1.75, not the 1.875 difference in their sizes. Recently reported model calculations suggest that the scaling discrepancy results from the viscoelastic properties of PMMA in combination with different methods for obtaining shock pressure. This is supported by the consistent scaling of these donors when calibrated in water-filled aquariums. Calibrations with water gaps will be provided and compared with PMMA gaps. Scaling for other donor systems will also be provided. Shock initiation data with water gaps will be reviewed.

  3. Factors Affecting Scale Adhesion on Steel Forgings

    NASA Astrophysics Data System (ADS)

    Zitterman, J. A.; Bacco, R. P.; Boggs, W. E.

    1982-04-01

    Occasionally, undesirable "sticky" adherent scale forms on low-carbon steel during reheating for hot forging. The mechanical abrading or chemical pickling required to remove this scale adds appreciably to the fabrication cost. Characterization of the steel-scale system by metallographic examination, x-ray diffraction, and electron-probe microanalysis revealed that nickel, silicon, and/or sulfur might be involved in the mechanism of sticky-scale formation. Laboratory reheating tests were conducted on steels with varied concentrations of nickel and silicon in atmospheres simulating those resulting from burning natural gas or sulfur-bearing fuels. Subsequent characterization of the scale formed during the tests tends to confirm that the composition of the steel, especially increased nickel and silicon contents, and the presence of the sulfur in the furnace atmosphere cause the formation of this undesirable scale.

  4. Adolescent coping scales: a critical psychometric review.

    PubMed

    Sveinbjornsdottir, Sigrun; Thorsteinsson, Einar Baldvin

    2008-12-01

    Individual coping is identified as an important factor in relation to health and well-being. Although several coping scales have been developed, key aspects of coping, such as its nature and the number of primary and secondary factors (dimensions), remain obscure. Coping scales, such as those that have been developed through exploratory factor analysis (EFA), have been criticized for poor psychometric properties, yet the critique so far does not evaluate development of the scales against best test-theoretical practice. The present study reviews six adolescent coping scales against ten detailed psychometric criteria in relation to statistical choices throughout the process of scale development. All six scales measured poorly on several criteria. Best practice had not been followed throughout their development and they suffered serious psychometric limitations. These findings indicate that there is still empirical research to be pursued in search of latent constructs and possible dimensions of coping through the implementation of EFA. PMID:18489531

  5. 33 CFR 157.104 - Scale models.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false Scale models. 157.104 Section 157... Oil Washing (COW) System on Tank Vessels General § 157.104 Scale models. If the pattern under § 157.100(a)(4) or § 157.102(d) cannot be shown on a plan, a scale model of each tank must be built...

  6. 33 CFR 157.104 - Scale models.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false Scale models. 157.104 Section 157... Oil Washing (COW) System on Tank Vessels General § 157.104 Scale models. If the pattern under § 157.100(a)(4) or § 157.102(d) cannot be shown on a plan, a scale model of each tank must be built...

  7. 33 CFR 157.104 - Scale models.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false Scale models. 157.104 Section 157... Oil Washing (COW) System on Tank Vessels General § 157.104 Scale models. If the pattern under § 157.100(a)(4) or § 157.102(d) cannot be shown on a plan, a scale model of each tank must be built...

  8. Acoustical scale modeling of roadway traffic noise

    SciTech Connect

    Anderson, G.S.

    1980-03-01

    During the planning and design of any federally assisted highway project, noise levels must be predicted for the highway in its operational mode. The use of an acoustical scale modeling technique to predict roadway traffic noise is described. Literature pertaining to acoustical scale modeling of outdoor noise propagation, particularly roadway noise, is reviewed. Field and laboratory measurements validated the predictions of the acoustical scale modeling technique. (1 photo)

  9. Scaling and Universality in Rock Fracture

    SciTech Connect

    Davidsen, Joern; Stanchits, Sergei; Dresen, Georg

    2007-03-23

    We present a detailed statistical analysis of acoustic emission time series from laboratory rock fracture obtained from different experiments on different materials including acoustic emission controlled triaxial fracture and punch-through tests. In all considered cases, the waiting time distribution can be described by a unique scaling function indicating its universality. This scaling function is even indistinguishable from that for earthquakes suggesting its general validity for fracture processes independent of time, space, and magnitude scales.

  10. [Psychometric properties of a scale: internal consistency].

    PubMed

    Campo-Arias, Adalberto; Oviedo, Heidi C

    2008-01-01

    Internal consistency reliability is the degree of correlation between a scale's items. Internal consistency is calculated by Kuder-Richardson's formula 20 for dichotomous items and Cronbach's alpha for polytomous items. Internal consistency values between 0.70 and 0.90 are considered acceptable. Five to 25 participants are needed for each item when computing the internal consistency of a twenty-item scale. Internal consistency varies according to the population, so it should be reported every time the scale is used. PMID:19360231
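    Both statistics mentioned can be computed directly from a response matrix. A minimal sketch with hypothetical data (Cronbach's alpha is shown; applied to 0/1 items, the same formula reduces to KR-20):

```python
import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 6-item Likert scale (1-5), 8 respondents (rows).
data = [
    [4, 4, 5, 4, 4, 5],
    [2, 3, 2, 2, 3, 2],
    [5, 5, 4, 5, 5, 5],
    [1, 2, 1, 2, 1, 1],
    [3, 3, 3, 4, 3, 3],
    [4, 5, 4, 4, 5, 4],
    [2, 2, 3, 2, 2, 2],
    [5, 4, 5, 5, 4, 5],
]

alpha = cronbach_alpha(data)
print(f"alpha = {alpha:.2f}")  # high for these deliberately consistent items
```

    With real data, alpha should be recomputed and reported for each population in which the scale is administered, as the abstract recommends.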

  11. How to calibrate the jet energy scale?

    SciTech Connect

    Hatakeyama, K.; /Rockefeller U.

    2006-01-01

    Top quarks dominantly decay into b-quark jets and W bosons, and the W bosons often decay into jets, thus the precise determination of the jet energy scale is crucial in measurements of many top quark properties. I present the strategies used by the CDF and D0 collaborations to determine the jet energy scale. The various cross checks performed to verify the determined jet energy scale and evaluate its systematic uncertainty are also discussed.

  12. Chirp Scaling Algorithms for SAR Processing

    NASA Technical Reports Server (NTRS)

    Jin, M.; Cheng, T.; Chen, M.

    1993-01-01

    The chirp scaling SAR processing algorithm is both accurate and efficient. Successful implementation requires proper selection of the interval of output samples, which is a function of the chirp interval, signal sampling rate, and signal bandwidth. Analysis indicates that for both airborne and spaceborne SAR applications in the slant range domain a linear chirp scaling is sufficient. To perform a nonlinear interpolation, such as producing ground range SAR images, one can use the nonlinear chirp scaling interpolator presented in this paper.

  13. Scaling Rules for Pre-Injector Design

    SciTech Connect

    Tom Schwarz; Dan Amidei

    2003-07-13

    Proposed designs of the prebunching system of the NLC and TESLA are based on the assumption that scaling the SLC design to NLC/TESLA requirements should provide the desired performance. A simple equation is developed to suggest a scaling rule in terms of bunch charge and duration. Detailed simulations of prebunching systems scaled from a single design have been run to investigate these issues.

  14. Scaling and Single Event Effects (SEE) Sensitivity

    NASA Technical Reports Server (NTRS)

    Oldham, Timothy R.

    2003-01-01

    This paper begins by discussing the potential for scaling down transistors and other components to fit more of them on chips in order to increase computer processing speed. It also addresses technical challenges to further scaling. Components have been scaled down enough to allow single particles to have an effect, known as a Single Event Effect (SEE). This paper explores the relationship between scaling and the following SEEs: Single Event Upsets (SEU) on DRAMs and SRAMs, Latch-up, Snap-back, Single Event Burnout (SEB), Single Event Gate Rupture (SEGR), and Ion-induced soft breakdown (SBD).

  15. Time-dependent corona models - Scaling laws

    NASA Technical Reports Server (NTRS)

    Korevaar, P.; Martens, P. C. H.

    1989-01-01

    Scaling laws are derived for the one-dimensional time-dependent Euler equations that describe the evolution of a spherically symmetric stellar atmosphere. With these scaling laws the results of the time-dependent calculations by Korevaar (1989) obtained for one star are applicable over the whole Hertzsprung-Russell diagram and even to elliptic galaxies. The scaling is exact for stars with the same M/R-ratio and a good approximation for stars with a different M/R-ratio. The global relaxation oscillation found by Korevaar (1989) is scaled to main sequence stars, a solar coronal hole, cool giants and elliptic galaxies.

  16. Improving the Factor Structure of Psychological Scales

    PubMed Central

    Zhang, Xijuan; Savalei, Victoria

    2015-01-01

    Many psychological scales written in the Likert format include reverse worded (RW) items in order to control acquiescence bias. However, studies have shown that RW items often contaminate the factor structure of the scale by creating one or more method factors. The present study examines an alternative scale format, called the Expanded format, which replaces each response option in the Likert scale with a full sentence. We hypothesized that this format would result in a cleaner factor structure as compared with the Likert format. We tested this hypothesis on three popular psychological scales: the Rosenberg Self-Esteem scale, the Conscientiousness subscale of the Big Five Inventory, and the Beck Depression Inventory II. Scales in both formats showed comparable reliabilities. However, scales in the Expanded format had better (i.e., lower and more theoretically defensible) dimensionalities than scales in the Likert format, as assessed by both exploratory factor analyses and confirmatory factor analyses. We encourage further study and wider use of the Expanded format, particularly when a scale’s dimensionality is of theoretical interest. PMID:27182074

  17. A Fractal Perspective on Scale in Geography

    NASA Astrophysics Data System (ADS)

    Jiang, Bin; Brandt, S.

    2016-06-01

    Scale is a fundamental concept that has attracted persistent attention in geography literature over the past several decades. However, it creates enormous confusion and frustration, particularly in the context of geographic information science, because of scale-related issues such as image resolution, and the modifiable areal unit problem (MAUP). This paper argues that the confusion and frustration mainly arise from Euclidean geometric thinking, with which locations, directions, and sizes are considered absolute, and it is time to reverse this conventional thinking. Hence, we review fractal geometry, together with its underlying way of thinking, and compare it to Euclidean geometry. Under the paradigm of Euclidean geometry, everything is measurable, no matter how big or small. However, geographic features, due to their fractal nature, are essentially unmeasurable or their sizes depend on scale. For example, the length of a coastline, the area of a lake, and the slope of a topographic surface are all scale-dependent. Seen from the perspective of fractal geometry, many scale issues, such as the MAUP, are inevitable. They appear unsolvable, but can be dealt with. To effectively deal with scale-related issues, we introduce topological and scaling analyses based on street-related concepts such as natural streets, street blocks, and natural cities. We further contend that spatial heterogeneity, or the fractal nature of geographic features, is the first and foremost effect of two spatial properties, because it is general and universal across all scales. Keywords: Scaling, spatial heterogeneity, conundrum of length, MAUP, topological analysis
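    The scale dependence of fractal measurements described above can be demonstrated numerically, for instance by box counting on a Koch curve, whose theoretical dimension log 4 / log 3 ≈ 1.26 is recovered from the slope of log N versus log(1/s). A self-contained sketch (an illustration of the general idea, not code from the paper):

```python
import math

def koch(points, depth):
    """Refine a polyline into the Koch curve: each segment becomes four."""
    if depth == 0:
        return points
    out = [points[0]]
    rot60 = complex(math.cos(math.pi / 3), math.sin(math.pi / 3))
    for a, b in zip(points, points[1:]):
        d = (b - a) / 3
        out.extend([a + d, a + d + d * rot60, a + 2 * d, b])
    return koch(out, depth - 1)

pts = koch([0j, 1 + 0j], 7)

# Box counting: the number of occupied grid boxes N(s) grows as s^-D.
sizes, counts = [], []
for k in range(1, 6):
    s = 3.0 ** -k
    boxes = {(int(p.real / s), int(p.imag / s)) for p in pts}
    sizes.append(s)
    counts.append(len(boxes))

# Least-squares slope of log N(s) versus log(1/s) estimates the dimension.
xs = [math.log(1 / s) for s in sizes]
ys = [math.log(c) for c in counts]
n = len(xs)
sx, sy = sum(xs), sum(ys)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sx * sy) / \
        (n * sum(x * x for x in xs) - sx ** 2)
print(f"estimated dimension = {slope:.2f} (theory: log 4 / log 3 = 1.26)")
```

    The "length" of such a curve diverges as the ruler shrinks, which is the sense in which geographic features are unmeasurable in absolute Euclidean terms.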

  18. A numerical exercise in musical scales

    NASA Astrophysics Data System (ADS)

    Hartmann, George C.

    1987-03-01

    This paper investigates why the 12-note scale, having equal intervals, seems to be the best representation of scales constructed from purely harmonic intervals. Is it possible that other equal temperament scales with more or less than 12 notes would serve just as well? The investigation is done by displaying the difference between a set of harmonic notes and scales with equal intervals having n notes per octave. The difference is small when n is equal to 12, but also when n equals 19 and 29. The number density of notes per unit frequency intervals is also investigated.
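    The comparison described, the deviation of purely harmonic intervals from the nearest step of an n-note equal-tempered scale, can be reproduced in a few lines. The interval set and the RMS error measure below are illustrative assumptions, not necessarily the paper's exact choices:

```python
import math

# Just-intonation intervals (frequency ratios): fifth, fourth,
# major third, minor third, major sixth.
JUST = [3/2, 4/3, 5/4, 6/5, 5/3]

def cents(ratio):
    return 1200.0 * math.log2(ratio)

def rms_error(n):
    """RMS deviation (in cents) of the just intervals from the nearest
    step of n-tone equal temperament."""
    step = 1200.0 / n
    errs = []
    for r in JUST:
        c = cents(r)
        errs.append((c - step * round(c / step)) ** 2)
    return math.sqrt(sum(errs) / len(errs))

for n in range(10, 32):
    print(f"{n:2d}-TET: RMS error {rms_error(n):6.2f} cents")
```

    Local minima of the error appear at the "good" note counts, with n = 12 clearly outperforming its neighbours.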

  19. Synthesis of small and large scale dynamos

    NASA Astrophysics Data System (ADS)

    Subramanian, Kandaswamy

    Using a closure model for the evolution of magnetic correlations, we uncover an interesting plausible saturated state of the small-scale fluctuation dynamo (SSD) and a novel analogy between quantum mechanical tunnelling and the generation of large-scale fields. Large scale fields develop via the α-effect, but as magnetic helicity can only change on a resistive timescale, the time it takes to organize the field into large scales increases with magnetic Reynolds number. This is very similar to the results obtained from simulations using the full MHD equations.

  20. Cosmic string scaling in flat space

    SciTech Connect

    Vanchurin, Vitaly; Olum, Ken; Vilenkin, Alexander

    2005-09-15

    We investigate the evolution of infinite strings as part of a complete cosmic string network in flat space. We perform a simulation of the network which uses functional forms for the string position and thus is exact to the limits of computer arithmetic. Our results confirm that the wiggles on the strings obey a scaling law described by a universal power spectrum. The average distance between long strings also scales accurately with time. These results suggest that small-scale structure will also scale in an expanding universe, even in the absence of gravitational damping.

  1. Simple scale interpolator facilitates reading of graphs

    NASA Technical Reports Server (NTRS)

    Fetterman, D. E., Jr.

    1965-01-01

    Simple transparent overlay with interpolation scale facilitates accurate, rapid reading of graph coordinate points. This device can be used for enlarging drawings and locating points on perspective drawings.

  2. Universal scaling behaviour in weighted trade networks

    NASA Astrophysics Data System (ADS)

    Duan, W. Q.

    2007-09-01

    Identifying universal patterns in complex economic systems can reveal the dynamics and organizing principles underlying the process of system evolution. We investigate the scaling behaviours that have emerged in the international trade system by describing them as a series of evolving weighted trade networks. The maximum-flow spanning trees (constructed by maximizing the total weight of the edges) of these networks exhibit two universal scaling exponents: (1) topological scaling exponent η = 1.30 and (2) flow scaling exponent ζ = 1.03.
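    A spanning tree that maximizes total edge weight can be extracted with Kruskal's algorithm run over edges in descending weight order (a generic sketch; the trade volumes below are invented toy data, not the paper's):

```python
# Maximum-weight spanning tree via Kruskal's algorithm with union-find.

def max_spanning_tree(n, edges):
    """edges: list of (weight, u, v); returns tree edges maximizing total weight."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges, reverse=True):  # heaviest edges first
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

# Hypothetical trade volumes between 5 countries (nodes 0..4).
edges = [
    (10.0, 0, 1), (4.0, 0, 2), (7.0, 1, 2),
    (2.0, 1, 3), (5.0, 2, 3), (1.0, 3, 4), (3.0, 2, 4),
]

tree = max_spanning_tree(5, edges)
total = sum(w for w, _, _ in tree)
print(tree, total)
```

    Scaling exponents such as η and ζ would then be measured on the degree and flow statistics of the resulting tree backbone.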

  3. Euthanasia attitude; A comparison of two scales

    PubMed Central

    Aghababaei, Naser; Farahani, Hojjatollah; Hatami, Javad

    2011-01-01

    The main purposes of the present study were to see how the term “euthanasia” influences people’s support for or opposition to euthanasia, and to see how euthanasia attitude relates to religious orientation and personality factors. In this study two different euthanasia attitude scales were compared. 197 students were selected to fill out either the Euthanasia Attitude Scale (EAS) or Wasserman’s Attitude Towards Euthanasia scale (ATE scale). The former scale includes the term “euthanasia”; the latter does not. All participants filled out 50 items of the International Personality Item Pool, 16 items of the HEXACO openness scale, and 14 items of the Religious Orientation Scale-Revised. Results indicated that even though the two groups did not differ in gender, age, education, religiosity or personality, the mean score on the ATE scale was significantly higher than that on the EAS. Euthanasia attitude was negatively correlated with religiosity and conscientiousness, and positively correlated with psychoticism and openness. It can be concluded that assessing the attitude towards euthanasia with the EAS rather than the ATE scale results in lower levels of opposition to euthanasia. This study raises the question of whether euthanasia attitude scales should contain definitions and concepts of euthanasia or whether they should describe cases of it. PMID:23908751

  4. Dense Correspondences across Scenes and Scales.

    PubMed

    Tau, Moria; Hassner, Tal

    2016-05-01

    We seek a practical method for establishing dense correspondences between two images with similar content, but possibly different 3D scenes. One of the challenges in designing such a system is the local scale differences of objects appearing in the two images. Previous methods often considered only a few image pixels, matching only pixels for which stable scales could be reliably estimated. Recently, others have considered dense correspondences, but with substantial costs associated with generating, storing and matching scale invariant descriptors. Our work is motivated by the observation that pixels in the image have contexts (the pixels around them) which may be exploited in order to reliably estimate local scales. We make the following contributions. (i) We show that scales estimated at sparse interest points may be propagated to neighboring pixels where this information cannot be reliably determined. Doing so allows scale invariant descriptors to be extracted anywhere in the image. (ii) We explore three means of propagating this information: using the scales at detected interest points, using the underlying image information to guide scale propagation in each image separately, and using both images together. Finally, (iii) we provide extensive qualitative and quantitative results, demonstrating that scale propagation allows accurate dense correspondences to be obtained even between very different images, with little computational cost beyond that required by existing methods. PMID:26336115

  5. Small scale bipolar nickel-hydrogen testing

    NASA Technical Reports Server (NTRS)

    Manzo, Michelle A.

    1988-01-01

    Bipolar nickel-hydrogen batteries, ranging in capacity from 6 to 40 A-hr, have been tested at the NASA Lewis Research Center over the past six years. Small-scale tests of 1 A-hr nickel-hydrogen stacks have been initiated as a means of screening design and component variations for bipolar nickel-hydrogen cells and batteries. Four small-scale batteries have been built and tested. Characterization and limited cycle testing were performed to establish the validity of test results in the scaled-down hardware. The results show the characterization tests to be valid; LEO test results in the small-scale hardware have limited value.

  6. Convergent validity of the MCMI-III personality disorder scales and the MMPI-2 scales.

    PubMed

    Rossi, Gina; Van den Brande, Iris; Tobac, An; Sloore, Hedwig; Hauben, Claudia

    2003-08-01

    The MCMI-III personality disorder scales (Millon, 1994) were empirically validated in a sample of prisoners, psychiatric inpatients, and outpatients (N = 477). The scale intercorrelations were congruent with those obtained by Millon, Davis, and Millon (1997). We conclude that our Flemish/Dutch version shows no significant differences from the original version of the MCMI-III as far as intercorrelations are concerned. Convergent validity of the MCMI-III personality disorder scales was evaluated by correlating the MCMI-III personality disorder scales with the MMPI-2 clinical (Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989) and personality disorder (Somwaru & Ben-Porath, 1995) scales. Improved convergence was obtained compared with previous versions of the MCMI. Only the compulsive MCMI-III personality disorder scale remains problematic: it even showed negative correlations with some of the related clinical scales and with the corresponding personality disorder scales of the MMPI-2. PMID:14521181

  7. Cross-Scale Interactions between Electron and Ion Scale Turbulence in a Tokamak Plasma.

    PubMed

    Maeyama, S; Idomura, Y; Watanabe, T-H; Nakata, M; Yagi, M; Miyato, N; Ishizawa, A; Nunami, M

    2015-06-26

    Multiscale gyrokinetic turbulence simulations with the real ion-to-electron mass ratio and β value are realized for the first time, where the β value is given by the ratio of plasma pressure to magnetic pressure and characterizes electromagnetic effects on microinstabilities. Numerical analysis at both the electron scale and the ion scale is used to reveal the mechanism of their cross-scale interactions. Even with the real-mass scale separation, ion-scale turbulence eliminates electron-scale streamers and dominates heat transport, not only of ions but also of electrons. Suppression of electron-scale turbulence by ion-scale eddies, rather than by long-wavelength zonal flows, is also demonstrated by means of direct measurement of nonlinear mode-to-mode coupling. When the ion-scale modes are stabilized by finite-β effects, the contribution of the electron-scale dynamics to the turbulent transport becomes non-negligible and turns out to enhance ion-scale turbulent transport. Damping of the ion-scale zonal flows by electron-scale turbulence is responsible for the enhancement of ion-scale transport. PMID:26197130

  8. Scale-up studies on high shear wet granulation process from mini-scale to commercial scale.

    PubMed

    Aikawa, Shouhei; Fujita, Naomi; Myojo, Hidetoshi; Hayashi, Takashi; Tanino, Tadatsugu

    2008-10-01

    A newly developed mini-scale high shear granulator was used for a scale-up study of the wet granulation process from 0.2 to 200 L scales. Under various operation conditions and granulation bowl sizes, a powder mixture composed of anhydrous caffeine, D-mannitol, dibasic calcium phosphate, pregelatinized starch and corn starch was granulated by adding water. The granules were tabletted, and disintegration time and hardness of the tablets were evaluated to seek correlations between granulation conditions and tablet properties. As granulation proceeded, disintegration time was prolonged and hardness decreased. When the granulation processes were operated at the same agitator tip speed, a similar relationship between granulation time and tablet properties, such as disintegration time and hardness, was observed between the 0.2 L and 11 L scales. Likewise, a similar relationship was observed between the 11 L and 200 L scales when the processes were operated at the same force on the granulation mass. From these results, the mini-scale high shear granulator should be a useful tool for predicting operation conditions of large-scale granulation from mini-scale operation conditions under which similar tablet properties are obtained. PMID:18827384
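
    The constant-tip-speed scale-up rule used in the abstract has a simple closed form: tip speed is v = πDN, so holding v fixed across scales means the agitator speed scales inversely with impeller diameter. A minimal sketch, with invented impeller diameters rather than the study's actual geometry:

```python
# Scale-up at constant agitator tip speed: v = pi * D * N is held fixed,
# so N_large = N_small * D_small / D_large. Diameters are illustrative.

def rpm_for_constant_tip_speed(rpm_small, d_small_m, d_large_m):
    """Agitator speed at the large scale that preserves tip speed."""
    return rpm_small * d_small_m / d_large_m

rpm_large = rpm_for_constant_tip_speed(rpm_small=800, d_small_m=0.05,
                                       d_large_m=0.20)
print(rpm_large)  # 200.0: a 4x larger impeller turns 4x slower
```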

  9. X-RAY-EMITTING STARS IDENTIFIED FROM THE ROSAT ALL-SKY SURVEY AND THE SLOAN DIGITAL SKY SURVEY

    SciTech Connect

    Agueeros, Marcel A.; Newsom, Emily R.; Anderson, Scott F.; Hawley, Suzanne L.; Silvestri, Nicole M.; Szkody, Paula; Covey, Kevin R.; Posselt, Bettina; Margon, Bruce; Voges, Wolfgang

    2009-04-15

    The ROSAT All-Sky Survey (RASS) was the first imaging X-ray survey of the entire sky. Combining the RASS Bright and Faint Source Catalogs yields an average of about three X-ray sources per square degree. However, while X-ray source counterparts are known to range from distant quasars to nearby M dwarfs, the RASS data alone are often insufficient to determine the nature of an X-ray source. As a result, large-scale follow-up programs are required to construct samples of known X-ray emitters. We use optical data produced by the Sloan Digital Sky Survey (SDSS) to identify 709 stellar X-ray emitters cataloged in the RASS and falling within the SDSS Data Release 1 footprint. Most of these are bright stars with coronal X-ray emission unsuitable for SDSS spectroscopy, which is designed for fainter objects (g > 15 mag). Instead, we use SDSS photometry, correlations with the Two Micron All Sky Survey and other catalogs, and spectroscopy from the Apache Point Observatory 3.5 m telescope to identify these stellar X-ray counterparts. Our sample of 707 X-ray-emitting F, G, K, and M stars is one of the largest X-ray-selected samples of such stars. We derive distances to these stars using photometric parallax relations appropriate for dwarfs on the main sequence, and use these distances to calculate L_X. We also identify a previously unknown cataclysmic variable (CV) as a RASS counterpart. Separately, we use correlations of the RASS and the SDSS spectroscopic catalogs of CVs and white dwarfs (WDs) to study the properties of these rarer X-ray-emitting stars. We examine the relationship between (f_X/f_g) and the equivalent width of the Hβ emission line for 46 X-ray-emitting CVs and discuss tentative classifications for a subset based on these quantities. We identify 17 new X-ray-emitting DA (hydrogen) WDs, of which three are newly identified WDs. We report on follow-up observations of three candidate cool X-ray-emitting WDs (one DA and two DB (helium) WDs
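
    The distance-to-luminosity step the abstract describes is the standard inverse-square relation L_X = 4πd²f_X. A minimal sketch with invented flux and distance values (not the paper's measurements):

```python
import math

# X-ray luminosity from an observed flux and a photometric-parallax
# distance: L_X = 4 * pi * d^2 * f_X. Input values below are invented.

PC_IN_CM = 3.0857e18  # one parsec in centimetres

def x_ray_luminosity(flux_cgs, distance_pc):
    """flux in erg s^-1 cm^-2, distance in pc -> L_X in erg s^-1."""
    d_cm = distance_pc * PC_IN_CM
    return 4.0 * math.pi * d_cm ** 2 * flux_cgs

L_X = x_ray_luminosity(flux_cgs=1e-12, distance_pc=100.0)
print(f"{L_X:.2e}")  # ~1.2e+30 erg/s
```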

  10. Functional nanometer-scale structures

    NASA Astrophysics Data System (ADS)

    Chan, Tsz On Mario

    Nanometer-scale structures have properties that are fundamentally different from their bulk counterparts. Much research effort has been devoted in the past decades to explore new fabrication techniques, model the physical properties of these structures, and construct functional devices. The ability to manipulate and control the structure of matter at the nanoscale has made many new classes of materials available for the study of fundamental physical processes and potential applications. The interplay between fabrication techniques and physical understanding of the nanostructures and processes has revolutionized the physical and material sciences, providing far superior properties in materials for novel applications that benefit society. This thesis consists of two major aspects of my graduate research in nano-scale materials. In the first part (Chapters 3–6), a comprehensive study on the nanostructures based on electrospinning and thermal treatment is presented. Electrospinning is a well-established method for producing high-aspect-ratio fibrous structures, with fiber diameter ranging from 1 nm to 1 μm. A polymeric solution is typically used as a precursor in electrospinning. In our study, the functionality of the nanostructure relies on both the nanostructure and material constituents. Metallic ions containing precursors were added to the polymeric precursor following a sol-gel process to prepare the solution suitable for electrospinning. A typical electrospinning process produces as-spun fibers containing both polymer and metallic salt precursors. Subsequent thermal treatments of the as-spun fibers were carried out in various conditions to produce desired structures. In most cases, polymer in the solution and the as-spun fibers acted as a backbone for the structure formation during the subsequent heat treatment, and were thermally removed in the final stage. Polymers were also designed to react with the metallic ion precursors during heat treatment in some

  11. ROSAT x ray survey observations of active chromospheric binary systems and other selected sources

    NASA Technical Reports Server (NTRS)

    Ramsey, Lawrence W.

    1993-01-01

    We investigated the connection between the processes that produce optical chromospheric activity indicators and those that produce X-rays in RS CVn binary systems, taking advantage of the ROSAT All-Sky Survey (RASS) results and our unique ground-based data set. In RS CVn systems, excess emission in the Ca II resonance (K & H) and infrared triplet (IRT) lines and in the Balmer lines of hydrogen is generally cited as evidence for chromospheric activity, which is usually modeled as scaled-up solar-type activity. X-ray emission in RS CVn systems is believed to arise from coronal loop structures. Results from spectral data obtained from RASS observations are presented and discussed.

  12. The Chinese version of the Myocardial Infarction Dimensional Assessment Scale (MIDAS): Mokken scaling

    PubMed Central

    2012-01-01

    Background Hierarchical scales are very useful in clinical practice due to their ability to discriminate precisely between individuals, and the original English version of the Myocardial Infarction Dimensional Assessment Scale has been shown to contain a hierarchy of items. The purpose of this study was to analyse a Mandarin Chinese translation of the Myocardial Infarction Dimensional Assessment Scale for a hierarchy of items according to the criteria of Mokken scaling. Data from 180 Chinese participants who completed the Chinese translation of the Myocardial Infarction Dimensional Assessment Scale were analysed using the Mokken Scaling Procedure and the 'R' statistical programme using the diagnostics available in these programmes. Correlation between Mandarin Chinese items and a Chinese translation of the Short Form (36) Health Survey was also analysed. Findings Fifteen items from the Mandarin Chinese Myocardial Infarction Dimensional Assessment Scale were retained in a strong and reliable Mokken scale; invariant item ordering was not evident, and the Mokken scaled items of the Chinese Myocardial Infarction Dimensional Assessment Scale correlated with the Short Form (36) Health Survey. Conclusions Items from the Mandarin Chinese Myocardial Infarction Dimensional Assessment Scale form a Mokken scale, and this offers further insight into how the items of the Myocardial Infarction Dimensional Assessment Scale relate to the measurement of health-related quality of life in people with a myocardial infarction. PMID:22221696

  13. Microcounseling Skill Discrimination Scale: A Methodological Note

    ERIC Educational Resources Information Center

    Stokes, Joseph; Romer, Daniel

    1977-01-01

    Absolute ratings on the Microcounseling Skill Discrimination Scale (MSDS) confound the individual's use of the rating scale and actual ability to discriminate effective and ineffective counselor behaviors. This note suggests methods of scoring the MSDS that will eliminate variability attributable to response language and improve the validity of…

  14. Scale effect on unsteady cloud cavitation

    NASA Astrophysics Data System (ADS)

    Dular, M.; Khlifa, I.; Fuzier, S.; Adama Maiga, M.; Coutier-Delgosha, O.

    2012-11-01

    No experiment has yet been conducted to investigate the scale effects on the dynamics of developed cavitating flow with periodic cloud shedding. The present study was motivated by the unclear results obtained from experiments in a Venturi-type section that was scaled down 10 times for the purpose of measurements by ultra-fast X-ray imaging (Coutier-Delgosha et al. 2009). Cavitation in the original-size section (Stutz and Reboud in Exp Fluids 23:191-198, 1997; Exp Fluids 29:545-552, 2000) always displays unsteady cloud separation. However, when the geometry was scaled down, the cavitation became quasi-steady, although some oscillations still existed. To investigate this phenomenon in more detail, experiments were conducted in six geometrically similar Venturi test sections in which the width, the height, or both were scaled. Various types of instabilities are obtained, from simple oscillations of the sheet cavity length to large vapor cloud shedding as the size of the test section is increased. This confirms that scale has a significant influence on cavitation. In particular, the height of the test section plays a major role in the dynamics of the re-entrant jet that drives the periodic shedding observed at large scale. The results suggest that the sheet cavity becomes stable when the section is scaled down beyond a certain point, because the re-entrant jet cannot fully develop.

  15. Scaling laws predict global microbial diversity.

    PubMed

    Locey, Kenneth J; Lennon, Jay T

    2016-05-24

    Scaling laws underpin unifying theories of biodiversity and are among the most predictively powerful relationships in biology. However, scaling laws developed for plants and animals often go untested or fail to hold for microorganisms. As a result, it is unclear whether scaling laws of biodiversity will span evolutionarily distant domains of life that encompass all modes of metabolism and scales of abundance. Using a global-scale compilation of ∼35,000 sites and ∼5.6 × 10^6 species, including the largest ever inventory of high-throughput molecular data and one of the largest compilations of plant and animal community data, we show similar rates of scaling in commonness and rarity across microorganisms and macroscopic plants and animals. We document a universal dominance scaling law that holds across 30 orders of magnitude, an unprecedented expanse that predicts the abundance of dominant ocean bacteria. In combining this scaling law with the lognormal model of biodiversity, we predict that Earth is home to upward of 1 trillion (10^12) microbial species. Microbial biodiversity seems greater than ever anticipated yet predictable from the smallest to the largest microbiome. PMID:27140646
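
    A dominance scaling law of the kind described is a power law, N_max = cN^b, whose exponent is recovered by a straight-line fit in log-log space. A minimal sketch on synthetic data (the exponent 0.9 below is illustrative, not the paper's empirical value):

```python
import math

# Fit the exponent b of a power law Nmax = c * N**b by least squares
# on log-transformed data. Synthetic data generated with b = 0.9.

N = [10 ** k for k in range(2, 8)]      # total abundances
Nmax = [2.0 * n ** 0.9 for n in N]      # synthetic dominance law

xs = [math.log10(n) for n in N]
ys = [math.log10(m) for m in Nmax]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(round(slope, 3))  # 0.9: the exponent is recovered
```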

  16. Developing a Scale for Learner Autonomy Support

    ERIC Educational Resources Information Center

    Oguz, Aytunga

    2013-01-01

    The aim of the present study is to develop a scale to determine how necessary the primary and secondary school teachers view the learner autonomy support behaviours and how much they perform these behaviours. The study group was composed of 324 primary and secondary school teachers. The process of developing the scale involved a literature scan,…

  17. Uncertainty Consideration in Watershed Scale Models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Watershed scale hydrologic and water quality models have been used with increasing frequency to devise alternative pollution control strategies. With recent reenactment of the 1972 Clean Water Act’s TMDL (total maximum daily load) component, some of the watershed scale models are being recommended ...

  18. Common Themes Among Morale and Depression Scales

    ERIC Educational Resources Information Center

    Morris, John N.; And Others

    1975-01-01

    Reports on the intra- and interbattery scaling of three morale and depression batteries comprised of self-reported items: PGC, G-H, and ZUNG. Responses to the three scales were sought from a sample of long-term residents of a state mental hospital. (Author)

  19. Test Review: Autism Spectrum Rating Scales

    ERIC Educational Resources Information Center

    Simek, Amber N.; Wahlberg, Andrea C.

    2011-01-01

    This article reviews Autism Spectrum Rating Scales (ASRS) which are designed to measure behaviors in children between the ages of 2 and 18 that are associated with disorders on the autism spectrum as rated by parents/caregivers and/or teachers. The rating scales include items related to behaviors associated with Autism, Asperger's Disorder, and…

  20. Broken Scale Invariance and Anomalous Dimensions

    DOE R&D Accomplishments Database

    Wilson, K. G.

    1970-05-01

    Mack and Kastrup have proposed that broken scale invariance is a symmetry of strong interactions. There is evidence from the Thirring model and perturbation theory that the dimensions of fields defined by scale transformations will be changed by the interaction from their canonical values. We review these ideas and their consequences for strong interactions.

  1. Predicting Prison Adjustment with MMPI Correctional Scales.

    ERIC Educational Resources Information Center

    Megargee, Edwin I.; Carbonell, Joyce L.

    1985-01-01

    Investigated the degree to which eight Minnesota Multiphasic Personality Inventory scales, specifically derived to assess correctional criteria, related to six criteria of subsequent adjustment in prison. Although some statistically significant correlations with the criteria were obtained, their magnitude was quite low, indicating the scales had…

  2. Further Development of the ICIDH Scales.

    ERIC Educational Resources Information Center

    Barolin, G. S.

    1997-01-01

    This paper discusses the need to internationalize evaluation measures; unify criteria for rehabilitation scales; and refine the definitions of impairment, disability, and handicap. A model based on the International Classification of Impairments, Disabilities and Handicap (ICIDH) scales is presented, depicting the components of somatogenesis,…

  3. Coefficient Alpha and Reliability of Scale Scores

    ERIC Educational Resources Information Center

    Almehrizi, Rashid S.

    2013-01-01

    The majority of large-scale assessments develop various score scales that are either linear or nonlinear transformations of raw scores for better interpretations and uses of assessment results. The current formula for coefficient alpha (a; the commonly used reliability coefficient) only provides internal consistency reliability estimates of raw…

  4. Capturing deviation from ergodicity at different scales

    NASA Astrophysics Data System (ADS)

    Scott, Sherry E.; Redd, Thomas C.; Kuznetsov, Leonid; Mezić, Igor; Jones, Christopher K. R. T.

    2009-08-01

    We address here the issue of quantifying the extent to which a given dynamical system falls short of being ergodic and introduce a new multiscale technique which we call the “ergodicity defect”. Our approach is aimed at capturing both deviation from ergodicity and its dependence on scale. The method uses the ergodic theory of dynamical systems and applies harmonic analysis; in particular, the scaling analysis is motivated by wavelet theory. We base the definition of the ergodicity defect on the Birkhoff characterization. We systematically exploit the role of the observation function by using characteristic functions arising from a dyadic equipartition of the phase space. This allows us to view the dependence of the defect on scale. In order to build intuition, we consider the defect for specific examples with known dynamic properties, and we are able to explicitly compute the defect for some of these simple examples. We focus on three distinctive cases of the dependence of the defect on scale: (1) a defect value that increases as the scale becomes finer, (2) a defect value decreasing with scale and (3) a defect value independent of scale, which occurs, for instance, when a map is ergodic. We explain the information contained in these three scenarios. We see more complicated behavior with an example which has invariant subsets at various scales.
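
    In this spirit, a crude numerical scale-dependent "defect" can be built by comparing Birkhoff (time) averages of dyadic-cell indicator functions against the cells' Lebesgue measures. The formula below is an assumed simplification for illustration, not the authors' exact definition:

```python
import math

# Crude scale-dependent ergodicity gauge: the average squared gap
# between the time average of each dyadic cell's indicator along an
# orbit and the cell's Lebesgue measure 2**-scale. (Illustrative only.)

def defect(step, x0=0.1234, scale=3, n_iters=20000):
    cells = 2 ** scale
    counts = [0] * cells
    x = x0
    for _ in range(n_iters):
        counts[int(x * cells)] += 1
        x = step(x)
    return sum((c / n_iters - 1 / cells) ** 2 for c in counts) / cells

rotation = lambda x: (x + math.sqrt(2) - 1) % 1.0  # ergodic rotation
frozen = lambda x: x                               # far from ergodic

print(defect(rotation) < 1e-4)  # True: time averages match cell measures
print(defect(frozen) > 1e-3)    # True: the orbit never leaves one cell
```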

  5. Experimental Validation of Two MIQ Scales

    ERIC Educational Resources Information Center

    Stulman, David A.; Dawis, Rene V.

    1976-01-01

    Two Minnesota Importance Questionnaire (MIQ) scales, Creativity and Independence were validated by experiment. Subjects (N=68) were exposed to four task conditions representing joint combinations of high or low levels of Creativity and Independence. The behavioral results were consistent with the subjects' MIQ score levels on the two scales,…

  6. Kalman plus weights: a time scale algorithm

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    2001-01-01

    KPW is a time scale algorithm that combines Kalman filtering with the basic time scale equation (BTSE). A single Kalman filter that estimates all clocks simultaneously is used to generate the BTSE frequency estimates, while the BTSE weights are inversely proportional to the white FM variances of the clocks. Results from simulated clock ensembles are compared to previous simulation results from other algorithms.
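
    The BTSE step itself is a weighted average of each clock's deviation from its prediction. A minimal sketch of that weighting, with invented variances and residuals (and none of KPW's Kalman machinery):

```python
# Basic time scale equation (BTSE) weighting step: the ensemble offset
# is a weighted mean of per-clock residuals (measured - predicted),
# with weights inversely proportional to each clock's white-FM variance.
# Numbers below are invented for illustration.

def btse_ensemble_offset(residuals, white_fm_vars):
    inv = [1.0 / v for v in white_fm_vars]
    total = sum(inv)
    weights = [w / total for w in inv]      # normalized to sum to 1
    return sum(w * r for w, r in zip(weights, residuals))

offset = btse_ensemble_offset(residuals=[2e-9, -1e-9, 4e-9],
                              white_fm_vars=[1e-24, 4e-24, 2e-24])
print(offset)  # weights 4/7, 1/7, 2/7 -> about 2.14e-9 s
```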

  7. Multiple time scale methods in tokamak magnetohydrodynamics

    SciTech Connect

    Jardin, S.C.

    1984-01-01

    Several methods are discussed for integrating the magnetohydrodynamic (MHD) equations in tokamak systems on other than the fastest time scale. The dynamical grid method for simulating ideal MHD instabilities utilizes a natural nonorthogonal time-dependent coordinate transformation based on the magnetic field lines. The coordinate transformation is chosen to be free of the fast time scale motion itself, and to yield a relatively simple scalar equation for the total pressure, P = p + B²/(2μ₀), which can be integrated implicitly to average over the fast time scale oscillations. Two methods are described for the resistive time scale. The zero-mass method uses a reduced set of two-fluid transport equations obtained by expanding in the inverse magnetic Reynolds number, and in the small ratio of perpendicular to parallel mobilities and thermal conductivities. The momentum equation becomes a constraint equation that forces the pressure and magnetic fields and currents to remain in force balance equilibrium as they evolve. The large mass method artificially scales up the ion mass and viscosity, thereby reducing the severe time scale disparity between wavelike and diffusionlike phenomena, but not changing the resistive time scale behavior. Other methods addressing the intermediate time scales are discussed.

  8. Large-Scale Reform Comes of Age

    ERIC Educational Resources Information Center

    Fullan, Michael

    2009-01-01

    This article reviews the history of large-scale education reform and makes the case that large-scale or whole-system reform policies and strategies are becoming increasingly evident. The review briefly addresses the pre-1997 period, concluding that while the pressure for reform was mounting, there were very few examples of deliberate or…

  9. Validity of the Verbal Immediacy Scale.

    ERIC Educational Resources Information Center

    Robinson, Rena Y.; Richmond, Virginia P.

    1995-01-01

    Outlines the development and use of the Verbal Immediacy Scale. Presents data that indicate it lacks both face and construct validity. Concludes that the scale may not be a valid operationalization of the immediacy construct, and even if it is, it generates a response set such that the meaning of the responses obtained is unknown. (SR)

  10. Scaling Laws for Mesoscale and Microscale Systems

    SciTech Connect

    Spletzer, Barry

    1999-08-23

    The set of laws developed and presented here is by no means exhaustive. Techniques have been presented to aid in the development of additional scaling laws and to combine these and other laws to produce additional useful relationships. Some of the relationships produced here have yielded perhaps surprising results. Examples include the fifth-order scaling law for electromagnetic motor torque and the zero-order scaling law for capacitive motor power. These laws demonstrate important facts about actuators in small-scale systems. The primary intent of this introduction to scaling law analysis is to provide the tools needed to examine possible areas of research in small-scale systems and direct research toward more fruitful areas. Numerous examples have been included to show the validity of developing scaling laws based on first principles and how real-world systems tend to obey these laws even when many other variables may potentially come into play. Development of further laws may well serve to provide important high-level direction to the continued development of small-scale systems.
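
    Applying such laws reduces to simple power-law bookkeeping: if every linear dimension is multiplied by s, a quantity of scaling order n changes by s^n. A minimal sketch using the two exponents quoted above:

```python
# Power-law scaling: shrinking all linear dimensions by s multiplies a
# quantity of scaling order n by s**n. Exponents from the text:
# electromagnetic motor torque ~ 5th order, capacitive motor power ~ 0th.

def scaled(value, s, order):
    return value * s ** order

s = 0.1  # shrink a device tenfold in every dimension
torque_ratio = scaled(1.0, s, 5)      # falls by a factor of 100,000
cap_power_ratio = scaled(1.0, s, 0)   # unchanged at small scale
print(torque_ratio, cap_power_ratio)
```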

  11. Large-scale infrared scene projectors

    NASA Astrophysics Data System (ADS)

    Murray, Darin A.

    1999-07-01

    Large-scale infrared scene projectors typically have unique opto-mechanical characteristics associated with their application. This paper outlines two large-scale zoom lens assemblies with different environmental and package constraints. Various challenges and their respective solutions are discussed and presented.

  12. Size scale effect in cavitation erosion

    NASA Technical Reports Server (NTRS)

    Rao, P. V.; Rao, B. C.; Buckley, D. H.

    1982-01-01

    An overview and data analyses pertaining to cavitation erosion size scale effects are presented. The exponents n in the power-law relationship are found to vary from 1.7 to 4.9 for venturi and rotating disk devices, supporting the values reported in the literature. Suggestions are made for future studies to establish true scale effects.

  13. Towards a scale free electroweak baryogenesis

    NASA Astrophysics Data System (ADS)

    Ishikawa, Kazuya; Kitahara, Teppei; Takimoto, Masahiro

    2015-03-01

    We propose a new electroweak baryogenesis scenario in high-scale supersymmetric (SUSY) models. We consider a singlet extension of the minimal SUSY standard model introducing additional vectorlike multiplets. We show that the strongly first-order phase transition can occur at a high temperature comparable to the soft SUSY breaking scale. In addition, the proper amount of the baryon asymmetry of the Universe can be generated via the lepton number violating process in the vectorlike multiplet sector. The typical scale of our scenario, the soft SUSY breaking scale, can be any value. Thus our new electroweak baryogenesis scenario can be realized at arbitrary scales, and we call this scenario scale free electroweak baryogenesis. This soft SUSY breaking scale is determined by other requirements. If the soft SUSY breaking scale is O(10) TeV, our scenario is compatible with the observed mass of the Higgs boson and the constraints by electric dipole moment measurements and flavor experiments. Furthermore, the singlino can be a good candidate for dark matter.

  14. Scale dependence of effective media properties

    SciTech Connect

    Tidwell, V.C.; VonDoemming, J.D.; Martinez, K.

    1992-12-31

    For problems where media properties are measured at one scale and applied at another, scaling laws or models must be used in order to define effective properties at the scale of interest. The accuracy of such models will play a critical role in predicting flow and transport through the Yucca Mountain Test Site given the sensitivity of these calculations to the input property fields. Therefore, a research program has been established to gain a fundamental understanding of how properties scale, with the aim of developing and testing models that describe scaling behavior in a quantitative manner. Scaling of constitutive rock properties is investigated through physical experimentation involving the collection of suites of gas permeability data measured over a range of discrete scales. Also, various physical characteristics of property heterogeneity and the means by which the heterogeneity is measured and described are systematically investigated to evaluate their influence on scaling behavior. This paper summarizes the approach that is being taken toward this goal and presents the results of a scoping study that was conducted to evaluate the feasibility of the proposed research.

  15. Developmental Work Personality Scale: An Initial Analysis.

    ERIC Educational Resources Information Center

    Strauser, David R.; Keim, Jeanmarie

    2002-01-01

    The research reported in this article involved using the Developmental Model of Work Personality to create a scale to measure work personality, the Developmental Work Personality Scale (DWPS). Overall, results indicated that the DWPS may have potential applications for assessing work personality prior to client involvement in comprehensive…

  16. Some Problems of Industrial Scale-Up.

    ERIC Educational Resources Information Center

    Jackson, A. T.

    1985-01-01

    Scientific ideas of the biological laboratory are turned into economic realities in industry only after several problems are solved. Economics of scale, agitation, heat transfer, sterilization of medium and air, product recovery, waste disposal, and future developments are discussed using aerobic respiration as the example in the scale-up…

  17. Facilitating Internet-Scale Code Retrieval

    ERIC Educational Resources Information Center

    Bajracharya, Sushil Krishna

    2010-01-01

    Internet-Scale code retrieval deals with the representation, storage, and access of relevant source code from a large amount of source code available on the Internet. Internet-Scale code retrieval systems support common emerging practices among software developers related to finding and reusing source code. In this dissertation we focus on some…

  18. A Prestige Scale for Agricultural Occupations.

    ERIC Educational Resources Information Center

    Cosby, Arthur G.; Frank, Lianne M.

    A prestige scale for 50 agricultural and agriculturally related occupations was developed. The scale was constructed utilizing data from a mailed-questionnaire survey conducted during the spring semester of 1977 at 14 universities in the Southern United States. A 15% random sample of undergraduate majors in agriculture at these schools were…

  19. Designing the Nuclear Energy Attitude Scale.

    ERIC Educational Resources Information Center

    Calhoun, Lawrence; And Others

    1988-01-01

    Presents a refined method for designing a valid and reliable Likert-type scale to test attitudes toward the generation of electricity from nuclear energy. Discusses various tests of validity that were used on the nuclear energy scale. Reports results of administration and concludes that the test is both reliable and valid. (CW)

  20. Developing a News Media Literacy Scale

    ERIC Educational Resources Information Center

    Ashley, Seth; Maksl, Adam; Craft, Stephanie

    2013-01-01

    Using a framework previously applied to other areas of media literacy, this study developed and assessed a measurement scale focused specifically on critical news media literacy. Our scale appears to successfully measure news media literacy as we have conceptualized it based on previous research, demonstrated through assessments of content,…

  1. Development Tasks Supporting Scale for Fathers

    ERIC Educational Resources Information Center

    Unuvar, Perihan; Sahin, Hulya

    2011-01-01

    In present study, "development tasks supporting scale" (DTSS) for fathers has been developed. Study group consists of 205 fathers with children between ages 3-6 attending pre-school education institutions. Validity and reliability tests have been conducted on the 36-item trial form of the scale. For the validity test, expert views, explanatory and…

  2. OVERVIEW OF SCALE 6.2

    SciTech Connect

    Rearden, Bradley T; Dunn, Michael E; Wiarda, Dorothea; Celik, Cihangir; Bekar, Kursat B; Williams, Mark L; Peplow, Douglas E.; Perfetti, Christopher M; Gauld, Ian C; Wieselquist, William A; Lefebvre, Jordan P; Lefebvre, Robert A; Havluj, Frantisek; Skutnik, Steven; Dugan, Kevin

    2013-01-01

    SCALE is an industry-leading suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a plug-and-play framework that includes three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 provides several new capabilities and significant improvements in many existing features, especially with expanded CE Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. A brief overview of SCALE capabilities is provided with emphasis on new features for SCALE 6.2.

  3. 76 FR 50881 - Required Scale Tests

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-17

    ... documents in the Federal Register on January 20, 2011 (76 FR 3485) and on April 4, 2011 (76 FR 18348..., Packers and Stockyards Administration 9 CFR Part 201 RIN 0580-AB10 Required Scale Tests AGENCY: Grain... January 20, 2011, and on April 4, 2011, concerning required scale tests. Those documents defined...

  4. 76 FR 3485 - Required Scale Tests

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-20

    ... Notice of Proposed Rulemaking in the Federal Register on August 24, 2009, (74 FR 162) seeking public..., Packers and Stockyards Administration 9 CFR Part 201 RIN 0580-AB10 Required Scale Tests AGENCY: Grain... first of the two scale tests between January 1 and June 30 of the calendar year. The remaining...

  5. The Career-Related Parent Support Scale.

    ERIC Educational Resources Information Center

    Turner, Sherri L.; Alliman-Brissett, Annette; Lapan, Richard T.; Udipi, Sharanya; Ergun, Damla

    2003-01-01

    The authors describe the construction of the Career-Related Parent Support Scale and examine the validity of the scale scores within a sample of at-risk middle school adolescents. Four empirically distinct parent support factors were confirmed along A. Bandura's sources of self-efficacy information. Gender and ethnic differences in perceived…

  6. Beavers-Timberlawn Family Evaluation Scale.

    ERIC Educational Resources Information Center

    Brock, Gregory W.

    1986-01-01

    Critiques the Beavers-Timberlawn Family Evaluation Scale (BT), a measure designed to be used in conjunction with other scales to more fully identify the health/competence of a family. Reviews the assessment tool's strengths and limitations. Finds the BT to be an important contribution to clinical work. (ABB)

  7. Development of the Parent Irrational Beliefs Scale

    ERIC Educational Resources Information Center

    Kaya, Idris; Hamamci, Zeynep

    2011-01-01

    The aim of this study was to develop the scale to assess irrational beliefs of parents and test its psychometric properties. The research sample was comprised of parents whose children were attending primary schools. The results from the factor analysis were used to determine two factors in the scale: Expectations and Perfectionism. To examine the…

  8. Measurement Scales and Standard Systems in Psychology.

    ERIC Educational Resources Information Center

    Aftanas, Marion S.

    Most discussions of measurement theory are focused on "scales" of measurement, but it is not clear whether reference is made to the mechanisms of measurement or the metric information derived from measurement. This emphasis on scales in measurement theory has not always provided a meaningful or fruitful description of measurement activities in…

  9. Happiness Scale Interval Study. Methodological Considerations

    ERIC Educational Resources Information Center

    Kalmijn, W. M.; Arends, L. R.; Veenhoven, R.

    2011-01-01

    The Happiness Scale Interval Study deals with survey questions on happiness, using verbal response options, such as "very happy" and "pretty happy". The aim is to estimate what degrees of happiness are denoted by such terms in different questions and languages. These degrees are expressed in numerical values on a continuous [0,10] scale, which are…

  10. Getting to Scale: Evidence, Professionalism, and Community

    ERIC Educational Resources Information Center

    Slavin, Robert E.

    2016-01-01

    Evidence-based reform, in which proven programs are scaled up to reach many students, is playing an increasing role in American education. This article summarizes articles in this issue to explain how Reading Recovery has managed to sustain itself and go to scale over more than 30 years. It argues that Reading Recovery has succeeded due to a focus…

  11. Unification and large-scale structure.

    PubMed Central

    Laing, R A

    1995-01-01

    The hypothesis of relativistic flow on parsec scales, coupled with the symmetrical (and therefore subrelativistic) outer structure of extended radio sources, requires that jets decelerate on scales observable with the Very Large Array. The consequences of this idea for the appearances of FRI and FRII radio sources are explored. PMID:11607609

  12. Education, Wechsler's Full Scale IQ and "g."

    ERIC Educational Resources Information Center

    Colom, Roberto; Abad, Francisco J.; Garcia, Luis F.; Juan-Espinosa, Manuel

    2002-01-01

    Investigated whether average Full Scale IQ (FSIQ) differences can be attributed to "g" using the Spanish standardization sample of the Wechsler Adult Intelligence Scale III (WAIS III) (n=703 females and 666 men). Results support the conclusion that WAIS III FSIQ does not directly or exclusively measure "g" across the full range of population…

  13. Local magnitude scale for earthquakes in Turkey

    NASA Astrophysics Data System (ADS)

    Kılıç, T.; Ottemöller, L.; Havskov, J.; Yanık, K.; Kılıçarslan, Ö.; Alver, F.; Özyazıcıoğlu, M.

    2016-06-01

    Based on the earthquake event data accumulated by the Turkish National Seismic Network between 2007 and 2013, the local magnitude (Richter, Ml) scale is calibrated for Turkey and the close neighborhood. A total of 137 earthquakes (Mw > 3.5) are used for the Ml inversion for the whole country. Three Ml scales, whole country, East, and West Turkey, are developed, and the scales also include the station correction terms. Since the scales for the two parts of the country are very similar, it is concluded that a single Ml scale is suitable for the whole country. Available data indicate the new scale to suffer from saturation beyond magnitude 6.5. For this data set, the horizontal amplitudes are on average larger than vertical amplitudes by a factor of 1.8. The recommendation made is to measure Ml amplitudes on the vertical channels and then add the logarithm scale factor to have a measure of maximum amplitude on the horizontal. The new Ml is compared to Mw from EMSC, and there is almost a 1:1 relationship, indicating that the new scale gives reliable magnitudes for Turkey.
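    The record above notes that horizontal amplitudes exceed vertical amplitudes by an average factor of 1.8, and recommends measuring Ml on the vertical channel and then adding the logarithm of that scale factor. A minimal sketch of that adjustment, assuming a Richter-style form Ml = log10(A) + (distance correction), where the distance-correction term is the region-specific calibrated attenuation function supplied by the caller (the function name and signature here are hypothetical, not from the paper):

    ```python
    import math

    # Average horizontal/vertical amplitude ratio reported for the Turkish data set.
    H_OVER_V = 1.8

    def ml_from_vertical(amplitude_nm: float, distance_correction: float) -> float:
        """Local magnitude from a vertical-channel amplitude measurement.

        Adds log10(1.8) so the result approximates the maximum amplitude
        that would have been measured on the horizontal channels:
            Ml = log10(A_vertical) + log10(1.8) + distance_correction
        """
        return math.log10(amplitude_nm) + math.log10(H_OVER_V) + distance_correction
    ```

    For example, a 1000 nm vertical amplitude with a distance-correction term of 2.0 yields Ml = 3.0 + 0.26 + 2.0 ≈ 5.26, about a quarter of a magnitude unit above the uncorrected vertical estimate.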

  14. Scaling laws predict global microbial diversity

    PubMed Central

    Locey, Kenneth J.; Lennon, Jay T.

    2016-01-01

    Scaling laws underpin unifying theories of biodiversity and are among the most predictively powerful relationships in biology. However, scaling laws developed for plants and animals often go untested or fail to hold for microorganisms. As a result, it is unclear whether scaling laws of biodiversity will span evolutionarily distant domains of life that encompass all modes of metabolism and scales of abundance. Using a global-scale compilation of ∼35,000 sites and ∼5.6 × 10^6 species, including the largest ever inventory of high-throughput molecular data and one of the largest compilations of plant and animal community data, we show similar rates of scaling in commonness and rarity across microorganisms and macroscopic plants and animals. We document a universal dominance scaling law that holds across 30 orders of magnitude, an unprecedented expanse that predicts the abundance of dominant ocean bacteria. In combining this scaling law with the lognormal model of biodiversity, we predict that Earth is home to upward of 1 trillion (10^12) microbial species. Microbial biodiversity seems greater than ever anticipated yet predictable from the smallest to the largest microbiome. PMID:27140646

  15. Price Discrimination, Economies of Scale, and Profits.

    ERIC Educational Resources Information Center

    Park, Donghyun

    2000-01-01

    Demonstrates that it is possible for economies of scale to induce a price-discriminating monopolist to sell in an unprofitable market where the average cost always exceeds the price. States that higher profits in the profitable market caused by economies of scale may exceed losses incurred in the unprofitable market. (CMK)

  16. Further Validation of the Relational Ethics Scale.

    ERIC Educational Resources Information Center

    Hargrave, Terry D.; Bomba, Anne K.

    1993-01-01

    Conducted two studies to examine effects of marital status and age on Relational Ethics Scale. Study One indicated that scale was reliable and valid among single, never married young adults (n=162). Study Two examined differences between scores for this population and original normative sample. Findings suggest that ethical issues with…

  17. EXAMINATION OF SCALE-DEPENDENT DISPERSION COEFFICIENTS

    EPA Science Inventory

    Many hydrologists have observed that dispersion coefficients, when measured in the field, turn out to be scale-dependent. Recently, Guven, et al., (1983) presented a study which contains a basis for understanding the phenomenon of scale-dependent dispersion within a deterministic...

  18. Characterizing Soil Cracking at the Field Scale

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Physical characterization of the soil cracking has always been a major challenge in scaling soil water interaction to the field level. This scaling would allow for the soil water flow in the field to be modeled in two distinct pools: across the soil matrix and in preferential flows thus tackling maj...

  19. Reliability of Multi-Category Rating Scales

    ERIC Educational Resources Information Center

    Parker, Richard I.; Vannest, Kimberly J.; Davis, John L.

    2013-01-01

    The use of multi-category scales is increasing for the monitoring of IEP goals, classroom and school rules, and Behavior Improvement Plans (BIPs). Although they require greater inference than traditional data counting, little is known about the inter-rater reliability of these scales. This simulation study examined the performance of nine…

  20. Internal Structure of the Reflective Functioning Scale

    ERIC Educational Resources Information Center

    Taubner, Svenja; Horz, Susanne; Fischer-Kern, Melitta; Doering, Stephan; Buchheim, Anna; Zimmermann, Johannes

    2013-01-01

    The Reflective Functioning Scale (RFS) was developed to assess individual differences in the ability to mentalize attachment relationships. The RFS assesses mentalization from transcripts of the Adult Attachment Interview (AAI). A global score is given by trained coders on an 11-point scale ranging from antireflective to exceptionally reflective.…