A theory of forest dynamics: Spatially explicit models and issues of scale
NASA Technical Reports Server (NTRS)
Pacala, S.
1990-01-01
Good progress has been made in the first year of DOE grant #FG02-90ER60933. The purpose of the project is to develop and investigate models of forest dynamics that apply across a range of spatial scales. The grant is one third of a three-part project. The second third was funded by the NSF this year and is intended to provide the empirical data necessary to calibrate and test small-scale (less than or equal to 1000 ha) models. The final third was also funded this year (NASA), and will provide data to calibrate and test the large-scale features of the models.
Experimental and analytical studies of advanced air cushion landing systems
NASA Technical Reports Server (NTRS)
Lee, E. G. S.; Boghani, A. B.; Captain, K. M.; Rutishauser, H. J.; Farley, H. L.; Fish, R. B.; Jeffcoat, R. L.
1981-01-01
Several concepts are developed for air cushion landing systems (ACLS) which have the potential for improving performance characteristics (roll stiffness, heave damping, and trunk flutter) and reducing fabrication cost and complexity. After an initial screening, the following five concepts were evaluated in detail: damped trunk, filled trunk, compartmented trunk, segmented trunk, and roll feedback control. The evaluation was based on tests performed on scale models. An ACLS dynamic simulation developed earlier was updated so that it can be used to predict the performance of full-scale ACLS incorporating these refinements. The simulation was validated through scale-model tests. A full-scale ACLS based on the segmented trunk concept was fabricated and installed on the NASA ACLS test vehicle, where it is used to support advanced system development. A geometrically scaled model (one-third full scale) of the NASA test vehicle was fabricated and tested. This model, evaluated by means of a series of static and dynamic tests, is used to investigate scaling relationships between reduced- and full-scale models. The analytical model developed earlier is applied to simulate both the one-third-scale and the full-scale response.
A Study of a Mechanical Swimming Dolphin
NASA Astrophysics Data System (ADS)
Fang, Lilly; Maass, Daniel; Leftwich, Megan; Smits, Alexander
2007-11-01
A one-third scale dolphin model was constructed to investigate dolphin swimming hydrodynamics. Design and construction of the model were achieved using body coordinate data from the common dolphin (Delphinus delphis) to ensure geometric similarity. The front two-thirds of the model are rigid and stationary, while an external mechanism drives the rear third. This motion mimics the kinematics of dolphin swimming. Planar laser-induced fluorescence (PLIF) and particle image velocimetry (PIV) are used to study the hydrodynamics of the wake and to develop a vortex skeleton model.
DOT National Transportation Integrated Search
2000-03-01
One-third-scale Model Mobile Load Simulator Mk3 (MMLS3) tests were conducted on US 281 in Jacksboro, Texas, adjacent to the full-scale Texas Mobile Load Simulator (TxMLS). The objectives were to investigate the moisture susceptibility and relative pe...
NASA Technical Reports Server (NTRS)
Pennock, A. P.; Swift, G.; Marbert, J. A.
1975-01-01
Externally blown flap models were tested for noise and performance at one-fifth scale in a static facility and at one-tenth scale in a large acoustically-treated wind tunnel. The static tests covered two flap designs, conical and ejector nozzles, third-flap noise-reduction treatments, internal blowing, and flap/nozzle geometry variations. The wind tunnel variables were triple-slotted or single-slotted flaps, sweep angle, and solid or perforated third flap. The static test program showed the following noise reductions at takeoff: 1.5 PNdB due to treating the third flap; 0.5 PNdB due to blowing from the third flap; 6 PNdB at flyover and 4.5 PNdB in the critical sideline plane (30 deg elevation) due to installation of the ejector nozzle. The wind tunnel program showed a reduction of 2 PNdB in the sideline plane due to a forward speed of 43.8 m/s (85 kn). The best combination of noise reduction concepts reduced the sideline noise of the reference aircraft at constant field length by 4 PNdB.
The Pressure Available for Ground Cooling in Front of the Cowling of Air-cooled Airplane Engines
NASA Technical Reports Server (NTRS)
Stickle, George W; Joyner, Upshur T
1938-01-01
A study was made of the factors affecting the pressure available for ground cooling in front of a cowling. Most of the results presented were obtained with a set-up that was about one-third full scale. A number of isolated tests on four full-scale airplanes were made to determine the general applicability of the model results. The full-scale tests indicated that the model results may be applied qualitatively to full-scale design and quantitatively as a first approximation of the front pressure available for ground cooling.
Confirmatory Factor Analysis of the WISC-III with Child Psychiatric Inpatients.
ERIC Educational Resources Information Center
Tupa, David J.; Wright, Margaret O'Dougherty; Fristad, Mary A.
1997-01-01
Factor models of the Wechsler Intelligence Scale for Children-Third Edition (WISC-III) for one, two, three, and four factors were tested using confirmatory factor analysis with a sample of 177 child psychiatric inpatients. The four-factor model proposed in the WISC-III manual provided the best fit to the data. (SLD)
Posttraumatic Stress Disorder: Diagnostic Data Analysis by Data Mining Methodology
Marinić, Igor; Supek, Fran; Kovačić, Zrnka; Rukavina, Lea; Jendričko, Tihana; Kozarić-Kovačić, Dragica
2007-01-01
Aim: To use data mining methods in assessing diagnostic symptoms in posttraumatic stress disorder (PTSD).
Methods: The study included 102 inpatients: 51 with a diagnosis of PTSD and 51 with psychiatric diagnoses other than PTSD. Several models for predicting diagnosis were built using the random forest classifier, one of the intelligent data analysis methods. The first prediction model was based on a structured psychiatric interview; the second on psychiatric scales (Clinician-Administered PTSD Scale, CAPS; Positive and Negative Syndrome Scale, PANSS; Hamilton Anxiety Scale, HAMA; and Hamilton Depression Scale, HAMD); and the third on combined data from both sources. Additional models placing more weight on one of the classes (PTSD or non-PTSD) were trained, and prototypes representing subgroups in the classes were constructed.
Results: The first model was the most relevant for distinguishing the PTSD diagnosis from comorbid diagnoses such as neurotic, stress-related, and somatoform disorders. The second model pointed out the scores obtained on the CAPS and additional PANSS scales, together with comorbid diagnoses of neurotic, stress-related, and somatoform disorders, as most relevant. In the third model, psychiatric scales and the same group of comorbid diagnoses were found to be most relevant. Specialized models placing more weight on either the PTSD or non-PTSD class were able to better predict their targeted diagnoses at some expense of overall accuracy. Class subgroup prototypes mainly differed in values achieved on psychiatric scales and frequency of comorbid diagnoses. The important attributes of the data based on the structured psychiatric interview were the current symptoms and conditions, such as presence and degree of disability, hospitalizations, and duration of military service during the war, while CAPS total scores, symptoms of increased arousal, and PANSS additional criteria scores were indicated as relevant from the psychiatric symptom scales.
Conclusion: Our work demonstrated the applicability of data mining methods for the analysis of structured psychiatric data for PTSD. In all models, the group of comorbid diagnoses, including neurotic, stress-related, and somatoform disorders, surfaced as important. PMID:17436383
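The study's random forest approach can be sketched with scikit-learn. The data below are synthetic stand-ins (the real interview items and scale scores are not public), and the 3:1 class weight is an arbitrary illustrative choice mirroring the study's "specialized models that place more weight on one class", not the paper's actual setting.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in: 102 patients, four scale-like features
# (think CAPS/PANSS/HAMA/HAMD totals); labels 1 = PTSD, 0 = other diagnosis.
X = rng.normal(size=(102, 4))
y = np.array([1] * 51 + [0] * 51)
X[y == 1] += 0.8  # give the PTSD class some separable signal

# A plain random forest, and one placing more weight on the PTSD class.
plain = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
weighted = RandomForestClassifier(
    n_estimators=200, class_weight={0: 1.0, 1: 3.0}, random_state=0
).fit(X, y)

# Feature importances play the role of the "most relevant attributes"
# reported in the abstract.
importances = plain.feature_importances_
```

The class-weighted forest trades some overall accuracy for better sensitivity on its targeted class, which is the behaviour the abstract describes for the specialized models.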
Longitudinal Changes in Intellectual Development in Children with Fragile X Syndrome
ERIC Educational Resources Information Center
Hall, Scott S.; Burns, David D.; Lightbody, Amy A.; Reiss, Allan L.
2008-01-01
Structural equation modeling (SEM) was used to examine the development of intellectual functioning in 145 school-age pairs of siblings. Each pair included one child with Fragile X syndrome (FXS) and one unaffected sibling. All pairs of children were evaluated on the Wechsler Intelligence Scale for Children-Third Edition (WISC-III) at time 1, and 80…
Zonal harmonic model of Saturn's magnetic field from Voyager 1 and 2 observations
NASA Technical Reports Server (NTRS)
Connerney, J. E. P.; Ness, N. F.; Acuna, M. H.
1982-01-01
An analysis of the magnetic field of Saturn is presented which takes into account both the Voyager 1 and 2 vector magnetic field observations. The analysis is based on the traditional spherical harmonic expansion of a scalar potential to derive the magnetic field within 8 Saturn radii. A third-order zonal harmonic model fitted to Voyager 1 and 2 observations is found to be capable of predicting the magnetic field characteristics at one encounter based on those observed at another, unlike models including dipole and quadrupole terms only. The third-order model is noted to lead to significantly enhanced polar surface field intensities with respect to dipole models, and probably represents the axisymmetric part of a complex dynamo field.
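The zonal-harmonic construction described here admits a compact numerical sketch: the field is the gradient of a scalar potential expanded in Legendre polynomials, so the radial component of a third-order axisymmetric model is a three-term sum. The coefficients below are round illustrative numbers, not the published Saturn (Z3) values.

```python
import numpy as np

def b_radial(r, theta, g=(21000.0, 1600.0, 2700.0), a=1.0):
    """Radial field of a third-order zonal harmonic model (in nT).

    From the scalar potential V = a * sum_n (a/r)**(n+1) * g_n0 * P_n(cos theta),
    the radial component is B_r = sum_n (n+1) * (a/r)**(n+2) * g_n0 * P_n(cos theta).

    r     : radial distance in planetary radii
    theta : colatitude in radians
    g     : (g10, g20, g30) zonal coefficients -- illustrative placeholders,
            not the published Z3 values
    """
    c = np.cos(theta)
    # Legendre polynomials P1, P2, P3
    p = [c, 0.5 * (3.0 * c**2 - 1.0), 0.5 * (5.0 * c**3 - 3.0 * c)]
    return sum((n + 2) * (a / r) ** (n + 3) * g[n] * p[n] for n in range(3))
```

At the pole (theta = 0) all three terms add with weights 2, 3, and 4, which is why a third-order fit enhances the polar surface field relative to a pure dipole, as the abstract notes.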
NASA Technical Reports Server (NTRS)
Anderson, Loren A.; Armitage, Pamela Kay
1993-01-01
The 1992-1993 senior Aerospace Engineering Design class continued work on the post-landing configurations for the Assured Crew Return Vehicle. The Assured Crew Return Vehicle will be permanently docked to the space station, fulfilling NASA's commitment of Assured Crew Return Capability in the event of an accident or illness aboard the space station. The objective of the project was to give the Assured Crew Return Vehicle Project Office data to feed into their feasibility studies. Three design teams were given the task of developing models with dynamically and geometrically scaled characteristics. Groups one and two combined efforts to design a one-third-scale model of the Russian Soyuz TM Descent Module and an on-board flotation system. This model was designed to determine the flotation characteristics and test the effects of a rigid flotation and orientation system. Group three designed a portable water-wave test facility to be located on campus. Because of additional funding from Thiokol Corporation, testing of the Soyuz model and flotation systems took place at the Offshore Technology Research Center. Universities Space Research Association has been studying the use of small expendable launch vehicles for missions which cost less than 200 million dollars. The Crusader 2B, which consists of the original Spartan first and second stages with an additional Spartan second stage and the Minuteman III upper stage, is being considered for this task. University of Central Florida project accomplishments include an analysis of launch techniques, a modeling technique to determine flight characteristics, and input into the redesign of an existing mobile rail launch platform.
The evolution of Zipf's law indicative of city development
NASA Astrophysics Data System (ADS)
Chen, Yanguang
2016-02-01
Zipf's law of city-size distributions can be expressed by three types of mathematical models: a one-parameter form, a two-parameter form, and a three-parameter form. The one-parameter model and one of the two-parameter models are familiar to urban scientists. However, the three-parameter model and the other type of two-parameter model have attracted little attention. This paper is devoted to exploring the conditions and scopes of application of these Zipf models. By mathematical reasoning and empirical analysis, new discoveries are made as follows. First, if the size distribution of cities in a geographical region cannot be described with the one- or two-parameter model, it may be characterized by the three-parameter model with a scaling factor and a scale-translational factor. Second, all these Zipf models can be unified by hierarchical scaling laws based on cascade structure. Third, the patterns of city-size distributions seem to evolve from the three-parameter mode to the two-parameter mode, and then to the one-parameter mode. Four-year census data of Chinese cities are employed to verify the three-parameter Zipf's law and the corresponding hierarchical structure of rank-size distributions. This study sheds light on the scientific laws of social systems and the nature of urban development.
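The three nested model forms can be written as one function. The parameter names below (C for the proportionality constant, q for the scaling exponent, alpha for the scale-translational factor) are our labels chosen to mirror the abstract, not necessarily the paper's own notation.

```python
import numpy as np

def zipf_size(rank, C, q=1.0, alpha=0.0):
    """Generalized Zipf rank-size rule for city sizes.

    one-parameter form:    P(r) = C / r                (q = 1, alpha = 0)
    two-parameter form:    P(r) = C / r**q             (alpha = 0)
    three-parameter form:  P(r) = C / (r + alpha)**q
    """
    rank = np.asarray(rank, dtype=float)
    return C / (rank + alpha) ** q
```

Setting alpha = 0 and then q = 1 recovers the classical one-parameter law, matching the evolutionary sequence (three- to two- to one-parameter mode) described in the abstract.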
ERIC Educational Resources Information Center
Lange, Rael T.; Iverson, Grant L.
2008-01-01
This study evaluated the concurrent validity of estimated Wechsler Adult Intelligence Scales-Third Edition (WAIS-III) index scores using various one- and two-subtest combinations. Participants were the Canadian WAIS-III standardization sample. Using all possible one- and two-subtest combinations, an estimated Verbal Comprehension Index (VCI), an…
The Distribution of Scaled Scores and Possible Floor Effects on the WISC-III and WAIS-III
ERIC Educational Resources Information Center
Whitaker, Simon; Wood, Christopher
2008-01-01
Objective: It has been suggested that, as the Wechsler Adult Intelligence Scale-Third Edition (WAIS-III) and the Wechsler Intelligence Scale for Children-Third Edition (WISC-III) give a scaled score of one even if a client scores a raw score of zero, these assessments may have a hidden floor effect at low IQ levels. The study looked for…
A multi-scale approach to designing therapeutics for tuberculosis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linderman, Jennifer J.; Cilfone, Nicholas A.; Pienaar, Elsje
2015-04-20
Approximately one third of the world’s population is infected with Mycobacterium tuberculosis. Limited information about how the immune system fights M. tuberculosis and what constitutes protection from the bacteria impacts our ability to develop effective therapies for tuberculosis. We present an in vivo systems biology approach that integrates data from multiple model systems and over multiple length and time scales into a comprehensive multi-scale and multi-compartment view of the in vivo immune response to M. tuberculosis. Lastly, we describe computational models that can be used to study (a) immunomodulation with the cytokines tumor necrosis factor and interleukin 10, (b) oral and inhaled antibiotics, and (c) the effect of vaccination.
NASA Astrophysics Data System (ADS)
Durang, Xavier; Henkel, Malte
2017-12-01
Motivated by an analogy with the spherical model of a ferromagnet, the three Arcetri models are defined. They present new universality classes, either for the growth of interfaces or for lattice gases, distinct from the common Edwards-Wilkinson and Kardar-Parisi-Zhang universality classes. Their non-equilibrium evolution can be studied by the exact computation of their two-time correlators and responses. In both interpretations, the first model has a critical point in any dimension and shows simple ageing at and below criticality. The exact universal exponents are found. The second and third models are solved at zero temperature, in one dimension, where both show logarithmic sub-ageing, of which several distinct types are identified. Physically, the second model describes a lattice gas and the third model describes interface growth. A clear physical picture of the successive time and length scales of the sub-ageing process emerges.
Explaining social class differences in depression and well-being.
Stansfeld, S A; Head, J; Marmot, M G
1998-01-01
Work characteristics, including skill discretion and decision authority, explain most of the socioeconomic status gradient in well-being and depression in middle-aged British civil servants from the Whitehall II Study, London. Social support explained about one-third of the gradient, life events and material difficulties less than one-third. Socioeconomic status was measured by employment grade. Work characteristics were based on the Karasek model, social support was measured by the Close Persons Questionnaire, depression by the General Health Questionnaire and well-being by the Affect Balance Scale. Despite a small contribution from social selective factors measured by upward mobility, the psychosocial work environment explained most of the cross-sectional socioeconomic status gradient in well-being and depression.
Leadership: validation of a self-report scale.
Dussault, Marc; Frenette, Eric; Fernet, Claude
2013-04-01
The aim of this paper was to propose and test the factor structure of a new self-report questionnaire on leadership. A sample of 373 school principals in the Province of Quebec, Canada, completed the initial 46-item version of the questionnaire. In order to obtain a questionnaire of minimal length, a four-step procedure was adopted. First, item analysis was performed using Classical Test Theory. Second, Rasch analysis was used to identify non-fitting or overlapping items. Third, a confirmatory factor analysis (CFA) using structural equation modelling was performed on the 21 remaining items to verify the factor structure of the scale. Results show that the model with a single third-order dimension (leadership), two second-order dimensions (transactional and transformational leadership), and one first-order dimension (laissez-faire leadership) provides a good fit to the data. Finally, invariance of the factor structure was assessed with a second sample of 222 vice-principals in the Province of Quebec, Canada. This model is in agreement with the theoretical model developed by Bass (1985), upon which the questionnaire is based.
Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study
Han, Jianda; Yin, Peng; He, Yuqing; Gu, Feng
2016-01-01
One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP) method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method. PMID:26891298
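Classical point-to-point ICP, which the paper enhances with octree-based hierarchical search and a heuristic escape scheme, alternates nearest-neighbour matching with a closed-form (SVD/Kabsch) rigid-transform solve. A minimal NumPy sketch of the baseline algorithm only (no octree, hierarchy, early warning, or escape scheme):

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration.

    src, dst : (N, 3) and (M, 3) point clouds.
    Returns R (3x3) and t (3,) that best align src to its nearest
    neighbours in dst, via the SVD (Kabsch) solution.
    """
    # 1. nearest-neighbour correspondences (brute force; this is the
    #    step the paper accelerates with an octree)
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # 2. closed-form rigid transform between the matched sets
    ps, pm = src.mean(0), matched.mean(0)
    H = (src - ps).T @ (matched - pm)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = pm - R @ ps
    return R, t

def icp(src, dst, iters=20):
    """Iterate icp_step, applying the running transform to src."""
    cur = src.copy()
    for _ in range(iters):
        R, t = icp_step(cur, dst)
        cur = cur @ R.T + t
    return cur
```

With a good initial guess this converges to a local minimum; the paper's early-warning and escape mechanisms exist precisely because the plain iteration above can get stuck in such minima.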
Quantifying Spot Size Reduction of a 1.8 kA Electron Beam for Flash Radiography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burris-Mog, Trevor John; Moir, David C.
2018-03-14
The spot size of Axis-I at the Dual Axis Radiographic Hydrodynamic Test facility was reduced by 15.5% by including a small-diameter drift tube that acts to aperture the outer diameter of the electron beam. Comparing the measured values to both analytic calculations and results from a particle-in-cell model shows that one-third to one-half of the spot size reduction is due to a drop in beam emittance. We infer that one-half to two-thirds of the spot-size reduction is due to a reduction in beam-target interactions. Sources of emittance growth and the scaling of the final focal spot size with emittance and solenoid aberrations are also presented.
Heterogeneity and scaling land-atmospheric water and energy fluxes in climate systems
NASA Technical Reports Server (NTRS)
Wood, Eric F.
1993-01-01
The effects of small-scale heterogeneity in land surface characteristics on the large-scale fluxes of water and energy in the land-atmosphere system have become a central focus of many climatology research experiments. The acquisition of high-resolution land surface data through remote sensing and intensive land-climatology field experiments (like HAPEX and FIFE) has provided data to investigate the interactions between microscale land-atmosphere interactions and macroscale models. One essential research question is how to account for small-scale heterogeneities and whether 'effective' parameters can be used in the macroscale models. To address this question of scaling, three modeling experiments were performed and are reviewed in the paper. The first is concerned with the aggregation of parameters and inputs for a terrestrial water and energy balance model. The second experiment analyzed the scaling behavior of hydrologic responses during rain events and between rain events. The third experiment compared the hydrologic responses from distributed models with a lumped model that uses spatially constant inputs and parameters. The results show that the patterns of small-scale variations can be represented statistically if the scale is larger than a representative elementary area scale, which appears to be about 2-3 times the correlation length of the process. For natural catchments this appears to be about 1-2 sq km. The results concerning distributed versus lumped representations are more complicated: when the processes are nonlinear, lumping results in biases; otherwise a one-dimensional model based on 'equivalent' parameters provides quite good results. Further research is needed to fully understand these conditions.
NASA Technical Reports Server (NTRS)
Blunck, R. D.; Krantz, D. E.
1974-01-01
Activities and data gathered in the room-temperature stretch forming of one-third-scale external tank bulkhead gores for the space shuttle, together with a tooling design and production cost study, are reported. The following study phases are described: (1) the stretch forming of three approximately one-third-scale external tank dome gores from single sheets of 2219-T37 aluminum alloy; (2) the design of a full-scale production die, including a determination of tooling requirements; and (3) the determination of cost per gore at the required production rates, including manufacturing, packaging, and shipping.
Airframe noise prediction evaluation
NASA Technical Reports Server (NTRS)
Yamamoto, Kingo J.; Donelson, Michael J.; Huang, Shumei C.; Joshi, Mahendra C.
1995-01-01
The objective of this study is to evaluate the accuracy and adequacy of current airframe noise prediction methods using available airframe noise measurements from tests of a narrow body transport (DC-9) and a wide body transport (DC-10) in addition to scale model test data. General features of the airframe noise from these aircraft and models are outlined. The results of the assessment of two airframe prediction methods, Fink's and Munson's methods, against flight test data of these aircraft and scale model wind tunnel test data are presented. These methods were extensively evaluated against measured data from several configurations including clean, slat deployed, landing gear-deployed, flap deployed, and landing configurations of both DC-9 and DC-10. They were also assessed against a limited number of configurations of scale models. The evaluation was conducted in terms of overall sound pressure level (OASPL), tone corrected perceived noise level (PNLT), and one-third-octave band sound pressure level (SPL).
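The levels used in this kind of assessment are related by a standard identity: the overall sound pressure level is the energy (mean-square pressure) sum of the one-third-octave band levels. A small sketch of that relation:

```python
import numpy as np

def oaspl(band_spl_db):
    """Overall sound pressure level from one-third-octave band SPLs.

    Band levels add on an energy (mean-square pressure) basis:
        OASPL = 10 * log10( sum_i 10**(SPL_i / 10) )
    """
    band_spl_db = np.asarray(band_spl_db, dtype=float)
    return 10.0 * np.log10(np.sum(10.0 ** (band_spl_db / 10.0)))
```

Two equal bands combine to 3 dB above either one; a band 10 dB below the loudest adds well under half a decibel, which is why a few dominant bands control the OASPL.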
Code of Federal Regulations, 2010 CFR
2010-01-01
... inch in diameter; (b) Splits when onions with two or more hearts are not practically covered by one or...; (g) Sunburn when more than 33 percent of the onions in a lot have a medium green color on one-third... than one fleshy scale, or when any bruise breaks a fleshy scale; and, (n) Translucent scales when more...
Large-scale model quality assessment for improving protein tertiary structure prediction.
Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2015-06-15
Sampling structural models and ranking them are the two major challenges of protein structure prediction. Traditional protein structure prediction methods generally use one or a few quality assessment (QA) methods to select the best-predicted models, which cannot consistently select relatively better models or rank a large number of models well. Here, we develop a novel large-scale model QA method in conjunction with model clustering to rank and select protein structural models. It applied an unprecedented 14 model QA methods to generate consensus model rankings, followed by model refinement based on model combination (i.e. averaging). Our experiment demonstrates that the large-scale model QA approach is more consistent and robust in selecting models of better quality than any individual QA method. Our method was blindly tested during the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM group. It was officially ranked third out of all 143 human and server predictors according to the total scores of the first models predicted for 78 CASP11 protein domains, and second according to the total scores of the best of the five models predicted for these domains. MULTICOM's outstanding performance in the extremely competitive 2014 CASP11 experiment proves that our large-scale QA approach together with model clustering is a promising solution to one of the two major problems in protein structure modeling. The web server is available at: http://sysbio.rnet.missouri.edu/multicom_cluster/human/. © The Author 2015. Published by Oxford University Press.
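The consensus-ranking idea (combine many QA methods' scores, then rank models by the combined score) can be sketched in a few lines. The min-max normalization below is our illustrative choice to keep any single method from dominating; the paper's exact combination scheme may differ.

```python
import numpy as np

def consensus_rank(scores):
    """Rank models by the average of several QA methods' scores.

    scores : (n_methods, n_models) array of per-method quality scores,
             higher = better.  Each method's scores are rescaled to
             [0, 1] before averaging (illustrative normalization).
    Returns model indices ordered from best to worst.
    """
    s = np.asarray(scores, dtype=float)
    lo = s.min(axis=1, keepdims=True)
    hi = s.max(axis=1, keepdims=True)
    norm = (s - lo) / np.where(hi > lo, hi - lo, 1.0)
    # argsort of the negated mean puts the best model first
    return np.argsort(-norm.mean(axis=0))
```

Averaging over many imperfect scorers is what gives the consensus its robustness: a model must look good to most methods, not just one, to be ranked first.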
NASA Astrophysics Data System (ADS)
Canuto, V. M.
1997-06-01
We present a model to treat fully compressible, nonlocal, time-dependent turbulent convection in the presence of large-scale flows and arbitrary density stratification. The problem is of interest, for example, in stellar pulsation problems, especially since accurate helioseismological data are now available, as well as in accretion disks. Owing to the difficulties in formulating an analytical model, it is not surprising that most of the work has gone into numerical simulations. At present, there are three analytical models: one by the author, which leads to a rather complicated set of equations; one by Yoshizawa; and one by Xiong. The latter two use a Reynolds stress model together with phenomenological relations with adjustable parameters whose determination on the basis of terrestrial flows does not guarantee that they may be extrapolated to astrophysical flows. Moreover, all third-order moments representing nonlocality are taken to be of the down gradient form (which in the case of the planetary boundary layer yields incorrect results). In addition, correlations among pressure, temperature, and velocities are often neglected or treated as in the incompressible case. To avoid phenomenological relations, we derive the full set of dynamic, time-dependent, nonlocal equations to describe all mean variables, second- and third-order moments. Closures are carried out at the fourth order following standard procedures in turbulence modeling. The equations are collected in an Appendix. 
Some of the novelties of the treatment are (1) new flux conservation law that includes the large-scale flow, (2) increase of the rate of dissipation of turbulent kinetic energy owing to compressibility and thus (3) a smaller overshooting, and (4) a new source of mean temperature due to compressibility; moreover, contrary to some phenomenological suggestions, the adiabatic temperature gradient depends only on the thermal pressure, while in the equation for the large-scale flow, the physical pressure is the sum of thermal plus turbulent pressure.
Numerical method based on the lattice Boltzmann model for the Fisher equation.
Yan, Guangwu; Zhang, Jianying; Dong, Yinfeng
2008-06-01
In this paper, a lattice Boltzmann model for the Fisher equation is proposed. First, the Chapman-Enskog expansion and the multiscale time expansion are used to describe the higher-order moments of the equilibrium distribution functions and a series of partial differential equations on different time scales. Second, the modified partial differential equation of the Fisher equation with the higher-order truncation error is obtained. Third, a comparison between the numerical results of the lattice Boltzmann model and the exact solution is given. The numerical results agree well with the classical ones.
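A minimal D1Q2 lattice Boltzmann sketch for the Fisher equation u_t = D u_xx + r u(1 - u) illustrates the ingredients (equilibrium, BGK relaxation, reaction source, streaming). This is a toy baseline in lattice units (dx = dt = 1, D = tau - 1/2), not the paper's exact higher-order scheme.

```python
import numpy as np

def fisher_lbm(nx=200, steps=500, tau=1.0, r=0.05):
    """D1Q2 lattice Boltzmann solver for u_t = D u_xx + r u (1 - u).

    Two populations with velocities +1 and -1, equilibrium u/2 each;
    the logistic reaction term is split equally between them.
    Periodic boundaries; starts from a step front (u = 1 on the left).
    """
    u = np.where(np.arange(nx) < nx // 10, 1.0, 0.0)   # step initial front
    f = np.stack([u / 2.0, u / 2.0])                   # populations for c = +1, -1
    for _ in range(steps):
        u = f.sum(axis=0)
        feq = np.stack([u / 2.0, u / 2.0])
        src = 0.5 * r * u * (1.0 - u)                  # reaction, split per direction
        f = f - (f - feq) / tau + src                  # BGK collision + source
        f[0] = np.roll(f[0], 1)                        # stream right-mover
        f[1] = np.roll(f[1], -1)                       # stream left-mover
    return f.sum(axis=0)
```

With tau = 1 the scheme reduces to a centred average plus the logistic source, so u stays in [0, 1] and a travelling front invades the empty region, the qualitative behaviour the Fisher equation is known for.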
Using LISREL to Evaluate Measurement Models and Scale Reliability.
ERIC Educational Resources Information Center
Fleishman, John; Benson, Jeri
1987-01-01
LISREL program was used to examine measurement model assumptions and to assess reliability of Coopersmith Self-Esteem Inventory for Children, Form B. Data on 722 third-sixth graders from over 70 schools in large urban school district were used. LISREL program assessed (1) nature of basic measurement model for scale, (2) scale invariance across…
ERIC Educational Resources Information Center
Callens, Andy M.; Atchison, Timothy B.; Engler, Rachel R.
2009-01-01
Instructions for the Matrix Reasoning Test (MRT) of the Wechsler Adult Intelligence Scale-Third Edition were modified by explicitly stating that the subtest was untimed or that a per-item time limit would be imposed. The MRT was administered within one of four conditions: with (a) standard administration instructions, (b) explicit instructions…
ERIC Educational Resources Information Center
Rijmen, Frank; Jeon, Minjeong; von Davier, Matthias; Rabe-Hesketh, Sophia
2014-01-01
Second-order item response theory models have been used for assessments consisting of several domains, such as content areas. We extend the second-order model to a third-order model for assessments that include subdomains nested in domains. Using a graphical model framework, it is shown how the model does not suffer from the curse of…
Application Perspective of 2D+SCALE Dimension
NASA Astrophysics Data System (ADS)
Karim, H.; Rahman, A. Abdul
2016-09-01
Different applications or users need different abstractions of spatial models, dimensionalities, and dataset specifications due to variations in the required analysis and output. Various approaches, data models and data structures are now available to support most current application models in Geographic Information Systems (GIS). One focus trend in the GIS multi-dimensional research community is the implementation of a scale dimension with spatial datasets to suit various scale-dependent application needs. In this paper, 2D spatial datasets scaled up along a third dimension are addressed as 2D+scale (or 3D-scale) datasets. Various data structures, data models, approaches, schemas, and formats have been proposed to support a variety of applications and dimensionalities in 3D topology; however, only a few of them consider scale as the targeted dimension. Where the scale dimension is concerned, the implementation approach can be either multi-scale or vario-scale (with any available data structure and format), depending on application requirements (topology, semantics, and function). This paper discusses current and new potential applications that could be integrated upon the 3D-scale dimension approach. The previous and current work on the scale dimension, the requirements to be preserved for any given application, implementation issues, and future potential applications form the major discussion of this paper.
Prediction of flyover jet noise spectra from static tests
NASA Technical Reports Server (NTRS)
Michel, U.; Michalke, A.
1981-01-01
A scaling law is derived for predicting the flyover noise spectra of a single-stream, shock-free circular jet from static experiments. The theory is based on the Lighthill approach to jet noise. Density terms are retained to include the effects of jet heating. The influence of flight on the turbulent flow field is accounted for by an experimentally supported similarity assumption. The resulting scaling laws for the difference between the one-third-octave spectra and the overall sound pressure level agree very well with flyover experiments on a jet engine and with wind-tunnel experiments on a heated model jet.
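The one-third-octave bands in which such static and flyover spectra are compared follow a fixed geometric spacing. As a minimal sketch, the base-2 convention f_c = 1000 · 2^(n/3) Hz is one common choice; exact band definitions follow standards such as IEC 61260 and are an assumption here, not taken from the abstract.

```python
# Sketch: nominal one-third-octave band center frequencies (base-2
# convention, band n = 0 centered at 1 kHz). Exact band edges per
# IEC 61260 are assumed, not given in the source.
def third_octave_centers(n_low, n_high):
    """Center frequencies for bands n_low..n_high around 1 kHz (band n=0)."""
    return [1000.0 * 2.0 ** (n / 3.0) for n in range(n_low, n_high + 1)]

centers = third_octave_centers(-3, 3)   # 500 Hz .. 2 kHz
ratio = centers[1] / centers[0]         # adjacent bands differ by 2**(1/3)
```

Adjacent band centers differ by a factor of 2^(1/3) ≈ 1.26, which is why a one-third-octave spectrum has three bands per doubling of frequency.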
Psychometric Properties of the Theory of Mind Assessment Scale in a Sample of Adolescents and Adults
Bosco, Francesca M.; Gabbatore, Ilaria; Tirassa, Maurizio; Testa, Silvia
2016-01-01
This research aimed to evaluate the psychometric properties of the Theory of Mind Assessment Scale (Th.o.m.a.s.). Th.o.m.a.s. is a semi-structured interview meant to evaluate a person's Theory of Mind (ToM). It is composed of several questions organized in four scales, each focusing on one of the areas of knowledge in which such a faculty may manifest itself: Scale A (I-Me) investigates first-order first-person ToM; Scale B (Other-Self) investigates third-person ToM from an allocentric perspective; Scale C (I-Other) again investigates third-person ToM, but from an egocentric perspective; and Scale D (Other-Me) investigates second-order ToM. The psychometric properties of Th.o.m.a.s. were evaluated in a sample of 156 healthy persons: 80 preadolescents and adolescents (aged 11–17 years, 42 females) and 76 adults (aged 20–67 years, 35 females). Th.o.m.a.s. scores show good inter-rater agreement and internal consistency; the scores increase with age. Evidence of criterion validity was found, as Scale B scores were correlated with those of an independent instrument for the evaluation of ToM, the Strange Stories task. Confirmatory factor analysis (CFA) showed good fit of the four-factor theoretical model to the data, although the four factors were highly correlated. For each of the four scales, Rasch analyses showed that, with few exceptions, items fitted the partial credit model and their functioning was invariant for gender and age. The results of this study, along with those of previous research with clinical samples, show that Th.o.m.a.s. is a promising instrument for assessing ToM in different populations. PMID:27242563
Dynamic Smagorinsky model on anisotropic grids
NASA Technical Reports Server (NTRS)
Scotti, A.; Meneveau, C.; Fatica, M.
1996-01-01
Large Eddy Simulation (LES) of complex-geometry flows often involves highly anisotropic meshes. To examine the performance of the dynamic Smagorinsky model on such grids in a controlled fashion, simulations of forced isotropic turbulence are performed using highly anisotropic discretizations. The resulting model coefficients are compared with a theoretical prediction (Scotti et al., 1993). Two extreme cases are considered: pancake-like grids, for which two directions are poorly resolved compared with the third, and pencil-like grids, where one direction is poorly resolved compared with the other two. For pancake-like grids the dynamic model yields the results expected from the theory (an increasing coefficient with increasing aspect ratio), whereas for pencil-like grids the dynamic model does not agree with the theoretical prediction (with detrimental effects only on the smallest resolved scales). A possible explanation of the departure is attempted, and it is shown that the problem may be circumvented by using an isotropic test filter at larger scales. Overall, all models considered give good large-scale results, confirming the general robustness of the dynamic and eddy-viscosity models. In all cases, however, the predictions were poor for scales smaller than that of the worst-resolved direction.
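The theoretical prediction being tested is an equivalent filter width for anisotropic cells. A minimal sketch of the Scotti et al. (1993) correction follows; the coefficient 4/27 is reproduced from the literature and should be treated as an assumption of this sketch rather than a quotation from the abstract.

```python
import math

# Sketch: equivalent Smagorinsky filter width for an anisotropic grid
# cell d1 x d2 x d3 (Scotti et al. 1993 form; coefficient assumed).
def effective_filter_width(d1, d2, d3):
    dmax = max(d1, d2, d3)
    # a1, a2: the two aspect ratios (<= 1) relative to the largest spacing
    a1, a2 = sorted((d1 / dmax, d2 / dmax, d3 / dmax))[:2]
    la1, la2 = math.log(a1), math.log(a2)
    f = math.cosh(math.sqrt((4.0 / 27.0) * (la1 * la1 - la1 * la2 + la2 * la2)))
    return (d1 * d2 * d3) ** (1.0 / 3.0) * f

w_iso = effective_filter_width(1.0, 1.0, 1.0)     # isotropic cell: f = 1
w_aniso = effective_filter_width(1.0, 1.0, 8.0)   # f > 1: wider than geometric mean
```

For an isotropic cell the correction factor is one and the width reduces to the usual geometric mean; with increasing aspect ratio the factor grows, matching the "increasing coefficient" behavior the abstract reports for pancake-like grids.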
Research activities at the Center for Modeling of Turbulence and Transition
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing
1993-01-01
The main research activities at the Center for Modeling of Turbulence and Transition (CMOTT) are described. The research objective of CMOTT is to improve and/or develop turbulence and transition models for propulsion systems. The flows of interest in propulsion systems can be both compressible and incompressible, three dimensional, bounded by complex wall geometries, chemically reacting, and involve 'bypass' transition. The most relevant turbulence and transition models for the above flows are one- and two-equation eddy viscosity models, Reynolds stress algebraic- and transport-equation models, pdf models, and multiple-scale models. All these models are classified as one-point closure schemes since only one-point (in time and space) turbulent correlations, such as second moments (Reynolds stresses and turbulent heat fluxes) and third moments, are involved. In computational fluid dynamics, all turbulent quantities are one-point correlations. Therefore, the study of one-point turbulent closure schemes is the focus of our turbulence research. However, other research, such as the renormalization group theory, the direct interaction approximation method, and numerical simulations are also pursued to support the development of turbulence modeling.
McCrae, Robert R; Scally, Matthew; Terracciano, Antonio; Abecasis, Gonçalo R; Costa, Paul T
2010-12-01
There is growing evidence that personality traits are affected by many genes, all of which have very small effects. As an alternative to the largely unsuccessful search for individual polymorphisms associated with personality traits, the authors identified large sets of potentially related single nucleotide polymorphisms (SNPs) and summed them to form molecular personality scales (MPSs) containing from 4 to 2,497 SNPs. Scales were derived from two thirds of a large (N = 3,972) sample of individuals from Sardinia who completed the Revised NEO Personality Inventory (P. T. Costa, Jr., & R. R. McCrae, 1992) and were assessed in a genomewide association scan. When MPSs were correlated with the phenotype in the remaining one third of the sample, very small but significant associations were found for 4 of the 5 personality factors when the longest scales were examined. These data suggest that MPSs for Neuroticism, Openness to Experience, Agreeableness, and Conscientiousness (but not Extraversion) contain genetic information that can be refined in future studies, and the procedures described here should be applicable to other quantitative traits.
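The MPS idea — sum signed SNP allele counts into a score with signs fixed on a training split, then correlate the score with the phenotype in a held-out split — can be sketched with toy data. All genotype and phenotype values below are invented for illustration; the study's actual scale-construction details are not reproduced.

```python
# Toy sketch of a molecular personality scale (MPS): signed sums of
# 0/1/2 allele counts, correlated with a phenotype on a held-out split.
# All data values here are invented.
def mps_score(genotypes, signs):
    """genotypes: per-person lists of 0/1/2 allele counts; signs: +1/-1 per SNP."""
    return [sum(s * g for s, g in zip(signs, person)) for person in genotypes]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x) ** 0.5
    syy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sxx * syy)

holdout_genotypes = [[0, 1], [1, 2], [2, 2], [0, 0]]
signs = [1, -1]            # fixed on the (not shown) training two-thirds
scores = mps_score(holdout_genotypes, signs)
phenotype = [-1.0, -1.0, 0.0, 0.0]
r = pearson(scores, phenotype)
```

With thousands of SNPs of tiny individual effect, the interest is in whether r on the held-out third is reliably nonzero, not in its size.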
Patient and Societal Value Functions for the Testing Morbidities Index
Swan, John Shannon; Kong, Chung Yin; Lee, Janie M.; Akinyemi, Omosalewa; Halpern, Elkan F.; Lee, Pablo; Vavinskiy, Sergey; Williams, Olubunmi; Zoltick, Emilie S.; Donelan, Karen
2013-01-01
Background We developed preference-based and summated scale scoring for the Testing Morbidities Index (TMI) classification, which addresses short-term effects on quality of life from diagnostic testing before, during and after a testing procedure. Methods The two TMI value functions utilize multiattribute value techniques; one is patient-based and the other has a societal perspective. 206 breast biopsy patients and 466 (societal) subjects informed the models. Due to a lack of standard short-term methods for this application, we utilized the visual analog scale (VAS). Waiting trade-off (WTO) tolls provided an additional option for linear transformation of the TMI. We randomized participants to one of three surveys: the first derived weights for generic testing morbidity attributes and levels of severity with the VAS; a second developed VAS values and WTO tolls for linear transformation of the TMI to a death-healthy scale; the third addressed initial validation in a specific test (breast biopsy). 188 patients and 425 community subjects participated in initial validation, comparing direct VAS and WTO values to the TMI. Alternative TMI scoring as a non-preference summated scale was included, given evidence of construct and content validity. Results The patient model can use an additive function, while the societal model is multiplicative. Direct VAS and the VAS-scaled TMI were correlated across modeling groups (r=0.45 to 0.62) and agreement was comparable to the value function validation of the Health Utilities Index 2. Mean Absolute Difference (MAD) calculations showed a range of 0.07–0.10 in patients and 0.11–0.17 in subjects. MAD for direct WTO tolls compared to the WTO-scaled TMI varied closely around one quality-adjusted life day. Conclusions The TMI shows initial promise in measuring short-term testing-related health states. PMID:23689044
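The two value-function forms named in the abstract (an additive patient model, a multiplicative societal model) have a standard multiattribute structure. The sketch below uses toy weights and single-attribute values, not the TMI's actual parameters; in the multiplicative form the master constant K must satisfy 1 + K = Π(1 + K·k_i), which holds for the k = (0.4, 0.4), K = 1.25 chosen here.

```python
# Sketch: additive vs multiplicative multiattribute value functions.
# Weights and values are toy numbers, not the TMI's fitted parameters.
def additive_value(weights, values):
    return sum(w * v for w, v in zip(weights, values))

def multiplicative_value(k_weights, values, K):
    # Keeney-Raiffa multiplicative form: 1 + K*V = prod(1 + K*k_i*v_i)
    prod = 1.0
    for k, v in zip(k_weights, values):
        prod *= 1.0 + K * k * v
    return (prod - 1.0) / K

v_add = additive_value([0.5, 0.5], [1.0, 0.0])               # 0.5
v_mult = multiplicative_value([0.4, 0.4], [1.0, 1.0], 1.25)  # best state -> 1.0
```

The multiplicative form is what allows attribute interactions (K ≠ 0); when the k_i sum to one, K → 0 and it collapses to the additive model.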
Design of a V/STOL propulsion system for a large-scale fighter model
NASA Technical Reports Server (NTRS)
Willis, W. S.
1981-01-01
Modifications were made to the existing large-scale STOL fighter model to simulate a V/STOL configuration. The modifications included the substitution of two-dimensional lift/cruise exhaust nozzles in the nacelles and the addition of a third J97 engine in the fuselage to supply a remote exhaust nozzle simulating a Remote Augmented Lift System. A preliminary design of the inlet and exhaust ducting for the third engine was developed, and a detailed design was completed of the hot exhaust ducting and remote nozzle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eyler, L.L.; Trent, D.S.
The TEMPEST computer program was used to simulate fluid and thermal mixing in the cold leg and downcomer of a pressurized water reactor under emergency core cooling high-pressure injection (HPI), which is of concern to the pressurized thermal shock (PTS) problem. Application of the code was made in performing an analysis simulation of a full-scale Westinghouse three-loop plant design cold leg and downcomer. Verification/assessment of the code was performed and analysis procedures developed using data from Creare 1/5-scale experimental tests. Results of three simulations are presented. The first is a no-loop-flow case with high-velocity, low-negative-buoyancy HPI in a 1/5-scale model of a cold leg and downcomer. The second is a no-loop-flow case with low-velocity, high-negative-density (modeled with salt water) injection in a 1/5-scale model. Comparison of TEMPEST code predictions with experimental data for these two cases shows good agreement. The third simulation is a three-dimensional model of one loop of a full-size Westinghouse three-loop plant design. Included in this latter simulation are loop components extending from the steam generator to the reactor vessel and a one-third sector of the vessel downcomer and lower plenum. No data were available for this case. For the Westinghouse plant simulation, thermally coupled conduction heat transfer in structural materials is included. The cold leg pipe and the fluid mixing volumes of the primary pump, the stillwell, and the riser to the steam generator are included in the model. In the reactor vessel, the thermal shield, pressure vessel cladding, and pressure vessel wall are thermally coupled to the fluid and thermal mixing in the downcomer. The inlet plenum mixing volume is included in the model. A 10-min (real-time) transient beginning at the initiation of HPI is computed to determine temperatures at the beltline of the pressure vessel wall.
Correlation Scales of the Turbulent Cascade at 1 au
NASA Astrophysics Data System (ADS)
Smith, Charles W.; Vasquez, Bernard J.; Coburn, Jesse T.; Forman, Miriam A.; Stawarz, Julia E.
2018-05-01
We examine correlation functions of the mixed, third-order expressions that, when ensemble-averaged, describe the cascade of energy in the inertial range of magnetohydrodynamic turbulence. Unlike the correlation function of primitive variables such as the magnetic field, solar wind velocity, temperature, and density, the third-order expressions decorrelate at a scale that is approximately 20% of the lag. This suggests the nonlinear dynamics decorrelate in less than one wavelength. Therefore, each scale can behave differently from one wavelength to the next. In the same manner, different scales within the inertial range can behave independently at any given time or location. With such a cascade that can be strongly patchy and highly variable, it is often possible to obtain negative cascade rates for short periods of time, as reported earlier for individual samples of data.
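The cascade estimate rests on mixed third-order statistics (e.g. the Politano-Pouquet laws). As a minimal illustration of the kind of quantity involved, the sketch below computes a lagged third-order structure function for a single scalar series; the actual analysis uses mixed Elsässer-field expressions, which are not reproduced here.

```python
# Sketch: lagged third-order structure function <(u(t+lag) - u(t))**3>
# for a scalar series. The study's mixed (Elsasser) expressions are
# analogous but not shown.
def s3(series, lag):
    diffs = [(series[i + lag] - series[i]) ** 3
             for i in range(len(series) - lag)]
    return sum(diffs) / len(diffs)

# For a linearly increasing series every increment equals the lag,
# so s3 is lag**3.
example = s3(list(range(10)), 2)
```

Because the cube keeps the sign of each increment, short stretches of data can yield negative values even when the ensemble mean is positive — the "patchy cascade" behavior the abstract describes.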
Self-stress control of real civil engineering tensegrity structures
NASA Astrophysics Data System (ADS)
Kłosowska, Joanna; Obara, Paulina; Gilewski, Wojciech
2018-01-01
The paper introduces the impact of the self-stress level on the behaviour of tensegrity truss structures. Displacements of real civil engineering tensegrity structures are analysed. The full-scale Warnow Tower, a tensegrity tower consisting of six Simplex trusses, is considered in this paper. Three models, consisting of one, two and six modules, are analysed. The analysis is performed using second- and third-order theory. Mathematica software and the Sofistik program are applied in the analysis.
NASA Astrophysics Data System (ADS)
Garcia Cartagena, Edgardo Javier; Santoni, Christian; Ciri, Umberto; Iungo, Giacomo Valerio; Leonardi, Stefano
2015-11-01
A large-scale wind farm operating under realistic atmospheric conditions is studied by coupling meso-scale and micro-scale models. For this purpose, the Weather Research and Forecasting model (WRF) is coupled with an in-house LES solver for wind farms. The code is based on a finite difference scheme with a Runge-Kutta fractional-step method and the actuator disk model. The WRF model has been configured using seven one-way nested domains, where each child domain has a mesh size one third that of its parent domain. A horizontal resolution of 70 m is used in the innermost domain. A section from the smallest and finest nested domain, 7.5 diameters upwind of the wind farm, is used as the inlet boundary condition for the LES code. The wind farm consists of six turbines aligned with the mean wind direction, with streamwise spacing of 10 rotor diameters (D) and spanwise spacing of 2.75D. Three simulations were performed by varying the velocity fluctuations at the inlet: random perturbations, a precursor simulation, and a recycling perturbation method. Results are compared with a simulation of the same wind farm with an ideal uniform wind speed to assess the importance of the time-varying incoming wind velocity. Numerical simulations were performed at TACC (Grant CTS070066). This work was supported by NSF (Grant IIA-1243482 WINDINSPIRE).
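The nesting arithmetic above is simple but worth making explicit: with a 3:1 parent-child refinement ratio and a 70 m innermost grid, the seven domains span resolutions from 70 m up to 70 · 3^6 m. A sketch, assuming nothing beyond those numbers from the abstract:

```python
# Sketch: grid spacings implied by seven one-way nested domains with a
# 3:1 refinement ratio and a 70 m innermost domain.
def nest_resolutions(innermost_m=70.0, n_domains=7, ratio=3):
    """Horizontal resolutions, outermost domain first."""
    return [innermost_m * ratio ** k for k in range(n_domains)][::-1]

res = nest_resolutions()   # outermost 51030.0 m down to 70.0 m
```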
Code of Federal Regulations, 2012 CFR
2012-01-01
...; (g) Sunburn when more than 33 percent of the onions in a lot have a medium green color on one-third...; (i) Peeling when more than one-half of the thin papery skin is missing, leaving the underlying fleshy...) Watery scales when more than the equivalent of the entire outer fleshy scale is affected by an off-color...
Code of Federal Regulations, 2011 CFR
2011-01-01
...; (g) Sunburn when more than 33 percent of the onions in a lot have a medium green color on one-third...; (i) Peeling when more than one-half of the thin papery skin is missing, leaving the underlying fleshy...) Watery scales when more than the equivalent of the entire outer fleshy scale is affected by an off-color...
WRF/CMAQ AQMEII3 Simulations of U.S. Regional-Scale Ozone: Sensitivity to Processes and Inputs
Chemical boundary conditions are a key input to regional-scale photochemical models. In this study, performed during the third phase of the Air Quality Model Evaluation International Initiative (AQMEII3), we perform annual simulations over North America with chemical boundary con...
NASA Technical Reports Server (NTRS)
Falarski, M. D.
1972-01-01
A wind tunnel investigation was made of the noise characteristics of a 4.42 m (14.5 ft) semispan, externally blown jet flap model. The model was equipped with a single 76.2 cm (30 in.) diameter ducted fan with a 1.03 pressure ratio. The effects of flap size, fan vertical location, and forward speed on the noise characteristics were studied. The data from the investigation are presented in the form of tabulated one-third-octave band frequency spectra and perceived noise levels for each test condition.
NASA Astrophysics Data System (ADS)
Lee-Cullin, J. A.; Zarnetske, J. P.; Wiewiora, E.; Ruhala, S.; Hampton, T. B.
2016-12-01
Dissolved organic carbon (DOC) is a critical component to biogeochemical cycling and water quality in surface waters. As DOC moves through stream networks, from headwaters to higher order streams, the sediment-water interface (SWI), where streams and groundwater readily interact, exerts a strong influence on DOC concentrations and compositional characteristics (i.e., molecular properties). Few studies examine SWI patterns at larger spatial scales, instead focusing primarily on site-level studies because sampling in the SWI is methodologically time and labor intensive. It is presently unknown how land use and landcover influence the fate of DOC in the SWI and therefore the function of the SWI on catchment-scale DOC conditions. Here, we performed a catchment-scale, high spatial-resolution SWI sampling campaign to test how landscape pattern DOC signatures are propagated into the stream and groundwater, and to assess the fate of these signatures when DOC travels through the SWI. We sampled across 39 sites composed of first-, second-, and third-order locations in a lowland, third-order catchment composed of diverse landscape units and properties, including wetland, upland forest, and agriculture. At each of these locations, surface water, groundwater, and SWI water were collected, including six discrete depths across the SWI. The major land use and landcover properties were also determined for each of these locations. We developed two simple generalized linear models to identify the landscape properties with greatest explanatory power for DOC conditions - one for stream water and one for groundwater. The correlation between landscape properties and surface water DOC characteristics was stronger than between landscape properties and groundwater DOC characteristics. To test if the DOC properties from surface and groundwater were preserved or removed by the SWI, the resulting best-fit models for each water source were used to predict the DOC conditions across the SWI. 
The models were unable to predict SWI DOC conditions, indicating that the landscape signature present in both the surface water and groundwater is removed by processes occurring in the SWI. Overall, this suggests that the SWI functions as an effective zone for processing landscape-derived DOC signatures.
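The abstract does not specify its generalized linear models beyond the predictors being landscape properties. As a toy stand-in, a Gaussian GLM with identity link reduces to ordinary least squares; the sketch below fits a single hypothetical predictor (a wetland fraction) to DOC concentration, with all numbers invented for illustration.

```python
# Toy sketch: identity-link Gaussian GLM = ordinary least squares, one
# hypothetical landscape predictor vs DOC. All values invented.
def fit_ols(x, y):
    """Slope and intercept minimizing squared error for one predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

def predict(params, xs):
    slope, intercept = params
    return [intercept + slope * v for v in xs]

wetland_fraction = [0.0, 1.0, 2.0, 3.0]   # hypothetical predictor values
doc_mg_per_l = [1.0, 3.0, 5.0, 7.0]       # hypothetical DOC observations
params = fit_ols(wetland_fraction, doc_mg_per_l)
```

The study's test is then whether a model fitted to surface water or groundwater still predicts well when applied to SWI samples; here it did not, which is the evidence for processing within the SWI.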
ERIC Educational Resources Information Center
Clements, Douglas H.; Sarama, Julie; Wolfe, Christopher B.; Spitler, Mary Elaine
2013-01-01
Using a cluster randomized trial design, we evaluated the persistence of effects of a research-based model for scaling up educational interventions. The model was implemented in 42 schools in two city districts serving low-resource communities, randomly assigned to three conditions. In pre-kindergarten, the two experimental interventions were…
Flavor gauge models below the Fermi scale
Babu, K. S.; Friedland, A.; Machado, P. A. N.; ...
2017-12-18
The mass and weak interaction eigenstates for the quarks of the third generation are very well aligned, an empirical fact for which the Standard Model offers no explanation. We explore the possibility that this alignment is due to an additional gauge symmetry in the third generation. Specifically, we construct and analyze an explicit, renormalizable model with a gauge boson, $X$, corresponding to the $B-L$ symmetry of the third family. Having a relatively light (in the MeV to multi-GeV range), flavor-nonuniversal gauge boson results in a variety of constraints from different sources. By systematically analyzing 20 different constraints, we identify the most sensitive probes: kaon, $B^+$, $D^+$ and Upsilon decays, $D-\bar{D}^0$ mixing, atomic parity violation, and neutrino scattering and oscillations. For the new gauge coupling $g_X$ in the range $10^{-2} - 10^{-4}$ the model is shown to be consistent with the data. Possible ways of testing the model in $b$ physics, top and $Z$ decays, direct collider production and neutrino oscillation experiments, where one can observe nonstandard matter effects, are outlined. The choice of leptons to carry the new force is ambiguous, resulting in additional phenomenological implications, such as non-universality in semileptonic bottom decays. In conclusion, the proposed framework provides interesting connections between neutrino oscillations, flavor and collider physics.
Papercraft temporal bone in the first step of anatomy education.
Hiraumi, Harukazu; Sato, Hiroaki; Ito, Juichi
2017-06-01
(1) To compare temporal bone anatomy comprehension in speech therapy students taught with or without a papercraft model. (2) To explore the effect of papercraft simulation on the understanding of surgical approaches in first-year residents. (1) One hundred and ten speech therapy students were divided into three classes. The first class was taught with a lecture only. The students in the second class were given a lecture and a papercraft modeling task without instruction. The third class modeled a papercraft with instruction after the lecture. The students were tested on their understanding of temporal bone anatomy. (2) A questionnaire on the understanding of surgical approaches was completed by 10 residents before and after papercraft modeling. The papercraft models were cut with scissors to simulate surgical approaches. (1) The average scores were 4.4/8 for the first class, 4.3/8 for the second class, and 6.3/8 for the third class. The third class had significantly better results than the other classes (p<0.01, Kruskal-Wallis test). (2) The average scores before and after papercraft modeling and cutting were 2.6/7 and 4.9/7, respectively. The numerical rating scale score improved significantly (p<0.01, Wilcoxon signed-rank test). Instruction in anatomy using a papercraft temporal bone model is effective in the first step of learning temporal bone anatomy and surgical approaches.
Devaraju, N; Bala, G; Nemani, R
2015-09-01
Land-use changes since the start of the industrial era account for nearly one-third of the cumulative anthropogenic CO2 emissions. In addition to the greenhouse effect of CO2 emissions, changes in land use also affect climate via changes in surface physical properties such as albedo, evapotranspiration and roughness length. Recent modelling studies suggest that these biophysical components may be comparable with biochemical effects. In regard to climate change, the effects of these two distinct processes may counterbalance one another both regionally and, possibly, globally. In this article, through hypothetical large-scale deforestation simulations using a global climate model, we contrast the implications of afforestation on ameliorating or enhancing anthropogenic contributions from previously converted (agricultural) land surfaces. Based on our review of past studies on this subject, we conclude that the sum of both biophysical and biochemical effects should be assessed when large-scale afforestation is used for countering global warming, and the net effect on global mean temperature change depends on the location of deforestation/afforestation. Further, although biochemical effects trigger global climate change, biophysical effects often cause strong local and regional climate change. The implication of the biophysical effects for adaptation and mitigation of climate change in agriculture and agroforestry sectors is discussed.
ERIC Educational Resources Information Center
Lockheed, Marlaine E.
2015-01-01
The number of countries that regularly participate in international large-scale assessments has increased sharply over the past 15 years, with the share of countries participating in the Programme for International Student Assessment growing from one-fifth of countries in 2000 to over one-third of countries in 2015. What accounts for this…
NASA Technical Reports Server (NTRS)
Christhilf, David M.; Moulin, Boris; Ritz, Erich; Chen, P. C.; Roughen, Kevin M.; Perry, Boyd
2012-01-01
The Semi-Span Supersonic Transport (S4T) is an aeroelastically scaled wind-tunnel model built to test active controls concepts for large flexible supersonic aircraft in the transonic flight regime. It is one of several models constructed in the 1990s as part of the High Speed Research (HSR) Program. Control laws were developed for the S4T by M4 Engineering, Inc. and by Zona Technologies, Inc. under NASA Research Announcement (NRA) contracts. The model was tested in the NASA Langley Transonic Dynamics Tunnel (TDT) four times from 2007 to 2010. The first two tests were primarily for plant identification. The third entry was used for testing control laws for ride quality enhancement, gust load alleviation, and flutter suppression. Whereas the third entry tested flutter suppression only subcritically, the fourth test demonstrated closed-loop operation above the open-loop flutter boundary. The results of the third entry are reported elsewhere. This paper reports flutter suppression results from the fourth wind-tunnel test. Flutter suppression is seen as a way to provide stability margins while flying at transonic flight conditions without penalizing the primary supersonic cruise design condition. An account is given of how Controller Performance Evaluation (CPE) singular value plots were interpreted with regard to progressing open- or closed-loop to higher dynamic pressures during testing.
Words and Deeds about Altruism and the Subsequent Reinforcement Power of the Model.
ERIC Educational Resources Information Center
Bryan, James H.; And Others
Ninety-six second and third grade children were exposed to one of six types of videotaped models. Children witnessed an adult female practice either charitable or selfish behavior. One-third of the subjects in each group heard the model exhort either charity or greed or verbalize normatively neutral material. Following this exposure, half the…
NASA Astrophysics Data System (ADS)
Parsakhoo, Zahra; Shao, Yaping
2017-04-01
Near-surface turbulent mixing has a considerable effect on surface fluxes, cloud formation, and convection in the atmospheric boundary layer (ABL). Its quantification is, however, a modeling and computational challenge, since the small eddies are not fully resolved directly in Eulerian models. We have developed a Lagrangian stochastic model to demonstrate multi-scale interactions between convection and land-surface heterogeneity in the atmospheric boundary layer, based on the Ito Stochastic Differential Equation (SDE) for air parcels (particles). Due to the complexity of mixing in the ABL, we find that a linear Ito SDE cannot represent convection properly. Three strategies have been tested to solve the problem: 1) to make the deterministic term in the Ito equation non-linear; 2) to make the random term in the Ito equation fractional; and 3) to modify the Ito equation by including Lévy flights. We focus on the third strategy and interpret mixing as interaction between at least two stochastic processes with different Lagrangian time scales. Work is in progress to include collisions among particles with different characteristics and to apply the 3D model to real cases. One application of the model is emphasized: land-surface patterns are generated and then coupled with the Large Eddy Simulation (LES).
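The baseline that the non-linear, fractional, and Lévy-flight variants modify can be sketched as follows. This is a minimal illustration with assumed parameter values (T_L, sigma_w, step sizes are not from the abstract): Euler-Maruyama integration of the classical linear Langevin SDE for the vertical velocity of an air parcel.

```python
import numpy as np

# Minimal sketch of a Lagrangian stochastic particle model (assumed parameter
# values, not the paper's): Euler-Maruyama integration of the linear Ito SDE
#   dw = -(w / T_L) dt + sqrt(2 * sigma_w**2 / T_L) dW,
# the classical Langevin model for vertical parcel velocity w in homogeneous
# turbulence. The abstract notes this linear form cannot represent convection
# properly; it is shown only as the baseline the proposed variants modify.
rng = np.random.default_rng(0)
T_L = 100.0       # Lagrangian time scale [s] (assumed)
sigma_w = 0.5     # vertical-velocity standard deviation [m/s] (assumed)
dt = 1.0          # time step [s]
n_steps, n_particles = 5000, 2000

w = np.zeros(n_particles)   # parcel vertical velocities
z = np.zeros(n_particles)   # parcel heights
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_particles)
    w += -(w / T_L) * dt + np.sqrt(2.0 * sigma_w**2 / T_L) * dW
    z += w * dt

# In a stationary state the velocity variance relaxes toward sigma_w**2.
print(round(float(w.var()), 3))
```

A Lévy-flight variant would replace the Gaussian increments `dW` with heavy-tailed ones, producing the occasional long jumps associated with convective transport.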
The influence of mesoscale and submesoscale heterogeneity on ocean biogeochemical reactions
NASA Astrophysics Data System (ADS)
Levy, M.; Martin, A. P.
2013-12-01
The oceanic circulation in the meso to submesoscale regime generates heterogeneity in the concentrations of biogeochemical components over these scales, horizontally between 1 and 100 km. Due to nonlinearities in the biogeochemical reactions, such as phytoplankton primary production and zooplankton grazing, this small-scale heterogeneity can lead to departure from the mean field approximation, whereby plankton reactions are evaluated from mean distributions at coarser scale. Here we explore the magnitude of these eddy reactions and compare their strength to those of the more widely studied eddy transports. We use the term eddy to denote effects arising from scales smaller than ˜ 100 km. This is done using a submesoscale permitting biogeochemical model, representative of the seasonally varying subtropical and subpolar gyres. We found that the eddy reactions associated with primary production and grazing account for ±5-30% of productivity and grazing, respectively, depending on location and time of year, and are scale dependent: two thirds are due to heterogeneities at scales 30-100 km and one third to those at scales below 30 km. Moreover, eddy productivities are systematically negative, implying that production tends to be reduced by nonlinear interactions at the mesoscale and smaller. The opposite result is found for eddy grazing, which is generally positive. The contrasting effects result from vertical advection, which negatively correlates phytoplankton and nutrients and positively correlates phytoplankton and zooplankton in the meso to submesoscale range. Moreover, our results highlight the central role played by eddy reactions for ecological aspects and the distribution of organisms and by eddy transport for biogeochemical aspects and nutrient budgets.
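The departure from the mean-field approximation described above can be illustrated with a toy calculation (synthetic fields and an assumed grazing coefficient, not the paper's simulation output): for a bilinear grazing term, the spatial mean of the reaction differs from the reaction of the spatial means by exactly the covariance term, so correlated small-scale heterogeneity biases grazing.

```python
import numpy as np

# Toy illustration of an "eddy reaction" (assumed synthetic fields, not the
# paper's submesoscale model): for a bilinear grazing term g = a * P * Z, the
# spatial mean of the reaction differs from the mean-field estimate
# a * <P> * <Z> by exactly a * cov(P, Z); positive P-Z correlation, as the
# abstract reports for vertical advection, makes the eddy grazing positive.
rng = np.random.default_rng(1)
a = 0.1           # grazing coefficient (assumed)
n = 100_000       # number of grid points in the synthetic field
eta = rng.normal(size=n)                            # shared heterogeneity
P = 1.0 + 0.3 * eta                                 # phytoplankton field
Z = 0.5 + 0.2 * eta + 0.05 * rng.normal(size=n)     # zooplankton, correlated

mean_of_reaction = np.mean(a * P * Z)        # "true" mean grazing
reaction_of_mean = a * P.mean() * Z.mean()   # mean-field approximation
eddy_reaction = mean_of_reaction - reaction_of_mean
print(round(float(eddy_reaction), 4))
```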
Network Thermodynamic Curation of Human and Yeast Genome-Scale Metabolic Models
Martínez, Verónica S.; Quek, Lake-Ee; Nielsen, Lars K.
2014-01-01
Genome-scale models are used for an ever-widening range of applications. Although there has been much focus on specifying the stoichiometric matrix, the predictive power of genome-scale models equally depends on reaction directions. Two-thirds of reactions in the two eukaryotic reconstructions Homo sapiens Recon 1 and Yeast 5 are specified as irreversible. However, these specifications are mainly based on biochemical textbooks or on their similarity to other organisms and are rarely underpinned by detailed thermodynamic analysis. In this study, a to our knowledge new workflow combining network-embedded thermodynamic and flux variability analysis was used to evaluate existing irreversibility constraints in Recon 1 and Yeast 5 and to identify new ones. A total of 27 and 16 new irreversible reactions were identified in Recon 1 and Yeast 5, respectively, whereas only four reactions were found with directions incorrectly specified against thermodynamics (three in Yeast 5 and one in Recon 1). The workflow further identified for both models several isolated internal loops that require further curation. The framework also highlighted the need for substrate channeling (in human) and ATP hydrolysis (in yeast) for the essential reaction catalyzed by phosphoribosylaminoimidazole carboxylase in purine metabolism. Finally, the framework highlighted differences in proline metabolism between yeast (cytosolic anabolism and mitochondrial catabolism) and humans (exclusively mitochondrial metabolism). We conclude that network-embedded thermodynamics facilitates the specification and validation of irreversibility constraints in compartmentalized metabolic models, at the same time providing further insight into network properties. PMID:25028891
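The direction test underlying this kind of thermodynamic curation can be sketched roughly as follows (illustrative numbers and a hypothetical reaction, not values from Recon 1 or Yeast 5): a reaction may safely be constrained irreversible only if the transformed Gibbs energy keeps one sign over the entire allowed metabolite concentration range.

```python
import math

# Sketch of a network-embedded thermodynamic feasibility check (illustrative
# numbers and a hypothetical reaction A -> B, not data from Recon 1/Yeast 5):
# a reaction may be constrained irreversible only if
#   Delta_r G' = Delta_r G'0 + R*T*ln(Q)
# keeps one sign over the whole allowed metabolite concentration range.
R = 8.314e-3   # kJ/(mol*K)
T = 298.15     # K

def drg_prime_range(drg0, stoich, conc_bounds):
    """Return (min, max) of Delta_r G' given per-metabolite bounds in mol/L."""
    lo = hi = drg0
    for met, nu in stoich.items():
        cmin, cmax = conc_bounds[met]
        terms = (nu * R * T * math.log(cmin), nu * R * T * math.log(cmax))
        lo += min(terms)
        hi += max(terms)
    return lo, hi

# Hypothetical reaction A -> B with Delta_r G'0 = -20 kJ/mol and metabolite
# concentrations allowed between 1 uM and 10 mM.
bounds = {"A": (1e-6, 1e-2), "B": (1e-6, 1e-2)}
lo, hi = drg_prime_range(-20.0, {"A": -1, "B": 1}, bounds)
direction = "irreversible (forward)" if hi < 0 else "leave reversible"
print(round(lo, 1), round(hi, 1), direction)
```

Note that even a fairly negative standard Gibbs energy does not by itself justify an irreversibility constraint here, since RT ln(Q) spans roughly ±23 kJ/mol over four orders of magnitude in concentration; checks of this kind flag over-constrained reaction directions.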
Flow control about an airborne laser turret
NASA Astrophysics Data System (ADS)
Penix, L. E.
1982-06-01
This thesis project is the latest in a series of experiments conducted at the Naval Postgraduate School to improve the air flow through which a laser beam propagates. The particular turret studied is currently employed on the Airborne Laser Laboratory aboard the NKC-135 aircraft; a one-third-scale model was constructed for testing in the 5 x 5 foot wind tunnel. The objective is to decrease the optical path distortion and jitter resulting from turbulent flow in the aft hemisphere of the turret that houses the laser telescope.
Scale-dependent cyclone-anticyclone asymmetry in a forced rotating turbulence experiment
NASA Astrophysics Data System (ADS)
Gallet, B.; Campagne, A.; Cortet, P.-P.; Moisy, F.
2014-03-01
We characterize the statistical and geometrical properties of the cyclone-anticyclone asymmetry in a statistically steady forced rotating turbulence experiment. Turbulence is generated by a set of vertical flaps which continuously inject velocity fluctuations towards the center of a tank mounted on a rotating platform. We first characterize the cyclone-anticyclone asymmetry from conventional single-point vorticity statistics. We propose a phenomenological model to explain the emergence of the asymmetry in the experiment, from which we predict scaling laws for the root-mean-square velocity in good agreement with the experimental data. We further quantify the cyclone-anticyclone asymmetry using a set of third-order two-point velocity correlations. We focus on the correlations which are nonzero only if the cyclone-anticyclone symmetry is broken. They offer two advantages over single-point vorticity statistics: first, they are defined from velocity measurements only, so an accurate resolution of the Kolmogorov scale is not required; second, they provide information on the scale-dependence of the cyclone-anticyclone asymmetry. We compute these correlation functions analytically for a random distribution of independent identical vortices. These model correlations describe well the experimental ones, indicating that the cyclone-anticyclone asymmetry is dominated by the large-scale long-lived cyclones.
Staged anticonvulsant screening for chronic epilepsy.
Berdichevsky, Yevgeny; Saponjian, Yero; Park, Kyung-Il; Roach, Bonnie; Pouliot, Wendy; Lu, Kimberly; Swiercz, Waldemar; Dudek, F Edward; Staley, Kevin J
2016-12-01
Current anticonvulsant screening programs are based on seizures evoked in normal animals. One-third of epileptic patients do not respond to the anticonvulsants discovered with these models. We evaluated a tiered program based on chronic epilepsy and spontaneous seizures, with compounds advancing from high-throughput in vitro models to low-throughput in vivo models. Epileptogenesis in organotypic hippocampal slice cultures was quantified by lactate production and lactate dehydrogenase release into culture media as rapid assays for seizure-like activity and cell death, respectively. Compounds that reduced these biochemical measures were retested with in vitro electrophysiological confirmation (i.e., second stage). The third stage involved crossover testing in the kainate model of chronic epilepsy, with blinded analysis of spontaneous seizures after continuous electrographic recordings. We screened 407 compound-concentration combinations. The cyclooxygenase inhibitor, celecoxib, had no effect on seizures evoked in normal brain tissue but demonstrated robust antiseizure activity in all tested models of chronic epilepsy. The use of organotypic hippocampal cultures, where epileptogenesis occurs on a compressed time scale, and where seizure-like activity and seizure-induced cell death can be easily quantified with biomarker assays, allowed us to circumvent the throughput limitations of in vivo chronic epilepsy models. Ability to rapidly screen compounds in a chronic model of epilepsy allowed us to find an anticonvulsant that would be missed by screening in acute models.
Dynamics and Steady States in Excitable Mobile Agent Systems
NASA Astrophysics Data System (ADS)
Peruani, Fernando; Sibona, Gustavo J.
2008-04-01
We study the spreading of excitations in 2D systems of mobile agents where the excitation is transmitted when a quiescent agent keeps contact with an excited one during a nonvanishing time. We show that the steady states strongly depend on the spatial agent dynamics. Moreover, the coupling between exposition time (ω) and agent-agent contact rate (CR) becomes crucial to understand the excitation dynamics, which exhibits three regimes with CR: no excitation for low CR, an excited regime in which the number of quiescent agents (S) is inversely proportional to CR, and, for high CR, a novel third regime, model dependent, where S scales with an exponent ξ-1, with ξ being the scaling exponent of ω with CR.
Langton, Calvin M; Murad, Zuwaina; Humbert, Bianca
2017-04-01
Associations between self-reported coercive sexual behavior against adult females, childhood sexual abuse (CSA), and child-parent attachment styles, as well as attachment with adult romantic partners, were examined among 176 adult community males. Attachment style with each parent and with romantic partners was also investigated as a potential moderator. Using hierarchical multiple regression analysis, avoidant attachment with mothers in childhood (and also with fathers, in a second model) accounted for a significant amount of the variance in coercive sexual behavior controlling for scores on anxious ambivalent and disorganized/disoriented attachment scales, as predicted. Similarly, in a third model, avoidance attachment in adulthood was a significant predictor of coercive sexual behavior controlling for scores on the anxiety attachment in adulthood scale. These main effects for avoidant and avoidance attachment were not statistically significant when CSA and control variables (other types of childhood adversity, aggression, antisociality, and response bias) were added in each of the models. But the interaction between scales for CSA and avoidance attachment in adulthood was significant, demonstrating incremental validity in a final step, consistent with a hypothesized moderating function for attachment in adulthood. The correlation between CSA and coercive sexual behavior was .60 for those with the highest third of avoidance attachment scores (i.e., the most insecurely attached on this scale), .24 for those with scores in the middle range on the scale, and .01 for those with the lowest third of avoidance attachment scores (i.e., the most securely attached). Implications for study design and theory were discussed.
NASA Astrophysics Data System (ADS)
Rosenblatt, Pascal; Bruinsma, Sean; Mueller-Wodarg, Ingo; Haeusler, Bernd
On its highly elliptical 24-hour orbit around Venus, the Venus Express (VEx) spacecraft briefly reaches a pericenter altitude of nominally 250 km. Recently, however, dedicated and intense radio tracking campaigns have taken place in August 2008 (campaign 1), October 2009 (campaign 2), and February and April 2010 (campaign 3), for which the pericenter altitude was lowered to about 175 km in order to probe the upper atmosphere of Venus above the North Pole for the first time ever in situ. As the spacecraft experiences atmospheric drag, its trajectory is measurably perturbed during the pericenter pass, allowing us to infer total atmospheric mass density at the pericenter altitude. The GINS software (Géodésie par Intégration Numérique Simultanée) is used to accurately reconstruct the orbital motion of VEx through an iterative least-squares fit to the Doppler tracking data. The drag acceleration is modelled using an initial atmospheric density model (VTS model, A. Hedin). A drag scale factor is estimated for each pericenter pass, which scales Hedin's density model to best fit the radio tracking data. About 20 density scale factors have been obtained, mainly from the second and third VExADE campaigns, which indicate densities lower than Hedin's model predicts by a factor of about one-third. These first-ever polar density measurements at solar minimum have allowed us to construct a diffusive-equilibrium density model for Venus' thermosphere, constrained in the lower thermosphere primarily by SPICAV-SOIR measurements and above 175 km by the VExADE drag measurements. The preliminary results of the VExADE campaigns show that it is possible to obtain reliable estimates of Venus' upper-atmosphere densities at an altitude of around 175 km. Future VExADE campaigns will benefit from the planned further lowering of the VEx pericenter altitude to below 170 km.
NASA Astrophysics Data System (ADS)
Barberis, Lucas; Peruani, Fernando
2016-12-01
We study a minimal cognitive flocking model, which assumes that the moving entities navigate using the available instantaneous visual information exclusively. The model consists of active particles, with no memory, that interact by a short-ranged, position-based, attractive force, which acts inside a vision cone (VC), and lack velocity-velocity alignment. We show that this active system can exhibit—due to the VC that breaks Newton's third law—various complex, large-scale, self-organized patterns. Depending on parameter values, we observe the emergence of aggregates or millinglike patterns, the formation of moving—locally polar—files with particles at the front of these structures acting as effective leaders, and the self-organization of particles into macroscopic nematic structures leading to long-ranged nematic order. Combining simulations and nonlinear field equations, we show that position-based active models, as the one analyzed here, represent a new class of active systems fundamentally different from other active systems, including velocity-alignment-based flocking systems. The reported results are of prime importance in the study, interpretation, and modeling of collective motion patterns in living and nonliving active systems.
Numerical analysis of diffusion around a suspended expressway by a multi-scale CFD model
NASA Astrophysics Data System (ADS)
Kondo, Hiroaki; Asahi, Kazutake; Tomizuka, Takayuki; Suzuki, Motoo
The diffusion of NOx around the Ikegami-Shinmachi crossroads, which is among the most polluted roadside areas in Japan, was analyzed with a CFD model. The site comprises a suspended four-lane expressway with a six-lane ground-level road under it and another four-lane ground-level road intersecting the two. Three types of boundary conditions for the CFD model were tested. In the first case, the boundary conditions were given by the results of a mesoscale meteorological model; in other words, the model was multi-scale. In the second case, the boundary conditions were given by a local one-point observation. In the third case, the conditions for the wind were given by the observation, and those for the turbulence were given by the mesoscale numerical model. All of the calculations indicated high concentrations in the morning and low ones in the afternoon, but they did not indicate high concentrations in the evening. The reasons for these time variations of NOx concentration were investigated from the viewpoints of wind direction, wind velocity, and boundary-layer height. The results suggested that the extremely high concentrations were generated by local sources and advection from the large source area of Tokyo. On the whole, the calculation with boundary conditions from the mesoscale model appears to be better than the other calculations.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-29
... Standard prescribes a full-scale test using a pair of T-shaped gas burners designed to represent burning... Group sought an additional one year for manufacturers to comply with the third party testing requirement... accredited by an ILAC-MRA member at the time of the test. For firewalled conformity assessment bodies, the...
THE THIRD SIGNATURE OF GRANULATION IN BRIGHT-GIANT AND SUPERGIANT STARS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gray, David F.; Pugh, Teznie, E-mail: dfgray@uwo.ca
2012-04-15
We investigated third-signature granulation plots for 18 bright giants and supergiants and one giant of spectral classes G0 to M3. These plots reveal the net granulation velocities, averaged over the stellar disk, as a function of depth. Supergiants show significant differences from the 'standard' shape seen for lower-luminosity stars. Most notable is a striking reversal of slope seen for three of the nine supergiants, i.e., stronger lines are more blueshifted than weaker lines, opposite the solar case. Changes in the third-signature plot of α Sco (M1.5 Iab) with time imply granulation cells that penetrate only the lower portion of the photosphere. For those stars showing the standard shape, we derive scaling factors relative to the Sun that serve as a first-order measure of the strength of the granulation relative to the Sun. For G-type stars, the third-signature scale of the bright giants and supergiants is approximately 1.5 times as strong as in dwarfs, but for K stars, there is no discernible difference between higher-luminosity stars and dwarfs. Classical macroturbulence, a measure of the velocity dispersion of the granulation, increases with the third-signature-plot scale factors, but at different rates for different luminosity classes.
Design, building, and testing of the postlanding systems for the assured crew return vehicle
NASA Technical Reports Server (NTRS)
Hosterman, Kenneth C.; Anderson, Loren A.
1991-01-01
The design, building, and testing of the postlanding support systems for a water-landing Assured Crew Return Vehicle (ACRV) are presented. One ACRV will be permanently docked to Space Station Freedom, fulfilling NASA's commitment to Assured Crew Return Capability in the event of an accident or illness. The configuration of the ACRV is based on an Apollo Command Module (ACM) derivative. The 1990-1991 effort concentrated on the design, building, and testing of a one-fifth scale model of the egress and stabilization systems. The objective was to determine the feasibility of (1) stabilizing the ACM out of the range of motions that cause seasickness and (2) the safe and rapid removal of a sick or injured crew member from the ACRV. The development of the ACRV postlanding systems model was performed at the University of Central Florida with guidance from the Kennedy Space Center ACRV program managers. Emphasis was placed on four major areas. First was design and construction of a one-fifth scale model of the ACM derivative to accommodate the egress and stabilization systems for testing. Second was the identification of a water test facility suitable for testing the model in all possible configurations. Third was the construction of the rapid egress mechanism designed in the previous academic year for incorporation into the ACRV model. The fourth area was construction and motion response testing of the attitude ring and underwater parachute systems.
Second order closure modeling of turbulent buoyant wall plumes
NASA Technical Reports Server (NTRS)
Zhu, Gang; Lai, Ming-Chia; Shih, Tsan-Hsing
1992-01-01
Non-intrusive measurements of scalar and momentum transport in turbulent wall plumes, using a combined technique of laser Doppler anemometry and laser-induced fluorescence, have shown some interesting features not present in free jets or plumes. First, buoyancy generation of turbulence is shown to be important throughout the flow field. Combined with low-Reynolds-number turbulence and near-wall effects, this may push the anisotropic turbulence structure beyond the predictive capability of eddy-viscosity models. Second, the transverse scalar fluxes do not correspond only to the mean scalar gradients, as would be expected from gradient-diffusion modeling. Third, higher-order velocity-scalar correlations which describe turbulent transport phenomena could not be predicted using simple turbulence models. A second-order closure simulation of turbulent adiabatic wall plumes, taking into account recent progress in scalar transport, near-wall effects, and buoyancy, is reported in the current study for comparison with the non-intrusive measurements. In spite of the small velocity scale of the wall plumes, the results showed that low-Reynolds-number correction is not critically important for predicting the adiabatic cases tested and cannot be applied beyond the maximum-velocity location. The mean and turbulent velocity profiles are very closely predicted by the second-order closure models, but the scalar field is less satisfactory, with the scalar fluctuation level underpredicted. Strong intermittency of the low-Reynolds-number flow field is suspected to cause these discrepancies. The trends in second- and third-order velocity-scalar correlations, which describe turbulent transport phenomena, are also predicted in general, with the cross-streamwise correlations captured better than the streamwise ones. Buoyancy terms modeling the pressure correlation are shown to improve the prediction slightly. The effects of the equilibrium time-scale ratio and boundary conditions are also discussed.
ERIC Educational Resources Information Center
Weaver, Christopher
2011-01-01
This study presents a systematic investigation concerning the performance of different rating scales used in the English section of a university entrance examination to assess 1,287 Japanese test takers' ability to write a third-person introduction speech. Although the rating scales did not conform to all of the expectations of the Rasch model,…
Balki, Mrinalini; Hoppe, David; Monks, David; Cooke, Mary Ellen; Sharples, Lynn; Windrim, Rory
2017-06-01
The objective of this study was to develop a new interdisciplinary teamwork scale, the Perinatal Emergency: Team Response Assessment (PETRA), for the management of obstetric crises, through consensus agreement of obstetric caregivers. This prospective study was performed using expert consensus, based on a Delphi method. The study investigators developed a new PETRA tool, specifically related to obstetric crisis management, based on the existing literature and discussions among themselves. The scale was distributed to a selected panel of experts in the field for the Delphi process. After each round of Delphi, every component of the scale was analyzed quantitatively by the percentage of agreement ratings and each comment reviewed by the blinded investigators. The assessment scale was then modified, with components of less than 80% agreement removed from the scale. The process was repeated on three occasions to reach a consensus and final PETRA scale. Fourteen of 24 invited experts participated in the Delphi process. The original PETRA scale included six categories and 48 items, one global scale item, and a 3-point rubric for rating. The overall percentage agreement by experts in the first, second, and third rounds was 95.0%, 93.2%, and 98.5%, respectively. The final scale after the third round of Delphi consisted of the following seven categories: shared mental model, communication, situational awareness, leadership, followership, workload management, and positive/effective behaviours and attitudes. There were 34 individual items within these categories, each with a 5-point rating rubric (1 = unacceptable to 5 = perfect). Using a structured Delphi method, we established the face and content validity of this assessment scale that focuses on important aspects of interdisciplinary teamwork in the management of obstetric crises. Copyright © 2017 The Society of Obstetricians and Gynaecologists of Canada/La Société des obstétriciens et gynécologues du Canada. 
Published by Elsevier Inc. All rights reserved.
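The item-retention step of the Delphi process described above can be sketched as follows (hypothetical agreement votes, not the PETRA panel's actual ratings): each expert marks agreement or disagreement with a scale item, and items falling below the 80% agreement threshold are removed before the next round.

```python
# Sketch of the Delphi item-retention rule described above (hypothetical
# agreement votes, not the PETRA panel's data): each participating expert
# marks agreement (1) or disagreement (0) with a scale item, and items
# falling below the 80% agreement threshold are removed before the next round.
def filter_items(ratings, threshold=0.80):
    """ratings: {item: list of 0/1 votes}. Return {kept item: agreement}."""
    kept = {}
    for item, votes in ratings.items():
        agreement = sum(votes) / len(votes)
        if agreement >= threshold:
            kept[item] = agreement
    return kept

round1 = {
    "shared mental model": [1] * 14,             # 100% agreement
    "communication":       [1] * 13 + [0],       # ~93% agreement
    "redundant item":      [1] * 9 + [0] * 5,    # ~64% -> dropped
}
kept = filter_items(round1)
print(sorted(kept))
```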
NASA Technical Reports Server (NTRS)
Xu, Kuan-Man
2015-01-01
Low-level clouds cover nearly half of the Earth and play a critical role in regulating the energy and hydrological cycles. Despite the great effort that has been put into advancing modeling and observational capabilities in recent years, low-level clouds remain one of the largest uncertainties in the projection of future climate change. Low-level cloud feedbacks dominate the uncertainty in the total cloud feedback in climate-sensitivity and projection studies. These clouds are notoriously difficult to simulate in climate models due to their complicated interactions with aerosols, cloud microphysics, boundary-layer turbulence, and cloud dynamics. The biases in both low-cloud coverage/water content and cloud radiative effects (CREs) remain large, and a simultaneous reduction in both cloud and CRE biases remains elusive. This presentation first reviews the effort of implementing the higher-order turbulence closure (HOC) approach for representing subgrid-scale turbulence and low-level cloud processes in climate models. Two HOCs have been implemented in climate models; they differ in how many third-order moments are used. CLUBB is implemented in both the CAM5 and GFDL models, and is compared with IPHOC, which is implemented in CAM5 by our group. IPHOC uses three third-order moments while CLUBB uses only one, and both use a joint double-Gaussian distribution to represent the subgrid-scale variability. Although HOC is more physically consistent and produces more realistic low-cloud geographic distributions and transitions between cumulus and stratocumulus regimes, GCMs with traditional cloud parameterizations outperform it in CREs because tuning of that type of model has been performed more extensively than for models with HOCs. We perform several tuning experiments with CAM5 implemented with IPHOC in an attempt to produce nearly balanced global radiative budgets without deteriorating the low-cloud simulation.
One of the issues in CAM5-IPHOC is that cloud water content is much higher than in CAM5, which combines with higher low-cloud coverage to produce larger shortwave CREs in some low-cloud-prevailing regions; the cloud-radiative feedbacks are thus exaggerated there. The tuning exercise focuses on microphysical parameters, which are also commonly used for tuning in climate models. The results will be discussed in this presentation.
Research Leads: Current Practice, Future Prospects
ERIC Educational Resources Information Center
Riggall, Anna; Singer, Rachel
2015-01-01
This report was conceived as one of three publications that collectively provide a commentary on research awareness and research use within schools in England. This third report in the series presents findings from a small-scale, detailed study of teachers who are operating as their school's Research Lead. The small scale of the study is…
Investigating gender differences in alcohol problems: a latent trait modeling approach.
Nichol, Penny E; Krueger, Robert F; Iacono, William G
2007-05-01
Inconsistent results have been found in research investigating gender differences in alcohol problems. Previous studies of gender differences used a wide range of methodological techniques, as well as limited assortments of alcohol problems. Parents (1,348 men and 1,402 women) of twins enrolled in the Minnesota Twin Family Study answered questions about a wide range of alcohol problems. A latent trait modeling technique was used to evaluate gender differences in the probability of endorsement at the problem level and for the overall 105-problem scale. Of the 34 problems that showed significant gender differences, 29 were more likely to be endorsed by men than women with equivalent overall alcohol problem levels. These male-oriented symptoms included measures of heavy drinking, duration of drinking, tolerance, and acting out behaviors. Nineteen symptoms were denoted for removal to create a scale that favored neither gender in assessment. Significant gender differences were found in approximately one-third of the symptoms assessed and in the overall scale. Further examination of the nature of gender differences in alcohol problem symptoms should be undertaken to investigate whether a gender-neutral scale should be created or if men and women should be assessed with separate criteria for alcohol dependence and abuse.
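The latent trait analysis described above can be sketched with a minimal item-response example (assumed item parameters, not the study's estimates): a gender difference "at equivalent overall problem levels" corresponds to different item parameters at the same latent severity.

```python
import math

# Minimal two-parameter logistic (2PL) item-response sketch (assumed item
# parameters, not the study's estimates): the probability that a respondent
# with latent problem severity theta endorses an item is
#   P(theta) = 1 / (1 + exp(-a * (theta - b))),
# with discrimination a and difficulty b. A differentially functioning item
# has different parameters (here, b) for men and women at the same theta.
def p_endorse(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

theta = 0.5                                  # same latent severity for both
p_men = p_endorse(theta, a=1.2, b=0.0)       # assumed male item difficulty
p_women = p_endorse(theta, a=1.2, b=0.8)     # assumed higher female difficulty
print(round(p_men, 3), round(p_women, 3))
```

Items for which these curves diverge at equal theta are the ones flagged for removal when constructing a gender-neutral scale.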
Concepts and methods for describing critical phenomena in fluids
NASA Technical Reports Server (NTRS)
Sengers, J. V.; Sengers, J. M. H. L.
1977-01-01
The predictions of theoretical models for a critical-point phase transition in fluids, namely the classical equation with third-degree critical isotherm, that with fifth-degree critical isotherm, and the lattice gas, are reviewed. The renormalization group theory of critical phenomena and the hypothesis of universality of critical behavior supported by this theory are discussed, as well as the nature of gravity effects and how they affect critical-region experimentation in fluids. The behavior of the thermodynamic properties and the correlation function is formulated in terms of scaling laws. The predictions of these scaling laws and of the hypothesis of universality of critical behavior are compared with experimental data for one-component fluids, and it is indicated how the methods can be extended to describe critical phenomena in fluid mixtures.
Revisiting the universal scaling hypothesis: do all plants respond similarly to aridification?
NASA Astrophysics Data System (ADS)
Caddy-Retalic, S.; McInerney, F. A.; Lowe, A. J.; Prentice, I. C.; Wardle, G. M.
2016-12-01
Our limited understanding of how plants respond to aridification is a major barrier to predicting the future composition and distribution of global flora. Measurement of stable carbon isotope ratios in leaves is an established methodology for detecting water stress. Measuring carbon isotope ratios on aridity gradients has the potential to be used to determine the relative sensitivity of many co-occurring species. By comparing the slopes of the relationship between isotope ratios and mean annual precipitation (MAP) between species and to a common slope for all plants in a region, we can test for consistency in aridity sensitivity between different species, growth forms, local environments and continents. We present data from 1329 individual plants of 204 C3 species collected on two bioclimatic gradients: one in China (145-710 mm MAP) and one in South Australia (160-980 mm MAP). In examining differences between plants of different types and origins, we test the universal scaling hypothesis postulated by Prentice et al. (2010), which suggests that C3 plants have similar patterns of stomatal adjustment, irrespective of phylogeny and traits, including life form. If universal scaling were supported, plant attributes could be disregarded for the purposes of modeling community and regional ecophysiology. We find that less than a third of tested species conform to the universal scaling model, and postulate a new model of four response modes: regional scaling, biotic homeostasis, insensitive response and contrary response. We discuss potential mechanisms for each response mode and their ecological ramifications. Finally, we consider the broader utility for these data, including environmental monitoring and combining isotope data with species distributions to improve predictive vegetation mapping under future climate change scenarios.
Impossibility of Classically Simulating One-Clean-Qubit Model with Multiplicative Error
NASA Astrophysics Data System (ADS)
Fujii, Keisuke; Kobayashi, Hirotada; Morimae, Tomoyuki; Nishimura, Harumichi; Tamate, Shuhei; Tani, Seiichiro
2018-05-01
The one-clean-qubit model (or the deterministic quantum computation with one quantum bit model) is a restricted model of quantum computing where all but one of the input qubits are maximally mixed. It is known that the probability distribution of measurement results on three output qubits of the one-clean-qubit model cannot be classically efficiently sampled within a constant multiplicative error unless the polynomial-time hierarchy collapses to the third level [T. Morimae, K. Fujii, and J. F. Fitzsimons, Phys. Rev. Lett. 112, 130502 (2014), 10.1103/PhysRevLett.112.130502]. It was open whether we can keep the no-go result while reducing the number of output qubits from three to one. Here, we solve the open problem affirmatively. We also show that the third-level collapse of the polynomial-time hierarchy can be strengthened to the second-level one. The strengthening of the collapse level from the third to the second also holds for other subuniversal models such as the instantaneous quantum polynomial model [M. Bremner, R. Jozsa, and D. J. Shepherd, Proc. R. Soc. A 467, 459 (2011), 10.1098/rspa.2010.0301] and the boson sampling model [S. Aaronson and A. Arkhipov, STOC 2011, p. 333]. We additionally study the classical simulatability of the one-clean-qubit model with further restrictions on the circuit depth or the gate types.
ERIC Educational Resources Information Center
Reynolds, Arthur J.; Hayakawa, Momoko; Ou, Suh-Ruu; Mondi, Christina F.; Englund, Michelle M.; Candee, Allyson J.; Smerillo, Nicole E.
2017-01-01
We describe the development, implementation, and evaluation of a comprehensive preschool-to-third-grade prevention program with the goal of sustaining services at a large scale. The Midwest Child-Parent Center (CPC) Expansion is a multilevel collaborative school reform model designed to improve school achievement and parental involvement from ages…
ERIC Educational Resources Information Center
Dori, Galit A.; Chelune, Gordon J.
2004-01-01
The Wechsler Adult Intelligence Scale--Third Edition (WAIS-III; D. Wechsler, 1997a) and the Wechsler Memory Scale--Third Edition (WMS-III; D. Wechsler, 1997b) are 2 of the most frequently used measures in psychology and neuropsychology. To facilitate the diagnostic use of these measures in the clinical decision-making process, this article…
Martin, Wilhelmus J J M; Skorpil, Nynke E; Ashton-James, Claire E; Tuinzing, D Bram; Forouzanfar, Tymour
2016-01-01
Previous research has demonstrated the efficacy of using local compression to reduce postoperative pain after third molar surgery. It has been theorized that compression reduces pain intensity through vasoconstriction. The current research tests the veracity of this vasoconstriction hypothesis by testing the impact of local epinephrine (a local vasoconstrictor) versus a control on patients' pain ratings over 7 days following surgery. Fifty patients scheduled for mandibular third molar surgery were randomly assigned to receive one cartridge of Ultracaine DS Forte (the treatment group) or one cartridge of Ultracaine DS (the control group) after surgical removal of the third molar. Participants used the visual analog scale (VAS) to provide daily ratings of pain intensity for 7 days following surgery. In addition, on day 7, the perceived effectiveness of the pain treatment was measured with the global perceived effect (GPE) scale. A quality-of-life questionnaire was also completed. A repeated-measures ANOVA indicated that the treatment group perceived significantly less pain than the control group on days 2 to 7 following surgery. In addition, 77.8% of the treatment group perceived their pain treatment to be successful, while only 69.6% of the control group reported that their pain was reduced successfully by day 7. The results of this study provide an initial proof of concept that epinephrine may have an analgesic effect on the period following third molar surgery. Further research with larger sample sizes is needed to strengthen evidence for the clinical utility of offering localized epinephrine to patients following third molar surgery.
NASA Astrophysics Data System (ADS)
Grundstrom, Erika
2013-01-01
To help students love science more and to help them understand the vast distances that pervade astronomy, we use kinesthetic modeling of the Earth-Moon system using PlayDoh. When coupled with discussion, we found (in a pilot study) that students of all ages (children up through adults) acquired a more accurate mental representation of the Earth-Moon system. During early September 2012, we devised and implemented a curriculum unit that focused on the Earth-Moon system and how that relates to eclipses for six middle-Tennessee 6th grade public school classrooms. For this unit, we used PlayDoh as the kinesthetic modeling tool. First, we evaluated what the students knew about the size and scale prior to this intervention using paper and model pre-tests. Second, we used the PlayDoh to model the Earth-Moon system and, when possible, conducted an immediate post-test. The students then engaged with the PlayDoh model to help them understand eclipses. Third, we conducted a one-month-later delayed post-test. One thing to note is that about half of the students had experienced the PlayDoh modeling as part of a 5th grade pilot lesson during May 2012; therefore, the pre-test acted as a four-month-later delayed post-test for these students. We find, among other things, that students retain relative size information more readily than relative distance information. We also find differences in how consistent students are when trying to translate the size/scale they have in their heads to the different modes of assessment utilized.
NASA Technical Reports Server (NTRS)
Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn; Zukor, Dorothy (Technical Monitor)
2002-01-01
One of the most significant challenges in large-scale climate modeling, as well as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task, both in the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden for the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document surveys numerous software frameworks for potential use in Earth science modeling. Several frameworks are evaluated in depth, including Parallel Object-Oriented Methods and Applications (POOMA), Cactus (from the relativistic physics community), Overture, Goddard Earth Modeling System (GEMS), the National Center for Atmospheric Research Flux Coupler, and UCLA/UCB Distributed Data Broker (DDB). Frameworks evaluated in less detail include ROOT, Parallel Application Workspace (PAWS), and Advanced Large-Scale Integrated Computational Environment (ALICE). A host of other frameworks and related tools are referenced in this context. The frameworks are evaluated individually and also compared with each other.
Future sensitivity to new physics in Bd, Bs, and K mixings
NASA Astrophysics Data System (ADS)
Charles, Jérôme; Descotes-Genon, Sébastien; Ligeti, Zoltan; Monteil, Stéphane; Papucci, Michele; Trabelsi, Karim
2014-02-01
We estimate, in a large class of scenarios, the sensitivity to new physics in Bd and Bs mixings achievable with 50 ab-1 of Belle II and 50 fb-1 of LHCb data. We find that current limits on new physics contributions in both Bd ,s systems can be improved by a factor of ˜5 for all values of the CP-violating phases, corresponding to over a factor of 2 increase in the scale of new physics probed. Assuming the same suppressions by Cabibbo-Kobayashi-Maskawa matrix elements as those of the standard model box diagrams, the scale probed will be about 20 TeV for tree-level new physics contributions, and about 2 TeV for new physics arising at one loop. We also explore the future sensitivity to new physics in K mixing. Implications for generic new physics and for various specific scenarios, such as minimal flavor violation, light third-generation dominated flavor violation, or U(2) flavor models are studied.
The effect of aquatic exercises on primary dysmenorrhoea in nonathlete girls
Rezvani, Saeideh; Taghian, Farzaneh; Valiani, Mahboubeh
2013-01-01
Background: Primary dysmenorrhoea without any specific pelvic disease is one of the common complaints in women's medicine. The general purpose of this research is to define the effects of 12-week aquatic exercises on nonathletic girls' primary dysmenorrhoea. Materials and Methods: This quasi-experimental study was conducted on 40 nonathletic girls aged 18-25 years. Data gathering tools were an evaluation form for primary dysmenorrhoea and a pain evaluation tool based on the McGill standard pain questionnaire, completed before and after the intervention over 3 months (first, second, and third runs). Then, 20 subjects were assigned to the aquatic exercise group and the other 20 to the control group. The subjects in the experimental group did aquatic exercise for three 60-minute sessions a week for 12 weeks between two menstruations. Kruskal-Wallis and one-way analysis of variance (ANOVA) tests were used to analyze the data. Results: The results of this research indicated that severity and duration of pain decreased after 12 weeks of aquatic exercises. Comparison of the two groups showed a significant difference in pain intensity based on the visual analogue scale (VAS) after these exercises (first, second, and third runs). The present pain intensity (PPI) scale after these exercises (second and third runs) showed a significant difference. Comparison of the two groups showed a significant difference in length of pain after these exercises (third run). Conclusions: Overall, the findings of the present study showed that 12-week regular aquatic exercises are effective in decreasing the severity of the symptoms of primary dysmenorrhoea. PMID:24403940
Three issues on spatial scaling in hydrological processes (Invited)
NASA Astrophysics Data System (ADS)
Rinaldo, A.; Bertuzzo, E.; Rodriguez-Iturbe, I.; Schaefli, B.
2013-12-01
The talk will address a few issues (either open or fully addressed) on spatial scaling in hydrological processes relevant to catchment-scale transport phenomena, largely reflecting the scaling features observed ubiquitously in the geometry and topology of river basins. Three issues have recently caught the authors' attention. One deals with the signatures of catchment geomorphology on base flow recession curves. The talk will discuss the geomorphic origins of recession curves by linking the time-varying recession of saturated channel sites with the classic Brutsaert parametrization of recession events (in particular, by assimilating two scaling exponents, β and b, i.e. |dQ/dt| ∝ Q^β, where Q is the at-a-station gauged flow rate, and N(l) ∝ G(l)^b, where l is the downstream distance from the channel heads receding in time, N(l) is the number of draining channel reaches located at distance l from their heads, and G(l) is the total active drainage network length at a distance greater than or equal to l). The role of scaling cutoffs dictated by heterogeneous local drainage densities will be discussed. Second, the scaling of mean catchment travel times with total contributing area will be investigated as a byproduct of the features of channeled and unchanneled distances from any catchment site to the outlet. Third, we shall examine the emergence of evenly spaced ridges and valleys, and the embedded lack of scaling properties implied by a fundamental topographic wavelength. The issue is of particular theoretical importance as the ridge-valley wavelength can be predicted from erosional mechanics. Notably, we recall that the nonlinear model which describes the evolution of a landscape under the effects of erosion and regeneration by geologic uplift can be exactly derived by reparametrization-invariance arguments and exactly solved in one dimension.
Results of numerical simulations show that the model is indeed able to reproduce the critical scaling characterizing landscapes associated with natural river basins. Specifically, the distribution of the distances between tributaries of a given size (or of sizes larger than a given area) draining along either an open boundary or the mainstream of a river network is analyzed for several landscape types. By proposing a description of the distance separating prescribed merging contributing areas, we also address the scaling of related variables like mean (or bankfull) flow rates and channel and riparian area widths, which are derived under a set of reasonable hydrologic assumptions. We explore the consequences for the third problem of exact theoretical arguments that explicitly use the along-stream distribution of confluences carrying a given flow, i.e. the general probabilistic structure of tributaries in river networks.
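The recession exponent β in |dQ/dt| ∝ Q^β can be estimated from a gauged recession limb by regression in log-log space. The sketch below is not the authors' code; it uses synthetic exponential-recession data (for which β = 1 exactly) purely to check the estimator.

```python
import math

def fit_recession_exponent(t, q):
    """Least-squares slope of log|dQ/dt| against log(Q) over a
    recession limb, i.e. the exponent beta in |dQ/dt| = k*Q^beta.
    t: times (increasing), q: flow rates (decreasing)."""
    xs, ys = [], []
    for i in range(len(q) - 1):
        dqdt = (q[i + 1] - q[i]) / (t[i + 1] - t[i])  # finite-difference dQ/dt
        qm = 0.5 * (q[i] + q[i + 1])                  # midpoint flow
        xs.append(math.log(qm))
        ys.append(math.log(abs(dqdt)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic exponential recession, Q(t) = 100*exp(-0.3*t), for which
# |dQ/dt| is proportional to Q and hence beta = 1.
t = [float(i) for i in range(20)]
q = [100.0 * math.exp(-0.3 * ti) for ti in t]
beta_hat = fit_recession_exponent(t, q)
```

On real gauged data, events are first segmented into individual recessions and the regression is run per event, which is where the link to the geomorphic exponent b enters.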
Solomon, Nancy Pearl; Dietsch, Angela M; Dietrich-Burns, Katie E; Styrmisdottir, Edda L; Armao, Christopher S
2016-05-01
This report describes the development and preliminary analysis of a database for traumatically injured military service members with dysphagia. A multidimensional database was developed to capture clinical variables related to swallowing. Data were derived from clinical records and instrumental swallow studies, and ranged from demographics and injury characteristics to swallowing biomechanics, medications, and standardized tools (e.g., Glasgow Coma Scale, Penetration-Aspiration Scale). Bayesian Belief Network modeling was used to analyze the data at intermediate points, guide data collection, and predict outcomes. Predictive models were validated with independent data via receiver operating characteristic curves. The first iteration of the model (n = 48) revealed variables that could be collapsed for the second model (n = 96). The ability to predict recovery from dysphagia improved from the second to third models (area under the curve = 0.68 to 0.86). The third model, based on 161 cases, revealed "initial diet restrictions" as first-degree, and "Glasgow Coma Scale, intubation history, and diet change" as second-degree associates for diet restrictions at discharge. This project demonstrates the potential for bioinformatics to advance understanding of dysphagia. This database in concert with Bayesian Belief Network modeling makes it possible to explore predictive relationships between injuries and swallowing function, individual variability in recovery, and appropriate treatment options. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.
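Validation "via receiver operating characteristic curves" amounts to computing an area under the curve (AUC) on held-out cases, as in the reported improvement from 0.68 to 0.86. The rank-based computation below, with made-up labels and scores, illustrates the metric only, not the study's model.

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney formulation: the probability that a
    randomly chosen positive case receives a higher score than a
    randomly chosen negative case, counting ties as one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Illustrative predictions (1 = recovered from dysphagia); one positive
# case is out-ranked by one negative case, so AUC = 15/16.
labels = [1, 1, 1, 0, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.55, 0.3, 0.2]
auc = roc_auc(labels, scores)
```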
Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stander, Nielen; Basudhar, Anirban; Basu, Ushnish
2015-06-15
Ever-tightening regulations on fuel economy and carbon emissions demand continual innovation in finding ways for reducing vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials by adding material diversity, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing thickness while retaining sufficient strength and ductility required for durability and safety. Such a project was proposed and is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP), funded by the Department of Energy. Under this program, new steel alloys (Third Generation Advanced High Strength Steel or 3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle Finite Element model. In this project the principal phases identified are (i) material identification, (ii) formability optimization and (iii) multi-disciplinary vehicle optimization. This paper serves as an introduction to the LS-OPT methodology and therefore mainly focuses on the first phase, namely an approach to integrate material identification using material models of different length scales. For this purpose, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a Homogenized State Variable (SV) model, is discussed and demonstrated. The paper concludes with proposals for integrating the multi-scale methodology into the overall vehicle design.
A third-order moving mesh cell-centered scheme for one-dimensional elastic-plastic flows
NASA Astrophysics Data System (ADS)
Cheng, Jun-Bo; Huang, Weizhang; Jiang, Song; Tian, Baolin
2017-11-01
A third-order moving mesh cell-centered scheme without the remapping of physical variables is developed for the numerical solution of one-dimensional elastic-plastic flows with the Mie-Grüneisen equation of state, the Wilkins constitutive model, and the von Mises yielding criterion. The scheme combines the Lagrangian method with the MMPDE moving mesh method and adaptively moves the mesh to better resolve shock and other types of waves while preventing the mesh from crossing and tangling. It can be viewed as a direct arbitrary Lagrangian-Eulerian method but can also be degenerated to a purely Lagrangian scheme. It treats the relative velocity of the fluid with respect to the mesh as constant in time between time steps, which allows high-order approximation of free boundaries. A time-dependent scaling is used in the monitor function to avoid possible sudden movement of the mesh points due to the creation or diminishing of shock and rarefaction waves or the steepening of those waves. A two-rarefaction Riemann solver with elastic waves is employed to compute the Godunov values of the density, pressure, velocity, and deviatoric stress at cell interfaces. Numerical results are presented for three examples. The third-order convergence of the scheme and its ability to concentrate mesh points around shock and elastic rarefaction waves are demonstrated. The obtained numerical results are in good agreement with those in the literature. The new scheme is also shown to be more accurate in resolving shock and rarefaction waves than an existing third-order cell-centered Lagrangian scheme.
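Claims of third-order convergence are typically verified with a grid-refinement study: compute an error norm on two meshes and extract the observed order of accuracy. A minimal sketch, with error values invented for illustration:

```python
import math

def observed_order(e_coarse, e_fine, r=2.0):
    """Observed order of accuracy from error norms on two meshes
    related by refinement ratio r: p = log(e_coarse/e_fine)/log(r)."""
    return math.log(e_coarse / e_fine) / math.log(r)

# Hypothetical L1 errors at mesh spacings h and h/2: a third-order
# scheme reduces the error by a factor of 2**3 = 8 per refinement.
e_h, e_h2 = 8.0e-4, 1.0e-4
p = observed_order(e_h, e_h2)
```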
Using Analogies to Assess Student Learning
ERIC Educational Resources Information Center
Bentley, Callan
2008-01-01
One of the most powerful pieces of knowledge that students can gain from the study of geology is an understanding of the immense scale of geologic time. In the author's introductory-level physical geology course at Northern Virginia Community College, they discuss geologic time about one-third of the way through the semester, after a thorough…
Global-scale hydrological response to future glacier mass loss
NASA Astrophysics Data System (ADS)
Huss, Matthias; Hock, Regine
2018-01-01
Worldwide glacier retreat and associated future runoff changes raise major concerns over the sustainability of global water resources [1-4], but global-scale assessments of glacier decline and the resulting hydrological consequences are scarce [5,6]. Here we compute global glacier runoff changes for 56 large-scale glacierized drainage basins to 2100 and analyse the glacial impact on streamflow. In roughly half of the investigated basins, the modelled annual glacier runoff continues to rise until a maximum ('peak water') is reached, beyond which runoff steadily declines. In the remaining basins, this tipping point has already been passed. Peak water occurs later in basins with larger glaciers and higher ice-cover fractions. Typically, future glacier runoff increases in early summer but decreases in late summer. Although most of the 56 basins have less than 2% ice coverage, by 2100 one-third of them might experience runoff decreases greater than 10% due to glacier mass loss in at least one month of the melt season, with the largest reductions in central Asia and the Andes. We conclude that, even in large-scale basins with minimal ice-cover fraction, the downstream hydrological effects of continued glacier wastage can be substantial, but the magnitudes vary greatly among basins and throughout the melt season.
Reducing the convective losses of cavity receivers
NASA Astrophysics Data System (ADS)
Flesch, Robert; Grobbel, Johannes; Stadler, Hannes; Uhlig, Ralf; Hoffschmidt, Bernhard
2016-05-01
Convective losses reduce the efficiency of cavity receivers used in solar power towers, especially under windy conditions. Therefore, measures should be taken to reduce these losses. In this paper two different measures are analyzed: an air curtain and a partial window which covers one third of the aperture opening. The cavity without modifications and the usage of a partial window were analyzed in a cryogenic wind tunnel at -173°C. The cryogenic environment allows transforming the results from the small model cavity to a large-scale receiver with Gr ≈ 3.9×10^10. The cavity with the two modifications in the wind tunnel environment was analyzed with a CFD model as well. By comparing the numerical and experimental results the model was validated. Both modifications are capable of reducing the convection losses. In the best case a reduction of about 50% was achieved.
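The role of the cryogenic tunnel is to match the Grashof number between the small model and the full-scale receiver: since Gr scales as L³/ν², shrinking the cavity tenfold requires the kinematic viscosity to drop by a factor of 10^1.5 ≈ 31.6. A sketch with illustrative values (not the paper's conditions), holding β and ΔT fixed for simplicity even though real air properties also vary with temperature:

```python
def grashof(g, beta, delta_t, length, nu):
    """Grashof number Gr = g*beta*dT*L**3/nu**2, the buoyancy-to-viscous
    force ratio governing natural convection in the cavity."""
    return g * beta * delta_t * length ** 3 / nu ** 2

# A 10x smaller cavity reaches the same Gr when nu is reduced by
# 10**1.5 -- the effect achieved by cooling the working air.
gr_full = grashof(9.81, 1 / 300, 100.0, 1.0, 1.5e-5)
gr_model = grashof(9.81, 1 / 300, 100.0, 0.1, 1.5e-5 / 10 ** 1.5)
```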
Chemical boundary conditions are a key input to regional-scale photochemical models. In this study, performed during the third phase of the Air Quality Model Evaluation International Initiative (AQMEII3), we perform annual simulations over North America with chemical boundary con...
Model with a gauged lepton flavor SU(2) symmetry
NASA Astrophysics Data System (ADS)
Chiang, Cheng-Wei; Tsumura, Koji
2018-05-01
We propose a model having a gauged SU(2) symmetry associated with the second and third generations of leptons, dubbed SU(2) μτ , of which U{(1)}_{L_{μ }-L_{τ }} is an Abelian subgroup. In addition to the Standard Model fields, we introduce two types of scalar fields. One exotic scalar field is an SU(2) μτ doublet and SM singlet that develops a nonzero vacuum expectation value at presumably multi-TeV scale to completely break the SU(2) μτ symmetry, rendering three massive gauge bosons. At the same time, the other exotic scalar field, carrying electroweak as well as SU(2) μτ charges, is induced to have a nonzero vacuum expectation value as well and breaks mass degeneracy between the muon and tau. We examine how the new particles in the model contribute to the muon anomalous magnetic moment in the parameter space compliant with the Michel decays of tau.
Inter-rater agreement on PIVC-associated phlebitis signs, symptoms and scales.
Marsh, Nicole; Mihala, Gabor; Ray-Barruel, Gillian; Webster, Joan; Wallis, Marianne C; Rickard, Claire M
2015-10-01
Many peripheral intravenous catheter (PIVC) infusion phlebitis scales and definitions are used internationally, although no existing scale has demonstrated comprehensive reliability and validity. We examined inter-rater agreement between registered nurses on signs, symptoms and scales commonly used in phlebitis assessment. Seven PIVC-associated phlebitis signs/symptoms (pain, tenderness, swelling, erythema, palpable venous cord, purulent discharge and warmth) were observed daily by two raters (a research nurse and registered nurse). These data were modelled into phlebitis scores using 10 different tools. Proportions of agreement (e.g. positive, negative), observed and expected agreements, Cohen's kappa, the maximum achievable kappa, and prevalence- and bias-adjusted kappa were calculated. Two hundred ten patients were recruited across three hospitals, with 247 sets of paired observations undertaken. The second rater was blinded to the first's findings. The Catney and Rittenberg scales were the most sensitive (phlebitis in >20% of observations), whereas the Curran, Lanbeck and Rickard scales were the most restrictive (≤2% phlebitis). Only tenderness and the Catney (one of pain, tenderness, erythema or palpable cord) and Rittenberg scales (one of erythema, swelling, tenderness or pain) had acceptable (more than two-thirds, 66.7%) levels of inter-rater agreement. Inter-rater agreement for phlebitis assessment signs/symptoms and scales is low. This likely contributes to the high degree of variability in phlebitis rates in the literature. We recommend further research into assessment of infrequent signs/symptoms and the Catney or Rittenberg scales. New approaches to evaluating vein irritation that are valid, reliable and based on their ability to predict complications need exploration. © 2015 John Wiley & Sons, Ltd.
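Two of the agreement statistics named above, Cohen's kappa and the prevalence- and bias-adjusted kappa (PABAK), can be computed directly from paired binary ratings. The ratings below are invented for illustration, not the study's observations.

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters making binary (sign present/absent)
    judgements on the same observations."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n   # observed agreement
    p1, p2 = sum(r1) / n, sum(r2) / n              # each rater's 'yes' rate
    pe = p1 * p2 + (1 - p1) * (1 - p2)             # chance agreement
    return (po - pe) / (1 - pe)

def pabak(r1, r2):
    """Prevalence- and bias-adjusted kappa: 2*Po - 1 for binary ratings."""
    po = sum(a == b for a, b in zip(r1, r2)) / len(r1)
    return 2 * po - 1

# Illustrative paired ratings (1 = phlebitis sign present).
r1 = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
r2 = [1, 0, 0, 0, 0, 0, 0, 1, 0, 1]
kappa = cohens_kappa(r1, r2)   # depressed by the low sign prevalence
adj = pabak(r1, r2)            # higher, after the prevalence adjustment
```

When a sign is rare, kappa can be modest even with high raw agreement, which is one reason studies of this kind report several agreement measures side by side.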
Advanced and innovative wind energy concept development: Dynamic inducer system
NASA Astrophysics Data System (ADS)
Lissaman, P. B. S.; Zalay, A. D.; Hibbs, B. H.
1981-05-01
The performance benefits of the dynamic inducer tip-vane system were demonstrated. Tow tests conducted on a three-bladed, 3.6-meter-diameter rotor show that a dynamic inducer can achieve a power coefficient (based upon blade swept area) of 0.5, which exceeds that of a plain rotor by about 35%. Wind tunnel tests conducted on a one-third scale model of the dynamic inducer achieved a power coefficient of 0.62, which exceeded that of a plain rotor by about 70%. The dynamic inducer substantially improves the performance of conventional rotors, and indications are that higher power coefficients can be achieved through additional aerodynamic optimization.
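The power coefficient quoted above is the captured power normalized by the kinetic power flowing through the blade swept area. A quick numeric sketch (the air density and wind speed are assumptions, not the report's test conditions):

```python
import math

def power_coefficient(power, rho, area, wind_speed):
    """Cp = P / (0.5 * rho * A * V**3): fraction of the wind's kinetic
    power extracted, referenced to the blade swept area A."""
    return power / (0.5 * rho * area * wind_speed ** 3)

rho = 1.225                       # kg/m^3, sea-level air
area = math.pi * (3.6 / 2) ** 2   # swept area of the 3.6 m diameter rotor
v = 8.0                           # m/s, assumed wind speed
p_available = 0.5 * rho * area * v ** 3
cp = power_coefficient(0.5 * p_available, rho, area, v)  # rotor captures half
```

Note that a Cp of 0.62 referenced to the blade swept area can exceed the Betz limit (~0.593) because the tip vanes draw in flow from beyond the swept disc; this is the sense in which the inducer augments a conventional rotor.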
Tackett, Sean; Bakar, Hamidah Abu; Shilkofski, Nicole A; Coady, Niamh; Rampal, Krishna; Wright, Scott
2015-01-01
While a strong learning environment is critical to medical student education, the assessment of medical school learning environments has confounded researchers. Our goal was to assess the validity and utility of the Johns Hopkins Learning Environment Scale (JHLES) for preclinical students at three Malaysian medical schools with distinct educational and institutional models. Two schools were new international partnerships, and the third was a school-leaver program established without an international partnership. First- and second-year students responded anonymously to surveys at the end of the academic year. The surveys included the JHLES, a 28-item survey using five-point Likert scale response options; the Dundee Ready Educational Environment Measure (DREEM), the most widely used method to assess learning environments internationally; a personal growth scale; and single-item global learning environment assessment variables. The overall response rate was 369/429 (86%). After adjusting for the medical school year, gender, and ethnicity of the respondents, the JHLES detected differences across institutions in four out of seven domains (57%), with each school having a unique domain profile. The DREEM detected differences in one out of five categories (20%). The JHLES was more strongly correlated than the DREEM to two thirds of the single-item variables and the personal growth scale. The JHLES showed high internal reliability for the total score (α=0.92) and the seven domains (α, 0.56-0.85). The JHLES detected variation between learning environment domains across three educational settings, thereby creating unique learning environment profiles. Interpretation of these profiles may allow schools to understand how they are currently supporting trainees and identify areas needing attention.
NASA Astrophysics Data System (ADS)
Philippart, Catharina J. M.; Amaral, Ana; Asmus, Ragnhild; van Bleijswijk, Judith; Bremner, Julie; Buchholz, Fred; Cabanellas-Reboredo, Miguel; Catarino, Diana; Cattrijsse, André; Charles, François; Comtet, Thierry; Cunha, Alexandra; Deudero, Salud; Duchêne, Jean-Claude; Fraschetti, Simonetta; Gentil, Franck; Gittenberger, Arjan; Guizien, Katell; Gonçalves, João M.; Guarnieri, Giuseppe; Hendriks, Iris; Hussel, Birgit; Vieira, Raquel Pinheiro; Reijnen, Bastian T.; Sampaio, Iris; Serrao, Ester; Pinto, Isabel Sousa; Thiebaut, Eric; Viard, Frédérique; Zuur, Alain F.
2012-08-01
Reproductive cycles of marine invertebrates with complex life histories are considered to be synchronized by water temperature and feeding conditions, which vary with season and latitude. This study analyses seasonal variation in the occurrence of oyster (Crassostrea gigas) and mussel (Mytilus edulis/galloprovincialis) larvae across European coastal waters at a synoptic scale (1000s of km) using standardised methods for sampling and molecular analyses. We tested a series of hypotheses to explain the observed seasonal patterns of occurrence of bivalve larvae at 12 European stations (located between 37°N and 60°N and 27°W and 18°E). These hypotheses included a model that stated that there was no synchronisation in seasonality of larval presence at all between the locations (null hypothesis), a model that assumed that there was one common seasonality pattern for all stations within Europe, and various models that supposed that the variation in seasonality could be grouped according to specific spatial scales (i.e., latitude, large marine ecosystems and ecoregions), taxonomic groups, or several combinations of these factors. For oysters, the best models explaining the presence/absence of larvae in European coastal waters were (1) the model that assumed one common seasonal pattern, and (2) the one that, in addition to this common pattern, assumed an enhanced probability of occurrence from south to north. The third best model for oysters, with less empirical support than the first two, stated that oysters reproduced later in the south than in the north. For mussels, the best models explaining the seasonality in occurrence of larvae were (1) the model that assumed four underlying trends related to large marine ecosystems, and (2) the one that assumed one common seasonal pattern for larvae occurrence throughout Europe. 
Such synchronies in larval occurrences suggest that environmental conditions relevant to bivalve larval survival are more or less similar at large spatial scales from 100s to 1000s of km. To unravel the underlying mechanisms for this synchronisation is of particular interest in the light of changing environmental conditions as the result of global climate change and the possible consequences for marine food webs and ecosystem services.
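The abstract ranks competing seasonality hypotheses by "empirical support," which is the language of information-criterion model selection. A generic sketch of that ranking step — the log-likelihoods and parameter counts below are hypothetical placeholders, not the study's fitted values:

```python
import math

def aic(log_lik, n_params):
    """Akaike information criterion: AIC = 2k - 2 ln L."""
    return 2 * n_params - 2 * log_lik

# Hypothetical maximized log-likelihoods and parameter counts for three
# candidate models of larval presence/absence (illustrative numbers only).
fits = [("null: no synchrony", -310.0, 12),
        ("common seasonality", -295.0, 4),
        ("common + south-to-north trend", -293.5, 5)]

scores = [(name, aic(ll, k)) for name, ll, k in fits]
best_aic = min(a for _, a in scores)

# Akaike weights: relative empirical support for each candidate model.
raw = {name: math.exp(-(a - best_aic) / 2.0) for name, a in scores}
total = sum(raw.values())
weights = {name: w / total for name, w in raw.items()}
```

A more complex model (here, the one adding a latitudinal trend) only wins if its likelihood gain outweighs the 2-per-parameter penalty, mirroring how the "best models" are ordered in the abstract.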
Identifying bird and reptile vulnerabilities to climate change in the southwestern United States
Hatten, James R.; Giermakowski, J. Tomasz; Holmes, Jennifer A.; Nowak, Erika M.; Johnson, Matthew J.; Ironside, Kirsten E.; van Riper, Charles; Peters, Michael; Truettner, Charles; Cole, Kenneth L.
2016-07-06
Current and future breeding ranges of 15 bird and 16 reptile species were modeled in the Southwestern United States. Rather than taking a broad-scale, vulnerability-assessment approach, we created a species distribution model (SDM) for each focal species incorporating climatic, landscape, and plant variables. Baseline climate (1940–2009) was characterized with Parameter-elevation Regressions on Independent Slopes Model (PRISM) data and future climate with global-circulation-model data under an A1B emission scenario. Climatic variables included monthly and seasonal temperature and precipitation; landscape variables included terrain ruggedness, soil type, and insolation; and plant variables included trees and shrubs commonly associated with a focal species. Not all species-distribution models contained a plant, but if they did, we included a built-in annual migration rate for more accurate plant-range projections in 2039 or 2099. We conducted a group meta-analysis to (1) determine how influential each variable class was when averaged across all species distribution models (birds or reptiles), and (2) identify the correlation among contemporary (2009) habitat fragmentation and biological attributes and future range projections (2039 or 2099). Projected changes in bird and reptile ranges varied widely among species, with one-third of the ranges predicted to expand and two-thirds predicted to contract. A group meta-analysis indicated that climatic variables were the most influential variable class when averaged across all models for both groups, followed by landscape and plant variables (birds), or plant and landscape variables (reptiles), respectively. The second part of the meta-analysis indicated that numerous contemporary habitat-fragmentation (for example, patch isolation) and biological-attribute (for example, clutch size, longevity) variables were significantly correlated with the magnitude of projected range changes for birds and reptiles. 
Patch isolation was a significant trans-specific driver of projected bird and reptile ranges, suggesting that strategic actions should focus on restoration and enhancement of habitat at local and regional scales to promote landscape connectivity and conservation of core areas.
Weaver, Christopher
2011-01-01
This study presents a systematic investigation concerning the performance of different rating scales used in the English section of a university entrance examination to assess 1,287 Japanese test takers' ability to write a third-person introduction speech. Although the rating scales did not conform to all of the expectations of the Rasch model, they successfully defined a meaningful continuum of English communicative competence. In some cases, the expectations of the Rasch model needed to be weighed against the specific assessment needs of the university entrance examination. This investigation also found that the degree of compatibility between the number of points allotted to the different rating scales and the various requirements of an introduction speech played a considerable role in determining the extent to which the different rating scales conformed to the expectations of the Rasch model. Compatibility thus becomes an important factor to consider for optimal rating scale performance.
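Rating-scale performance under the Rasch model is usually analyzed with the Andrich rating-scale variant, in which each scale category has a step threshold. A minimal sketch of the category-probability calculation — the ability, difficulty, and threshold values are illustrative, not estimates from the examination data:

```python
import math

def rating_scale_probs(theta, b, thresholds):
    """Andrich rating-scale model (a Rasch-family model for polytomous items):
    probability of each response category for a person of ability theta on an
    item of difficulty b with the given step thresholds."""
    psi = [0.0]
    for tau in thresholds:
        # Log-odds accumulate one (theta - b - tau) term per step crossed.
        psi.append(psi[-1] + (theta - b - tau))
    exps = [math.exp(p) for p in psi]
    z = sum(exps)
    return [e / z for e in exps]

# Illustrative values: an able test taker facing a four-category rating scale.
probs = rating_scale_probs(theta=3.0, b=0.0, thresholds=[-1.0, 0.0, 1.0])
```

Misfit to the model shows up when observed category frequencies diverge from these predicted probabilities, which is the kind of diagnostic the study weighs against the examination's practical needs.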
v Ballestrem, C-L; Strauss, M; Kächele, H
2005-05-01
Using a longitudinal screening model, 772 mothers were screened for postnatal depression after delivery in Stuttgart (Germany). This model contained the Edinburgh Postnatal Depression Scale (EPDS) and the Hamilton Depression Scale (HAMD). The first screening was 6-8 weeks after delivery with the EPDS. Mothers with high scores in the first screening had a second screening 9-12 weeks after delivery with the EPDS at least three weeks after the first. Mothers with high scores in both screenings were investigated with the Hamilton Depression Scale (HAMD). Classification was performed with the DSM-IV. After observation until the third month after delivery, 3.6% (N = 28) of the 772 mothers were diagnosed with postnatal depression. Various methods of therapy were offered to those mothers. 18% (N = 5) accepted one or more of these methods of treatment. The rest of the mothers with postnatal depression refused--mostly for attitudinal or practical reasons. 13.4% of the mothers showed high scores in the first screening but not in the second. For those mothers a longitudinal observation is currently being performed to distinguish between a depressive episode and a depression with oscillating symptoms.
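The two-stage screening logic described here can be sketched as a simple filter: only mothers above the EPDS cutoff on both administrations proceed to the HAMD interview. The cutoff value (13) and the records below are assumptions for illustration; the abstract does not state a cutoff:

```python
def select_for_diagnosis(records, cutoff=13):
    """Two-stage screening sketch: flag mothers scoring at or above the EPDS
    cutoff on both administrations for HAMD diagnostic assessment.
    The cutoff of 13 is an assumption, not taken from the study."""
    return [r["id"] for r in records
            if r["epds1"] >= cutoff and r["epds2"] >= cutoff]

# Hypothetical records illustrating the three outcomes described above.
records = [
    {"id": 1, "epds1": 15, "epds2": 14},  # persistently high -> HAMD
    {"id": 2, "epds1": 15, "epds2": 8},   # high then low -> longitudinal follow-up
    {"id": 3, "epds1": 5,  "epds2": 6},   # screen-negative
]
to_hamd = select_for_diagnosis(records)
```

The second record is the pattern the authors flag for follow-up: a high first screen with a normal second screen, which may indicate oscillating symptoms rather than a depressive episode.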
Application of regional climate models to the Indian winter monsoon over the western Himalayas.
Dimri, A P; Yasunari, T; Wiltshire, A; Kumar, P; Mathison, C; Ridley, J; Jacob, D
2013-12-01
The Himalayan region is characterized by pronounced topographic heterogeneity and land use variability from west to east, with a large variation in regional climate patterns. Over the western part of the region, almost one-third of the annual precipitation is received in winter during cyclonic storms embedded in westerlies, known locally as the western disturbance. In the present paper, the regional winter climate over the western Himalayas is analyzed from simulations produced by two regional climate models (RCMs) forced with large-scale fields from ERA-Interim. The analysis was conducted by the composition of contrasting (wet and dry) winter precipitation years. The findings showed that RCMs could simulate the regional climate of the western Himalayas and represent the atmospheric circulation during extreme precipitation years in accordance with observations. The results suggest the important role of topography in moisture fluxes, transport and vertical flows. Dynamical downscaling with RCMs represented regional climates at the mountain or even event scale. However, uncertainties of precipitation scale and liquid-solid precipitation ratios within RCMs are still large for the purposes of hydrological and glaciological studies. Copyright © 2013 Elsevier B.V. All rights reserved.
Hazard from far-field tsunami at Hilo: Earthquakes from the Ring of Fire
NASA Astrophysics Data System (ADS)
Arcas, D.; Weiss, R.; Titov, V.
2007-12-01
Historical data and modeling are used to study tsunami hazard at Hilo, Hawaii. Hilo has one of the best historical tsunami records in the US. Considering the tsunami observations from the early eighteen hundreds until today reveals that the number of observed events per decade depends on the awareness of tsunami events. The awareness appears to be a function of the observation techniques, such as seismometers and communication devices, as well as direct measurements. Three time periods can be identified, in which the number of observed events increases from one event per decade in the first period to 7.7 in the second, to 9.4 events per decade in the third one. A total of 89 events from far-field sources have been encountered. In contrast, only 11 events have been observed with sources in the near field. To remove this historical observation bias from the hazard estimate, we have complemented the historical analysis with a modeling study. We have carried out modeling of 1476 individual earthquakes along the subduction zones of the Pacific Ocean at four different magnitude levels (7.5, 8.2, 8.7 and 9.3). The maximum run-up and maximum peak at the tide gauge are plotted for the different magnitude levels to reveal sensitive source areas of tsunami waves for Hilo, and show a linear scaling of both parameters for small earthquakes but a non-linear scaling for larger ones.
NASA Astrophysics Data System (ADS)
Bon, Edi; Jovanović, Predrag; Marziani, Paola; Bon, Nataša; Otašević, Aleksandar
2018-06-01
Here we investigate the connection between broad emission line shapes and continuum light curve variability time scales of type-1 Active Galactic Nuclei (AGN). We developed a new model that describes optical broad emission lines with an accretion disk line-profile model plus additional ring emission. We connect ring radii with orbital time scales derived from optical light curves and, using Kepler's third law, calculate the mass of the central supermassive black hole (SMBH). The obtained central black hole masses are in good agreement with other methods. This indicates that the variability time scales of AGN may not be stochastic, but rather connected to the orbital time scales, which depend on the central SMBH mass.
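The mass estimate described here follows directly from Kepler's third law once a ring radius is paired with an orbital period. A minimal sketch of that arithmetic — the radius and period below are illustrative inputs, not values from the paper:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
LIGHT_DAY = 2.59e13  # meters in one light-day
YEAR = 3.156e7       # seconds in one year
M_SUN = 1.989e30     # solar mass, kg

def smbh_mass(ring_radius_ld, orbital_period_yr):
    """Kepler's third law, M = 4*pi^2 r^3 / (G T^2), for a ring at radius r
    whose orbital period T is identified with a light-curve variability
    time scale."""
    r = ring_radius_ld * LIGHT_DAY
    t = orbital_period_yr * YEAR
    return 4 * math.pi ** 2 * r ** 3 / (G * t ** 2)

# Illustrative (not from the paper): a ring at 10 light-days
# with a 5-year variability time scale.
mass_msun = smbh_mass(10.0, 5.0) / M_SUN
```

With these inputs the mass comes out around a few times 10^8 solar masses, the typical order of magnitude for type-1 AGN black holes.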
Quantifying the degradation of organic matter in marine sediments: A review and synthesis
NASA Astrophysics Data System (ADS)
Arndt, Sandra; Jørgensen, B. B.; LaRowe, D. E.; Middelburg, J. J.; Pancost, R. D.; Regnier, P.
2013-08-01
Quantifying the rates of biogeochemical processes in marine sediments is essential for understanding global element cycles and climate change. Because organic matter degradation is the engine behind benthic dynamics, deciphering the impact that various forces have on this process is central to determining the evolution of the Earth system. Therefore, recent developments in the quantitative modeling of organic matter degradation in marine sediments are critically reviewed. The first part of the review synthesizes the main chemical, biological and physical factors that control organic matter degradation in sediments while the second part provides a general review of the mathematical formulations used to model these processes and the third part evaluates their application over different spatial and temporal scales. Key transport mechanisms in sedimentary environments are summarized and the mathematical formulation of the organic matter degradation rate law is described in detail. The roles of enzyme kinetics, bioenergetics, temperature and biomass growth in particular are highlighted. Alternative model approaches that quantify the degradation rate constant are also critically compared. In the third part of the review, the capability of different model approaches to extrapolate organic matter degradation rates over a broad range of temporal and spatial scales is assessed. In addition, the structure, functions and parameterization of more than 250 published models of organic matter degradation in marine sediments are analyzed. The large range of published model parameters illustrates the complex nature of organic matter dynamics, and, thus, the limited transferability of these parameters from one site to another. Compiled model parameters do not reveal a statistically significant correlation with single environmental characteristics such as water depth, deposition rate or organic matter flux. 
The lack of a generic framework that allows model parameters to be constrained in data-poor areas seriously limits the quantification of organic matter degradation on a global scale. Therefore, we explore regional patterns that emerge from the more than 250 compiled organic matter rate constants and critically discuss them in their environmental context. This review provides an interdisciplinary view of organic matter degradation in marine sediments. It contributes to an improved understanding of global patterns in benthic organic matter degradation, and helps identify outstanding questions and future directions in the modeling of organic matter degradation in marine sediments.
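The degradation rate laws reviewed above are commonly built on first-order kinetics; the widely used multi-G formulation splits organic matter into discrete pools with separate rate constants. A minimal sketch, with illustrative pool fractions and rate constants rather than any compiled value from the review:

```python
import math

def multi_g(t, fractions, ks):
    """Multi-G rate law: organic matter split into discrete pools, each
    degrading with first-order kinetics, G(t) = sum_i G_i * exp(-k_i * t).
    Returns the fraction of the initial organic matter remaining after
    t years."""
    return sum(f * math.exp(-k * t) for f, k in zip(fractions, ks))

# Illustrative pools and rate constants (yr^-1), not fitted values:
# labile, semi-labile, and refractory fractions.
fractions = [0.5, 0.3, 0.2]
ks = [1.0, 0.01, 1e-5]
remaining = multi_g(100.0, fractions, ks)
```

The spread of the k_i over many orders of magnitude is exactly why the review finds compiled rate constants so site-dependent: the apparent bulk reactivity depends on which pools have already been consumed.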
Static performance and noise tests on a thrust reverser for an augmentor wing aircraft
NASA Technical Reports Server (NTRS)
Harkonen, D. L.; Marrs, C. C.; Okeefe, J. V.
1974-01-01
A 1/3-scale model static test program was conducted to measure the noise levels and reverse thrust performance characteristics of a wing-mounted thrust reverser that could be used on an advanced augmentor wing airplane. The configuration tested represents only the most fundamental designs, in which installation and packaging restraints are not considered. The thrust reverser performance is presented in terms of horizontal, vertical, and resultant effectiveness ratios, and the reverser noise is compared on the basis of peak perceived noise level (PNL) and one-third octave band data (OASPL). From an analysis of the model force and acoustic data, an assessment is made of the stopping distance versus noise for a 90,900 kg (200,000 lb) airplane using this type of thrust reverser.
Economics of Utility Scale Photovoltaics at Purdue University
NASA Astrophysics Data System (ADS)
Arnett, William
The research for this case study shows that utility-scale solar photovoltaics has become a competitive energy investment option, even when a campus operates a power plant at low electricity rates. To evaluate this, an economic model called SEEMS (Solar Economic Evaluation Modelling Spreadsheets) was developed to assess a number of financial scenarios under Real Time Pricing for universities. The three main financing structures considered are 1) land leasing, 2) university direct purchase, and 3) third party purchase. Unlike other commercially available models, SEEMS specifically accounts for real time pricing, where the local utility provides electricity at an hourly rate that changes with the expected demand. In addition, SEEMS includes a random simulation that allows the model to predict the likelihood of success for a given solar installation strategy. The research showed that there are several options for utility-scale solar that are financially attractive. The most practical financing structure is a third party partnership, because of the opportunity to take advantage of tax incentives. Other options could become more attractive if non-financial benefits are considered. The case study for this research, Purdue University, has a unique opportunity to integrate utility-scale solar electricity into its strategic planning. Currently Purdue is updating its master plan, which will define how land is developed. Purdue is also developing a sustainability plan that will define long-term environmental goals. In addition, the university is developing over 500 acres of land west of campus as part of its Aerospace Innovation District. This research helps make the case for including utility-scale solar electricity as part of the university's strategic planning.
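The "random simulation that predicts the likelihood of success" described here is, in spirit, a Monte Carlo net-present-value calculation under price uncertainty. A generic sketch — SEEMS itself is not publicly documented in this abstract, so every figure below (price distribution, array output, capital cost, discount rate) is an invented placeholder:

```python
import random

def npv(cash_flows, rate):
    """Net present value of a list of cash flows, one per year, starting at t=0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def simulate(n_trials=2000, seed=42):
    """Monte Carlo sketch: probability that a solar array has positive NPV
    when the average electricity price it offsets is uncertain.
    All numbers are illustrative assumptions, not SEEMS inputs."""
    rng = random.Random(seed)
    positive = 0
    for _ in range(n_trials):
        price = rng.gauss(0.06, 0.01)   # $/kWh; assumed mean and spread
        annual_kwh = 1.5e6              # assumed array output per year
        capex = 1.2e6                   # assumed installed cost
        flows = [-capex] + [price * annual_kwh] * 25  # 25-year life
        if npv(flows, 0.05) > 0:        # assumed 5% discount rate
            positive += 1
    return positive / n_trials

p_success = simulate()
```

Because NPV is linear in the price here, the success probability is just the chance that the simulated price clears the breakeven price, which is the kind of "likelihood of success" output the abstract attributes to SEEMS.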
NASA Astrophysics Data System (ADS)
Xu, Kuan-Man; Cheng, Anning
2014-05-01
A high-resolution cloud-resolving model (CRM) embedded in a general circulation model (GCM) is an attractive alternative for climate modeling because it replaces all traditional cloud parameterizations and explicitly simulates cloud physical processes in each grid column of the GCM. Such an approach is called the "Multiscale Modeling Framework" (MMF). MMF still needs to parameterize the subgrid-scale (SGS) processes associated with clouds and large turbulent eddies because circulations associated with the planetary boundary layer (PBL) and in-cloud turbulence are unresolved by CRMs with horizontal grid sizes on the order of a few kilometers. A third-order turbulence closure (IPHOC) has been implemented in the CRM component of the super-parameterized Community Atmosphere Model (SPCAM). IPHOC is used to predict (or diagnose) fractional cloudiness and the variability of temperature and water vapor at scales that are not resolved on the CRM's grid. This model has produced promising results, especially for low-level cloud climatology, seasonal variations and diurnal variations (Cheng and Xu 2011, 2013a, b; Xu and Cheng 2013a, b). Because of the enormous computational cost of SPCAM-IPHOC, which is 400 times that of a conventional CAM, we decided to bypass the CRM and implement IPHOC directly in CAM version 5 (CAM5). IPHOC replaces the PBL/stratocumulus, shallow convection, and cloud macrophysics parameterizations in CAM5. Since there are large discrepancies in the spatial and temporal scales between the CRM and CAM5, IPHOC as used in CAM5 has to be modified from that used in SPCAM. In particular, we diagnose all second- and third-order moments except for the fluxes. These prognostic and diagnostic moments are used to select a double-Gaussian probability density function to describe the SGS variability. We also incorporate a diagnostic PBL height parameterization to represent the strong inversion above the PBL.
The goal of this study is to compare the simulated climatology from these three models (CAM5, CAM5-IPHOC and SPCAM-IPHOC), with emphasis on low-level clouds and precipitation. Detailed comparisons of scatter diagrams among the monthly-mean low-level cloudiness, PBL height, surface relative humidity and lower tropospheric stability (LTS) reveal the relative strengths and weaknesses of the three models for five coastal low-cloud regions. Observations from CloudSat and CALIPSO and the ECMWF Interim reanalysis are used as reference truth for the comparisons. We found that the standard CAM5 underestimates cloudiness and produces small cloud fractions at low PBL heights, which contradicts observations. CAM5-IPHOC tends to overestimate low clouds, but its ranges of LTS and PBL height variations are the most realistic. SPCAM-IPHOC produces the most realistic results, with relatively consistent behavior from one region to another. Further comparisons with other atmospheric environmental variables will be helpful to reveal the causes of model deficiencies, so that SPCAM-IPHOC results can provide guidance to the other two models.
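The double-Gaussian PDF mentioned above diagnoses cloud fraction as the probability mass of total water lying above saturation. A minimal sketch of that diagnostic step — the mixture moments and saturation value below are illustrative, not SPCAM or IPHOC output:

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def cloud_fraction(q_sat, w1, mu1, s1, mu2, s2):
    """Diagnose cloud fraction from a double-Gaussian PDF of total water:
    the weighted fraction of the two-Gaussian mixture exceeding saturation
    q_sat. Weights, means and widths would come from the predicted and
    diagnosed moments in a real closure."""
    above1 = 1.0 - normal_cdf(q_sat, mu1, s1)
    above2 = 1.0 - normal_cdf(q_sat, mu2, s2)
    return w1 * above1 + (1.0 - w1) * above2

# Illustrative moments (g/kg): a dry dominant mode plus a moist minor mode.
cf = cloud_fraction(q_sat=8.0, w1=0.7, mu1=7.0, s1=0.5, mu2=8.5, s2=0.8)
```

The skewness captured by the two unequal Gaussians is what lets such closures represent partially cloudy boundary layers that a single Gaussian would miss.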
Lergetporer, Philipp; Angerer, Silvia; Glätzle-Rützler, Daniela; Sutter, Matthias
2014-05-13
The human ability to establish cooperation, even in large groups of genetically unrelated strangers, depends upon the enforcement of cooperation norms. Third-party punishment is one important factor to explain high levels of cooperation among humans, although it is still somewhat disputed whether other animal species also use this mechanism for promoting cooperation. We study the effectiveness of third-party punishment to increase children's cooperative behavior in a large-scale cooperation game. Based on an experiment with 1,120 children, aged 7 to 11 y, we find that the threat of third-party punishment more than doubles cooperation rates, despite the fact that children are rarely willing to execute costly punishment. We can show that the higher cooperation levels with third-party punishment are driven by two components. First, cooperation is a rational (expected payoff-maximizing) response to incorrect beliefs about the punishment behavior of third parties. Second, cooperation is a conditionally cooperative reaction to correct beliefs that third party punishment will increase a partner's level of cooperation.
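The claim that cooperation is "a rational (expected payoff-maximizing) response to beliefs about punishment" can be made concrete with a stylized payoff calculation. All payoff numbers below are invented for illustration; they are not the experiment's parameters:

```python
def expected_payoff(cooperate, belief_punish, endowment=10, gain=5, fine=6):
    """Stylized expected payoff in a two-player cooperation game observed by
    a third-party punisher. Cooperating yields a safe surplus; defecting
    yields a larger private gain but risks a fine with probability
    belief_punish (the player's belief that the third party punishes).
    All numbers are illustrative assumptions."""
    if cooperate:
        return endowment + gain
    return endowment + 2 * gain - belief_punish * fine

# Defection maximizes expected payoff only when punishment seems unlikely:
payoff_defect_low = expected_payoff(False, belief_punish=0.5)   # 17.0
payoff_defect_high = expected_payoff(False, belief_punish=0.9)  # ≈ 14.6
payoff_coop = expected_payoff(True, belief_punish=0.9)          # 15
```

With these numbers, cooperating becomes the payoff-maximizing choice once the believed punishment probability exceeds gain/fine; overestimating that probability (the "incorrect beliefs" in the abstract) therefore pushes children toward cooperation even when actual punishment is rare.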
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Suresh C.; Malik, Pratibha
2015-04-15
The excitation of terahertz (THz) plasmons by a pre-bunched relativistic electron beam propagating in a parallel-plane semiconducting guiding system is studied. It is found that the n-InSb semiconductor strongly supports confined surface plasmons in the terahertz frequency range. The growth rate and efficiency of the THz surface plasmons increase linearly with the modulation index, reaching their largest values as the modulation index approaches unity. Moreover, the growth rate of the instability scales as the one-third power of the beam density and the inverse one-third power of the THz radiation frequency.
On hydrostatic flows in isentropic coordinates
NASA Astrophysics Data System (ADS)
Bokhove, Onno
2000-01-01
The hydrostatic primitive equations of motion which have been used in large-scale weather prediction and climate modelling over the last few decades are analysed with variational methods in an isentropic Eulerian framework. The use of material isentropic coordinates for the Eulerian hydrostatic equations is known to have distinct conceptual advantages since fluid motion is, under inviscid and statically stable circumstances, confined to take place on quasi-horizontal isentropic surfaces. First, an Eulerian isentropic Hamilton's principle, expressed in terms of fluid parcel variables, is therefore derived by transformation of a Lagrangian Hamilton's principle to an Eulerian one. This Eulerian principle explicitly describes the boundary dynamics of the time-dependent domain in terms of advection of boundary isentropes sB; these are the values the isentropes have at their intersection with the (lower) boundary. A partial Legendre transform for only the interior variables yields an Eulerian ‘action’ principle. Secondly, Noether's theorem is used to derive energy and potential vorticity conservation from the Eulerian Hamilton's principle. Thirdly, these conservation laws are used to derive a wave-activity invariant which is second-order in terms of small-amplitude disturbances relative to a resting or moving basic state. Linear stability criteria are derived but only for resting basic states. In mid-latitudes a time-scale separation between gravity and vortical modes occurs. Finally, this time-scale separation suggests that conservative geostrophic and ageostrophic approximations can be made to the Eulerian action principle for hydrostatic flows. Approximations to Eulerian variational principles may be more advantageous than approximations to Lagrangian ones because non-dimensionalization and scaling tend to be based on Eulerian estimates of the characteristic scales involved. 
These approximations to the stratified hydrostatic formulation extend previous approximations to the shallow-water equations. An explicit variational derivation is given of an isentropic version of Hoskins & Bretherton's model for atmospheric fronts.
Woelmer, Whitney; Kao, Yu-Chun; Bunnell, David B.; Deines, Andrew M.; Bennion, David; Rogers, Mark W.; Brooks, Colin N.; Sayers, Michael J.; Banach, David M.; Grimm, Amanda G.; Shuchman, Robert A.
2016-01-01
Prediction of primary production of lentic water bodies (i.e., lakes and reservoirs) is valuable to researchers and resource managers alike, but is very rarely done at the global scale. With the development of remote sensing technologies, it is now feasible to gather large amounts of data across the world, including understudied and remote regions. To determine which factors were most important in explaining the variation of chlorophyll a (Chl-a), an indicator of primary production in water bodies, at global and regional scales, we first developed a geospatial database of 227 water bodies and watersheds with corresponding Chl-a, nutrient, hydrogeomorphic, and climate data. Then we used a generalized additive modeling approach and developed model selection criteria to select models that most parsimoniously related Chl-a to predictor variables for all 227 water bodies and for 51 lakes in the Laurentian Great Lakes region in the data set. Our best global model contained two hydrogeomorphic variables (water body surface area and the ratio of watershed to water body surface area) and a climate variable (average temperature in the warmest quarter) and explained ~30% of variation in Chl-a. Our regional model contained one hydrogeomorphic variable (flow accumulation) and the same climate variable, but explained substantially more variation (58%). Our results indicate that a regional approach to watershed modeling may be more informative for predicting Chl-a, and that nearly a third of global variability in Chl-a may be explained using hydrogeomorphic and climate variables.
Flow topologies and turbulence scales in a jet-in-cross-flow
Oefelein, Joseph C.; Ruiz, Anthony M.; Lacaze, Guilhem
2015-04-03
This study presents a detailed analysis of the flow topologies and turbulence scales in the jet-in-cross-flow experiment of Su and Mungal (JFM 2004). The analysis is performed using the Large Eddy Simulation (LES) technique with a highly resolved grid and time-step and well controlled boundary conditions. This enables quantitative agreement with the first and second moments of turbulence statistics measured in the experiment. LES is used to perform the analysis since experimental measurements of time-resolved 3D fields are still in their infancy and because sampling periods are generally limited with direct numerical simulation. A major focal point is the comprehensive characterization of the turbulence scales and their evolution. Time-resolved probes are used with long sampling periods to obtain maps of the integral scales, Taylor microscales, and turbulent kinetic energy spectra. Scalar-fluctuation scales are also quantified. In the near-field, coherent structures are clearly identified, both in physical and spectral space. Along the jet centerline, turbulence scales grow according to a classical one-third power law. However, the derived maps of turbulence scales reveal strong inhomogeneities in the flow. From the modeling perspective, these insights are useful to design optimized grids and improve numerical predictions in similar configurations.
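The integral scales mapped from the time-resolved probes are typically estimated by integrating the signal's normalized autocorrelation. A minimal sketch of that estimator — applied here to a synthetic AR(1) surrogate signal, since the LES probe data are not available:

```python
import random

def integral_time_scale(u, dt):
    """Integral time scale: integrate the normalized autocorrelation of the
    fluctuating signal up to its first zero crossing (a common estimator)."""
    n = len(u)
    mean = sum(u) / n
    f = [x - mean for x in u]
    var = sum(x * x for x in f) / n
    scale = 0.0
    for lag in range(n - 1):
        rho = sum(f[i] * f[i + lag] for i in range(n - lag)) / ((n - lag) * var)
        if rho <= 0.0:
            break  # truncate at the first zero crossing
        scale += rho * dt
    return scale

# Synthetic surrogate: an AR(1) process with lag-1 coefficient 0.9, whose
# integral time scale is roughly dt * sum(0.9**k) = 10 time units.
rng = random.Random(1)
u, x = [], 0.0
for _ in range(4000):
    x = 0.9 * x + rng.gauss(0.0, 1.0)
    u.append(x)
tau = integral_time_scale(u, dt=1.0)
```

Spatial maps of this quantity (and of the Taylor microscale, estimated from the autocorrelation's curvature at zero lag) are what reveal the inhomogeneities the abstract describes.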
Service Quality Assessment of Hospitals in Asian Context: An Empirical Evidence From Pakistan
Shafiq, Muhammad; Naeem, Muhammad Azhar; Munawar, Zartasha; Fatima, Iram
2017-01-01
Hospitals vary from one another in terms of their specialty, services offered, and resource availability. Their services are widely measured with scales that gauge patients’ perspective. Therefore, there is a need for research to develop a scale that measures hospital service quality in Asian hospitals, regardless of their nature or ownership. To address this research need, this study adapted the SERVQUAL instrument to develop a service quality measurement scale. Data were collected from inpatients and outpatients at 9 different hospitals, and the scale was developed using structural equation modeling. The developed scale was then validated by identifying service quality gaps and ranking the areas that require managerial effort. The findings indicated that all 5 dimensions of SERVQUAL are valid in Asian countries such as Pakistan, with 13 items retained. Reliability, tangibility, responsiveness, empathy, and assurance were ranked first, second, third, fourth, and fifth, respectively, in terms of the size of the quality gap. The gaps were statistically significant, with values ≤.05; therefore, hospital administrators must focus on each of these areas. By focusing on the identified areas of improvement, health care authorities, managers, practitioners, and decision makers can bring substantial change within hospitals. PMID:28660771
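The dimension ranking reported above comes from SERVQUAL-style gap analysis: mean perception minus mean expectation per dimension, with the most negative gap flagged first. A minimal sketch with invented ratings for two of the five dimensions (not the study's data):

```python
def service_gaps(perceptions, expectations):
    """SERVQUAL-style gap analysis: mean perception minus mean expectation
    for each dimension. The most negative gap is the area most in need of
    managerial attention, so results are sorted ascending."""
    gaps = {}
    for dim in expectations:
        p = sum(perceptions[dim]) / len(perceptions[dim])
        e = sum(expectations[dim]) / len(expectations[dim])
        gaps[dim] = p - e
    return sorted(gaps.items(), key=lambda kv: kv[1])

# Hypothetical five-point ratings from three respondents.
P = {"reliability": [3, 4, 3], "tangibility": [4, 4, 5]}
E = {"reliability": [5, 5, 4], "tangibility": [4, 5, 4]}
ranked = service_gaps(P, E)
```

In this toy example reliability shows the largest negative gap, mirroring its first-place ranking in the study's results.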
Modeling of turbulence and transition
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing
1992-01-01
The first objective is to evaluate current two-equation and second order closure turbulence models using available direct numerical simulations and experiments, and to identify the models which represent the state of the art in turbulence modeling. The second objective is to study the near-wall behavior of turbulence, and to develop reliable models for an engineering calculation of turbulence and transition. The third objective is to develop a two-scale model for compressible turbulence.
One Dimension Analytical Model of Normal Ballistic Impact on Ceramic/Metal Gradient Armor
NASA Astrophysics Data System (ADS)
Liu, Lisheng; Zhang, Qingjie; Zhai, Pengcheng; Cao, Dongfeng
2008-02-01
An analytical model of normal ballistic impact on ceramic/metal gradient armor, based on modified Alekseevskii-Tate equations, has been developed. In this model, the process of the gradient armor being impacted by a long rod is divided into four stages. The first stage covers the projectile's response: its mass-erosion (flowing) phase, mushrooming phase and rigid phase; the second is the formation of the comminuted ceramic conoid; the third is the penetration of the gradient layer; and the last is the penetration of the metal back-up plate. The equations of the third stage have been advanced by assuming rigid-plastic behavior of the gradient layer and considering the effect of strain rate on the dynamic yield strength.
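The underlying Alekseevskii-Tate framework couples a Bernoulli-type pressure balance at the rod/target interface with rod erosion and deceleration. A schematic Euler integration of the classical equations is sketched below; the paper's modified four-stage model is more involved, and the material values here are rough, illustrative numbers for a tungsten-alloy rod against steel:

```python
import math

def tate_penetration(v0, L0, rho_p, rho_t, Yp, Rt, dt=1e-7):
    """Schematic Euler integration of the classical Alekseevskii-Tate
    equations for an eroding long rod. Interface balance:
    0.5*rho_p*(v-u)^2 + Yp = 0.5*rho_t*u^2 + Rt, with rod erosion
    dL/dt = -(v-u) and rod deceleration dv/dt = -Yp/(rho_p*L)."""
    v, L, depth = v0, L0, 0.0
    while L > 1e-4 and v > 0.0:
        if abs(rho_p - rho_t) < 1e-9:
            u = 0.5 * v + (Yp - Rt) / (rho_p * v)
        else:
            # Physical root of the quadratic in the interface speed u.
            disc = rho_p * rho_t * v * v - 2.0 * (rho_p - rho_t) * (Yp - Rt)
            if disc <= 0.0:
                break
            u = (rho_p * v - math.sqrt(disc)) / (rho_p - rho_t)
        if u <= 0.0 or u >= v:      # flow stops, or rod acts rigid: end sketch
            break
        depth += u * dt             # crater deepens at the interface speed
        L -= (v - u) * dt           # rod erodes
        v -= Yp / (rho_p * L) * dt  # rear of the rod decelerates
    return depth

# Illustrative: 0.1 m tungsten-alloy rod at 1500 m/s into steel
# (densities in kg/m^3, strengths in Pa; values approximate).
depth = tate_penetration(v0=1500.0, L0=0.1, rho_p=17600.0, rho_t=7800.0,
                         Yp=1.0e9, Rt=4.5e9)
```

In the hydrodynamic limit (Yp = Rt) the interface speed reduces to v/(1 + sqrt(rho_t/rho_p)), recovering the density-ratio penetration law; the strength terms Yp and Rt are what the paper's stages modify for the graded ceramic layer.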
NASA Astrophysics Data System (ADS)
Qiu, Zeyang; Liang, Wei; Wang, Xue; Lin, Yang; Zhang, Meng
2017-05-01
As an important part of the national energy supply system, natural gas transmission pipelines can cause serious environmental pollution and loss of life and property in the event of an accident. Third-party damage is one of the most significant causes of natural gas pipeline system accidents, so establishing an effective quantitative risk assessment model of third-party damage is very important for reducing the number of gas pipeline operation accidents. Because third-party damage accidents are diverse, complex, and uncertain, this paper establishes a quantitative risk assessment model of third-party damage based on the Analytic Hierarchy Process (AHP) and Fuzzy Comprehensive Evaluation (FCE). First, the risk sources of third-party damage are identified exactly; the weights of the factors are then determined via an improved AHP; finally, the importance of each factor is calculated with the fuzzy comprehensive evaluation model. The results show that the quantitative risk assessment model is suitable for third-party damage to natural gas pipelines, and improvement measures can be put forward to avoid accidents based on the importance of each factor.
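The AHP-plus-FCE pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual model: the three risk factors, the pairwise judgments in `A`, and the membership matrix `R` are invented for demonstration.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for 3 risk factors (Saaty 1-9 scale).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# AHP weights: principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency check: CR below 0.1 means the judgments are acceptably consistent
# (random index RI = 0.58 for a 3x3 matrix).
lam_max = eigvals.real[k]
CI = (lam_max - 3) / (3 - 1)
CR = CI / 0.58

# Fuzzy comprehensive evaluation: R maps each factor to membership degrees
# over risk grades (e.g. low/medium/high); B = w @ R aggregates them.
R = np.array([
    [0.1, 0.3, 0.6],
    [0.2, 0.5, 0.3],
    [0.4, 0.4, 0.2],
])
B = w @ R
```

The improved AHP in the paper refines how the comparison matrix is built; the eigenvector weighting and fuzzy aggregation steps shown here are the standard core of the method.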
NASA Astrophysics Data System (ADS)
Altieri, Ada
2018-01-01
In view of the results achieved in a previous related work [A. Altieri, S. Franz, and G. Parisi, J. Stat. Mech. (2016) 093301, 10.1088/1742-5468/2016/09/093301], regarding a Plefka-like expansion of the free energy up to second order in the perceptron model, we improve the computation here by focusing on the role of third-order corrections. The perceptron model is a simple example of a constraint satisfaction problem, falling in the same universality class as hard spheres near jamming and hence allowing us to obtain exact results in high dimensions for more complex settings. Our method enables us to define an effective potential (or Thouless-Anderson-Palmer free energy), namely a coarse-grained functional that depends on the generalized forces and the effective gaps between particles. The analysis of the third-order corrections to the effective potential reveals that, albeit irrelevant in a mean-field framework in the thermodynamic limit, they might instead play a fundamental role when considering finite-size effects. We also study the typical behavior of the generalized forces and show that two kinds of corrections can occur. The first contribution arises because the system is analyzed at a finite distance from jamming, while the second is due to finite-size corrections. We nevertheless show that third-order corrections in the perturbative expansion vanish in the jamming limit, both for the potential and for the generalized forces, in agreement with the isostaticity argument proposed by Wyart and coworkers. Finally, we analyze the relevant scaling solutions emerging close to the jamming line, which define a crossover regime connecting the control parameters of the model to an effective temperature.
Pellerone, Monica; Ramaci, Tiziana; Parrello, Santa; Guariglia, Paola; Giaimo, Flavio
2017-01-01
Family functioning plays an important role in developing and maintaining dysfunctional behaviors, especially during adolescence. The lack of indicators of family functioning, as determinants of personal and interpersonal problems, represents an obstacle to activities aimed at developing preventive and intervention strategies. The Process Model of Family Functioning provides a conceptual framework organizing and integrating various concepts into a comprehensive family assessment; this model underlines that, through the process of task accomplishment, each family meets objectives central to its life as a group. The Family Assessment Measure Third Edition (FAM III), based on the Process Model of Family Functioning, is among the most frequently used self-report instruments to measure family functioning. The present study aimed to evaluate the psychometric properties of the Italian version of the Family Assessment Measure Third Edition - Short Version (Brief FAM-III). It consists of three modules: the General Scale, which evaluates the family as a system; the Dyadic Relationships Scale, which examines how each family member perceives his/her relationship with another member; and the Self-Rating Scale, which indicates how each family member is perceived within the nucleus. The Brief FAM-III, together with the Family Assessment Device, was administered to 484 subjects, members of 162 Italian families: 162 fathers aged between 35 and 73 years, 162 mothers aged between 34 and 69 years, and 160 children aged between 12 and 35 years. Correlation, paired-sample t-test, and reliability analyses were carried out. General item analysis shows good indices of reliability, with Cronbach's α coefficients equal to 0.96. The Brief FAM-III has satisfactory internal consistency, with Cronbach's α equal to 0.90 for the General Scale, 0.94 for the Dyadic Relationships Scale, and 0.88 for the Self-Rating Scale.
The Brief FAM-III can be a psychometrically reliable and valid measure for the assessment of family strengths and weaknesses within Italian contexts. The instrument can be used to obtain an overall idea of family functioning, for the purposes of preliminary screening, and for monitoring family functioning over time or during treatment.
EU-Norsewind Using Envisat ASAR And Other Data For Offshore Wind Atlas
NASA Astrophysics Data System (ADS)
Hasager, Charlotte B.; Mouche, Alexis; Badger, Merete
2010-04-01
The EU project NORSEWIND (Northern Seas Wind Index Database, www.norsewind.eu) aims to produce a state-of-the-art wind atlas for the Baltic, Irish and North Seas using ground-based lidar, meteorological masts, satellite data and mesoscale modelling. So far CLS and Risø DTU have collected Envisat ASAR images for the area of interest, and the first results are presented: maps of wind statistics, Weibull scale and shape parameters, mean wind speed and energy density. The results will be compared with a distributed network of high-quality in-situ observations and mesoscale model results during 2009-2011 as these become available. Wind energy is proportional to the third power of the wind speed, so even small improvements in wind speed mapping are important in this project. One challenge is to arrive at hub-height winds ~100 m above sea level.
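The cubic dependence of wind power on wind speed can be made concrete with the standard Weibull mean power density formula. The scale and shape values below (c = 8 m/s, k = 2.2) are assumed for illustration only, not NORSEWIND results.

```python
import math

def power_density(c, k, rho=1.225):
    """Mean wind power density (W/m^2) for a Weibull wind-speed distribution
    with scale c (m/s) and shape k; rho is air density (kg/m^3).
    Uses E[v^3] = c^3 * Gamma(1 + 3/k)."""
    return 0.5 * rho * c**3 * math.gamma(1 + 3 / k)

# A 5% error in the retrieved scale parameter propagates to a ~16% error
# in power density, because power scales with the cube of wind speed.
base = power_density(8.0, 2.2)
high = power_density(8.0 * 1.05, 2.2)
ratio = high / base  # = 1.05**3, independent of k and rho
```

This is why the abstract stresses that "even small improvements on wind speed mapping are important": relative wind-speed errors are tripled in the energy estimate.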
Theoretical and global scale model studies of the atmospheric sulfur/aerosol system
NASA Technical Reports Server (NTRS)
Kasibhatla, Prasad
1996-01-01
The primary focus during the third phase of our ongoing multi-year research effort has been on 3 activities. These are: (1) a global-scale model study of the anthropogenic component of the tropospheric sulfur cycle; (2) process-scale model studies of the factors influencing the distribution of aerosols in the remote marine atmosphere; and (3) an investigation of the mechanism of the OH-initiated oxidation of DMS in the remote marine boundary layer. In this paper, we describe in more detail our research activities in each of these areas. A major portion of our activities during the fourth and final phase of this project will involve the preparation and submission of manuscripts describing the results from our model studies of marine boundary-layer aerosols and DMS-oxidation mechanisms.
Modeling Structural Dynamics of Biomolecular Complexes by Coarse-Grained Molecular Simulations.
Takada, Shoji; Kanada, Ryo; Tan, Cheng; Terakawa, Tsuyoshi; Li, Wenfei; Kenzaki, Hiroo
2015-12-15
Due to the hierarchical nature of biomolecular systems, their computational modeling calls for multiscale approaches, in which coarse-grained (CG) simulations are used to address the long-time dynamics of large systems. Here, we review recent developments and applications of CG modeling methods, focusing on our methods primarily for proteins, DNA, and their complexes. These methods have been implemented in the CG biomolecular simulator CafeMol. Our CG model has a resolution such that ∼10 non-hydrogen atoms are grouped into one CG particle on average. For proteins, each amino acid is represented by one CG particle. For DNA, each nucleotide is represented by three CG particles, for the sugar, phosphate, and base. The protein modeling is based on the idea that proteins have a globally funnel-like energy landscape, which is encoded in a structure-based potential energy function. We first describe two representative minimal models of proteins, the elastic network model and the classic Go̅ model. We then present a more elaborate protein model, which extends the minimal model to incorporate sequence- and context-dependent local flexibility and nonlocal contacts. For DNA, we describe a model developed by de Pablo's group that was tuned to reproduce well the sequence-dependent structural and thermodynamic experimental data for single- and double-stranded DNA. Protein-DNA interactions are modeled either by the structure-based term for specific cases or by electrostatic and excluded-volume terms for nonspecific cases. We also discuss the time-scale mapping in CG molecular dynamics simulations. While the apparent single time step of our CGMD is about 10 times larger than that of fully atomistic molecular dynamics for small-scale dynamics, large-scale motions can be further accelerated by two orders of magnitude with the use of the CG model and a low friction constant in Langevin dynamics. Next, we present four examples of applications.
First, the classic Go̅ model was used to emulate one ATP cycle of the molecular motor kinesin. Second, nonspecific protein-DNA binding was studied by a combination of the elaborate protein and DNA models. Third, a transcription factor, p53, which contains highly fluctuating regions, was simulated on two perpendicularly arranged DNA segments, addressing the intersegmental transfer of p53. Fourth, we simulated the structural dynamics of dinucleosomes connected by a linker DNA, finding distinct types of internucleosome docking and salt-concentration-dependent compaction. Finally, we discuss many of the limitations of the current approaches and future directions. In particular, a more accurate electrostatic treatment and a phospholipid model that matches our CG resolution are of immediate importance.
Defining Reward Value by Cross-Modal Scaling
Casey, Anna H.; Silberberg, Alan; Paukner, Annika; Suomi, Stephen J.
2013-01-01
Researchers in comparative psychology often use different food rewards in their studies, with food values defined by a pre-experimental preference test. While this technique rank orders food values, it provides limited information about value differences because preferences may reflect not only value differences, but also the degree to which one good may “substitute” for another (e.g., one food may substitute well for another food, but neither substitutes well for water). We propose scaling the value of food pairs by a third food that is less substitutable for either food offered in preference tests (cross-modal scaling). Here, Cebus monkeys chose between four pairwise alternatives: fruits A vs. B; cereal amount X vs. fruit A and cereal amount Y vs. fruit B where X and Y were adjusted to produce indifference between each cereal amount and each fruit; and cereal amounts X vs. Y. When choice was between perfect substitutes (different cereal amounts), preferences were nearly absolute; so too when choice was between close substitutes (fruits); however, when choice was between fruits and cereal amounts, preferences were more modest and less likely due to substitutability. These results suggest that scaling between-good value differences in terms of a third, less-substitutable good may be better than simple preference tests in defining between-good value differences. PMID:23771492
Effect of smoking, alcohol, and depression on the quality of life of head and neck cancer patients.
Duffy, Sonia A; Terrell, Jeffrey E; Valenstein, Marcia; Ronis, David L; Copeland, Laurel A; Connors, Mary
2002-01-01
This pilot study examined the relationship between smoking, alcohol intake, depressive symptoms and quality of life (QoL) in head and neck cancer patients. A questionnaire on smoking, alcohol, depressive symptoms and QoL was distributed to head and neck cancer patients (N=81). Over one-third (35%) of the respondents had smoked within the last 6 months, 46% had drunk alcohol within the last 6 months and 44% screened positive for significant depressive symptoms. About one-third (32%) of smokers were interested in smoking cessation services and 37% of patients with depressive symptoms were interested in depression services. However, only 9% of those who drank alcohol expressed interest in alcohol services. Smoking was negatively associated with five scales of the SF-36V including Physical Functioning, General Health, Vitality, Social Functioning, and Role-Emotional Health. Depressive symptoms were negatively associated with all eight scales on the SF-36V and all four scales of the Head and Neck Quality of Life instrument. Surprisingly, alcohol was not found to be associated with any of the QoL scales. While smoking, alcohol intake and depression may be episodically treated, standardized protocols and aggressive intervention strategies for systematically addressing these highly prevalent disorders are needed in this population.
High self-efficacy predicts adherence to surveillance colonoscopy in inflammatory bowel disease.
Friedman, Sonia; Cheifetz, Adam S; Farraye, Francis A; Banks, Peter A; Makrauer, Frederick L; Burakoff, Robert; Farmer, Barbara; Torgersen, Leanne N; Wahl, Kelly E
2014-09-01
Patients with extensive ulcerative colitis or Crohn's disease of the colon have an increased risk of colon cancer and require colonoscopic surveillance. In this study, we assessed individual self-efficacy (SE) to estimate the probability of adherence to surveillance colonoscopies. Three hundred seventy-eight patients with ulcerative colitis or Crohn's disease of the colon for at least 7 years and with at least one third of the colon involved participated in this cross-sectional questionnaire study performed at 3 tertiary referral inflammatory bowel disease clinics. Medical charts were abstracted for demographic and clinical variables. The questionnaire contained a group of items assessing SE for undergoing colonoscopy. We validated our 20-question SE scale and used 8 of the items that highlighted scheduling, preparation, and postprocedure recovery, to develop 2 shorter SE scales. All 3 scales were reliable with Cronbach's α ranging from 0.845 to 0.905 and correlated with chart-documented adherence to surveillance colonoscopy (P < 0.001). We then developed logistic regression models to predict adherence to surveillance colonoscopy using each scale separately along with other key variables (i.e., disease location, knowledge of correct adherence intervals, and information sources of patients consulted regarding Crohn's disease and ulcerative colitis) and demonstrated model accuracy up to 74%. SE, as measured by our validated scales, correlates with chart-adherence to surveillance colonoscopy. Our adherence model, which includes SE, predicts adherence with 74% certainty. An 8-item validated clinical questionnaire can be administered to assess whether patients in this population may require further intervention for adherence.
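The scale-reliability figures quoted above (Cronbach's α of 0.845 to 0.905) come from the standard internal-consistency formula, which can be computed directly from an item-score matrix. This is a generic sketch of the statistic, not the study's data or code.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

Values above roughly 0.8, as reported for all three SE scales, are conventionally taken to indicate good internal consistency.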
Acoustic and aerodynamic performance of a 1.83-meter (6-ft) diameter 1.25-pressure-ratio fan (QF-8)
NASA Technical Reports Server (NTRS)
Woodward, R. P.; Lucas, J. G.
1976-01-01
A 1.25-pressure-ratio 1.83-meter (6-ft) tip diameter experimental fan stage with characteristics suitable for engine application on STOL aircraft was tested for acoustic and aerodynamic performance. The design incorporated proven features for low noise, including absence of inlet guide vanes, low rotor blade tip speed, low aerodynamic blade loading, and long axial spacing between the rotor and stator blade rows. The fan was operated with five exhaust nozzle areas. The stage noise levels generally increased with a decrease in nozzle area. Separation of the acoustic one-third octave results into broadband and pure-tone components showed the broadband noise to be greater than the corresponding pure-tone components. The sideline perceived noise was highest in the rear quadrants. The acoustic results of QF-8 were compared with those of two similar STOL application fans in the test series. The QF-8 had somewhat higher relative noise levels than those of the other two fans. The aerodynamic results of QF-8 and the other two fans were compared with corresponding results from 50.8-cm (20-in.) diam scale models of these fans and design values. Although the results for the full-scale and scale models of the other two fans were in reasonable agreement for each design, the full-scale fan QF-8 results showed poor performance compared with corresponding model results and design expectations. Facility effects of the full-scale fan QF-8 installation were considered in analyzing this discrepancy.
Parallelization of fine-scale computation in Agile Multiscale Modelling Methodology
NASA Astrophysics Data System (ADS)
Macioł, Piotr; Michalik, Kazimierz
2016-10-01
Nowadays, multiscale modelling of material behavior is an extensively developed area. An important obstacle to its wide application is its high computational demand. Among other solutions, the parallelization of multiscale computations is promising. Heterogeneous multiscale models are good candidates for parallelization, since communication between sub-models is limited. In this paper, the possibility of parallelizing multiscale models based on the Agile Multiscale Methodology framework is discussed. A sequential, FEM-based macroscopic model has been combined with concurrently computed fine-scale models employing the MatCalc thermodynamic simulator. The main issues investigated in this work are (i) the speed-up of multiscale models, with special focus on fine-scale computations, and (ii) the decrease in computation quality enforced by parallel execution. Speed-up has been evaluated on the basis of Amdahl's law. The problem of `delay error', arising from the parallel execution of fine-scale sub-models controlled by the sequential macroscopic sub-model, is discussed. Some technical aspects of combining third-party commercial modelling software with an in-house multiscale framework and an MPI library are also discussed.
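The Amdahl's law evaluation mentioned above amounts to a one-line formula; the 90% parallel fraction below is an assumed figure for illustration, not one taken from the paper.

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Amdahl's law: speed-up of a computation of which a fraction p
    can be parallelized over n workers, while (1 - p) stays serial."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_workers)

# If 90% of the run time is spent in fine-scale sub-model evaluations
# that can run concurrently, 8 workers give under a 5x speed-up, and the
# serial macroscale FEM step caps the achievable speed-up at 10x.
s8 = amdahl_speedup(0.9, 8)
s_inf = amdahl_speedup(0.9, 10**9)
```

This cap is why the paper focuses on the fine-scale computations: they dominate the parallelizable fraction, while the sequential macroscopic sub-model bounds the overall gain.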
NASA Astrophysics Data System (ADS)
Clavijo, H. W.
2016-12-01
Modeling the soil-plant-atmosphere continuum has been a central part of understanding the interrelationships among biogeochemical and hydrological processes. The theory behind coupling Land Surface Models (LSMs) and Dynamic Global Vegetation Models (DGVMs) is based on physical and physiological processes connected mainly by input-output interactions. This modeling framework could be improved by applying non-equilibrium thermodynamics, which can encompass the majority of biophysical processes in a standard fashion. This study presents an alternative plant-water-atmosphere model based on energy-mass thermodynamics. The system of dynamic equations derived is based on the total entropy, the total energy balance for the plant, the biomass dynamics at the metabolic level, and the water-carbon-nitrogen fluxes and balances. One advantage of this formulation is its capability to describe the adaptation and evolution of the plant as a bio-system coupled to its environment. Second, it opens a window for applications under specific conditions, from the individual-plant scale to the watershed and global scales. Third, it enhances the possibility of analyzing anthropogenic impacts on the system, benefiting from the mathematical formulation and its non-linearity. This non-linear model formulation is analyzed with the concepts of qualitative dynamical systems theory for different state-space phase portraits. Attractors and sources are identified together with their stability analysis. Possible bifurcations are explored and reported. Simulations of the system dynamics under different conditions are presented. These results show strong consistency and applicability, validating the use of non-equilibrium thermodynamic theory.
Modelling the breakup of solid aggregates in turbulent flows
NASA Astrophysics Data System (ADS)
Bäbler, Matthäus U.; Morbidelli, Massimo; Bałdyga, Jerzy
The breakup of solid aggregates suspended in a turbulent flow is considered. The aggregates are assumed to be small with respect to the Kolmogorov length scale and the flow is assumed to be homogeneous. Further, breakup is assumed to be caused by hydrodynamic stresses acting on the aggregates, and is therefore assumed to follow first-order kinetics, dn(x,t)/dt = -KB(x) n(x,t), where KB(x) is the breakup rate function, n(x,t) is the number concentration of aggregates, and x is the aggregate mass. To model KB(x), it is assumed that an aggregate breaks instantaneously when the surrounding flow is violent enough to create a hydrodynamic stress that exceeds a critical value required to break the aggregate. For aggregates smaller than the Kolmogorov length scale, the hydrodynamic stress is determined by the viscosity and the local energy dissipation rate, whose fluctuations are highly intermittent. Hence, the first-order breakup kinetics are governed by the frequency with which the local energy dissipation rate exceeds a critical value (corresponding to the critical stress). A multifractal model is adopted to describe the statistical properties of the local energy dissipation rate, and a power-law relation is used to relate the critical energy dissipation rate above which breakup occurs to the aggregate mass. The model leads to an expression for KB(x) that is zero below a limiting aggregate mass and diverges for large x. When simulating the breakup process, the former leads to an asymptotic mean aggregate size whose scaling with the mean energy dissipation rate differs by one third from the scaling expected in a non-fluctuating flow.
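The exceedance picture behind KB(x) can be sketched with a toy surrogate. This is not the paper's multifractal model: the log-normal dissipation statistics, the power-law exponent, and the prefactor below are illustrative stand-ins, chosen only to show how a breakup frequency follows from "how often does the local dissipation exceed the critical value for this aggregate size".

```python
import math
import random

def breakup_rate(x, A=1.0, q=0.5, sigma=1.0, samples=200_000, seed=0):
    """Estimate KB(x) (per eddy turnover time) as the frequency with which
    a fluctuating local dissipation rate eps exceeds the critical value
    eps_cr(x) = A * x**(-q); larger aggregates have a lower threshold.
    eps is drawn from a unit-mean log-normal as a crude intermittency
    surrogate (the paper uses a multifractal model instead)."""
    rng = random.Random(seed)
    eps_cr = A * x ** (-q)
    hits = sum(
        1
        for _ in range(samples)
        if math.exp(rng.gauss(-sigma**2 / 2, sigma)) > eps_cr
    )
    return hits / samples

# Larger aggregates break faster, since their critical dissipation is lower.
```

Even this crude sketch reproduces the qualitative point of the model: the breakup rate is a steeply increasing function of aggregate mass, so a population driven by such kinetics relaxes toward a limiting size.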
Performance of distributed multiscale simulations
Borgdorff, J.; Ben Belgacem, M.; Bona-Casas, C.; Fazendeiro, L.; Groen, D.; Hoenen, O.; Mizeranschi, A.; Suter, J. L.; Coster, D.; Coveney, P. V.; Dubitzky, W.; Hoekstra, A. G.; Strand, P.; Chopard, B.
2014-01-01
Multiscale simulations model phenomena across natural scales using monolithic or component-based code, running on local or distributed resources. In this work, we investigate the performance of distributed multiscale computing of component-based models, guided by six multiscale applications with different characteristics and from several disciplines. Three modes of distributed multiscale computing are identified: supplementing local dependencies with large-scale resources, load distribution over multiple resources, and load balancing of small- and large-scale resources. We find that the first mode has the apparent benefit of increasing simulation speed, and the second mode can increase simulation speed if local resources are limited. Depending on resource reservation and model coupling topology, the third mode may result in a reduction of resource consumption. PMID:24982258
NASA Astrophysics Data System (ADS)
Morais, João; Bouhmadi-López, Mariam; Krämer, Manuel; Robles-Pérez, Salvador
2018-03-01
We analyze a quantized toy model of a universe undergoing eternal inflation using a quantum-field-theoretical formulation of the Wheeler-DeWitt equation. This so-called third quantization method leads to the picture that the eternally inflating universe is converted to a multiverse in which sub-universes are created and exhibit a distinctive phase in their evolution before reaching an asymptotic de Sitter phase. From the perspective of one of these sub-universes, we can thus analyze the pre-inflationary phase that arises naturally. Assuming that our observable universe is represented by one of those sub-universes, we calculate how this pre-inflationary phase influences the power spectrum of the cosmic microwave background (CMB) anisotropies and analyze whether it can explain the observed discrepancy of the power spectrum on large scales, i.e. the quadrupole issue in the CMB. While the answer to this question is negative in the specific model analyzed here, we point out a possible resolution of this issue.
No third-party punishment in chimpanzees
Riedl, Katrin; Jensen, Keith; Call, Josep; Tomasello, Michael
2012-01-01
Punishment can help maintain cooperation by deterring free-riding and cheating. Of particular importance in large-scale human societies is third-party punishment in which individuals punish a transgressor or norm violator even when they themselves are not affected. Nonhuman primates and other animals aggress against conspecifics with some regularity, but it is unclear whether this is ever aimed at punishing others for noncooperation, and whether third-party punishment occurs at all. Here we report an experimental study in which one of humans' closest living relatives, chimpanzees (Pan troglodytes), could punish an individual who stole food. Dominants retaliated when their own food was stolen, but they did not punish when the food of third-parties was stolen, even when the victim was related to them. Third-party punishment as a means of enforcing cooperation, as humans do, might therefore be a derived trait in the human lineage. PMID:22927412
NASA Technical Reports Server (NTRS)
Lee, J. B.; Basford, R. C.
1957-01-01
As a continuation of an investigation of the ejection release characteristics of an internally carried MB-1 rocket in the Convair F-106A airplane, fin modifications at additional Mach numbers and simulated altitudes have been studied in the 27- by 27-inch preflight jet of the Langley Pilotless Aircraft Research Station at Wallops Island, Va. The MB-1 rocket was ejected with fins open, fins closed, fins closed with a shroud around the fins, and fins folded with a "boattail" placed in between the fins. Dynamically scaled models (0.0956 scale) were tested at simulated altitudes of 12,000, 18,850, and 27,500 feet at subsonic Mach numbers and at 18,850, 27,500, and 40,000 feet for Mach numbers of 1.39, 1.59, and 1.98. Successful ejections can be obtained for over 10 store diameters from the release point by the use of a shroud around the folded fins with the proper ejection velocity and nose-down pitching moment at release. In one case investigated it was found desirable to close off the front one-third of the bomb bay. It appeared that the fins should be opened after release and within 5 to 6 rocket diameters if no modifications are made on the rocket. An increase in fuselage angle of attack caused higher nose-up pitch rates after release.
On the prediction of impact noise, V: The noise from drop hammers
NASA Astrophysics Data System (ADS)
Richards, E. J.; Carr, I.; Westcott, M.
1983-06-01
In earlier papers in this series, the concepts of "acceleration" and "ringing" noise have been studied in relation to impact machines, and values of radiation efficiency have been obtained for the various types of structural components. In the work reported in this paper the predicted and measured noise radiation from a drop hammer, both in full-scale and in 1/3-scale model form, was examined. It is found that overall noise levels (Leq per event) can be predicted from vibration measurements to within ±1.5 dB, and to within ±2.5 dB in one-third octave bands. In turn this has permitted noise reduction techniques to be examined by studies of local component vibration levels rather than overall noise, a method which provides considerable enlightenment at the design stage. It is shown that on one particular drop hammer, the noise energy is shared surprisingly uniformly over four or five sources, and that when these have been reduced, the overall noise reduction is severely limited by the "acceleration" noise from the "tup" or "hammer" itself. As this is difficult to eliminate without a basic change in forging technology, it follows that "tup" enclosure or modification of the sharpness of the final "hard" impact are the only means available for any serious noise reduction. Also indicated is the reliability of using model techniques, suitably scaled in frequency and impulse magnitude, in developing machinery with impact characteristics.
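Predicting radiated noise from vibration measurements, as described above, rests on the standard radiation-efficiency relation W = ρcSσ⟨v²⟩, which converts a measured surface velocity into radiated sound power. A minimal sketch (the function name and example values are illustrative, not taken from the paper):

```python
import math

def radiated_sound_power_level(v_rms, area, sigma, rho=1.21, c=343.0):
    """Sound power level (dB re 1 pW) radiated by a vibrating surface.

    Uses W = rho * c * S * sigma * <v^2>, the standard radiation-efficiency
    relation for structure-borne ("ringing") noise.
      v_rms : spatially averaged RMS surface velocity, m/s
      area  : radiating surface area S, m^2
      sigma : radiation efficiency (dimensionless, approaching 1 above
              the coincidence frequency)
    """
    power = rho * c * area * sigma * v_rms ** 2   # radiated power, watts
    return 10.0 * math.log10(power / 1e-12)       # dB re 1e-12 W

# Hypothetical example: a 0.5 m^2 machine panel at 1 mm/s RMS, sigma = 0.8
level = radiated_sound_power_level(1e-3, 0.5, 0.8)  # about 82 dB re 1 pW
```

Summing such component levels source by source is what allows the design-stage ranking of four or five contributors described in the abstract.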
ERIC Educational Resources Information Center
Oh, Hyeon-Joo; Glutting, Joseph J.; Watkins, Marley W.; Youngstrom, Eric A.; McDermott, Paul A.
2004-01-01
In this study, the authors used structural equation modeling to investigate relationships between ability constructs from the "Wechsler Intelligence Scale for Children-Third Edition" (WISC-III; Wechsler, 1991) in explaining reading and mathematics achievement constructs on the "Wechsler Individual Achievement Test" (WIAT;…
Toward a U.S. National Phenological Assessment
NASA Astrophysics Data System (ADS)
Henebry, Geoffrey M.; Betancourt, Julio L.
2010-01-01
Third USA National Phenology Network (USA-NPN) and Research Coordination Network (RCN) Annual Meeting; Milwaukee, Wisconsin, 5-9 October 2009; Directional climate change will have profound and lasting effects throughout society that are best understood through fundamental physical and biological processes. One such process is phenology: how the timing of recurring biological events is affected by biotic and abiotic forces. Phenology is an early and integrative indicator of climate change readily understood by nonspecialists. Phenology affects the planting, maturation, and harvesting of food and fiber; pollination; timing and magnitude of allergies and disease; recreation and tourism; water quantity and quality; and ecosystem function and resilience. Thus, phenology is the gateway to climatic effects on both managed and unmanaged ecosystems. Adaptation to climatic variability and change will require integration of phenological data and models with climatic forecasts at seasonal to decadal time scales. Changes in phenologies have already manifested myriad effects of directional climate change. As these changes continue, it is critical to establish a comprehensive suite of benchmarks that can be tracked and mapped at local to continental scales with observations and climate models.
Bartuli, F.N.; Luciani, F.; Caddeo, F.; De Chiara, L.; Di Dio, M.; Piva, P.; Ottria, L.; Arcuri, C.
2013-01-01
SUMMARY Objective The aim of the study was to compare the impacted third molar surgical technique using the high-speed rotary handpiece with the piezoelectric one. Materials and Methods 192 patients were selected among those who had to undergo a third molar surgical extraction. These patients' surgeries were performed by means of one of the two techniques, randomly chosen. Each patient received the same analgesic therapy (paracetamol 1000 mg tablets). Each surgery was performed by the same surgeon. The patients were asked to fill in a questionnaire concerning the postoperative pain ("happy face pain" rating scale). Results The average duration of the surgeries performed with the high-speed rotary handpiece was 32 minutes, while the duration of the ones performed with the piezoelectric handpiece was much longer (54 minutes). The postoperative pain values were almost equal. Conclusions The osteotomy performed by means of the traditional technique still represents the gold standard in impacted third molar surgery. The piezoelectric technique may be an effective choice, especially for less skilled surgeons, in order to guarantee the protection of the delicate locoregional anatomical structures. PMID:23991279
The treatment options for posterior malleolar fractures in tibial spiral fractures.
Guo, Jialiang; Liu, Lei; Yang, Zongyou; Hou, Zhiyong; Chen, Wei; Zhang, Yingze
2017-09-01
Posterior malleolar fractures (PMFs) are a common complication of tibial spiral fractures. However, the indication for fixation of posterior fragments is still under debate and varies between surgeons. It is not unusual in clinical practice to find smaller PMFs (<25%), which guidelines allow to be treated conservatively, treated with internal fixation. The aim of this study is to evaluate the clinical outcomes of tibial spiral fractures with PMF and provide proper guidance for the treatment of this special fracture. A total of 284 cases of spiral fractures combined with PMF were collected and analyzed. Demographic data, fragment size (classified by 25% involvement of the ankle joint), time to weight-bearing and postoperative functional scores were recorded. The ankle-hindfoot scale of the American Orthopaedic Foot and Ankle Society (AOFAS), a visual analogue scale (VAS) pain score, assessment of dorsiflexion restriction and an arthritis scale were used as the main evaluations. Forty patients with a larger PMF (≥25%) and 72 with smaller ones (<25%) underwent fixation and were categorized as the fixation group (FG). In the nonfixation group (NG), the corresponding numbers were four and 168 patients, respectively. A total of 279 PMFs were classified as either a large posterolateral triangular fragment carrying the posterior half of the fibular notch or an intra-incisural posterolateral fragment involving one-fourth to one-third of the fibular notch. However, no obvious differences were observed in the clinical outcomes of PMFs involving one-fourth to one-third of the fibular notch. In the treatment of smaller PMFs (<25%) of this type, there were no obvious differences in functional outcomes between fixed (SF) and nonfixed (SN) fractures. Many patients with smaller PMFs underwent fixation, but the functional outcomes of SF were not better than those of SN.
There is thus no need to emphasize other factors in guiding the treatment of PMFs involving one-fourth to one-third of the fibular notch in spiral fractures; the traditional size criterion may be enough to guide the treatment of spiral fractures with this type of PMF. Other types of PMF, however, should still be treated with morphology and fragment size considered simultaneously.
Shi, Wuxian; Chance, Mark R.
2010-01-01
About one-third of all proteins are associated with a metal. Metalloproteomics is defined as the structural and functional characterization of metalloproteins on a genome-wide scale. The methodologies utilized in metalloproteomics, including both forward (bottom-up) and reverse (top-down) technologies, to provide information on the identity, quantity and function of metalloproteins are discussed. Important techniques frequently employed in metalloproteomics include classical proteomics tools such as mass spectrometry and 2-D gels, immobilized-metal affinity chromatography, bioinformatics sequence analysis and homology modeling, X-ray absorption spectroscopy, and other synchrotron radiation-based tools. Combined application of these techniques provides a powerful approach to understanding the function of metalloproteins. PMID:21130021
Full scale model investigation on the acoustical protection of a balcony-like façade device (L).
Tong, Y G; Tang, S K; Yeung, M K L
2011-08-01
The acoustical insertion losses produced by a balcony-like structure in front of a window are examined experimentally. The results suggest that the balcony ceiling is the most appropriate location for the installation of artificial sound absorption for the purpose of improving the broadband insertion loss, while the side walls are found to be the second best. Results also indicate that the acoustic modes of the balcony opening and the balcony cavity resonance in a direction normal to the window could have a great impact on the one-third octave band insertion losses. The maximum broadband road traffic noise insertion loss achieved is about 7 dB.
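The one-third octave bands used for such insertion-loss spectra are conventionally spaced at factors of 2^(1/3) about a 1 kHz reference. A small sketch of the band centre frequencies, assuming the base-2 spacing (standards such as IEC 61260 also define a base-10 variant):

```python
import math

def third_octave_centers(f_lo=100.0, f_hi=5000.0, f_ref=1000.0):
    """Exact base-2 one-third-octave band centre frequencies
    f_c = f_ref * 2**(n/3) lying within [f_lo, f_hi]."""
    n_min = math.ceil(3 * math.log2(f_lo / f_ref))
    n_max = math.floor(3 * math.log2(f_hi / f_ref))
    return [f_ref * 2 ** (n / 3) for n in range(n_min, n_max + 1)]

# Exact centres from 125 Hz to 4 kHz (16 bands) for a road-traffic-noise range
centers = third_octave_centers(100.0, 5000.0)
```

Reporting insertion loss per band, rather than broadband only, is what reveals the resonance-driven dips and peaks the abstract describes.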
Sediment-transport experiments in zero-gravity
NASA Technical Reports Server (NTRS)
Iversen, James D.; Greeley, Ronald
1987-01-01
One of the important parameters in the analysis of sediment entrainment and transport is gravitational attraction. The availability of a laboratory in earth orbit would afford an opportunity to conduct experiments in zero and variable gravity environments. Elimination of gravitational attraction as a factor in such experiments would enable other critical parameters (such as particle cohesion and aerodynamic forces) to be evaluated much more accurately. A Carousel Wind Tunnel (CWT) is proposed for use in conducting experiments concerning sediment particle entrainment and transport in a space station. In order to test the concept of this wind tunnel design, a one-third-scale model CWT was constructed and calibrated. Experiments were conducted in the prototype to determine the feasibility of studying various aeolian processes, and the results were compared with various numerical analyses. Several types of experiments appear to be feasible utilizing the proposed apparatus.
Sediment-transport experiments in zero-gravity
NASA Technical Reports Server (NTRS)
Iversen, J. D.; Greeley, R.
1986-01-01
One of the important parameters in the analysis of sediment entrainment and transport is gravitational attraction. The availability of a laboratory in Earth orbit would afford an opportunity to conduct experiments in zero and variable gravity environments. Elimination of gravitational attraction as a factor in such experiments would enable other critical parameters (such as particle cohesion and aerodynamic forces) to be evaluated much more accurately. A Carousel Wind Tunnel (CWT) is proposed for use in conducting experiments concerning sediment particle entrainment and transport in a space station. In order to test the concept of this wind tunnel design, a one-third-scale model CWT was constructed and calibrated. Experiments were conducted in the prototype to determine the feasibility of studying various aeolian processes, and the results were compared with various numerical analyses. Several types of experiments appear to be feasible utilizing the proposed apparatus.
Badin, Antoine-Scott; Fermani, Francesco; Greenfield, Susan A
2016-01-01
"Neuronal assemblies" are defined here as coalitions within the brain of millions of neurons extending in space up to 1-2 mm, and lasting for hundreds of milliseconds: as such they could potentially link bottom-up, micro-scale with top-down, macro-scale events. The perspective first compares the features in vitro versus in vivo of this underappreciated "meso-scale" level of brain processing, secondly considers the various diverse functions in which assemblies may play a pivotal part, and thirdly analyses whether the surprisingly spatially extensive and prolonged temporal properties of assemblies can be described exclusively in terms of classic synaptic transmission or whether additional, different types of signaling systems are likely to operate. Based on our own voltage-sensitive dye imaging (VSDI) data acquired in vitro we show how restriction to only one signaling process, i.e., synaptic transmission, is unlikely to be adequate for modeling the full profile of assemblies. Based on observations from VSDI with its protracted spatio-temporal scales, we suggest that two other, distinct processes are likely to play a significant role in assembly dynamics: "volume" transmission (the passive diffusion of diverse bioactive transmitters, hormones, and modulators), as well as electrotonic spread via gap junctions. We hypothesize that a combination of all three processes has the greatest potential for deriving a realistic model of assemblies and hence elucidating the various complex brain functions that they may mediate.
Pre- and post-flight-test models versus measured skyship-500 control responses
NASA Technical Reports Server (NTRS)
Jex, Henry R.; Magdaleno, Raymond E.; Gelhausen, Paul; Tischler, Mark B.
1987-01-01
The dynamical equations of motion (EOM) for cruising airships require nonconventional terms to account for buoyancy and apparent-mass effects, but systematic validation of these equations against flight data is not available. Using a candidate set of EOM, three comparisons are made with carefully measured describing functions derived from frequency-sweep flight tests on the Skyship-500 airship. The first compares the pre-flight predictions to the data; the second compares the 'best-fit' equations to data at each of two airspeeds; and the third compares the ability to extrapolate from one condition to another via airship-specific scaling laws. Two transient responses are also compared. The generally good results demonstrate that fairly simple perturbation-equation models are adequate for many types of flight-control analysis and flying-quality evaluations of cruising airships.
Dankers, Rutger; Arnell, Nigel W.; Clark, Douglas B.; Falloon, Pete D.; Fekete, Balázs M.; Gosling, Simon N.; Heinke, Jens; Kim, Hyungjun; Masaki, Yoshimitsu; Satoh, Yusuke; Stacke, Tobias; Wada, Yoshihide; Wisser, Dominik
2014-01-01
Climate change due to anthropogenic greenhouse gas emissions is expected to increase the frequency and intensity of precipitation events, which is likely to affect the probability of flooding into the future. In this paper we use river flow simulations from nine global hydrology and land surface models to explore uncertainties in the potential impacts of climate change on flood hazard at global scale. As an indicator of flood hazard we looked at changes in the 30-y return level of 5-d average peak flows under representative concentration pathway RCP8.5 at the end of this century. Not everywhere does climate change result in an increase in flood hazard: decreases in the magnitude and frequency of the 30-y return level of river flow occur at roughly one-third (20–45%) of the global land grid points, particularly in areas where the hydrograph is dominated by the snowmelt flood peak in spring. In most model experiments, however, an increase in flooding frequency was found in more than half of the grid points. The current 30-y flood peak is projected to occur in more than 1 in 5 y across 5–30% of land grid points. The large-scale patterns of change are remarkably consistent among impact models and even the driving climate models, but at local scale and in individual river basins there can be disagreement even on the sign of change, indicating large modeling uncertainty which needs to be taken into account in local adaptation studies. PMID:24344290
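The 30-y return levels discussed above come from extreme-value analysis of simulated peak flows. As a generic illustration of the calculation (not the authors' actual procedure, and with a hypothetical flow series), a Gumbel (EV-I) distribution fitted by the method of moments gives the T-year return level from a series of annual maxima:

```python
import math

def gumbel_return_level(annual_maxima, T=30):
    """T-year return level from annual maxima via a Gumbel (EV-I) fit
    by the method of moments: the flow exceeded with probability 1/T
    in any given year."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi   # scale parameter
    mu = mean - 0.5772 * beta               # location (Euler-Mascheroni const.)
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# Hypothetical series of annual 5-d peak flows (m^3/s)
flows = [float(q) for q in range(100, 131)]
q30 = gumbel_return_level(flows, T=30)
```

Longer return periods give higher levels, so `gumbel_return_level(flows, T=100)` exceeds `q30`; a projected shift of the fitted parameters under RCP8.5 is what changes the frequency with which the present-day 30-y level is exceeded.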
NASA Astrophysics Data System (ADS)
Lynch, K. A.; Clayton, R.; Roberts, T. M.; Hampton, D. L.; Conde, M.; Zettergren, M. D.; Burleigh, M.; Samara, M.; Michell, R.; Grubbs, G. A., II; Lessard, M.; Hysell, D. L.; Varney, R. H.; Reimer, A.
2017-12-01
The NASA auroral sounding rocket mission Isinglass was launched from Poker Flat, Alaska, in winter 2017. This mission consists of two separate multi-payload sounding rockets flown over an array of ground-based observations, including radars and filtered cameras. The science goal is to collect two case studies, in two different auroral events, of the gradient scale sizes of auroral disturbances in the ionosphere. Data from the in situ payloads and the ground-based observations will be synthesized and fed into an ionospheric model, and the results will be studied to learn which scale sizes of ionospheric structuring have significance for magnetosphere-ionosphere auroral coupling. The in situ instrumentation includes thermal ion sensors (at 5 points on the second flight), thermal electron sensors (at 2 points), DC magnetic fields (2 points), DC electric fields (one point, plus the 4 low-resource thermal ion RPA observations of drift on the second flight), and an auroral precipitation sensor (one point). The ground-based array includes filtered auroral imagers, the PFISR and SuperDARN radars, a coherent scatter radar, and a Fabry-Perot interferometer array. The ionospheric model to be used is a 3-D electrostatic model including the effects of ionospheric chemistry. One observational and modeling goal for the mission is to move both observations and models of auroral arc systems into the third (along-arc) dimension. Modern assimilative tools combined with multipoint but low-resource observations allow a new view of the auroral ionosphere that should allow us to learn more about the auroral zone as a coupled system. Conjugate case studies such as the Isinglass rocket flights allow for a test of the models' interpretation by comparing to in situ data. We aim to develop and improve ionospheric models to the point where they can be used to interpret remote sensing data with confidence without the checkpoint of in situ comparison.
Peatland hydrology and carbon release: why small-scale process matters.
Holden, Joseph
2005-12-15
Peatlands cover over 400 million hectares of the Earth's surface and store between one-third and one-half of the world's soil carbon pool. The long-term ability of peatlands to absorb carbon dioxide from the atmosphere means that they play a major role in moderating global climate. Peatlands can also either attenuate or accentuate flooding. Changing climate or management can alter peatland hydrological processes and pathways for water movement across and below the peat surface. It is the movement of water in peats that drives carbon storage and flux. These small-scale processes can have global impacts through exacerbated terrestrial carbon release. This paper will describe advances in understanding environmental processes operating in peatlands. Recent (and future) advances in high-resolution topographic data collection and hydrological modelling provide an insight into the spatial impacts of land management and climate change in peatlands. Nevertheless, there are still some major challenges for future research. These include the problem that impacts of disturbance in peat can be irreversible, at least on human time-scales. This has implications for the perceived success and understanding of peatland restoration strategies. In some circumstances, peatland restoration may lead to exacerbated carbon loss. This will also be important if we decide to start to create peatlands in order to counter the threat from enhanced atmospheric carbon.
Small-scale electrical resistivity tomography of wet fractured rocks.
LaBrecque, Douglas J; Sharpe, Roger; Wood, Thomas; Heath, Gail
2004-01-01
This paper describes a series of experiments that tested the ability of the electrical resistivity tomography (ERT) method to correctly locate wet and dry fractures in a meso-scale model. The goal was to develop a method of monitoring the flow of water through a fractured rock matrix. The model was a four-by-six array of limestone blocks equipped with 28 stainless steel electrodes. Dry fractures were created by placing pieces of vinyl between one or more blocks. Wet fractures were created by injecting tap water into a joint between blocks. In electrical terms, the dry fractures are resistive and the wet fractures are conductive. The quantities measured by the ERT system are current and voltage around the outside edge of the model. The raw ERT data were translated to resistivity values inside the model using a three-dimensional Occam's inversion routine. This routine was one of the key components of ERT being tested. The model presented several challenges. First, the resistivity of both the blocks and the joints was highly variable. Second, the resistive targets introduced extreme changes that the software could not precisely quantify. Third, the abrupt changes inherent in a fracture system were contrary to the smoothly varying changes expected by the Occam's inversion routine. Fourth, the response of the conductive fractures was small compared to the background variability. In general, ERT was able to correctly locate resistive fractures. Problems occurred, however, when the resistive fracture was near the edges of the model or when multiple fractures were close together. In particular, ERT tended to position the fracture closer to the model center than its true location. Conductive fractures yielded much smaller responses than the resistive case. A difference-inversion method was able to correctly locate these targets.
Higgs boson, sparticle masses and neutralino Dark Matter in Yukawa unified models
NASA Astrophysics Data System (ADS)
Un, Cem Salih
This dissertation collects the results we obtained for a class of Yukawa-unified SO(10) grand unified theories with non-universal soft supersymmetry breaking (SSB) gaugino mass parameters. As has long been known, in contrast to its non-supersymmetric version, SO(10) grand unification predicts Yukawa coupling unification as well as gauge coupling and matter field unification. The models considered in this thesis are assumed to be in the framework of gravity-mediated supersymmetry breaking, and boundary conditions among the SSB terms are set by the group theoretical structure and breaking patterns of SO(10) at the grand unification scale (M_GUT). In addition, we assume universality in the SSB mass terms assigned to the sfermion generations. Since Yukawa coupling unification implies contradictory mass relations for the first two generations, we consider a model with a larger Higgs sector. In this case, we assume that the MSSM Higgs doublets solely reside in the 10-dimensional representation (10_H) of SO(10) and that extra Higgs fields couple negligibly to the third-generation sfermions, in order to maintain Yukawa coupling unification for the third generation (when we mention Yukawa unification throughout this thesis, we mean Yukawa unification for the third family, i.e., t-b-τ Yukawa unification). First we consider a supersymmetric grand unified model in which SO(10) breaks into the MSSM via non-renormalizable dimension-5 operators involving non-singlet F-terms. In our case, we consider an F-term belonging to the 54-dimensional representation of SO(10); it develops a non-zero vacuum expectation value that non-trivially generates the SSB gaugino masses such that M1 : M2 : M3 = -1 : -3 : 2. We consider the case with μ, M1, M2 > 0 and M3 < 0, such that μM2 > 0 and μM3 < 0 always hold.
This model with non-universal and relative-sign gaugino masses has one fewer parameter than the standard approach to Yukawa coupling unification, obtained by setting the masses of the Higgs doublets equal to each other at M_GUT. We also briefly show that Yukawa unification is possible with one fewer parameter still, if one considers a case in which all scalars of the MSSM, including the Higgs doublets, are assigned the same SSB mass term. In the case of relative-sign SSB mass terms, the gaugino mass relation forms a subspace of SU(4)_c × SU(2)_L × SU(2)_R (4-2-2). Even though 4-2-2 does not require gauge coupling unification, if one assumes that 4-2-2 breaks into the MSSM at an energy scale ~ M_GUT, then it can accommodate gauge coupling unification as well as Yukawa unification. As a generalization of the previous model, 4-2-2 also results in a heavy spectrum (~3 TeV) for the colored particles. We conclude this thesis by considering the anomalous magnetic moment of the muon (muon g-2). First, we examine the conditions necessary for consistency with the experimental measurements. Since the supersymmetric contribution to muon g-2 scales as 1/M, where M is the mass of the sparticle running in the loop, the MSSM needs light smuons and gauginos (bino and wino), while the 125 GeV Higgs boson requires heavier spectra. To resolve this conflict, we consider a case in which the first two generations of sfermions are split from the third generation in their SSB masses. Similarly, the MSSM Higgs doublets have different masses from each other, while universality in gaugino masses is maintained. We show that our results can simultaneously be consistent with the 125 GeV Higgs boson and with muon g-2 within a 1σ deviation. (Abstract shortened by UMI.)
Lergetporer, Philipp; Angerer, Silvia; Glätzle-Rützler, Daniela; Sutter, Matthias
2014-01-01
The human ability to establish cooperation, even in large groups of genetically unrelated strangers, depends upon the enforcement of cooperation norms. Third-party punishment is one important factor to explain high levels of cooperation among humans, although it is still somewhat disputed whether other animal species also use this mechanism for promoting cooperation. We study the effectiveness of third-party punishment to increase children’s cooperative behavior in a large-scale cooperation game. Based on an experiment with 1,120 children, aged 7 to 11 y, we find that the threat of third-party punishment more than doubles cooperation rates, despite the fact that children are rarely willing to execute costly punishment. We can show that the higher cooperation levels with third-party punishment are driven by two components. First, cooperation is a rational (expected payoff-maximizing) response to incorrect beliefs about the punishment behavior of third parties. Second, cooperation is a conditionally cooperative reaction to correct beliefs that third party punishment will increase a partner’s level of cooperation. PMID:24778231
Multi-Scale Models for the Scale Interaction of Organized Tropical Convection
NASA Astrophysics Data System (ADS)
Yang, Qiu
Assessing the upscale impact of organized tropical convection from small spatial and temporal scales is a research imperative, not only for a better understanding of the multi-scale structures of dynamical and convective fields in the tropics, but also for eventually helping in the design of new parameterization strategies to improve the next-generation global climate models. Here self-consistent multi-scale models are derived systematically by following multi-scale asymptotic methods and are used to describe the hierarchical structures of tropical atmospheric flows. The advantages of using these multi-scale models lie in isolating the essential components of multi-scale interaction and providing assessment of the upscale impact of the small-scale fluctuations onto the large-scale mean flow through eddy flux divergences of momentum and temperature in a transparent fashion. Specifically, this thesis includes three research projects about the multi-scale interaction of organized tropical convection, involving tropical flows at different scaling regimes and utilizing different multi-scale models correspondingly. Inspired by the observed variability of tropical convection on multiple temporal scales, including daily and intraseasonal time scales, the goal of the first project is to assess the intraseasonal impact of the diurnal cycle on the planetary-scale circulation such as the Hadley cell. As an extension of the first project, the goal of the second project is to assess the intraseasonal impact of the diurnal cycle over the Maritime Continent on the Madden-Julian Oscillation. In the third project, the goals are to simulate the baroclinic aspects of the ITCZ breakdown and assess its upscale impact on the planetary-scale circulation over the eastern Pacific. These simple multi-scale models should be useful for understanding the scale interaction of organized tropical convection and help improve the parameterization of unresolved processes in global climate models.
Extension of relativistic dissipative hydrodynamics to third order
NASA Astrophysics Data System (ADS)
El, Andrej; Xu, Zhe; Greiner, Carsten
2010-04-01
Following the procedure introduced by Israel and Stewart, we expand the entropy current up to third order in the shear stress tensor παβ and derive a novel third-order evolution equation for παβ. This equation is solved for the one-dimensional Bjorken boost-invariant expansion. The scaling solutions for various values of the shear viscosity to entropy density ratio η/s are shown to be in very good agreement with those obtained from kinetic transport calculations. For pressure isotropy starting at 1 at τ0 = 0.4 fm/c, the third-order corrections to Israel-Stewart theory are approximately 10% for η/s = 0.2 and more than a factor of 2 for η/s = 3. We also estimate all higher-order corrections to Israel-Stewart theory and demonstrate their importance in describing highly viscous matter.
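For orientation, the second-order (Israel-Stewart-type) baseline that such third-order corrections extend can be sketched as a simple relaxation equation for the shear pressure in Bjorken flow. The following toy integration is a dimensionless sketch, not the paper's equations: it assumes a conformal EOS p = e/3 and τ_π = 5η/(e+p), and omits all higher-order terms:

```python
def bjorken_shear(eta_over_s=0.2, tau0=1.0, tau1=10.0, dtau=1e-4):
    """Euler integration of a minimal Israel-Stewart-type relaxation model
    for boost-invariant Bjorken expansion (dimensionless units, a = 1, so
    T = e**0.25 and s = (4/3) * e**0.75):

        de/dtau   = -(e + p)/tau + phi/tau
        dphi/dtau = -(phi - (4/3)*eta/tau) / tau_pi,  tau_pi = 5*eta/(e + p)

    Returns the pressure anisotropy P_L/P_T = (p - phi)/(p + phi/2) at tau1,
    starting from an isotropic state (phi = 0) at tau0."""
    e, phi, tau = 1.0, 0.0, tau0
    while tau < tau1:
        p = e / 3.0
        s = (4.0 / 3.0) * e ** 0.75
        eta = eta_over_s * s
        tau_pi = 5.0 * eta / (e + p)
        de = (-(e + p) / tau + phi / tau) * dtau                 # Bjorken dilution
        dphi = -(phi - (4.0 / 3.0) * eta / tau) / tau_pi * dtau  # relaxation to NS
        e, phi, tau = e + de, phi + dphi, tau + dtau
    p = e / 3.0
    return (p - phi) / (p + phi / 2.0)
```

Larger η/s drives the anisotropy further from unity, which is exactly the regime in which the abstract reports that third-order corrections become large.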
[Psychopathology screening in medical school students].
Galván-Molina, Jesús Francisco; Jiménez-Capdeville, María E; Hernández-Mata, José María; Arellano-Cano, José Ramón
2017-01-01
Screening of psychopathology and associated factors in medical students employing an electronic self-report survey. A transversal, observational, and comparative study using the following instruments: a sociodemographic survey; the Adult Self-Report Scale-V1 (ASRS); the State-Trait Anxiety Inventory (STAI); the Zung and Conde Self-Rating Depression Scale; the Almonte-Herskovic Sexual Orientation Self-Report; the Plutchik Suicide Risk Scale; the Alcohol Use Disorders Identification Test (AUDIT); the Fagerström Test for Nicotine Dependence; the Maslach Burnout Inventory (MBI); and the Eating Disorder Inventory 2 (EDI). We gathered 323 surveys from medical students in the first, third and sixth years. The three most prevalent disorders were depression (24%), attention deficit disorder with hyperactivity (28%) and anxiety (13%); the prevalence of high-level burnout syndrome was 13%. In addition, one-fifth of the students reported harmful use of tobacco and alcohol. Sixty percent of medical students had one or more probable disorders or burnout. Adequate screening and treatment of this population could prevent severe mental disorders, and the associated factors could help create a risk profile. This model is an efficient research tool for screening and secondary prevention.
NASA Astrophysics Data System (ADS)
Anders, R.; Chrysikopoulos, C. V.
2003-12-01
As the use of tertiary-treated municipal wastewater (recycled water) for replenishment purposes continues to increase, provisions are being established to protect ground-water resources by ensuring that adequate soil-retention time and distance requirements are met for pathogen removal. However, many of the factors controlling virus fate and transport (e.g. hydraulic conditions, ground-water chemistry, and sediment mineralogy) are interrelated and poorly understood. Therefore, conducting field-scale experiments using surrogates for human enteric viruses at an actual recharge basin that uses recycled water may represent the best approach for establishing adequate setback requirements. Three field-scale infiltration experiments were conducted at such a basin using bacterial viruses (bacteriophage) MS2 and PRD1 as surrogates for human viruses, bromide as a conservative tracer, and recycled water. The specific research site consists of a test basin constructed adjacent to a large recharge facility (spreading grounds) located in the Montebello Forebay of Los Angeles County, California. The soil beneath the test basin is predominantly medium to coarse, moderately sorted, grayish-brown sand. The first experiment was conducted over a 2-day period to determine the feasibility of conducting field-scale infiltration experiments using recycled water seeded with high concentrations of bacteriophage and bromide as tracers. Based on the results of the first experiment, a second experiment was completed when similar hydraulic conditions existed at the test basin. The third infiltration experiment was conducted to confirm the results obtained from the second experiment. Data were obtained for samples collected during the second and third field-scale infiltration experiments from the test basin itself and from depths of 0.3, 0.6, 1.0, 1.5, 3.0, and 7.6 m below the bottom of the test basin. 
These field-scale tracer experiments indicate bacteriophage are attenuated by removal and (or) inactivation during subsurface transport. To simulate the transport and fate of viruses during infiltration, a nonlinear least-squares regression program was used to fit a one-dimensional virus transport model to the experimental data. The model simulates virus transport in homogeneous, saturated porous media with first-order adsorption (or filtration) and inactivation. Furthermore, the model obtains a semi-analytical solution for the special case of a broad pulse and time-dependent source concentration using the principle of superposition. The fitted parameters include the clogging and declogging rate constants and the inactivation constants of suspended and adsorbed viruses. Preliminary results show a reasonable match of the first arrival of bacteriophage and bromide.
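The first-order attenuation described above can be illustrated with a much simpler fit than the study's semi-analytical superposition model. The sketch below fits a lumped removal-plus-inactivation rate to log-linear attenuation of relative concentration with depth; the depths mirror the study's sampling depths, but the concentration values and pore velocity are invented for illustration.

```python
import numpy as np

# Hypothetical depth profile of relative bacteriophage concentration C/C0;
# depths mirror the sampling depths in the study, but the concentrations
# and pore velocity are invented for illustration.
depth = np.array([0.3, 0.6, 1.0, 1.5, 3.0, 7.6])            # m below basin
rel_conc = np.array([0.62, 0.40, 0.21, 0.095, 9.0e-3, 1.1e-5])
pore_velocity = 1.2                                          # m/day (assumed)

# Simplified steady model: C(x) = C0 * exp(-(k_att + lam) * x / v),
# so a log-linear fit of ln(C/C0) vs depth yields the lumped rate.
slope, intercept = np.polyfit(depth, np.log(rel_conc), 1)
k_total = -slope * pore_velocity   # attachment + inactivation, 1/day
print(f"lumped removal rate: {k_total:.2f} per day")
```

Such a lumped fit cannot separate clogging, declogging, and inactivation the way the study's nonlinear least-squares model does; it only recovers their combined effect.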
Predicting academic performance of medical students: the first three years.
Höschl, C; Kozený, J
1997-06-01
The purpose of this exploratory study was to identify a cluster of variables that would most economically explain variation in the grade point averages of medical students during the first 3 years of study. Data were derived from a study of 92 students admitted to the 3rd Faculty of Medicine in the 1992-1993 academic year who were still in medical school at the end of the sixth semester (third year). Stepwise regression analysis was used to build models predicting log-transformed grade point averages after six semesters of study and at the end of the first, second, and third years. Predictor variables were chosen from four domains: 1) high school grade point averages in physics, mathematics, and the Czech language over 4 years of study; 2) results of admission tests in biology, chemistry, and physics; 3) the admission committee's assessment of the applicant's ability to reproduce a text, motivation to study medicine, and social maturity; and 4) scores on the sentimentality and attachment scales of the Tridimensional Personality Questionnaire. The regression model, which included performance in high school physics, results of the admission test in physics, assessment of the applicant's motivation to study medicine, and attachment scale score, accounted for 32% of the variance in grade point average over six semesters of study. The regression models using the first-, second-, and third-year grade point averages as the dependent variables showed slightly decreasing amounts of explained variance toward the end of the third year of study and, within domains, a changing structure of predictor variables. The results suggest that variables chosen from high school performance, the written entrance examination, the admission interview, and personality traits may be significant predictors of academic success during the first 3 years of medical study.
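A stepwise model-building procedure of the kind described can be sketched as greedy forward selection that, at each step, adds the predictor giving the largest R² gain. The data, predictor labels, and selection rule below are illustrative stand-ins, not the study's actual variables or software.

```python
import numpy as np

def r2_ols(X_sub, y, ss_tot):
    """R^2 of an OLS fit with intercept on the given predictor columns."""
    A = np.column_stack([np.ones(len(y)), X_sub])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - np.sum(resid**2) / ss_tot

def forward_stepwise(X, y, names, n_keep):
    """Greedy forward selection: repeatedly add the predictor that most
    increases R^2 (a simplified stand-in for stepwise regression)."""
    selected, remaining = [], list(range(X.shape[1]))
    ss_tot = np.sum((y - y.mean())**2)
    for _ in range(n_keep):
        best = max(remaining, key=lambda j: r2_ols(X[:, selected + [j]], y, ss_tot))
        selected.append(best)
        remaining.remove(best)
    return [names[j] for j in selected]

# Synthetic data: outcome driven by columns 0 and 2 only; the predictor
# labels are hypothetical stand-ins for the study's domains.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 0] + 1.0 * X[:, 2] + rng.normal(scale=0.5, size=200)
names = ["physics_hs", "physics_adm", "motivation", "attachment"]
print(forward_stepwise(X, y, names, n_keep=2))
```

The procedure picks the two truly informative synthetic predictors first, illustrating how a stepwise build can identify an economical subset of variables.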
Comorbidities confounding the outcomes of surgery for third window syndrome: Outlier analysis
Mackay‐Promitas, Heather T.; Demirel, Shaban; Gianoli, Gerard J.; Gizzi, Martin S.; Carter, Dale M.; Siker, David A.
2017-01-01
Objective Patients with third window syndrome and superior semicircular canal dehiscence (SSCD) symptoms whose surgical outcomes placed them as outliers were systematically studied to determine the comorbidities responsible for their poor outcomes. Study Design Observational analytic case-control study in a tertiary referral center. Methods Twelve adult patients with clinical SSCD syndrome underwent surgical management and had outcomes that did not resolve all of their subjective symptoms. In addition to one of the neurotologists, 2 neurologists (one specializing in migraine and the other a neuro-ophthalmologist) and a psychologist clinician-investigator completed comprehensive evaluations. Neuropsychology test batteries included: the Millon Behavioral Medicine Diagnostic; Patient Health Questionnaire (PHQ-9) and Generalized Anxiety Disorder Screener (GAD-7); Adverse Childhood Experiences Scale; the Wide Range Assessment of Memory and Learning, including the 3 domains of verbal memory, visual memory, and attention/concentration; Wechsler Adult Intelligence Scale; and the Delis-Kaplan Executive Function System. The control cohort was composed of 17 participants who had previously undergone surgery for third window syndrome with the expected outcome of resolution of their third window syndrome symptoms and cognitive dysfunction. Results There was a high rate of psychological comorbidity (n = 6) in the outlier cohort; multiple traumatic brain injuries were also a confounding element (n = 10). One patient had elevated cerebrospinal fluid (CSF) pressure requiring ventriculoperitoneal shunting to control the recurrence of dehiscence, and one patient had a drug-induced Parkinson-like syndrome and an idiopathic progressive neurodegenerative process.
Conclusions Components of the Millon Behavioral Medicine Diagnostic, PHQ‐9 and GAD‐7 results suggest that these instruments would be useful as screening tools preoperatively to identify psychological comorbidities that could confound outcomes. The identification of these comorbid psychological as well as other neurological degenerative disease processes led to alternate clinical management pathways for these patients. Level of Evidence 2b. PMID:29094067
Dynamics of tax evasion through an epidemic-like model
NASA Astrophysics Data System (ADS)
Brum, Rafael M.; Crokidakis, Nuno
In this work, we study a model of tax evasion. We consider a fixed population divided into three compartments: honest tax payers, tax evaders, and a third class between the two, which we call susceptible to becoming evaders. The transitions among these compartments are governed by probabilities, as in a model of epidemic spreading. These probabilities model social interactions among individuals as well as the government's enforcement (auditing). We simulate the model on fully connected graphs as well as on scale-free and random complex networks. For the fully connected and random graph cases, we observe that the emergence of tax evaders in the population is associated with an active-absorbing nonequilibrium phase transition, which is absent in scale-free networks.
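The compartmental dynamics described above can be sketched in mean-field form for the fully connected case. The transition rates below (contact-driven honest-to-susceptible and susceptible-to-evader flows, plus an auditing rate returning evaders to honesty) are illustrative choices, not the exact probabilities of the published model; they nonetheless reproduce the active versus absorbing distinction.

```python
def simulate(alpha, beta, gamma, e0=0.01, steps=20000, dt=0.01):
    """Mean-field (fully connected) three-compartment dynamics:
    h honest, s susceptible, e evader (fractions, h + s + e = 1).
    alpha: honest -> susceptible per contact with an evader,
    beta:  susceptible -> evader per contact with an evader,
    gamma: evader -> honest (auditing). Rates are illustrative."""
    h, s, e = 1.0 - e0, 0.0, e0
    for _ in range(steps):
        dh = gamma * e - alpha * h * e
        ds = alpha * h * e - beta * s * e
        de = beta * s * e - gamma * e
        h, s, e = h + dh * dt, s + ds * dt, e + de * dt
    return h, s, e

# Strong auditing: evasion dies out (absorbing phase).
print("gamma=2.0:", simulate(1.0, 1.0, 2.0))
# Weak auditing: a finite evader fraction survives (active phase).
print("gamma=0.1:", simulate(1.0, 1.0, 0.1))
```

Sweeping gamma in this sketch locates the threshold separating the absorbing state (no evaders) from the active state, the mean-field analogue of the phase transition reported for fully connected graphs.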
Yu, Lu; Xie, Dong; Shek, Daniel T. L.
2012-01-01
This study examined the factor structure of a scale based on the four-dimensional gender identity model (Egan and Perry, 2001) in 726 Chinese elementary school students. Exploratory factor analyses suggested a three-factor model, two of which corresponded to “Felt Pressure” and “Intergroup Bias” in the original model. The third factor “Gender Compatibility” appeared to be a combination of “Gender Typicality” and “Gender Contentment” in the original model. Follow-up confirmatory factor analysis (CFA) indicated that, relative to the initial four-factor structure, the three-factor model fits the current Chinese sample better. These results are discussed in light of cross-cultural similarities and differences in development of gender identity. PMID:22701363
Modifying shallow-water equations as a model for wave-vortex turbulence
NASA Astrophysics Data System (ADS)
Mohanan, A. V.; Augier, P.; Lindborg, E.
2017-12-01
The one-layer shallow-water equations are a simple two-dimensional model for studying the complex dynamics of the oceans and the atmosphere. We carry out forced-dissipative numerical simulations, either by forcing medium-scale wave modes or by injecting available potential energy (APE). With pure wave forcing in non-rotating cases, a statistically stationary regime is obtained for a range of forcing Froude numbers F_f = ε/(k_f c), where ε is the energy dissipation rate, k_f the forcing wavenumber, and c the wave speed. Interestingly, the spectra scale as k^-2 and third- and higher-order structure functions scale as r. Such statistics are a manifestation of shock turbulence, or Burgulence, which dominates the flow. Rotating cases exhibit some inverse energy cascade, along with a stronger forward energy cascade dominated by wave-wave interactions. We also propose two modifications to the classical shallow-water equations to construct a toy model. The properties of the model are explored by forcing in APE at a small and a medium wavenumber. The toy model simulations are then compared with results from the shallow-water equations and a full General Circulation Model (GCM) simulation. The most distinctive feature of this model is that, unlike the shallow-water equations, it avoids shocks and conserves quadratic energy. In Fig. 1, for the shallow-water equations, shocks appear as thin dark lines in the divergence (∇·u) field and as discontinuities in the potential temperature (θ) field, whereas only waves appear in the corresponding fields from the toy model simulation. Forward energy cascade results in a wave field with a k^-5/3 spectrum, along with equipartition of KE and APE at small scales. The vortical field develops a k^-3 spectrum. With a medium forcing wavenumber, at large scales, energy converted from APE to KE undergoes an inverse cascade as a result of nonlinear fluxes composed of vortical modes alone.
Gradually, coherent vortices emerge with a strong preference for anticyclonic motion. The model can serve as a closer representation of real geophysical turbulence than the classical shallow-water equations. Fig. 1: Divergence and potential temperature fields of shallow-water (top row) and toy model (bottom row) simulations.
Declining body size: a third universal response to warming?
Gardner, Janet L; Peters, Anne; Kearney, Michael R; Joseph, Leo; Heinsohn, Robert
2011-06-01
A recently documented correlate of anthropogenic climate change involves reductions in body size, the nature and scale of the pattern leading to suggestions of a third universal response to climate warming. Because body size affects thermoregulation and energetics, changing body size has implications for resilience in the face of climate change. A review of recent studies shows heterogeneity in the magnitude and direction of size responses, exposing a need for large-scale phylogenetically controlled comparative analyses of temporal size change. Integrative analyses of museum data combined with new theoretical models of size-dependent thermoregulatory and metabolic responses will increase both understanding of the underlying mechanisms and physiological consequences of size shifts and, therefore, the ability to predict the sensitivities of species to climate change. Copyright © 2011 Elsevier Ltd. All rights reserved.
Xu, J-L; Sun, L; Liu, C; Sun, Z-H; Min, X; Xia, R
2015-09-01
The aim of this comprehensive meta-analysis was to provide evidence-based data to test whether oral contraceptive (OC) use can promote the incidence of dry socket (DS) in females following impacted mandibular third molar extraction. PubMed, the Cochrane Library, and Elsevier Science Direct databases were searched. The pooled risk ratio (RR) with 95% confidence interval (CI) was calculated using fixed-effects or random-effects model analysis. Heterogeneity among studies was evaluated with Cochran's Q test and the I^2 statistic. Study quality was assessed with the Newcastle-Ottawa scale. Of 70 articles identified in the search, 12 reporting 16 clinical controlled trials were included in this study. The incidence of DS was significantly greater in the OC groups than in the control groups (RR 1.80, 95% CI 1.33-2.43). Subgroup analyses showed that the unit assessed (tooth or patient), the region in which the study was conducted, and the intervention were not related to the incidence of DS in females taking OC after impacted mandibular third molar extraction. The sensitivity analysis showed no significant change when any one study was excluded. Publication bias was also not detected. This study suggests that OC use may promote the incidence of DS in females following impacted mandibular third molar extraction. Copyright © 2015 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
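The fixed-effect pooling and heterogeneity statistics mentioned above follow standard formulas, sketched below with invented study values (not the trials analyzed in the paper): risk ratios are pooled by inverse-variance weighting on the log scale, and heterogeneity is summarized by Cochran's Q and I².

```python
import numpy as np

def pooled_rr_fixed(rr, se_log_rr):
    """Fixed-effect inverse-variance pooling of risk ratios on the log
    scale, with Cochran's Q and I^2. Returns (pooled RR, 95% CI, I^2 %)."""
    log_rr = np.log(rr)
    w = 1.0 / np.asarray(se_log_rr) ** 2          # inverse-variance weights
    pooled_log = np.sum(w * log_rr) / np.sum(w)
    se_pooled = 1.0 / np.sqrt(np.sum(w))
    ci = np.exp(pooled_log + 1.96 * np.array([-1.0, 1.0]) * se_pooled)
    q = float(np.sum(w * (log_rr - pooled_log) ** 2))  # Cochran's Q
    df = len(log_rr) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0.0 else 0.0
    return float(np.exp(pooled_log)), ci, i2

# Invented example values for three hypothetical trials:
rr, ci, i2 = pooled_rr_fixed([1.6, 2.1, 1.4], [0.25, 0.30, 0.20])
print(f"pooled RR {rr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, I^2 {i2:.0f}%")
```

A random-effects analysis would additionally inflate the study variances by a between-study component (e.g., DerSimonian-Laird) before reweighting.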
Scale to Measure Attitudes toward Information Technology
ERIC Educational Resources Information Center
Gokhale, Anu A.; Brauchle, Paul E.; Machina, Kenton F.
2013-01-01
The current post-secondary graduation rates in computing disciplines suggest American universities are only training enough students to fill one third of the projected 1.4 million technology and computing jobs available (National Center for Women and Information Technology, 2011). Pursuit of information technology (IT) majors depends, to a great…
Best, Adrian D; De Silva, R K; Thomson, W M; Tong, Darryl C; Cameron, Claire M; De Silva, Harsha L
2017-10-01
The use of opioids in combination with nonopioids is common practice for acute pain management after third molar surgery. One such combination is paracetamol, ibuprofen, and codeine. The authors assessed the efficacy of codeine when added to a regimen of paracetamol and ibuprofen for pain relief after third molar surgery. This study was a randomized, double-blinded, placebo-controlled trial conducted in patients undergoing the surgical removal of at least 1 impacted mandibular third molar requiring bone removal. Participants were randomly allocated to a control group (paracetamol 1,000 mg and ibuprofen 400 mg) or an intervention group (paracetamol 1,000 mg, ibuprofen 400 mg, and codeine 60 mg). All participants were treated under intravenous sedation and using identical surgical conditions and technique. Postoperative pain was assessed using the visual analog scale (VAS) every 3 hours (while awake) for the first 48 hours after surgery. Pain was globally assessed using a questionnaire on day 3 after surgery. There were 131 participants (36% men; control group, n = 67; intervention group, n = 64). Baseline characteristics were similar for the 2 groups. Data were analyzed using a modified intention-to-treat analysis and, for this, a linear mixed model was used. The model showed that the baseline VAS score was associated with subsequent VAS scores and that, with each 3-hour period, the VAS score increased by an average of 0.08. The treatment effect was not statistically meaningful, indicating there was no difference in recorded pain levels between the 2 groups during the first 48 hours after mandibular third molar surgery. Similarly, the 2 groups did not differ in their global ratings of postoperative pain. Codeine 60 mg added to a regimen of paracetamol 1,000 mg and ibuprofen 400 mg does not improve analgesia after third molar surgery. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Obtaining high-resolution stage forecasts by coupling large-scale hydrologic models with sensor data
NASA Astrophysics Data System (ADS)
Fries, K. J.; Kerkez, B.
2017-12-01
We investigate how "big" quantities of distributed sensor data can be coupled with a large-scale hydrologic model, in particular the National Water Model (NWM), to obtain hyper-resolution forecasts. The recent launch of the NWM provides a great example of how growing computational capacity is enabling a new generation of massive hydrologic models. While the NWM spans an unprecedented spatial extent, there remain many questions about how to improve forecasts at the street level, the resolution at which many stakeholders make critical decisions. Further, the NWM runs on supercomputers, so water managers who have access to their own high-resolution measurements may not readily be able to assimilate them into the model. To that end, we ask the question: how can the advances of the large-scale NWM be coupled with new local observations to enable hyper-resolution hydrologic forecasts? A methodology is proposed whereby the flow forecasts of the NWM are directly mapped to high-resolution stream levels using Dynamical System Identification. We apply the methodology across a sensor network of 182 gages in Iowa. Of these sites, approximately one third have been shown to perform well in high-resolution flood forecasting when coupled with the outputs of the NWM. The quality of these forecasts is characterized using Principal Component Analysis and Random Forests to identify where the NWM may benefit from new sources of local observations. We also discuss how this approach can help municipalities identify where they should place low-cost sensors to most benefit from flood forecasts of the NWM.
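The flow-to-stage mapping can be sketched as identification of a first-order linear dynamical model by least squares. The synthetic flow and stage series, the model order, and the coefficients below are illustrative assumptions, not the study's actual Dynamical System Identification setup.

```python
import numpy as np

# Synthetic stand-in: an NWM-like flow forecast series (m^3/s) and a local
# stage series generated from a known first-order response (all invented).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 8.0, 400)
flow = 50.0 + 20.0 * np.sin(t) + rng.normal(0.0, 1.0, 400)
stage = np.zeros(400)
for k in range(399):
    stage[k + 1] = 0.9 * stage[k] + 0.004 * flow[k] + rng.normal(0.0, 0.005)

# Identify stage[k+1] = a*stage[k] + b*flow[k] + c by least squares.
A = np.column_stack([stage[:-1], flow[:-1], np.ones(399)])
(a, b, c), *_ = np.linalg.lstsq(A, stage[1:], rcond=None)

# One-step-ahead stage forecast and its error.
pred = A @ np.array([a, b, c])
rmse = float(np.sqrt(np.mean((pred - stage[1:]) ** 2)))
print(f"a={a:.3f}  b={b:.4f}  one-step RMSE={rmse:.4f} m")
```

In practice, the model order and inputs would be selected per gage, and forecast skill at each site would indicate whether NWM outputs plus a local sensor suffice there.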
Guimarães, Marcus Valladares; Junior, Lúcio Honório de Carvalho; Terra, Dalton Lopes
2015-01-01
Objective: To assess clinical results using two different protocols 10 years after ACL reconstruction surgery with the central third of the quadriceps tendon (QT). Method: Between November 1997 and April 1998, 25 patients underwent 25 ACL reconstructions with QT by the transtibial technique. The bone portion of the graft was fixed in the femoral tunnel with an interference screw, and the tendinous portion in the tibial tunnel with a screw and washer. Two patients injured the new graft while playing soccer. Six patients were not available for follow-up (24%). Seventeen patients were evaluated, 15 men and two women, with a mean age at surgery of 28.53 ± 6.64 years. All patients were examined at six months, one year, and ten years after surgery. Clinical evaluation was made with the Lysholm scale, and knee evaluation with the Hospital for Special Surgery scale. Results: The patients underwent surgery 9.87 ± 14.42 months after the injury. According to the Lysholm scale, the results at the end of the first year were 98.71 ± 2.47 and, after 10 years, 97.35 ± 3.12. Using the Hospital for Special Surgery scale, the mean score was 95.07 ± 5.23 at one year and 94.87 ± 4.16 at 10 years. All patients returned to their professional activities with the same previous status. Fifteen (88.24%) patients were able to return to their sports activities, one by modifying the practice, while another switched to a different sport. No patient complained of pain in the donor area in the medium and long term. The sports return rate was excellent, and no changes were found in the femoropatellar joint. PMID:27022511
An approach to multiscale modelling with graph grammars.
Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried
2014-09-01
Functional-structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models.
Vector Third Moment of Turbulent MHD Fluctuations: Theory and Interpretation
NASA Astrophysics Data System (ADS)
Forman, M. A.; MacBride, B. T.; Smith, C. W.
2006-12-01
We call attention to the fact that a certain vector third moment of turbulent MHD fluctuations, even if they are anisotropic, obeys an exact scaling relation in the inertial range. Politano and Pouquet (1998, PP) proved it specifically from the MHD equations. It is a direct analog of the long-known von Karman-Howarth-Monin (KHM) vector relation in anisotropic hydrodynamic turbulence, which follows from the Navier-Stokes equations (see Frisch, 1995). The relevant quantities in MHD are the plus and minus Elsasser vectors and their fluctuations over vector spatial separations. These are used in the mixed vector third moment S^±(r). The mixed moment is essential because, in the MHD equations for the Elsasser variables, z^+ and z^- are mixed in the nonlinear term. The PP relation is div S^±(r) = -4 ε^±, where ε^± is the turbulent energy dissipation rate in the ± cascade, in J/(kg·s). Of the many possible vector and tensor third moments of MHD vector fluctuations, S^±(r) is the only one known to have an exact (although vector differential) scaling valid in anisotropic MHD in the inertial range. The PP scaling of a distinctly nonzero third moment indicates that an inertial-range cascade is present. The PP scaling does NOT simply result from a dimensional argument, but is derived directly from the MHD equations. A power-law power spectrum alone does not necessarily imply that an inertial cascade is present. Furthermore, only the scaling of S^±(r) gives ε^± directly. Earlier methods of determining ε^±, based on the amplitude of the power spectrum, make assumptions about isotropy, Alfvenicity, and scaling that are not exact. Thus, the observation of a finite S^±(r) and its scaling with vector r are fundamental to MHD turbulence in the solar wind, or in any magnetized plasma. We are engaged in evaluating S^±(r) and its anisotropic scaling in the solar wind, beginning with ACE field and plasma data.
For this, we are using the Taylor hypothesis that r = Vt, where t is a time lag of fluctuations seen at a single spacecraft. Because we use a forward time lag, we actually measure -S^±(r), which is positive in a direct cascade. We report some results in an accompanying poster. This presentation concentrates on the theory and how the results are to be interpreted. References: Frisch, U., Turbulence, Cambridge U. Press, 1995, p. 78; Politano, H. and Pouquet, A., Geophys. Res. Lett., 25, 273, 1998.
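The mixed third moment can be estimated from single-spacecraft time series as sketched below, using the Taylor hypothesis to convert time lags to separations. The random-walk fields are purely illustrative (no inertial-range cascade is implied); only the estimator itself reflects the quantity S^±(r) discussed above.

```python
import numpy as np

def mixed_third_moment(zp, zm, lag):
    """Estimate the longitudinal mixed third moment
    S^+(r) = < |dz^-|^2 * dz^+_L > at separation r = V * (lag * dt),
    via the Taylor hypothesis, from single-point Elsasser time series.
    zp, zm: arrays of shape (N, 3); the sampling direction is x."""
    dzp = zp[lag:] - zp[:-lag]
    dzm = zm[lag:] - zm[:-lag]
    return float(np.mean(np.sum(dzm**2, axis=1) * dzp[:, 0]))

# Synthetic random-walk fields, for demonstrating the estimator only
# (random data carry no inertial-range cascade).
rng = np.random.default_rng(2)
zp = 0.01 * np.cumsum(rng.normal(size=(5000, 3)), axis=0)
zm = 0.01 * np.cumsum(rng.normal(size=(5000, 3)), axis=0)
s_plus = [mixed_third_moment(zp, zm, k) for k in (4, 8, 16)]
print(s_plus)
```

Applied to real solar wind data, a distinctly nonzero estimate scaling linearly with r would, per the PP relation, yield the dissipation rate ε^± directly.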
Use of a Scale Model in the Design of Modifications to the NASA Glenn Icing Research Tunnel
NASA Technical Reports Server (NTRS)
Canacci, Victor A.; Gonsalez, Jose C.; Spera, David A.; Burke, Thomas (Technical Monitor)
2001-01-01
Major modifications were made in 1999 to the 6- by 9-Foot (1.8- by 2.7-m) Icing Research Tunnel (IRT) at the NASA Glenn Research Center, including replacement of its heat exchanger and associated ducts and turning vanes, and the addition of fan outlet guide vanes (OGV's). A one-tenth scale model of the IRT (designated as the SMIRT) was constructed with and without these modifications and tested to increase confidence in obtaining expected improvements in flow quality around the tunnel loop. The SMIRT is itself an aerodynamic test facility whose flow patterns without modifications have been shown to be accurate, scaled representations of those measured in the IRT prior to the 1999 upgrade program. In addition, tests in the SMIRT equipped with simulated OGV's indicated that these devices in the IRT might reduce flow distortions immediately downstream of the fan by two thirds. Flow quality parameters measured in the SMIRT were projected to the full-size modified IRT, and quantitative estimates of improvements in flow quality were given prior to construction. In this paper, the results of extensive flow quality studies conducted in the SMIRT are documented. Samples of these are then compared with equivalent measurements made in the full-scale IRT, both before and after its configuration was upgraded. Airspeed, turbulence intensity, and flow angularity distributions are presented for cross sections downstream of the drive fan, both upstream and downstream of the replacement flat heat exchanger, in the stilling chamber, in the test section, and in the wakes of the new corner turning vanes with their unique expanding and contracting designs. Lessons learned from these scale-model studies are discussed.
On Which Microphysical Time Scales to Use in Studies of Entrainment-Mixing Mechanisms in Clouds
Lu, Chunsong; Liu, Yangang; Zhu, Bin; ...
2018-03-23
The commonly used time scales in entrainment-mixing studies are examined in this paper to seek the most appropriate one, based on aircraft observations of cumulus clouds from the RACORO campaign and numerical simulations with the Explicit Mixing Parcel Model. The time scales include: τ_evap, the time for complete droplet evaporation; τ_phase, the time for the saturation ratio deficit (S) to reach 1/e of its initial value; τ_satu, the time for S to reach -0.5%; and τ_react, the time for complete droplet evaporation or for S to reach -0.5%. It is found that the proper time scale to use depends on the specific objectives of entrainment-mixing studies. First, if the focus is on the variations of liquid water content (LWC) and S, then τ_react for saturation, τ_satu, and τ_phase are almost equivalently appropriate, because they all represent the rate of dry air reaching saturation or of LWC decrease. Second, if one focuses on the variations of droplet size and number concentration, τ_react for complete evaporation and τ_evap are proper because they characterize how fast droplets evaporate and whether number concentration decreases. Moreover, τ_react for complete evaporation and τ_evap are always positively correlated with the homogeneous mixing degree (ψ); thus these two time scales, especially τ_evap, are recommended for developing parameterizations. However, ψ and the other time scales can be negatively, positively, or not correlated, depending on the dominant factors of the entrained air (i.e., relative humidity or aerosols). Third and finally, all time scales are proportional to each other under certain microphysical and thermodynamic conditions.
Dynamics of basaltic glass dissolution - Capturing microscopic effects in continuum scale models
NASA Astrophysics Data System (ADS)
Aradóttir, E. S. P.; Sigfússon, B.; Sonnenthal, E. L.; Björnsson, G.; Jónsson, H.
2013-11-01
The method of 'multiple interacting continua' (MINC) was applied to include microscopic rate-limiting processes in continuum-scale reactive transport models of basaltic glass dissolution. The MINC method involves dividing the system into ambient fluid and grains, using a specific surface area to describe the interface between the two. The various grains and regions within grains can then be described by dividing them into continua separated by dividing surfaces. Millions of grains can thus be considered without the need to explicitly discretize each one. Four continua were used to describe a dissolving basaltic glass grain: the first describes the ambient fluid around the grain, while the second, third, and fourth refer to a diffusive leached layer, the dissolving part of the grain, and the inert part of the grain, respectively. The model was validated using the TOUGHREACT simulator and data from column flow-through experiments of basaltic glass dissolution at low, neutral, and high pH values. Successful reactive transport simulations of the experiments, and overall adequate agreement between measured and simulated values, provide validation that the MINC approach can be applied to incorporate microscopic effects in continuum-scale basaltic glass dissolution models. Equivalent models can be used when simulating dissolution and alteration of other minerals. The study provides an example of how numerical modeling and experimental work can be combined to enhance understanding of mechanisms associated with basaltic glass dissolution. Column outlet concentrations indicated that basaltic glass dissolves stoichiometrically at pH 3. Predictive simulations with the developed MINC model indicated significant precipitation of secondary minerals within the column at neutral and high pH, explaining the observed non-stoichiometric outlet concentrations at these pH levels.
Clay, zeolite and hydroxide precipitation was predicted to be most abundant within the column.
Benson, Nicholas; Beaujean, A Alexander; Taub, Gordon E
2015-01-01
The Flynn effect (FE; i.e., increase in mean IQ scores over time) is commonly viewed as reflecting population shifts in intelligence, despite the fact that most FE studies have not investigated the assumption of score comparability. Consequently, the extent to which these mean differences in IQ scores reflect population shifts in cognitive abilities versus changes in the instruments used to measure these abilities is unclear. In this study, we used modern psychometric tools to examine the FE. First, we equated raw scores for each common subtest to be on the same scale across instruments. This enabled the combination of scores from all three instruments into one of 13 age groups before converting raw scores into Z scores. Second, using age-based standardized scores for standardization samples, we examined measurement invariance across the second (revised), third, and fourth editions of the Wechsler Adult Intelligence Scale. Results indicate that while scores were equivalent across the third and fourth editions, they were not equivalent across the second and third editions. Results suggest that there is some evidence for an increase in intelligence, but also call into question many published FE findings as presuming the instruments' scores are invariant when this assumption is not warranted.
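The within-age-group standardization step described above can be sketched as a raw-score-to-Z conversion. This is illustrative only; the study used age-based norms from the instruments' standardization samples, and the scores below are made up:

```python
import statistics

def to_z(raw_scores):
    """Convert raw scores within one age group to Z scores (mean 0, SD 1).
    A sketch of the standardization step; real Flynn-effect analyses use
    the normative samples, not the observed group itself."""
    mu = statistics.mean(raw_scores)
    sd = statistics.pstdev(raw_scores)
    return [(x - mu) / sd for x in raw_scores]

z = to_z([10, 12, 14, 16, 18])
print([round(v, 2) for v in z])  # [-1.41, -0.71, 0.0, 0.71, 1.41]
```

Equating raw scores to a common scale before this conversion is what allows subtest scores from all three Wechsler editions to be pooled within an age group.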
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sartorello, Giovanni; Olivier, Nicolas; Zhang, Jingjing
2016-08-17
We design and fabricate a metasurface composed of gold cut-disk resonators that exhibits a strong coherent nonlinear response. We experimentally demonstrate all-optical modulation of both second- and third-harmonic signals on a subpicosecond time scale. Pump probe experiments and numerical models show that the observed effects are due to the ultrafast response of the electronic excitations in the metal under external illumination. These effects pave the way for the development of novel active nonlinear metasurfaces with controllable and switchable coherent nonlinear response.
Computation of turbulent pipe and duct flow using third order upwind scheme
NASA Technical Reports Server (NTRS)
Kawamura, T.
1986-01-01
Fully developed turbulence in a circular pipe and in a square duct is simulated directly from the Navier-Stokes equations, without using turbulence models. The method employs a third-order upwind scheme to approximate the nonlinear convective term and the second-order Adams-Bashforth method for the time derivative. The computational results appear to capture the large-scale turbulent structures at least qualitatively. The significance of the artificial viscosity inherent in the present scheme is discussed.
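The spatial and temporal discretizations described above can be sketched on a 1-D periodic model problem (viscous Burgers equation). The convective stencil below uses the standard Kawamura-Kuwahara third-order upwind form, which is a plausible reading of the scheme the abstract refers to; the grid size, time step, and viscosity are arbitrary illustration values:

```python
import numpy as np

def conv_3rd_upwind(u, dx):
    """Third-order upwind approximation to u * du/dx on a periodic grid:
    a 4th-order central difference plus an |u|-weighted fourth-difference
    dissipation term (Kawamura-Kuwahara form, assumed here)."""
    up2, up1 = np.roll(u, -2), np.roll(u, -1)
    um1, um2 = np.roll(u, 1), np.roll(u, 2)
    central = (-up2 + 8.0 * up1 - 8.0 * um1 + um2) / (12.0 * dx)
    dissip = (up2 - 4.0 * up1 + 6.0 * u - 4.0 * um1 + um2) / (4.0 * dx)
    return u * central + np.abs(u) * dissip

def rhs(u, dx, nu):
    """du/dt = -u*du/dx + nu*d2u/dx2 (viscous Burgers model problem)."""
    d2u = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2
    return -conv_3rd_upwind(u, dx) + nu * d2u

# Second-order Adams-Bashforth time marching, as in the abstract.
N, nu, dt = 64, 0.05, 0.002
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x)
f_prev = rhs(u, dx, nu)  # Euler-type startup for the first step
for _ in range(10):
    f_now = rhs(u, dx, nu)
    u = u + dt * (1.5 * f_now - 0.5 * f_prev)
    f_prev = f_now
```

The |u|-weighted fourth difference is the source of the "artificial viscosity inherent in the present scheme" that the abstract discusses: it damps the shortest resolved wavelengths while leaving the formal third-order accuracy of the convective term intact.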
Interrelationships of locus of control content dimensions and hopelessness.
Ward, L C; Thomas, L L
1985-07-01
Items from three locus of control (LOC) tests and the Beck Hopelessness Scale were administered to 197 college students. Factor analyses produced multiple factors for each LOC test, but the Beck scale proved to be unidimensional. Factor scales were constructed for each test, and scores were factor analyzed to discover common content. Each LOC test contained a salient dimension that described belief in luck, chance, or fate, and corresponding scales were well correlated. Internal control was the second common theme, with variations according to whether control was attributed to oneself or to people in general. The third common component expressed a personal helplessness or powerlessness. Each common factor was loaded by the Hopelessness Scale, which also correlated with all but one LOC factor scale.
[Research progress on hydrological scaling].
Liu, Jianmei; Pei, Tiefan
2003-12-01
With the development of hydrology and the growing influence of human activity on the environment, the scale issue has become a great challenge for hydrologists, owing to the stochasticity and complexity of hydrological phenomena and natural catchments. Increasing attention has been given to scaling, i.e., inferring large-scale (or small-scale) hydrological characteristics from those known at another scale, but the problem has not yet been solved successfully. The first part of this paper introduces some concepts of hydrological scale, scale issues, and scaling. The key problems are the spatial heterogeneity of catchments and the temporal and spatial variability of hydrological fluxes. The third part puts forward three approaches to scaling: distributed modeling, fractal theory, and statistical self-similarity analysis. Existing problems and future research directions are proposed in the last part.
Modeling small-scale dairy farms in central Mexico using multi-criteria programming.
Val-Arreola, D; Kebreab, E; France, J
2006-05-01
Milk supply from Mexican dairy farms does not meet demand and small-scale farms can contribute toward closing the gap. Two multi-criteria programming techniques, goal programming and compromise programming, were used in a study of small-scale dairy farms in central Mexico. To build the goal and compromise programming models, 4 ordinary linear programming models were also developed, which had objective functions to maximize metabolizable energy for milk production, to maximize margin of income over feed costs, to maximize metabolizable protein for milk production, and to minimize purchased feedstuffs. Neither multi-criteria approach was significantly better than the other; however, by applying both models it was possible to perform a more comprehensive analysis of these small-scale dairy systems. The multi-criteria programming models affirm findings from previous work and suggest that a forage strategy based on alfalfa, ryegrass, and corn silage would meet nutrient requirements of the herd. Both models suggested that there is an economic advantage in rescheduling the calving season to the second and third calendar quarters to better synchronize higher demand for nutrients with the period of high forage availability.
McCrae, Robert R.; Scally, Matthew; Terracciano, Antonio; Abecasis, Gonçalo R.; Costa, Paul T.
2011-01-01
There is growing evidence that personality traits are affected by many genes, all of which have very small effects. As an alternative to the largely-unsuccessful search for individual polymorphisms associated with personality traits, we identified large sets of potentially related single nucleotide polymorphisms (SNPs) and summed them to form molecular personality scales (MPSs) with from 4 to 2,497 SNPs. Scales were derived from two-thirds of a large (N = 3,972) sample of individuals from Sardinia who completed the Revised NEO Personality Inventory and were assessed in a genome-wide association scan. When MPSs were correlated with the phenotype in the remaining third of the sample, very small but significant associations were found for four of the five personality factors when the longest scales were examined. These data suggest that MPSs for Neuroticism, Openness to Experience, Agreeableness, and Conscientiousness (but not Extraversion) contain genetic information that can be refined in future studies, and the procedures described here should be applicable to other quantitative traits. PMID:21114353
The interaction of spatial scale and predator-prey functional response
Blaine, T.W.; DeAngelis, D.L.
1997-01-01
Predator-prey models with a prey-dependent functional response have the property that the prey equilibrium value is determined only by predator characteristics. However, in observed natural systems (for instance, snail-periphyton interactions in streams) the equilibrium periphyton biomass has been shown experimentally to be influenced by both snail numbers and levels of available limiting nutrient in the water. Hypothesizing that the observed patchiness in periphyton in streams may be part of the explanation for the departure of behavior of the equilibrium biomasses from predictions of the prey-dependent response of the snail-periphyton system, we developed and analyzed a spatially-explicit model of periphyton in which snails were modeled as individuals in their movement and feeding, and periphyton was modeled as patches or spatial cells. Three different assumptions on snail movement were used: (1) random movement between spatial cells, (2) tracking by snails of local abundances of periphyton, and (3) delayed departure of snails from cells to reduce costs associated with movement. Of these assumptions, only the third strategy, based on an herbivore strategy of staying in one patch until local periphyton biomass concentration falls below a certain threshold amount, produced results in which both periphyton and snail biomass increased with nutrient input. Thus, if data are averaged spatially over the whole system, we expect that a ratio-dependent functional response may be observed if the herbivore behaves according to the third assumption. Both random movement and delayed cell departure had the result that spatial heterogeneity of periphyton increased with nutrient input.
NASA Astrophysics Data System (ADS)
Hoch, J. M.; Bierkens, M. F.; Van Beek, R.; Winsemius, H.; Haag, A.
2015-12-01
Understanding the dynamics of fluvial floods is paramount to accurate flood hazard and risk modeling. Currently, economic losses due to flooding constitute about one third of all damage resulting from natural hazards. Given future projections of climate change, the anticipated increase in the world's population and the associated implications, sound knowledge of flood hazard and related risk is crucial. Fluvial floods are cross-border phenomena that need to be addressed accordingly. Yet only a few studies model floods at the large scale, which is preferable to tiling the output of small-scale models. Most models cannot realistically simulate flood wave propagation because they lack either detailed channel and floodplain geometry or hydrologic processes. This study aims to develop a large-scale modeling tool that accounts for both hydrologic and hydrodynamic processes, to find and understand possible sources of errors and improvements, and to assess how the added hydrodynamics affect flood wave propagation. Flood wave propagation is simulated by DELFT3D-FM (FM), a hydrodynamic model using a flexible mesh to schematize the study area. It is coupled to PCR-GLOBWB (PCR), a macro-scale hydrological model that has its own simpler 1D routing scheme (DynRout), which has already been used for global inundation modeling and flood risk assessments (GLOFRIS; Winsemius et al., 2013). A number of model set-ups are compared and benchmarked for the simulation period 1986-1996: (0) PCR with DynRout; (1) a FM 2D flexible mesh forced with PCR output; (2) as in (1) but discriminating between 1D channels and 2D floodplains; and, for comparison, (3) and (4) the same set-ups as (1) and (2) but forced with observed GRDC discharge values. Outputs are subsequently validated against observed GRDC data at Óbidos and flood extent maps from the Dartmouth Flood Observatory.
The present research constitutes a first step into a globally applicable approach to fully couple hydrologic with hydrodynamic computations while discriminating between 1D-channels and 2D-floodplains. Such a fully-fledged set-up would be able to provide higher-order flood hazard information, e.g. time to flooding and flood duration, ultimately leading to improved flood risk assessment and management at the large scale.
Relativistic analysis of stochastic kinematics
NASA Astrophysics Data System (ADS)
Giona, Massimiliano
2017-10-01
The relativistic analysis of stochastic kinematics is developed in order to determine the transformation of the effective diffusivity tensor in inertial frames. Poisson-Kac stochastic processes are initially considered. For one-dimensional spatial models, the effective diffusion coefficient measured in a frame Σ moving with velocity w with respect to the rest frame of the stochastic process is inversely proportional to the third power of the Lorentz factor γ (w ) =(1-w2/c2) -1 /2 . Subsequently, higher-dimensional processes are analyzed and it is shown that the diffusivity tensor in a moving frame becomes nonisotropic: The diffusivities parallel and orthogonal to the velocity of the moving frame scale differently with respect to γ (w ) . The analysis of discrete space-time diffusion processes permits one to obtain a general transformation theory of the tensor diffusivity, confirmed by several different simulation experiments. Several implications of the theory are also addressed and discussed.
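The quoted one-dimensional scaling can be checked numerically: with γ(w) = (1 - w²/c²)^(-1/2), the moving-frame diffusion coefficient is D/γ(w)³. The snippet below is a direct transcription of that stated result, with c normalized to 1:

```python
import math

def lorentz_gamma(w: float, c: float = 1.0) -> float:
    """Lorentz factor gamma(w) = (1 - w^2/c^2)^(-1/2)."""
    return 1.0 / math.sqrt(1.0 - (w / c) ** 2)

def moving_frame_diffusivity(d_rest: float, w: float, c: float = 1.0) -> float:
    """Effective 1-D diffusion coefficient seen in a frame moving at velocity w,
    which per the abstract is inversely proportional to gamma(w)**3."""
    return d_rest / lorentz_gamma(w, c) ** 3

g = lorentz_gamma(0.6)
print(round(g, 4))                                    # 1.25
print(round(moving_frame_diffusivity(1.0, 0.6), 4))   # 0.512, i.e. 1 / 1.25**3
```

At w = 0.6c the diffusivity observed in the moving frame is roughly half the rest-frame value, which makes the strength of the effect at relativistic speeds concrete.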
Spaide, Richard F; Curcio, Christine A
2011-09-01
To evaluate the validity of commonly used anatomical designations for the four hyperreflective outer retinal bands seen in current-generation optical coherence tomography, a scale model of outer retinal morphology was created using published information for direct comparison with optical coherence tomography scans. Articles and books concerning histology of the outer retina from 1900 until 2009 were evaluated, and data were used to create a scale model drawing. Boundaries between outer retinal tissue compartments described by the model were compared with intensity variations of representative spectral-domain optical coherence tomography scans using longitudinal reflectance profiles to determine the region of origin of the hyperreflective outer retinal bands. This analysis showed a high likelihood that the spectral-domain optical coherence tomography bands attributed to the external limiting membrane (the first, innermost band) and to the retinal pigment epithelium (the fourth, outermost band) are correctly attributed. Comparative analysis showed that the second band, often attributed to the boundary between inner and outer segments of the photoreceptors, actually aligns with the ellipsoid portion of the inner segments. The third band corresponded to an ensheathment of the cone outer segments by apical processes of the retinal pigment epithelium in a structure known as the contact cylinder. Anatomical attributions and subsequent pathophysiologic assessments pertaining to the second and third outer retinal hyperreflective bands may not be correct. This analysis has identified testable hypotheses for the actual correlates of the second and third bands. Nonretinal pigment epithelium contributions to the fourth band (e.g., Bruch membrane) remain to be determined.
Effects of Small Electrostatic Fields on the Ionospheric Density Profile
NASA Astrophysics Data System (ADS)
Salem, M. A.; Liu, N.; Rassoul, H.
2014-12-01
It is well known that short-lived strong electric fields produced by natural lightning activities in tropospheric altitudes can significantly affect the upper atmosphere. This effect is directly evidenced by the production of transient luminous events (TLEs), such as sprites, jets, and elves. It has also been demonstrated that thunderstorms can modify ionospheric densities on a longer time scale, during which TLEs may or may not occur [e.g., Cheng and Cummer, GRL, 32, L08804, 2005; Han and Cummer, JGR, 115, A09323, 2010; Shao et al., Nat. Geosci., doi: 10.1038/NGEO1668, 2012]. In particular, according to Shao et al. [2012], the electron density at 75-80 km altitudes may be reduced by about 2-3 orders of magnitude. In this talk, we study the modification of the ionospheric density profile by small electrostatic fields that may exist in the upper atmosphere during a thunderstorm. A simplified ion chemistry model described by Liu [JGR, 117, A03308, 2012] has been used to conduct this study. The model is based on the one developed by Lehtinen and Inan [GRL, 34, L08804, 2007], which is in turn an improved version of the GPI model discussed in Glukhov et al. [JGR, 97, 16971, 1992]. According to this model, the charged particles can be grouped into five species: electrons, light negative ions, cluster negative ions, light positive ions, and cluster positive ions. In this chemistry model, the three-body electron attachment is the only process whose rate constant depends on the electric field, when it is below about one third of the conventional breakdown threshold field. We have compared various sources of the three-body attachment rate constant. The result shows that the rate constant increases linearly with the reduced electric field in the range of 0 to 0.1 Td, while it decreases exponentially from 0.1 Td to about one third of the conventional breakdown threshold field.
With this dependence, our modeling results indicate that under the steady-state condition, the nighttime electron density profile can be reduced by about 40% or enhanced by a factor of about 6 when the electric field varies in the aforementioned range.
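The field dependence described above (linear increase up to about 0.1 Td, exponential decrease beyond) can be sketched as a piecewise rate model. All numerical constants below (K0, A, B) are hypothetical placeholders, not values from the cited sources; only the functional shape follows the abstract:

```python
import math

# Hypothetical constants for illustration only; the abstract gives the
# qualitative shape of the dependence, not these values.
K0 = 1.0e-31   # base three-body attachment rate constant (arbitrary units)
A = 5.0        # assumed linear slope per Td
B = 30.0       # assumed exponential decay rate per Td
E_KNEE = 0.1   # Td; end of the linear regime, per the abstract

def attachment_rate(e_td: float) -> float:
    """Three-body attachment rate constant vs. reduced field E (Td):
    linear increase up to E_KNEE, exponential decrease beyond it.
    Constructed to be continuous at E_KNEE."""
    if e_td <= E_KNEE:
        return K0 * (1.0 + A * e_td)
    k_knee = K0 * (1.0 + A * E_KNEE)
    return k_knee * math.exp(-B * (e_td - E_KNEE))

print(attachment_rate(0.1) > attachment_rate(0.0))   # True: linear growth
print(attachment_rate(0.3) < attachment_rate(0.1))   # True: exponential decay
```

Because the rate peaks near 0.1 Td and falls off at higher fields, the steady-state electron density can be pushed either down or up depending on where the ambient field sits in this range, consistent with the reduction or enhancement reported above.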
Optimizing BAO measurements with non-linear transformations of the Lyman-α forest
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xinkang; Font-Ribera, Andreu; Seljak, Uroš, E-mail: xinkang.wang@berkeley.edu, E-mail: afont@lbl.gov, E-mail: useljak@berkeley.edu
2015-04-01
We explore the effect of applying a non-linear transformation to the Lyman-α forest transmitted flux F = e^-τ and the ability of analytic models to predict the resulting clustering amplitude. Both the large-scale bias of the transformed field (signal) and the amplitude of small-scale fluctuations (noise) can be arbitrarily modified, but we were unable to find a transformation that significantly increases the signal-to-noise ratio on large scales using a Taylor expansion up to third order. We do, however, achieve a 33% improvement in signal-to-noise for the Gaussianized field in the transverse direction. In addition, we explore an analytic model for the large-scale biasing of the Lyα forest and present an extension of this model to describe the biasing of the transformed fields. Using hydrodynamic simulations we show that the model works best to describe the biasing with respect to velocity gradients, but is less successful in predicting the biasing with respect to large-scale density fluctuations, especially for very nonlinear transformations.
Financing Solar PV at Government Sites with PPAs and Public Debt (Brochure)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2011-12-01
Historically, state and local governmental agencies have employed one of two models to deploy solar photovoltaic (PV) projects: (1) self-ownership (financed through a variety of means) or (2) third-party ownership through a power purchase agreement (PPA). Morris County, New Jersey, administrators recently pioneered a way to combine many of the benefits of self-ownership and third-party PPAs through a bond-PPA hybrid, frequently referred to as the Morris Model. At the request of the Department of Energy's Solar Market Transformation group, NREL examined the hybrid model. This fact sheet describes how the hybrid model works, assesses the model's relative advantages and challenges as compared to self-ownership and the third-party PPA model, provides a quick guide to project implementation, and assesses the replicability of the model in other jurisdictions across the United States.
Statistical short-term earthquake prediction.
Kagan, Y Y; Knopoff, L
1987-06-19
A statistical procedure, derived from a theoretical model of fracture growth, is used to identify a foreshock sequence while it is in progress. As a predictor, the procedure reduces the average uncertainty in the rate of occurrence for a future strong earthquake by a factor of more than 1000 when compared with the Poisson rate of occurrence. About one-third of all main shocks with local magnitude greater than or equal to 4.0 in central California can be predicted in this way, starting from a 7-year database that has a lower magnitude cut off of 1.5. The time scale of such predictions is of the order of a few hours to a few days for foreshocks in the magnitude range from 2.0 to 5.0.
Quantum probability, choice in large worlds, and the statistical structure of reality.
Ross, Don; Ladyman, James
2013-06-01
Classical probability models of incentive response are inadequate in "large worlds," where the dimensions of relative risk and the dimensions of similarity in outcome comparisons typically differ. Quantum probability models for choice in large worlds may be motivated pragmatically - there is no third theory - or metaphysically: statistical processing in the brain adapts to the true scale-relative structure of the universe.
Extension of relativistic dissipative hydrodynamics to third order
DOE Office of Scientific and Technical Information (OSTI.GOV)
El, Andrej; Xu Zhe; Greiner, Carsten
2010-04-15
Following the procedure introduced by Israel and Stewart, we expand the entropy current up to third order in the shear stress tensor π^αβ and derive a novel third-order evolution equation for π^αβ. This equation is solved for the one-dimensional Bjorken boost-invariant expansion. The scaling solutions for various values of the shear viscosity to entropy density ratio η/s are shown to be in very good agreement with those obtained from kinetic transport calculations. For the pressure isotropy starting with 1 at τ0 = 0.4 fm/c, the third-order corrections to Israel-Stewart theory are approximately 10% for η/s = 0.2 and more than a factor of 2 for η/s = 3. We also estimate all higher-order corrections to Israel-Stewart theory and demonstrate their importance in describing highly viscous matter.
Modeling the hydrodynamic and electrochemical efficiency of semi-solid flow batteries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunini, VE; Chiang, YM; Carter, WC
2012-05-01
A mathematical model of flow cell operation incorporating hydrodynamic and electrochemical effects in three dimensions is developed. The model and resulting simulations apply to recently demonstrated high energy-density semi-solid flow cells. In particular, state of charge gradients that develop during low flow rate operation and their effects on the spatial non-uniformity of current density within flow cells are quantified. A one-dimensional scaling model is also developed and compared to the full three-dimensional simulation. The models are used to demonstrate the impact of the choice of electrochemical couple on flow cell performance. For semi-solid flow electrodes, which can use solid active materials with a wide variety of voltage-capacity responses, we find that cell efficiency is maximized for electrochemical couples that have a relatively flat voltage vs. capacity curve, operated under slow flow conditions. For example, in flow electrodes limited by macroscopic charge transport, an LiFePO4-based system requires one-third the polarization to reach the same cycling rate as an LiCoO2-based system, all else being equal. Our conclusions are generally applicable to high energy density flow battery systems, in which flow rates can be comparatively low for a given required power. (C) 2012 Elsevier Ltd. All rights reserved.
Research on Upgrading Structures for Host and Risk Area Shelters
1982-09-01
both "as-built" and upgraded configurations. These analysis and prediction techniques have been applied to floors and roofs constructed of many...scale program and were previously applied to full-scale wood floor tests (Ref. 2). TEST ELEMENTS AND PROCEDURES Three tests were conducted on 8-inch...weights. A 14,000-lb crane counterweight was used for the preload, applying a load of 7,000 lb to each one-third point on the plank. The drop weight was
ERIC Educational Resources Information Center
Kawasaki, Keiko; Rupert Herrenkohl, Leslie; Yeary, Sherry
2004-01-01
The purpose of this paper is to carefully examine the evolution of students' theory building and modeling, critical components of scientific epistemologies, over a unit of study on sinking and floating in one third/fourth grade classroom. The study described in this paper follows in the tradition of Design Experiments (Brown, 1992; Collins, 1990)…
NASA Technical Reports Server (NTRS)
Xu, Kuan-Man; Cheng, Anning
2007-01-01
The effects of subgrid-scale condensation and transport become more important as the grid spacings increase from those typically used in large-eddy simulation (LES) to those typically used in cloud-resolving models (CRMs). Incorporation of these effects can be achieved by a joint probability density function approach that utilizes higher-order moments of thermodynamic and dynamic variables. This study examines how well shallow cumulus and stratocumulus clouds are simulated by two versions of a CRM that is implemented with low-order and third-order turbulence closures (LOC and TOC) when a typical CRM horizontal resolution is used and what roles the subgrid-scale and resolved-scale processes play as the horizontal grid spacing of the CRM becomes finer. Cumulus clouds were mostly produced through subgrid-scale transport processes while stratocumulus clouds were produced through both subgrid-scale and resolved-scale processes in the TOC version of the CRM when a typical CRM grid spacing is used. The LOC version of the CRM relied upon resolved-scale circulations to produce both cumulus and stratocumulus clouds, due to small subgrid-scale transports. The mean profiles of thermodynamic variables, cloud fraction and liquid water content exhibit significant differences between the two versions of the CRM, with the TOC results agreeing better with the LES than the LOC results. The characteristics, temporal evolution and mean profiles of shallow cumulus and stratocumulus clouds are weakly dependent upon the horizontal grid spacing used in the TOC CRM. However, the ratio of the subgrid-scale to resolved-scale fluxes becomes smaller as the horizontal grid spacing decreases. The subcloud-layer fluxes are mostly due to the resolved scales when a grid spacing less than or equal to 1 km is used. The overall results of the TOC simulations suggest that a 1-km grid spacing is a good choice for CRM simulation of shallow cumulus and stratocumulus.
Building simulation: Ten challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Langevin, Jared; Sun, Kaiyu
2018-04-12
Buildings consume more than one-third of the world's primary energy. Reducing energy use and greenhouse-gas emissions in the buildings sector through energy conservation and efficiency improvements constitutes a key strategy for achieving global energy and environmental goals. Building performance simulation has been increasingly used as a tool for designing, operating and retrofitting buildings to save energy and utility costs. However, opportunities remain for researchers, software developers, practitioners and policymakers to maximize the value of building performance simulation in the design and operation of low energy buildings and communities that leverage interdisciplinary approaches to integrate humans, buildings, and the power grid at a large scale. This paper presents ten challenges that highlight some of the most important issues in building performance simulation, covering the full building life cycle and a wide range of modeling scales. The formulation and discussion of each challenge aims to provide insights into the state-of-the-art and future research opportunities for each topic, and to inspire new questions from young researchers in this field.
Chung, Man Cheung; Shakra, Mudar; AlQarni, Nowf; AlMazrouei, Mariam; Al Mazrouei, Sara; Al Hashimi, Shurooq
2018-01-01
This study revisited the prevalence of posttraumatic stress disorder (PTSD) and examined a hypothesized model describing the interrelationship between trauma exposure characteristics, trauma centrality, emotional suppression, PTSD, and psychiatric comorbidity among Syrian refugees. A total of 564 Syrian refugees participated in the study and completed the Harvard Trauma Questionnaire, General Health Questionnaire (GHQ-28), Centrality of Event Scale, and Courtauld Emotional Control Scale. Of the participants, 30% met the cutoff for PTSD. Trauma exposure characteristics (experiencing or witnessing horror and murder, kidnapping or disappearance of family members or friends) were associated with trauma centrality, which was associated with emotional suppression. Emotional suppression was associated with PTSD and psychiatric comorbid symptom severities. Suppression mediated the path between trauma centrality and distress outcomes. Almost one-third of refugees can develop PTSD and other psychiatric problems following exposure to traumatic events during war. A traumatized identity can develop, of which life-threatening experiences are a dominant feature, leading to suppression of depression with associated psychological distress.
Nonlinear Pauli susceptibilities in Sr3Ru2O7 and universal features of itinerant metamagnetism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shivaram, B. S.; Luo, Jing; Chern, Gia-Wei
2018-03-12
We report, for the first time, measurements of the third-order (χ3) and fifth-order (χ5) susceptibilities in an itinerant oxide metamagnet, Sr3Ru2O7, for magnetic fields both parallel and perpendicular to the c-axis. These susceptibilities exhibit maxima in their temperature dependence such that T1 ≈ 2T3 ≈ 4T5, where Ti is the temperature at which the peak in the i-th-order susceptibility occurs. These features, taken together with the scaling of the critical field with the temperature T1 observed in a diverse variety of itinerant metamagnets, find a natural explanation in a single-band model with one Van Hove singularity (VHS) and onsite repulsion U. The separation of the VHS from the Fermi energy, V, sets a single energy scale, which is the primary driver for the observed features of itinerant metamagnetism at low temperatures.
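A minimal sketch of how such coefficients can be extracted in practice, assuming synthetic numbers (nothing below is measured Sr3Ru2O7 data): the nonlinear susceptibilities are the coefficients of the odd-in-field expansion of the magnetization, M(H) = χ1·H + χ3·H³ + χ5·H⁵, recoverable from an M(H) curve by a linear least-squares fit.

```python
import numpy as np

# Illustrative only: fit the odd-power expansion M = x1*H + x3*H^3 + x5*H^5
# to a magnetization curve; the "true" coefficients here are invented.
def fit_susceptibilities(H, M):
    A = np.column_stack([H, H**3, H**5])      # odd-power design matrix
    coef, *_ = np.linalg.lstsq(A, M, rcond=None)
    return coef                                # (x1, x3, x5)

H = np.linspace(-2.0, 2.0, 201)
x_true = (1.0, 0.30, -0.05)                    # hypothetical values
M = x_true[0]*H + x_true[1]*H**3 + x_true[2]*H**5

x1, x3, x5 = fit_susceptibilities(H, M)
```

Repeating the fit at each temperature and locating the peak of each coefficient's temperature dependence would then give the T1, T3, T5 discussed above.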
Modular Engine Noise Component Prediction System (MCP) Program Users' Guide
NASA Technical Reports Server (NTRS)
Golub, Robert A. (Technical Monitor); Herkes, William H.; Reed, David H.
2004-01-01
This is a user's manual for the Modular Engine Noise Component Prediction System (MCP), a computer code that allows the user to generate turbofan engine noise estimates. The program is based on an empirical procedure that has evolved over many years at The Boeing Company. The data used to develop the procedure include both full-scale engine data and small-scale model data, from testing done by Boeing, by the engine manufacturers, and by NASA. To generate a noise estimate, the user specifies the appropriate engine properties (including both geometry and performance parameters), the microphone locations, the atmospheric conditions, and certain data-processing options. The version of the program described here allows the user to predict three components: inlet-radiated fan noise, aft-radiated fan noise, and jet noise. MCP predicts one-third-octave-band noise levels over the frequency range of 50 to 10,000 Hertz. It also calculates overall sound pressure levels and certain subjective noise metrics (e.g., perceived noise levels).
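The one-third-octave bookkeeping implied above can be sketched as follows. The band-center formula and the overall-SPL energy sum are standard acoustics conventions, not MCP internals, and the SPL values in the usage line are hypothetical:

```python
import math

# Base-10 one-third-octave centers: f_n = 1000 * 10**(n/10) for integer n,
# kept within a nominal 50 Hz .. 10 kHz range (small tolerance for nominal bands).
def third_octave_centers(fmin=50.0, fmax=10000.0):
    n = round(10 * math.log10(fmin / 1000.0))
    centers = []
    while True:
        f = 1000.0 * 10 ** (n / 10.0)
        if f > fmax * 1.06:
            break
        if f >= fmin * 0.94:
            centers.append(f)
        n += 1
    return centers

# Overall SPL: sum the band intensities, then convert back to decibels.
def oaspl(band_spls_db):
    total = sum(10 ** (spl / 10.0) for spl in band_spls_db)
    return 10.0 * math.log10(total)

bands = third_octave_centers()    # 50 Hz .. 10 kHz -> 24 bands
```

For example, two bands at 90 dB each combine to an overall level of about 93 dB, since doubling the acoustic intensity adds 10·log10(2) ≈ 3 dB.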
Financing Solar PV at Government Sites with PPAs and Public Debt (Brochure)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2011-11-01
Historically, state and local governmental agencies have employed one of two models to deploy solar photovoltaic (PV) projects: (1) self-ownership (financed through a variety of means) or (2) third-party ownership through a power purchase agreement (PPA). Morris County, New Jersey, administrators recently pioneered a way to combine many of the benefits of self-ownership and third-party PPAs through a bond-PPA hybrid, frequently referred to as the Morris Model. At the request of the Department of Energy's Solar Market Transformation group, NREL examined the hybrid model. This fact sheet describes how the hybrid model works, assesses the model's relative advantages and challenges as compared to self-ownership and the third-party PPA model, provides a quick guide to project implementation, and assesses the replicability of the model in other jurisdictions across the United States.
Squeezing the halo bispectrum: a test of bias models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dizgah, Azadeh Moradinezhad; Chan, Kwan Chuen; Noreña, Jorge
We study the halo-matter cross bispectrum in the presence of primordial non-Gaussianity of the local type. We restrict ourselves to the squeezed limit, for which the calculations are straightforward, and perform the measurements in the initial conditions of N-body simulations to mitigate the contamination induced by nonlinear gravitational evolution. Interestingly, the halo-matter cross bispectrum is not trivial even in this simple limit, as it is strongly sensitive to the scale dependence of the quadratic and third-order halo bias. Therefore, it can be used to test biasing prescriptions. We consider three different prescriptions for halo clustering: excursion set peaks (ESP), local bias, and a model in which the halo bias parameters are explicitly derived from a peak-background split. In all cases, the model parameters are fully constrained with statistics other than the cross bispectrum. We measure the cross bispectrum involving one halo fluctuation field and two mass overdensity fields for various halo masses and collapse redshifts. We find that the ESP is in reasonably good agreement with the numerical data, while the other alternatives we consider fail in various cases. This suggests that the scale dependence of halo bias is also a crucial ingredient of the squeezed limit of the halo bispectrum.
Scaling water and energy fluxes in climate systems - Three land-atmospheric modeling experiments
NASA Technical Reports Server (NTRS)
Wood, Eric F.; Lakshmi, Venkataraman
1993-01-01
Three numerical experiments that investigate the scaling of land-surface processes - either of the inputs or parameters - are reported, and the aggregated processes are compared to the spatially variable case. The first is the aggregation of the hydrologic response in a catchment due to rainfall during a storm event and due to evaporative demands during interstorm periods. The second is the spatial and temporal aggregation of latent heat fluxes, as calculated from SiB. The third is the aggregation of remotely sensed land vegetation and latent and sensible heat fluxes using TM data from the FIFE experiment of 1987 in Kansas. In all three experiments it was found that the surface fluxes and land characteristics can be scaled, and that macroscale models based on effective parameters are sufficient to account for the small-scale heterogeneities investigated.
WISC-R Short Forms: Long on Problems.
ERIC Educational Resources Information Center
Boyd, Thomas A.; Tramontana, Michael G.
To examine the validity of short forms of the Wechsler Intelligence Scale for Children-Revised (WISC-R), the WISC-R was first administered to 106 hospitalized psychiatric patients, aged 8-16. No subjects had a primary diagnosis of mental retardation or learning disability, and one-third were receiving psychotropic medication. WISC-R IQ scores…
DOT National Transportation Integrated Search
2009-02-25
The combination of current and planned 2007 U.S. ethanol production capacity is 50 billion L/yr, one-third of the Energy Independence and Security Act of 2007 (EISA) target of 136 billion L of biofuels by 2022. In this study, we evaluate transportati...
Promoting country ownership and stewardship of health programs: The global fund experience.
Atun, Rifat; Kazatchkine, Michel
2009-11-01
The Global Fund to Fight AIDS, Tuberculosis and Malaria was established in 2002 to provide large-scale financing to middle- and low-income countries to intensify the fight against the 3 diseases. Its model has enabled strengthening of local health leadership to improve governance of HIV programs in 5 ways. First, the Global Fund has encouraged development of local capacity to generate technically sound proposals reflecting country needs and priorities. Second, through dual-track financing-where countries are encouraged to nominate at least one government and one nongovernment principal recipient to lead program implementation-the Global Fund has enabled civil society and other nongovernmental organizations to play a critical role in the design, implementation, and oversight of HIV programs. Third, investments to strengthen community systems have enabled greater involvement of community leaders in effective mobilization of demand and scale-up for services to reach vulnerable groups. Fourth, capacity building outside the state sector has improved community participation in governance of public health. Finally, an emphasis on inclusiveness and diversity in planning, implementation, and oversight has broadly enhanced country coordination capacity. Strengthening local leadership capacity and governance are critical to building efficient and equitable health systems to deliver universal coverage of HIV services.
Evolution of the social network of scientific collaborations
NASA Astrophysics Data System (ADS)
Barabási, A. L.; Jeong, H.; Néda, Z.; Ravasz, E.; Schubert, A.; Vicsek, T.
2002-08-01
The co-authorship network of scientists represents a prototype of complex evolving networks. In addition, it offers one of the most extensive databases to date on social networks. By mapping the electronic database containing all relevant journals in mathematics and neuroscience for an 8-year period (1991-98), we infer the dynamic and structural mechanisms that govern the evolution and topology of this complex system. Three complementary approaches allow us to obtain a detailed characterization. First, empirical measurements uncover the topological measures that characterize the network at a given moment, as well as the time evolution of these quantities. The results indicate that the network is scale-free, and that its evolution is governed by preferential attachment, affecting both internal and external links. However, in contrast with most model predictions, the average degree increases in time, and the node separation decreases. Second, we propose a simple model that captures the network's time evolution. In some limits the model can be solved analytically, predicting a two-regime scaling in agreement with the measurements. Third, numerical simulations are used to uncover the behavior of quantities that could not be predicted analytically. The combined numerical and analytical results underline the important role internal links play in determining the observed scaling behavior and network topology. The results and methodologies developed in the context of the co-authorship network could be useful for the systematic study of other complex evolving networks as well, such as the World Wide Web, the Internet, or other social networks.
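A minimal preferential-attachment sketch of the growth mechanism mentioned above (illustrative only: the paper's model also adds internal links between existing nodes, which this toy version omits):

```python
import random
from collections import Counter

# Barabasi-Albert-style growth: each new node attaches m edges to existing
# nodes chosen with probability proportional to their current degree.
def grow_network(n, m, seed=0):
    rng = random.Random(seed)
    # Seed the growth with a complete core of m+1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    stubs = [v for e in edges for v in e]     # each node appears deg(v) times
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:                # m distinct, degree-biased targets
            chosen.add(rng.choice(stubs))
        for t in chosen:
            edges.append((new, t))
            stubs.extend((new, t))
    return edges

edges = grow_network(200, 3)
degree = Counter(v for e in edges for v in e)
```

Degree-biased attachment produces the hubs characteristic of a scale-free degree distribution: a few early nodes accumulate far more links than the minimum of m.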
Al-Sakkaf, Khaled Abdulla; Basaleem, Huda Omer
2016-01-01
The incidence of breast cancer is rapidly increasing in Yemen, with recent indications that it constitutes one-third of female cancers. The main problem in Yemen remains very late presentation of breast cancer, most of which should have been easily recognisable. Since stage of disease at diagnosis is the most important prognostic variable, early diagnosis is an important option for breast cancer control in low-resourced settings like Yemen. In the present study, we aimed to describe breast cancer knowledge, perceptions, and breast self-examination (BSE) practices among a sample of Yemeni women. This cross-sectional study covered 400 women attending four reproductive health centres in Aden, Yemen, through face-to-face interviews using a structured questionnaire during April-July 2014. We collected data on sociodemographic characteristics, knowledge about breast cancer, and screening practices, as well as respondents' perceptions based on the five subscales of the Health Belief Model (HBM): perceived susceptibility, perceived severity, perceived barriers, perceived benefits, and self-efficacy. The response format was a five-point Likert scale. The Statistical Package for the Social Sciences (SPSS 20) was used for statistical analysis, with significance set at p<0.05. Logistic regression analysis was conducted with BSE as the dependent variable. The mean age of the women was 26.5 (SD=5.6) years. The majority (89.0%) had never performed any screening. Two-thirds of respondents had poor knowledge. Perceived BSE benefits, self-efficacy, and lower perceived BSE barriers were significant independent predictors of BSE practice. Poor knowledge and inadequate BSE practices prevail in Yemen. Culturally sensitive, targeted education measures are needed to improve early detection and reduce the burden of breast cancer.
A three-stage birandom program for unit commitment with wind power uncertainty.
Zhang, Na; Li, Weidong; Liu, Rao; Lv, Quan; Sun, Liang
2014-01-01
The integration of large-scale wind power adds significant uncertainty to power system planning and operation. The wind forecast error decreases with the forecast horizon, particularly as it shortens from one day to several hours ahead. Integrating an intraday unit commitment (UC) adjustment process based on updated ultra-short-term wind forecast information is one way to improve the dispatching results. A novel three-stage UC decision method is presented, in which the day-ahead UC decisions are determined in the first stage, the intraday UC adjustment decisions of sub-fast-start units in the second stage, and the UC decisions of fast-start units, together with the dispatching decisions, in the third stage. Accordingly, a three-stage birandom UC model is presented, in which the intraday hours-ahead forecasted wind power is formulated as a birandom variable, and the intraday UC adjustment event is formulated as a birandom event. The equilibrium chance constraint is employed to ensure the reliability requirement. A birandom-simulation-based hybrid genetic algorithm is designed to solve the proposed model. Computational results indicate that the proposed model provides UC decisions with lower expected total costs.
Hydro-climatic forcing of dissolved organic carbon in two boreal lakes of Canada.
Diodato, Nazzareno; Higgins, Scott; Bellocchi, Gianni; Fiorillo, Francesco; Romano, Nunzio; Guadagno, Francesco M
2016-11-15
The boreal forest of the northern hemisphere represents one of the world's largest ecozones and contains nearly one third of the world's intact forests and terrestrially stored carbon. Long-term variations in temperature and precipitation have been implicated in altering carbon cycling in forest soils, including increased fluxes to receiving waters. In this study, we use a simple hydrologic model and a 40-year dataset (1971-2010) of dissolved organic carbon (DOC) from two pristine boreal lakes (ELA, Canada) to examine the interactions between precipitation and landscape-scale controls of DOC production and export from forest catchments to surface waters. Our results indicate that a simplified hydrologically-based conceptual model can capture the long-term temporal patterns of DOC fluxes within boreal landscapes. Reconstructed DOC exports from forested catchments in the period 1901-2012 largely follow a sinusoidal pattern, with a period of about 37 years, and are tightly linked to multi-decadal patterns of precipitation. By combining our model with long-term precipitation estimates, we found no evidence of increasing DOC transport or in-lake concentrations through the 20th century.
Model equations for the Eiffel Tower profile: historical perspective and new results
NASA Astrophysics Data System (ADS)
Weidman, Patrick; Pinelis, Iosif
2004-07-01
Model equations for the shape of the Eiffel Tower are investigated. One model purported to be based on Eiffel's writing does not give a tower with the correct curvature. A second popular model not connected with Eiffel's writings provides a fair approximation to the tower's skyline profile of 29 contiguous panels. Reported here is a third model derived from Eiffel's concern about wind loads on the tower, as documented in his communication to the French Civil Engineering Society on 30 March 1885. The result is a nonlinear, integro-differential equation which is solved to yield an exponential tower profile. It is further verified that, as Eiffel wrote, "in reality the curve exterior of the tower reproduces, at a determined scale, the same curve of the moments produced by the wind". An analysis of the actual tower profile shows that it is composed of two piecewise continuous exponentials with different growth rates. This is explained by specific safety factors for wind loading that Eiffel & Company incorporated in the design of the free-standing tower. To cite this article: P. Weidman, I. Pinelis, C. R. Mecanique 332 (2004).
Ensemble Kalman filters for dynamical systems with unresolved turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grooms, Ian, E-mail: grooms@cims.nyu.edu; Lee, Yoonsang; Majda, Andrew J.
Ensemble Kalman filters are developed for turbulent dynamical systems where the forecast model does not resolve all the active scales of motion. Coarse-resolution models are intended to predict the large-scale part of the true dynamics, but observations invariably include contributions from both the resolved large scales and the unresolved small scales. The error due to the contribution of unresolved scales to the observations, called 'representation' or 'representativeness' error, is often included as part of the observation error, in addition to the raw measurement error, when estimating the large-scale part of the system. It is here shown how stochastic superparameterization (a multiscale method for subgridscale parameterization) can be used to provide estimates of the statistics of the unresolved scales. In addition, a new framework is developed wherein small-scale statistics can be used to estimate both the resolved and unresolved components of the solution. The one-dimensional test problem from dispersive wave turbulence used here is computationally tractable yet is particularly difficult for filtering because of the non-Gaussian extreme event statistics and substantial small-scale turbulence: a shallow energy spectrum proportional to k^(−5/6) (where k is the wavenumber) results in two-thirds of the climatological variance being carried by the unresolved small scales. Because the unresolved scales contain so much energy, filters that ignore the representation error fail utterly to provide meaningful estimates of the system state. Inclusion of a time-independent climatological estimate of the representation error in a standard framework leads to inaccurate estimates of the large-scale part of the signal; accurate estimates of the large scales are only achieved by using stochastic superparameterization to provide evolving, large-scale dependent predictions of the small-scale statistics. Again, because the unresolved scales contain so much energy, even an accurate estimate of the large-scale part of the system does not provide an accurate estimate of the true state. By providing simultaneous estimates of both the large- and small-scale parts of the solution, the new framework is able to provide accurate estimates of the true system state.
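Why a shallow spectrum puts so much variance at small scales follows directly from integrating it: for E(k) ∝ k^(−5/6), the variance over a wavenumber band [a, b] grows like b^(1/6) − a^(1/6), so the high-k tail dominates. A sketch with an illustrative wavenumber range (the paper's exact cutoffs are not reproduced here):

```python
# Fraction of climatological variance carried by the unresolved scales
# (k_cut .. k_max) for a power-law spectrum E(k) ~ k**slope.
def unresolved_fraction(k_min, k_cut, k_max, slope=-5.0 / 6.0):
    p = slope + 1.0                     # exponent of the antiderivative k**p / p
    total = (k_max ** p - k_min ** p) / p
    unresolved = (k_max ** p - k_cut ** p) / p
    return unresolved / total

frac = unresolved_fraction(k_min=1.0, k_cut=10.0, k_max=1000.0)
```

Even resolving a full decade of wavenumbers leaves the clear majority of the variance unresolved, which is why ignoring representation error is fatal here.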
Munition Burial by Local Scour and Sandwaves: large-scale laboratory experiments
NASA Astrophysics Data System (ADS)
Garcia, M. H.
2017-12-01
Our effort has been the direct observation and monitoring of the burial process of munitions induced by the combined action of waves, currents, and pure oscillatory flows. The experimental conditions have made it possible to observe the burial process due to both local scour around model munitions and the passage of sandwaves. One experimental facility is the Large Oscillating Water Sediment Tunnel (LOWST), constructed with DURIP support. LOWST can reproduce field-like conditions near the sea bed. The second facility is a multipurpose wave-current flume which is 4 feet (1.20 m) deep, 6 feet (1.8 m) wide, and 161 feet (49.2 m) long. More than two hundred experiments were carried out in the wave-current flume. The main task completed within this effort has been the characterization of the burial process induced by local scour, as well as in the presence of dynamic sandwaves with superimposed ripples. It is found that the burial of a finite-length model munition (cylinder) is determined by local scour around the cylinder and by a more global process associated with the formation and evolution of sandwaves having superimposed ripples on them. Depending on the ratio of the amplitude of these features to the body's diameter (D), a model munition can progressively become partially or totally buried as such bedforms migrate. Analysis of the experimental data indicates that existing semi-empirical formulae for predicting equilibrium burial depth, the geometry of the scour hole around a cylinder, and time scales, developed for pipelines, are not suitable for the case of a cylinder of finite length. Relative burial depth (Bd/D) is found to be mainly a function of two parameters: the Keulegan-Carpenter number, KC, and the Shields parameter, θ. Munition burial under either waves or combined flow is influenced by two different processes. The first is local scour around the object, which takes place within the first few hundred minutes of flow action (i.e., a short time scale). The second is the development of sandwaves, which in turn may partially or totally cover a given munition as they migrate (i.e., long time scales), leading to global burial. A third process, occurring at a much shorter time scale, is related to fluidization. Existing formulations for munition burial account for neither long sandwaves nor bed fluidization.
FVCOM one-way and two-way nesting using ESMF: Development and validation
NASA Astrophysics Data System (ADS)
Qi, Jianhua; Chen, Changsheng; Beardsley, Robert C.
2018-04-01
Built on the Earth System Modeling Framework (ESMF), one-way and two-way nesting methods were implemented in the unstructured-grid Finite-Volume Community Ocean Model (FVCOM). These methods support unstructured-grid multi-domain nesting of FVCOM, with the aim of resolving multi-scale physical and ecosystem processes. The procedures for implementing FVCOM into ESMF are described in detail. Experiments were conducted to validate and evaluate the performance of the nested-grid FVCOM system. The first addressed a wave-current interaction case with two-domain nesting, emphasizing the critical need for nesting to resolve a high-resolution feature near the coast and harbor with little loss in computational efficiency. The second examined pseudo-river-plume cases to compare the model-simulated salinity between the one-way and two-way nesting approaches and to evaluate the performance of the mass-conservative two-way nesting method. The third treated a river plume in the realistic geometric domain of Massachusetts Bay, supporting the importance of two-way nesting for integrated coastal-estuarine modeling. The nesting method described in this paper has been used in the Northeast Coastal Ocean Forecast System (NECOFS), a global-regional-coastal nesting FVCOM system that has been in end-to-end forecast and hindcast operation since 2007.
ERIC Educational Resources Information Center
Canivez, Gary L.; Neitzel, Ryan; Martin, Blake E.
2005-01-01
The present study reports data supporting the construct validity of the Kaufman Brief Intelligence Test (K-BIT; Kaufman & Kaufman, 1990), the Wechsler Intelligence Scale for Children-Third Edition (WISC-III; Wechsler, 1991), and the Adjustment Scales for Children and Adolescents (ASCA; McDermott, Marston, & Stott, 1993) through convergent…
Characteristics of Third-Grade Learning Disabled Children.
ERIC Educational Resources Information Center
Cullen, Joy L.; And Others
1981-01-01
The Developmental Test of Visual-Motor Integration, the Wide Range Achievement Test, and the Student's Perception of Ability Scale were administered to 70 learning-disabled and 73 normally achieving third-grade children who had been stratified on Full Scale Wechsler Intelligence Scale for Children-Revised (WISC-R) IQ scores. (Author)
A Multi-Scale Energy Food Systems Modeling Framework For Climate Adaptation
NASA Astrophysics Data System (ADS)
Siddiqui, S.; Bakker, C.; Zaitchik, B. F.; Hobbs, B. F.; Broaddus, E.; Neff, R.; Haskett, J.; Parker, C.
2016-12-01
Our goal is to understand coupled system dynamics across scales in a manner that allows us to quantify the sensitivity of critical human outcomes (nutritional satisfaction, household economic well-being) to development strategies and to climate or market induced shocks in sub-Saharan Africa. We adopt both bottom-up and top-down multi-scale modeling approaches focusing our efforts on food, energy, water (FEW) dynamics to define, parameterize, and evaluate modeled processes nationally as well as across climate zones and communities. Our framework comprises three complementary modeling techniques spanning local, sub-national and national scales to capture interdependencies between sectors, across time scales, and on multiple levels of geographic aggregation. At the center is a multi-player micro-economic (MME) partial equilibrium model for the production, consumption, storage, and transportation of food, energy, and fuels, which is the focus of this presentation. We show why such models can be very useful for linking and integrating across time and spatial scales, as well as a wide variety of models including an agent-based model applied to rural villages and larger population centers, an optimization-based electricity infrastructure model at a regional scale, and a computable general equilibrium model, which is applied to understand FEW resources and economic patterns at national scale. The MME is based on aggregating individual optimization problems for relevant players in an energy, electricity, or food market and captures important food supply chain components of trade and food distribution accounting for infrastructure and geography. Second, our model considers food access and utilization by modeling food waste and disaggregating consumption by income and age. Third, the model is set up to evaluate the effects of seasonality and system shocks on supply, demand, infrastructure, and transportation in both energy and food.
AQMEII3: the EU and NA regional scale program of the ...
The presentation builds on the work presented last year at the 14th CMAS meeting and is applied to work performed in the context of the AQMEII-HTAP collaboration. The analysis is conducted within the framework of the third phase of AQMEII (Air Quality Model Evaluation International Initiative) and encompasses the gauging of model performance through measurement-to-model comparison, error decomposition, and time-series analysis of the models' biases. Through the comparison of several regional-scale chemistry-transport modelling systems applied to simulate meteorology and air quality over two continental areas, this study aims at i) apportioning the error to the responsible processes through time-scale analysis, ii) helping detect causes of model error, and iii) identifying the processes and scales most urgently requiring dedicated investigation. The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while the apportioning of the error into its constituent parts (bias, variance, and covariance) can help assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long-scale, synoptic, diurnal, and intra-day) using the error-apportionment technique devised in the previous phases of AQMEII. The National Exposure Research Laboratory (NERL) Computational Exposur
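The bias/variance/covariance apportioning mentioned above is the standard decomposition of the mean square error between a model series and observations; a minimal sketch with made-up numbers (not AQMEII output):

```python
import math

# MSE = bias^2 + (sigma_m - sigma_o)^2 + 2*sigma_m*sigma_o*(1 - r):
# the three pieces separate systematic offset, amplitude error, and
# correlation (phase/timing) error between model (m) and observations (o).
def mse_components(model, obs):
    n = len(obs)
    mm, mo = sum(model) / n, sum(obs) / n
    sm = math.sqrt(sum((x - mm) ** 2 for x in model) / n)
    so = math.sqrt(sum((x - mo) ** 2 for x in obs) / n)
    r = sum((a - mm) * (b - mo) for a, b in zip(model, obs)) / (n * sm * so)
    return (mm - mo) ** 2, (sm - so) ** 2, 2.0 * sm * so * (1.0 - r)

model = [1.0, 2.0, 3.0, 4.0]            # invented series
obs = [1.5, 2.0, 2.5, 5.0]
bias2, var_term, cov_term = mse_components(model, obs)
mse = sum((a - b) ** 2 for a, b in zip(model, obs)) / len(obs)
```

The identity is exact: the three components always sum to the directly computed MSE, which is what makes the apportionment well defined.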
[A new scale for measuring return-to-work motivation of mentally ill employees].
Poersch, M
2007-03-01
A new scale, "motivation for return to work", has been constructed to measure depressive patients' motivation to resume working in a stepwise process. In a first case-management (CM) sample of 46 depressive employees, the scale correlated well with the final social status at the end of the CM. Only the motivated patients successfully returned to work and could be clearly separated from the most demotivated ones. Second, the scale correlated with the duration of sick leave, and third, it showed an inverse correlation with the total duration of the CM, suggesting that a successful stepwise return to work requires time. These first results need further examination.
Mena, Jorge Humberto; Sanchez, Alvaro Ignacio; Rubiano, Andres M.; Peitzman, Andrew B.; Sperry, Jason L.; Gutierrez, Maria Isabel; Puyana, Juan Carlos
2011-01-01
Objective: The Glasgow Coma Scale (GCS) classifies traumatic brain injuries (TBI) as mild (14-15), moderate (9-13), or severe (3-8). The ATLS modified this classification so that a GCS score of 13 is categorized as mild TBI. We investigated the effect of this modification on mortality prediction, comparing patients with a GCS of 13 classified as moderate TBI (classic model) to patients with a GCS of 13 classified as mild TBI (modified model). Methods: We selected adult TBI patients from the Pennsylvania Trauma Outcome Study (PTOS) database. Logistic regressions adjusting for age, sex, cause, severity, trauma center level, comorbidities, and isolated TBI were performed. A second evaluation included the time trend of mortality. A third evaluation also included hypothermia, hypotension, mechanical ventilation, screening for drugs, and severity of TBI. Discrimination of the models was evaluated using the area under the receiver operating characteristic curve (AUC). Calibration was evaluated using the Hosmer-Lemeshow goodness-of-fit (GOF) test. Results: In the first evaluation, the AUCs were 0.922 (95% CI, 0.917-0.926) and 0.908 (95% CI, 0.903-0.912) for the classic and modified models, respectively. Both models showed poor calibration (p<0.001). In the third evaluation, the AUCs were 0.946 (95% CI, 0.943-0.949) and 0.938 (95% CI, 0.934-0.940) for the classic and modified models, respectively, with improvements in calibration (p=0.30 and p=0.02 for the classic and modified models, respectively). Conclusion: The lack of overlap between the ROC curves of the two models reveals a statistically significant difference in their ability to predict mortality. The classic model demonstrated better GOF than the modified model. In a multivariate logistic regression model, a GCS of 13 classified as moderate TBI performed better than a GCS of 13 classified as mild. PMID:22071923
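The AUC values being compared reduce to the Mann-Whitney statistic: the probability that a randomly chosen death receives a higher predicted risk than a randomly chosen survivor, with ties counted as one-half. A toy sketch (the scores below are invented, not PTOS data):

```python
# AUC as a rank statistic over all (positive, negative) score pairs.
def auc(pos_scores, neg_scores):
    wins = ties = 0
    for p in pos_scores:
        for q in neg_scores:
            if p > q:
                wins += 1
            elif p == q:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))
```

A model whose risk scores perfectly separate deaths from survivors scores 1.0; a model no better than chance scores 0.5, which is the baseline the reported AUCs of about 0.91-0.95 are judged against.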
Developing perturbations for Climate Change Impact Assessments
NASA Astrophysics Data System (ADS)
Hewitson, Bruce
Following the 2001 Intergovernmental Panel on Climate Change (IPCC) Third Assessment Report [TAR; IPCC, 2001], and given the paucity of climate change impact assessments from developing nations, there has been significant growth in activities to redress this shortcoming. However, undertaking impact assessments (in relation to malaria, crop stress, regional water supply, etc.) is contingent on climate scenarios being available at time and space scales relevant to the regional issues of importance. These scales are commonly far finer than even the native resolution of the Global Climate Models (GCMs), the principal tools for climate change research, let alone their skillful resolution (the scale of aggregation at which GCM observational error is acceptable for a given application). Consequently, there is a growing demand for regional-scale scenarios, which in turn rely on techniques to downscale from GCMs, such as empirical downscaling or nested Regional Climate Models (RCMs). These methods require significant skill, experiential knowledge, and computational infrastructure to derive credible regional-scale scenarios. In contrast, impact assessment researchers in developing nations often have inadequate resources and limited access to scientists in the broader international community with the time and expertise to assist. However, where developing effective downscaled scenarios is problematic, much useful information can still be obtained for impact assessments by examining system sensitivity to larger-scale climate perturbations. Consequently, one may argue that the early phase of assessing sensitivity and vulnerability should first be characterized by evaluation of first-order impacts, rather than immediately addressing the finer, secondary factors that depend on scenarios derived through downscaling.
Third floor, showing monorail system, scale (center left), alum dump bucket, and dissolving tanks - Division Avenue Pumping Station & Filtration Plant, West 45th Street and Division Avenue, Cleveland, Cuyahoga County, OH
Ellinas, Christos; Allan, Neil; Durugbo, Christopher; Johansson, Anders
2015-01-01
Current societal requirements necessitate the effective delivery of complex projects that can do more while using less. Yet recent large-scale project failures suggest that our ability to deliver them successfully is still in its infancy. Such failures can arise through various mechanisms; this work focuses on one: the likelihood of a project sustaining a large-scale catastrophe, triggered by a single task failure and delivered via a cascading process. To examine it, an analytical model was developed and tested on an empirical dataset by means of numerical simulation. This paper makes three main contributions. First, it provides a methodology to identify the tasks most capable of impacting a project; notably, a significant number of tasks induce no cascades, while a handful can trigger surprisingly large ones. Second, it illustrates that crude task characteristics cannot identify these tasks, highlighting the complexity of the underlying process and the utility of this approach. Third, it draws parallels with systems encountered in the natural sciences by noting the emergence of self-organised criticality, commonly found in natural systems. These findings strengthen the need to account for the structural intricacies of a project's underlying task precedence structure, as they can provide the conditions upon which large-scale catastrophes materialise.
Aspects of Scale Invariance in Physics and Biology
NASA Astrophysics Data System (ADS)
Alba, Vasyl
We study three systems that exhibit scale invariance. The first is a conformal field theory in d > 3 dimensions. We prove that if the theory has a unique stress-energy tensor and at least one higher-spin conserved current, then the correlation functions of the stress-energy tensor and the higher-spin conserved currents must coincide with one of the following possibilities: a) a theory of n free bosons, b) a theory of n free fermions, or c) a theory of n (d-2)/2-forms. The second system is the primordial gravitational wave background in a theory with inflation. We show that the scale-invariant spectrum of primordial gravitational waves is isotropic only in the zeroth-order approximation; it acquires a small correction due to the primordial scalar fluctuations. Once this anisotropy is measured experimentally, our result will allow us to distinguish between different inflationary models. The third system is biological. The question we ask is whether there is some simplicity or universality underlying the complexities of natural animal behavior. We use the walking fruit fly (Drosophila melanogaster) as a model system. Building on the result that fly behavior can be categorized, without supervision, into one hundred twenty-two discrete states (stereotyped movements), which all individuals of the species visit repeatedly, we demonstrated that the sequences of states are strongly non-Markovian. In particular, correlations persist an order of magnitude longer than expected from a model of random state-to-state transitions. The correlation function has a power-law decay, which hints at some kind of criticality in the system. We develop a generalization of the information bottleneck method that allows us to cluster these states into a small number of clusters; this more compact description preserves much of the temporal correlation.
We found that a two-cluster representation of the data suffices to capture the long-range correlations, which opens the way to a more quantitative description of the system. Using the maximum entropy method, we found a description that closely resembles the well-known one-dimensional inverse-square Ising model in a small magnetic field.
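The non-Markovian test described above can be sketched numerically: compare the empirical probability that a state recurs after a lag with the prediction of a first-order Markov model fitted to the same sequence. This is my illustration, not the authors' analysis pipeline; the two-state chain, function names, and all numbers are assumptions for the sketch.

```python
import random

def fit_transition_matrix(seq, n_states):
    """Maximum-likelihood transition matrix from a discrete state sequence."""
    counts = [[0.0] * n_states for _ in range(n_states)]
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1.0
    return [[c / max(sum(row), 1.0) for c in row] for row in counts]

def empirical_recurrence(seq, lag):
    """Empirical P(s_t == s_{t+lag})."""
    pairs = list(zip(seq, seq[lag:]))
    return sum(a == b for a, b in pairs) / len(pairs)

def markov_recurrence(T, p, lag):
    """P(s_t == s_{t+lag}) = sum_i p_i (T^lag)_{ii} under a Markov model."""
    n = len(T)
    M = [row[:] for row in T]
    for _ in range(lag - 1):  # raise T to the power `lag`
        M = [[sum(M[i][k] * T[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return sum(p[i] * M[i][i] for i in range(n))

# A genuinely Markovian sequence matches the model prediction; fly
# behavior, per the abstract, shows excess correlation instead.
random.seed(0)
T_true = [[0.9, 0.1], [0.2, 0.8]]
seq, s = [], 0
for _ in range(20000):
    seq.append(s)
    s = 0 if random.random() < T_true[s][0] else 1
T_fit = fit_transition_matrix(seq, 2)
p = [seq.count(0) / len(seq), seq.count(1) / len(seq)]
gap = abs(empirical_recurrence(seq, 5) - markov_recurrence(T_fit, p, 5))
```

For real behavioral data one would plot this gap (or the full correlation function) versus lag; a power-law excess over the Markov prediction is the signature the abstract reports.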
ERIC Educational Resources Information Center
Kawasaki, Keiko; Herrenkohl, Leslie Rupert; Yeary, Sherry A.
2004-01-01
The purpose of this paper is to carefully examine the evolution of students' theory building and modeling, critical components of scientific epistemologies, over a unit of study on sinking and floating in one third/fourth grade classroom. The study described in this paper follows in the tradition of Design Experiments (Brown 1992, Collins 1990)…
NASA Technical Reports Server (NTRS)
Graham, John B., Jr.
1958-01-01
Heat-transfer and pressure measurements were obtained from a flight test of a 1/18-scale model of the Titan intercontinental ballistic missile up to a Mach number of 3.86 and a Reynolds number per foot of 23.5 × 10^6, and are compared with the data of two previously tested 1/18-scale models. Boundary-layer transition was observed on the nose of the model. Van Driest's theory predicted heat-transfer coefficients reasonably well for fully laminar flow, but its predictions for turbulent flow were considerably higher than the measurements while the skin was being heated. Comparison with the flight tests of two similar models shows fair repeatability of the measurements for fully laminar or turbulent flow.
Diverse Formation Mechanisms for Compact Galaxies
NASA Astrophysics Data System (ADS)
Kim, Jin-Ah; Paudel, Sanjaya; Yoon, Suk-Jin
2018-01-01
Compact, quenched galaxies such as M32 are unusual objects located off the mass–size scaling relation defined by normal galaxies, yet their formation mechanisms remain unresolved. Here we investigate the evolution of ~100 compact, quenched galaxies at z = 0 identified in the Illustris cosmological simulation. We identify three ways for a galaxy to become compact, and often multiple mechanisms operate in combination. First, stripping is responsible for about a third of compact galaxies: it removes stars from galaxies, usually while keeping their sizes intact. Second, about one third are galaxies that cease their growth early on after entering more massive, gigantic halos. Finally, about half of compact galaxies, ~35% of which also turn out to undergo stripping, experience compaction due to highly centrally concentrated star formation. We discuss the evolutionary path of compact galaxies on the mass–size plane for each mechanism in the broader context of dwarf galaxy formation and evolution.
Two-thirds of global cropland area impacted by climate oscillations.
Heino, Matias; Puma, Michael J; Ward, Philip J; Gerten, Dieter; Heck, Vera; Siebert, Stefan; Kummu, Matti
2018-03-28
The El Niño Southern Oscillation (ENSO) peaked strongly during the boreal winter 2015-2016, leading to food insecurity in many parts of Africa, Asia and Latin America. Besides ENSO, the Indian Ocean Dipole (IOD) and the North Atlantic Oscillation (NAO) are known to impact crop yields worldwide. Here we assess for the first time in a unified framework the relationships between ENSO, IOD and NAO and simulated crop productivity at the sub-country scale. Our findings reveal that during 1961-2010, crop productivity is significantly influenced by at least one large-scale climate oscillation in two-thirds of global cropland area. Besides observing new possible links, especially for NAO in Africa and the Middle East, our analyses confirm several known relationships between crop productivity and these oscillations. Our results improve the understanding of climatological crop productivity drivers, which is essential for enhancing food security in many of the most vulnerable places on the planet.
An assessment of the microgravity and acoustic environments in Space Station Freedom using VAPEPS
NASA Technical Reports Server (NTRS)
Bergen, Thomas F.; Scharton, Terry D.; Badilla, Gloria A.
1992-01-01
The Vibroacoustic Payload Environment Prediction System (VAPEPS) was used to predict the stationary on-orbit environments in one of the Space Station Freedom modules. The model of the module included the outer structure, equipment and payload racks, avionics, and cabin air and duct systems. Acoustic and vibratory outputs of various source classes were derived and input to the model. Initial results of analyses, performed in one-third octave frequency bands from 10 to 10,000 Hz, show that both the microgravity and acoustic environments will be exceeded in some one-third octave bands with the current SSF design. Further analyses indicate that interior acoustic level requirements will be exceeded even if the microgravity requirements are met.
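For reference, the one-third octave bands in which these analyses were performed (10 Hz to 10 kHz) follow the standard base-2 spacing, with exact center frequencies f_n = 1000 · 2^(n/3). A short sketch of my own (not part of VAPEPS):

```python
def third_octave_centers(n_lo=-20, n_hi=10, f_ref=1000.0):
    """Exact base-2 one-third octave band centers, f_n = f_ref * 2**(n/3).

    n = -20 gives ~9.84 Hz (the nominal "10 Hz" band) and n = 10 gives
    ~10079 Hz (the nominal "10 kHz" band): 31 bands for a 10 Hz-10 kHz
    analysis like the one described above.
    """
    return [f_ref * 2 ** (n / 3) for n in range(n_lo, n_hi + 1)]

centers = third_octave_centers()
```

Nominal band labels (10, 12.5, 16, ... Hz) are rounded versions of these exact values; band edges sit a factor of 2^(1/6) on either side of each center.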
Liu, Shuguang; Bond-Lamberty, Ben; Hicke, Jeffrey A.; Vargas, Rodrigo; Zhao, Shuqing; Chen, Jing; Edburg, Steven L.; Hu, Yueming; Liu, Jinxun; McGuire, A. David; Xiao, Jingfeng; Keane, Robert; Yuan, Wenping; Tang, Jianwu; Luo, Yiqi; Potter, Christopher; Oeding, Jennifer
2011-01-01
Forest disturbances greatly alter the carbon cycle at various spatial and temporal scales. It is critical to understand disturbance regimes and their impacts to better quantify regional and global carbon dynamics. This review of the status and major challenges in representing the impacts of disturbances in modeling the carbon dynamics across North America revealed some major advances and challenges. First, significant advances have been made in representation, scaling, and characterization of disturbances that should be included in regional modeling efforts. Second, there is a need to develop effective and comprehensive process-based procedures and algorithms to quantify the immediate and long-term impacts of disturbances on ecosystem succession, soils, microclimate, and cycles of carbon, water, and nutrients. Third, our capability to simulate the occurrences and severity of disturbances is very limited. Fourth, scaling issues have rarely been addressed in continental-scale model applications. It is not fully understood which finer-scale processes and properties need to be scaled to coarser spatial and temporal scales. Fifth, there are inadequate databases on disturbances at the continental scale to support the quantification of their effects on the carbon balance in North America. Finally, procedures are needed to quantify the uncertainty of model inputs, model parameters, and model structures, and thus to estimate their impacts on overall model uncertainty. Working together, the scientific community interested in disturbance and its impacts can identify the most uncertain issues surrounding the role of disturbance in the North American carbon budget and develop working hypotheses to reduce the uncertainty.
Hierarchical fermions and detectable Z' from effective two-Higgs-triplet 3-3-1 model
NASA Astrophysics Data System (ADS)
Barreto, E. R.; Dias, A. G.; Leite, J.; Nishi, C. C.; Oliveira, R. L. N.; Vieira, W. C.
2018-03-01
We develop an SU(3)_C ⊗ SU(3)_L ⊗ U(1)_X model where the number of fermion generations is fixed by the cancellation of gauge anomalies, a type of 3-3-1 model with new charged leptons. Similarly to the economical 3-3-1 models, symmetry breaking is achieved effectively with two scalar triplets, so that the spectrum of scalar particles at the TeV scale contains just two CP-even scalars, one of which is the recently discovered Higgs boson, plus a charged scalar. Such a scalar sector is simpler than the one in the Two-Higgs-Doublet Model, hence more attractive for phenomenological studies, and has no flavor-changing neutral currents (FCNC) mediated by scalars except for those induced by the mixing of Standard Model (SM) fermions with heavy fermions. We identify a global residual symmetry of the model which guarantees mass degeneracies and some massless fermions whose masses need to be generated by the introduction of effective operators. The fermion masses so generated require less fine-tuning for most of the SM fermions, and FCNC are naturally suppressed by the small mixing between the third family of quarks and the rest. The effective setting is justified by an ultraviolet completion of the model from which the effective operators emerge naturally. A detailed particle mass spectrum is presented, and an analysis of Z' production at the LHC Run II is performed to show that it could easily be detected by considering the invariant mass and transverse momentum distributions in the dimuon channel.
ERIC Educational Resources Information Center
Wilkerson, Trena L.; Bryan, Tommy; Curry, Jane
2012-01-01
This article describes how using candy bars as models gives sixth-grade students a taste for learning to represent fractions whose denominators are factors of twelve. Using paper models of the candy bars, students explored and compared fractions. They noticed fewer different representations for one-third than for one-half. The authors conclude…
Statistical characterization of short wind waves from stereo images of the sea surface
NASA Astrophysics Data System (ADS)
Mironov, Alexey; Yurovskaya, Maria; Dulov, Vladimir; Hauser, Danièle; Guérin, Charles-Antoine
2013-04-01
We propose a methodology to extract short-scale statistical characteristics of the sea surface topography by means of stereo image reconstruction. The possibilities and limitations of the technique are discussed and tested on a data set acquired from an oceanographic platform in the Black Sea. The analysis shows that stereo reconstruction of the topography is an efficient way to derive non-trivial statistical properties of short and intermediate surface waves (roughly 1 centimeter to 1 meter). Most technical issues pertaining to this type of dataset (limited range of scales, lacunarity of data, irregular sampling) can be partially overcome by appropriate processing of the available points. The proposed technique also avoids linear interpolation, which dramatically corrupts the properties of retrieved surfaces. The processing requires that the field of elevations be polynomially detrended, which has the effect of filtering out the large scales; hence the statistical analysis can only address the small-scale components of the sea surface. The precise cut-off wavelength, approximately half the patch size, can be obtained by applying a high-pass frequency filter to the reference gauge time records. The results obtained for the one- and two-point statistics of small-scale elevations are shown to be consistent, at least in order of magnitude, with the corresponding gauge measurements as well as other experimental measurements available in the literature. The calculation of structure functions provides a powerful tool to investigate the spectral and statistical properties of the field of elevations. An experimental parametrization of the third-order structure function, the so-called skewness function, is one of the most important and original outcomes of this study. This function is of primary importance in analytical models of scattering from the sea surface and was until now unavailable in field conditions.
Due to the lack of precise reference measurements for the small-scale wave field, we could not quantify exactly the accuracy of the retrieval technique. However, it appeared clearly that the accuracy obtained is good enough for the estimation of second-order statistical quantities (such as the correlation function), acceptable for third-order quantities (such as the skewness function), and insufficient for fourth-order quantities (such as the kurtosis). Therefore, the stereo technique at the present stage should not be thought of as a self-contained, universal tool to characterize surface statistics. Instead, it should be used in conjunction with other well-calibrated but sparse reference measurements (such as wave gauges) for cross-validation and calibration. It then completes the statistical analysis inasmuch as it provides a snapshot of the three-dimensional field and allows for the evaluation of higher-order spatial statistics.
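The n-th order structure function used here, S_n(r) = ⟨(η(x+r) − η(x))^n⟩, and the skewness function S_3/S_2^(3/2) can be estimated directly from an elevation transect. A minimal sketch of my own (notation, lag handling, and the synthetic test signal are assumptions; the paper's detrending and two-dimensional sampling are not reproduced):

```python
import math

def structure_function(eta, order, lag):
    """Estimate S_n(r) = <(eta(x + r) - eta(x))**n> at one lag (in samples)."""
    diffs = [(b - a) ** order for a, b in zip(eta, eta[lag:])]
    return sum(diffs) / len(diffs)

def skewness_function(eta, lag):
    """S_3 / S_2^(3/2): vanishes for a height field with symmetric statistics."""
    s2 = structure_function(eta, 2, lag)
    s3 = structure_function(eta, 3, lag)
    return s3 / s2 ** 1.5

# Sanity check on a synthetic transect: a pure sinusoid has symmetric
# height statistics, so its skewness function should vanish.
eta = [math.sin(2 * math.pi * i / 100) for i in range(1025)]
s2 = structure_function(eta, 2, 25)
skw = skewness_function(eta, 25)
```

Real wave fields give a nonzero skewness function, which is precisely the asymmetry information the scattering models mentioned above require.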
[Factor structure validity of the social capital scale used at baseline in the ELSA-Brasil study].
Souto, Ester Paiva; Vasconcelos, Ana Glória Godoi; Chor, Dora; Reichenheim, Michael E; Griep, Rosane Härter
2016-07-21
This study aims to analyze the factor structure of the Brazilian version of the Resource Generator (RG) scale, using baseline data from the Brazilian Longitudinal Study of Adult Health (ELSA-Brasil). Cross-validation was performed in three random subsamples. Exploratory factor analysis using exploratory structural equation models was conducted in the first two subsamples to diagnose the factor structure, and confirmatory factor analysis was used in the third to corroborate the model defined by the exploratory analyses. Of the 31 initial items, the best-fitting model included 25 items distributed across three dimensions. All dimensions presented satisfactory convergent validity (values greater than 0.50 for the extracted variance) and precision (values greater than 0.70 for composite reliability). All factor correlations were below 0.85, indicating full discriminant validity. The RG scale presents acceptable psychometric properties and can be used in populations with similar characteristics.
A PRACTICAL ONTOLOGY FOR THE LARGE-SCALE MODELING OF SCHOLARLY ARTIFACTS AND THEIR USAGE
DOE Office of Scientific and Technical Information (OSTI.GOV)
RODRIGUEZ, MARKO A.; BOLLEN, JOHAN; VAN DE SOMPEL, HERBERT
2007-01-30
The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exist, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real-world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.
Narrative of a Cross-Cultural Language Teaching Experience: Conflicts between Theory and Practice
ERIC Educational Resources Information Center
Yang, Shih-hsien
2008-01-01
China's economic performance over the past few decades has put China in a position where it now accounts for one-third of global economic growth, twice as much as America. The large-scale growth of China's economy has attracted attention from businesses and investors worldwide [Morrison (2006). "China's economic conditions."…
26 CFR 1.509(a)-3 - Broadly, publicly supported organizations.
Code of Federal Regulations, 2011 CFR
2011-04-01
...-more-than-one-third support test are designed to insure that an organization which is excluded from... from time to time. At all times, the operations of Y were carried out on a small scale, usually being... the general public. At the time of B's death, no person standing in a relationship to B described in...
Using Self-Assessments to Detect Workshop Success: Do They Work?
ERIC Educational Resources Information Center
D'Eon, Marcel; Sadownik, Leslie; Harrison, Alexandra; Nation, Jill
2008-01-01
An accepted gold standard for measuring change in participant behavior is third-party observation. This method is highly resource intensive, and many small-scale evaluations may not be in a position to use this approach. This study was designed to assess the validity and reliably of aggregated group self-assessments as one way to measure workshop…
WRF/CMAQ AQMEII3 Simulations of US Regional-Scale ...
Chemical boundary conditions are a key input to regional-scale photochemical models. In this study, performed during the third phase of the Air Quality Model Evaluation International Initiative (AQMEII3), we perform annual simulations over North America with chemical boundary conditions prepared from four different global models. Results indicate that the impacts of different boundary conditions are significant for ozone throughout the year and most pronounced outside the summer season. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.
A generalized Uhlenbeck and Beth formula for the third cluster coefficient
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, Sigurd Yves; Lassaut, Monique; Amaya-Tapia, Alejandro, E-mail: jano@icf.unam.mx
2016-11-15
Relatively recently (Amaya-Tapia et al., 2011), we presented a formula for the evaluation of the third Bose fugacity coefficient (leading to the third virial coefficient) in terms of three-body eigenphase shifts, for particles subject to repulsive forces. An analytical calculation for a one-dimensional model, for which the result is known, confirmed the validity of this approach. We now extend the formalism to particles with attractive forces, and therefore must allow for the possibility that the particles have bound states. We thus obtain a true generalization of the famous formula of Uhlenbeck and Beth (Uhlenbeck and Beth, 1936; Beth and Uhlenbeck, 1937) and of Gropper (Gropper, 1936, 1937) for the second virial coefficient. We illustrate our formalism by a calculation, in an adiabatic approximation, of the third cluster coefficient in one dimension, using McGuire's model as in our previous paper, but with attractive forces. The inclusion of three-body bound states is trivial; taking into account states having asymptotically two particles bound and one free is not.
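For orientation, the second-virial formula being generalized expresses the quantum correction in terms of two-body bound states and scattering phase shifts. Schematically (conventions and statistics factors vary between references; the primed sum runs over even partial waves for bosons and odd for fermions):

```latex
b_2 - b_2^{(0)} \;=\; 2^{3/2}\left[\,\sum_{B} e^{-\beta E_B}
  \;+\; \frac{1}{\pi}\,{\sum_{\ell}}' \,(2\ell+1)
  \int_0^\infty \! dk\; e^{-\beta \hbar^2 k^2 / m}\,
  \frac{d\delta_\ell(k)}{dk}\right]
```

The generalization described above replaces the two-body phase shifts δ_ℓ with three-body eigenphase shifts and adds the corresponding three-body bound-state and bound-pair contributions.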
NASA Astrophysics Data System (ADS)
Khomchenko, V. G.; Varepo, L. G.; Glukhov, V. I.; Krivokhatko, E. A.
2017-06-01
A geometric model for the synthesis of third-class lever mechanisms is proposed which allows, by changing the length of the auxiliary link and the position of its fixed hinge, the motion of the working element to be adjusted to cyclograms with different predetermined dwell times. It is noted that, with the help of the proposed model and the corresponding geometric constructions, the best uniform Chebyshev approximation can be achieved over the dwell interval.
Current limiter circuit system
Witcher, Joseph Brandon; Bredemann, Michael V.
2017-09-05
An apparatus comprising a steady state sensing circuit, a switching circuit, and a detection circuit. The steady state sensing circuit is connected to a first, a second and a third node. The first node is connected to a first device, the second node is connected to a second device, and the steady state sensing circuit causes a scaled current to flow at the third node. The scaled current is proportional to a voltage difference between the first and second node. The switching circuit limits an amount of current that flows between the first and second device. The detection circuit is connected to the third node and the switching circuit. The detection circuit monitors the scaled current at the third node and controls the switching circuit to limit the amount of the current that flows between the first and second device when the scaled current is greater than a desired level.
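The detection logic described above is simple to sketch: a scaled current proportional to the voltage difference between the first and second nodes is compared against a threshold, and the switching circuit limits conduction when the threshold is exceeded. A toy behavioral model (function name, proportionality constant, and return values are my assumptions, not from the patent):

```python
def limiter_action(v1, v2, k_scale, i_limit):
    """Return 'limit' when the scaled current k*(V1 - V2) exceeds i_limit.

    v1, v2: voltages at the first and second nodes; k_scale: assumed
    proportionality of the scaled current at the third node; i_limit:
    detection threshold for the switching circuit.
    """
    i_scaled = k_scale * abs(v1 - v2)  # scaled current at the third node
    return "limit" if i_scaled > i_limit else "conduct"
```

A fault that collapses the voltage at the second device enlarges |V1 − V2|, drives the scaled current past the threshold, and trips the limiter.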
Dori, Galit A; Chelune, Gordon J
2004-06-01
The Wechsler Adult Intelligence Scale--Third Edition (WAIS-III; D. Wechsler, 1997a) and the Wechsler Memory Scale--Third Edition (WMS-III; D. Wechsler, 1997b) are 2 of the most frequently used measures in psychology and neuropsychology. To facilitate the diagnostic use of these measures in the clinical decision-making process, this article provides information on education-stratified, directional prevalence rates (i.e., base rates) of discrepancy scores between the major index scores for the WAIS-III, the WMS-III, and between the WAIS-III and WMS-III. To illustrate how such base-rate data can be clinically used, this article reviews the relative risk (i.e., odds ratio) of empirically defined "rare" cognitive deficits in 2 of the clinical samples presented in the WAIS-III--WMS-III Technical Manual (The Psychological Corporation, 1997). ((c) 2004 APA, all rights reserved)
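The relative-risk comparison described here (odds of a "rare" discrepancy in a clinical sample versus a comparison sample) reduces to a 2×2 odds ratio. A sketch with made-up counts (the numbers are illustrative only, not values from the Technical Manual):

```python
def odds_ratio(cases_a, total_a, cases_b, total_b):
    """Odds ratio for showing a 'rare' discrepancy in group A vs. group B."""
    a, b = cases_a, total_a - cases_a  # group A: cases / non-cases
    c, d = cases_b, total_b - cases_b  # group B: cases / non-cases
    return (a * d) / (b * c)

# Hypothetical: 30 of 100 clinical patients vs. 5 of 100 controls show
# a discrepancy exceeding the chosen base-rate cutoff.
example_or = odds_ratio(30, 100, 5, 100)
```

An odds ratio well above 1 indicates that the discrepancy, rare in the standardization sample, is diagnostically informative in the clinical group.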
Watanabe, Colin; Cuellar, Trinna L.; Haley, Benjamin
2016-01-01
Incorporating miRNA-like features into vector-based hairpin scaffolds has been shown to augment small RNA processing and RNAi efficiency. Therefore, defining an optimal, native hairpin context may obviate a need for hairpin-specific targeting design schemes, which confound the movement of functional siRNAs into shRNA/artificial miRNA backbones, or large-scale screens to identify efficacious sequences. Thus, we used quantitative cell-based assays to compare separate third generation artificial miRNA systems, miR-E (based on miR-30a) and miR-3G (based on miR-16-2 and first described in this study) to widely-adopted, first and second generation formats in both Pol-II and Pol-III expression vector contexts. Despite their unique structures and strandedness, and in contrast to first and second-generation RNAi triggers, the third generation formats operated with remarkable similarity to one another, and strong silencing was observed with a significant fraction of the evaluated target sequences within either promoter context. By pairing an established siRNA design algorithm with the third generation vectors we could readily identify targeting sequences that matched or exceeded the potency of those discovered through large-scale sensor-based assays. We find that third generation hairpin systems enable the maximal level of siRNA function, likely through enhanced processing and accumulation of precisely-defined guide RNAs. Therefore, we predict future gains in RNAi potency will come from improved hairpin expression and identification of optimal siRNA-intrinsic silencing properties rather than further modification of these scaffolds. Consequently, third generation systems should be the primary format for vector-based RNAi studies; miR-3G is advantageous due to its small expression cassette and simplified, cost-efficient cloning scheme. PMID:26786363
The Cold Land Processes Experiment (CLPX-1): Analysis and Modelling of LSOS Data (IOP3 Period)
NASA Technical Reports Server (NTRS)
Tedesco, Marco; Kim, Edward J.; Cline, Don; Graf, Tobias; Koike, Toshio; Hardy, Janet; Armstrong, Richard; Brodzik, Mary
2004-01-01
Microwave brightness temperatures at 18.7, 36.5, and 89 GHz collected at the Local-Scale Observation Site (LSOS) of the NASA Cold Land Processes Field Experiment in February 2003 (third Intensive Observation Period) were simulated using a Dense Media Radiative Transfer model (DMRT) based on the Quasi-Crystalline Approximation with Coherent Potential (QCA-CP). Inputs to the model were averaged from LSOS snow pit measurements, although different averages were used for the lower frequencies vs. the highest one, owing to the different penetration depths and to the stratigraphy of the snowpack. Mean snow particle radius was computed as a best-fit parameter. Results show that the model was able to reproduce satisfactorily the brightness temperatures measured by the University of Tokyo's Ground-Based Microwave Radiometer system (GBMR-7). The best-fit snow particle radii were found to fall within the range of values obtained by averaging the field-measured mean particle sizes for the three classes of small, medium, and large grain sizes measured at the LSOS site.
Nuclear constraints on the age of the universe
NASA Technical Reports Server (NTRS)
Schramm, D. N.
1982-01-01
A review is made of how nuclear physics can be used to put rather stringent limits on the age of the universe, and thus on the cosmic distance scale. The age can be estimated to a fair degree of accuracy: no single measurement of the time since the Big Bang gives a specific, unambiguous age, but several methods together fix it with surprising precision. In particular, there are three totally independent techniques for estimating an age, and a fourth technique which involves finding consistency among the other three in the framework of the standard Big Bang cosmological model. The three independent methods are cosmological dynamics, the age of the oldest stars, and radioactive dating. This paper concentrates on the third of these methods and on the consistency technique.
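The radioactive-dating method rests on the decay law N(t) = N₀ e^(−λt); inverting it turns a measured surviving parent fraction into an age. A minimal sketch (my illustration; the isotope choice and numbers are for scale only, and real nucleocosmochronology compares abundance ratios of long-lived species against production predictions):

```python
import math

def decay_age(remaining_fraction, half_life_gyr):
    """Age t = ln(1/f) / lambda, with lambda = ln 2 / t_half.

    remaining_fraction: surviving fraction N/N0 of the parent isotope;
    half_life_gyr: its half-life in Gyr. Returns the elapsed time in Gyr.
    """
    decay_const = math.log(2.0) / half_life_gyr
    return math.log(1.0 / remaining_fraction) / decay_const

# With the ~4.47 Gyr half-life of 238U, a sample retaining half of its
# original parent abundance is one half-life old.
age = decay_age(0.5, 4.468)
```

Because the half-lives of the chronometer species are measured in the laboratory, this technique is independent of both cosmological dynamics and stellar evolution models, which is what makes the three-way consistency check meaningful.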
Method and system for non-linear motion estimation
NASA Technical Reports Server (NTRS)
Lu, Ligang (Inventor)
2011-01-01
A method and system for extrapolating and interpolating a visual signal, including: determining a first motion vector between a first pixel position in a first image and a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining, using a non-linear model, a third motion vector from one of (a) the first and second pixel positions and (b) the second and third pixel positions; and determining the position of a fourth pixel in a fourth image based upon the third motion vector.
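One hypothetical instance of the non-linear model in this abstract is a constant-acceleration (quadratic) fit through the three known positions; the third motion vector then extrapolates the fourth position. This is my illustrative sketch, not the patented implementation:

```python
def extrapolate_fourth(p1, p2, p3):
    """Quadratic (constant-acceleration) extrapolation of a pixel trajectory.

    v1 = p2 - p1 and v2 = p3 - p2 are the first two motion vectors; the
    non-linear model predicts v3 = v2 + (v2 - v1), giving p4 = p3 + v3.
    """
    v1 = (p2[0] - p1[0], p2[1] - p1[1])
    v2 = (p3[0] - p2[0], p3[1] - p2[1])
    v3 = (2 * v2[0] - v1[0], 2 * v2[1] - v1[1])
    return (p3[0] + v3[0], p3[1] + v3[1])
```

When the motion is uniform (v1 == v2) the quadratic model reduces to plain linear extrapolation, which is the behavior a higher-order model should degrade to gracefully.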
Prediction of the turbulent wake with second-order closure
NASA Technical Reports Server (NTRS)
Taulbee, D. B.; Lumley, J. L.
1981-01-01
A turbulent field was envisioned whose energy-containing scales would be Gaussian in the absence of inhomogeneity, gravity, etc. An equation was constructed for a function equivalent to the probability density, whose second moment corresponded to the accepted modeled form of the Reynolds stress equation. The third-moment equations obtained from this were simplified by the assumption of weak inhomogeneity. Calculations with this model are presented, together with interpretations of the results.
What happens between pure hydraulic and buckling mechanisms of blowout fractures?
Nagasao, Tomohisa; Miyamoto, Junpei; Shimizu, Yusuke; Jiang, Hua; Nakajima, Tatsuo
2010-06-01
The present study aims to evaluate how the ratio of the hydraulic and buckling mechanisms affects blowout fracture patterns when these two mechanisms work simultaneously. Three-dimensional computer-aided-design (CAD) models were generated simulating ten skulls. To simulate impact, 1.2 J was applied to the orbital region of these models in four patterns. Pattern 1: all of the energy works to cause the hydraulic effect. Pattern 2: two-thirds of the energy works to cause the hydraulic effect and one-third the buckling effect. Pattern 3: one-third of the energy works to cause the hydraulic effect and two-thirds the buckling effect. Pattern 4: all of the energy works to cause the buckling effect. Using the finite element method, the regions where fractures were theoretically expected to occur were calculated and compared between the four patterns. More fracture damage occurred for Pattern 1 than for Pattern 2, and for Pattern 3 than for Pattern 4. The hydraulic and buckling mechanisms interact with one another; when the two are combined, the orbital walls tend to develop serious fractures. Copyright (c) 2009 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Breakdown of the single-exchange approximation in third-order symmetry-adapted perturbation theory.
Lao, Ka Un; Herbert, John M
2012-03-22
We report third-order symmetry-adapted perturbation theory (SAPT) calculations for several dimers whose intermolecular interactions are dominated by induction. We demonstrate that the single-exchange approximation (SEA) employed to derive the third-order exchange-induction correction, E_exch-ind^(30), fails to quench the attractive nature of the third-order induction, E_ind^(30), leading to one-dimensional potential curves that become attractive rather than repulsive at short intermolecular separations. A scaling equation for E_exch-ind^(30), based on an exact formula for the first-order exchange correction, is introduced to approximate exchange effects beyond the SEA, and qualitatively correct potential energy curves that include third-order induction are thereby obtained. For induction-dominated systems, our results indicate that a "hybrid" SAPT approach, in which a dimer Hartree-Fock calculation is performed in order to obtain a correction for higher-order induction, is necessary not only to obtain quantitative binding energies but also to obtain qualitatively correct potential energy surfaces. These results underscore the need to develop higher-order exchange-induction formulas that go beyond the SEA. © 2012 American Chemical Society
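The scaling correction described in this abstract can be sketched as follows; this specific form is our assumption, analogous to the exchange-scaling ansatz commonly applied to single-exchange (S²) corrections in the SAPT literature, and not necessarily the authors' exact expression:

```latex
% Plausible scaling form (an assumption, not the paper's verbatim formula):
\[
  E^{(30)}_{\text{exch-ind}} \;\approx\;
  \frac{E^{(10)}_{\text{exch}}}{E^{(10)}_{\text{exch}}(S^2)}\,
  E^{(30)}_{\text{exch-ind}}(S^2)
\]
% (S^2) marks quantities evaluated in the single-exchange approximation;
% the ratio of exact to single-exchange first-order exchange energies
% estimates the exchange effects missing beyond the SEA.
```

The idea is that the first-order exchange energy, for which an exact (all-exchange) formula exists, calibrates how strongly the SEA underestimates exchange repulsion at a given geometry.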
Adams-Chapman, Ira; Bann, Carla M; Vaucher, Yvonne E; Stoll, Barbara J
2013-09-01
To evaluate the relationship between abnormal feeding patterns and language performance on the Bayley Scales of Infant Development-Third Edition at 18-22 months adjusted age among a cohort of extremely premature infants. This is a descriptive analysis of 1477 preterm infants born ≤ 26 weeks gestation or enrolled in a clinical trial between January 1, 2006 and March 18, 2008 at a National Institute of Child Health and Human Development Neonatal Research Network center who completed the 18-month neurodevelopmental follow-up assessment. At 18-22 months adjusted age, a comprehensive neurodevelopmental evaluation was performed by certified examiners, including the Receptive and Expressive Language Subscales of the Bayley Scales of Infant Development-Third Edition and a standardized assessment of feeding behaviors and nutritional intake. Data were analyzed using bivariate and multilevel linear and logistic regression modeling. Abnormal feeding behaviors were reported in 193 (13%) of these infants at 18-22 months adjusted age. Abnormal feeding patterns, days of mechanical ventilation, hearing impairment, and Gross Motor Functional Classification System level ≥ 2 each independently predicted lower composite language scores. At 18 months adjusted age, premature infants with a history of feeding difficulties are more likely to have language delay. Neuromotor impairment and days of mechanical ventilation are both important risk factors associated with these outcomes. Copyright © 2013 Mosby, Inc. All rights reserved.
Sensitivity of ocean oxygenation to variations in tropical zonal wind stress magnitude
NASA Astrophysics Data System (ADS)
Ridder, Nina N.; England, Matthew H.
2014-09-01
Ocean oxygenation has been observed to have changed over the past few decades and is projected to change further under global climate change due to an interplay of several mechanisms. In this study we isolate the effect of modified tropical surface wind stress conditions on the evolution of ocean oxygenation in a numerical climate model. We find that ocean oxygenation varies inversely with low-latitude surface wind stress. Approximately one third of this response is driven by sea surface temperature anomalies; the remaining two thirds result from changes in ocean circulation and marine biology. Global mean O2 concentration changes reach maximum values of +4 μM and -3.6 μM in the two most extreme perturbation cases of -30% and +30% wind change, respectively. Localized changes lie between +92 μM under 30% reduced winds and -56 μM for 30% increased winds. Overall, we find that the extent of the global low-oxygen volume varies with the same sign as the wind perturbation; namely, weaker winds reduce the low-oxygen volume on the global scale and vice versa for increased trade winds. We identify two regions, one in the Pacific Ocean off Chile and the other in the Indian Ocean off Somalia, that are of particular importance for the evolution of oxygen minimum zones in the global ocean.
Moura, Octávio; Simões, Mário R; Pereira, Marcelino
2014-02-01
This study analysed the usefulness of the Wechsler Intelligence Scale for Children-Third Edition in identifying specific cognitive impairments that are linked to developmental dyslexia (DD) and the diagnostic utility of the most common profiles in a sample of 100 Portuguese children (50 dyslexic and 50 normal readers) between the ages of 8 and 12 years. Children with DD exhibited significantly lower scores in the Verbal Comprehension Index (except the Vocabulary subtest), Freedom from Distractibility Index (FDI) and Processing Speed Index subtests, with larger effect sizes than normal readers in Information, Arithmetic and Digit Span. The Verbal-Performance IQ discrepancies, Bannatyne pattern and the presence of FDI; Arithmetic, Coding, Information and Digit Span subtests (ACID) and Symbol Search, Coding, Arithmetic and Digit Span subtests (SCAD) profiles (full or partial) in the lowest subtests revealed a low diagnostic utility. However, the receiver operating characteristic curve and the optimal cut-off score analyses of the composite ACID; FDI and SCAD profiles scores showed moderate accuracy in correctly discriminating dyslexic readers from normal ones. These results suggested that in the context of a comprehensive assessment, the Wechsler Intelligence Scale for Children-Third Edition provides some useful information about the presence of specific cognitive disabilities in DD. Practitioner Points. Children with developmental dyslexia revealed significant deficits in the Wechsler Intelligence Scale for Children-Third Edition subtests that rely on verbal abilities, processing speed and working memory. The composite Arithmetic, Coding, Information and Digit Span subtests (ACID); Freedom from Distractibility Index and Symbol Search, Coding, Arithmetic and Digit Span subtests (SCAD) profile scores showed moderate accuracy in correctly discriminating dyslexics from normal readers.
Wechsler Intelligence Scale for Children-Third Edition may provide some useful information about the presence of specific cognitive disabilities in developmental dyslexia. Copyright © 2013 John Wiley & Sons, Ltd.
Development of a Decisional Balance Scale for Young Adult Marijuana Use
Elliott, Jennifer C.; Carey, Kate B.; Scott-Sheldon, Lori A. J.
2010-01-01
This study describes the development and validation of a decisional balance scale for marijuana use in young adults. Scale development was accomplished in four phases. First, 53 participants (70% female, 68% freshman) provided qualitative data that yielded content for an initial set of 47 items. In the second phase, an exploratory factor analysis on the responses of 260 participants (52% female, 68% freshman) revealed two factors, corresponding to pros and cons. Items that did not load well on the factors were omitted, resulting in a reduced set of 36 items. In the third phase, 182 participants (49% female, 37% freshmen) completed the revised scale and an evaluation of factor structure led to scale revisions and model respecification to create a good-fitting model. The final scales consisted of 8 pros (α = 0.91) and 16 cons (α = 0.93), and showed evidence of validity. In the fourth phase (N = 248, 66% female, 70% freshman), we confirmed the factor structure, and provided further evidence for reliability and validity. The Marijuana Decisional Balance Scale enhances our ability to study motivational factors associated with marijuana use among young adults. PMID:21261405
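The internal-consistency values reported above (α = 0.91 for the pros, α = 0.93 for the cons) are Cronbach's alpha. A minimal sketch of that computation on synthetic data; the score matrix and item count below are illustrative, not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Synthetic illustration: 8 items driven by one latent trait plus noise
rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))
scores = trait + 0.1 * rng.normal(size=(200, 8))
print(round(cronbach_alpha(scores), 2))
```

Because the eight synthetic items share a strong common factor, the resulting alpha is close to 1; weaker inter-item correlation drives it down.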
NASA Astrophysics Data System (ADS)
Young, Christopher Ke-shih
2004-11-01
We investigate the BT magnitude scales of the Second and Third Reference Catalogues of Bright Galaxies, finding both scales to be reasonably reliable for 11.5 ≲ BT ≲ 14.0. However, large-scale errors of 0.26 and 0.24 mag per unit mag interval respectively are uncovered for early-type galaxies at the bright ends, whilst even larger ones of 0.74 and 0.36 mag per unit mag interval are found for galaxies of all morphological types at the faint ends. We attribute this situation to several effects already discussed by Young et al. and Young (Paper I), including the application of relatively inflexible growth-curve models that are only in a few specific cases appropriate to the galaxies concerned. Of particular interest to this study though, we find that the apparent profile shapes of giant galaxies in the Virgo direction with cz < 15000 km s-1 tend to be less centrally concentrated the greater their distance. This demonstrates that even for relatively nearby galaxies, the distortion of the overall shapes of light profiles by resolution-degrading effects such as seeing and data smoothing, as originally predicted and modelled by Young & Currie and Young et al., is a significant effect. It is, therefore, not good practice simply to extrapolate the profiles of galaxies of identical intrinsic size and intrinsic profile shape (i.e. identical morphology) by means of the same growth-curve model, unless the galaxies are known a priori to be at the same distance and unless their photometry is of the same angular resolution. We also investigate the total-magnitude scale of the catalogue of photometric types of Prugniel & Héraudeau, finding it to be much more reliable than the BT one. However, we argue that photometric type is really a measure of apparent profile shape (i.e. intrinsic profile shape after scale reduction on account of distance followed by convolution with a seeing disc and often a smoothing function as well).
Strictly, it should therefore only be applicable to comparisons between galaxies that are already known to be at similar distances provided that their photometry is also of similar angular resolution. Clearly, this must complicate attempts to construct quantitative morphological classification schemes for galaxies.
NASA Technical Reports Server (NTRS)
O'Kelly, Burke R.
1954-01-01
Free-flight tests in the transonic speed range utilizing rocket-propelled models have been made on three pairs of 0.11-scale North American F-100 airplane wings having an aspect ratio of 3.47, a taper ratio of 0.308, 45-degree sweepback at the quarter-chord line, and thickness ratios of 9 and 5 percent to investigate the possibility of flutter. Data from tests of two other rocket-propelled models which accidentally fluttered during a drag investigation of the North American F-100 airplane are also presented. The first set of wings (5 percent thick) was tested on a model which was disturbed in pitch by a moving tail and reached a maximum Mach number of 0.85. The wings encountered mild oscillations near the first-bending frequency at high lift coefficients. The second set of wings (9 percent thick) was tested up to a maximum Mach number of 0.95 at two angles of attack provided by small rocket motors installed in the nose of the model. No oscillations resembling flutter were encountered during the coasting flight between separation from the booster and sustainer firing (Mach numbers from 0.86 to 0.82) or during the sustainer firing at accelerations of about 8g up to the maximum Mach number of the test (0.95). The third set of wings was similar to the first set and was tested up to a maximum Mach number of 1.24. A mild flutter at frequencies near the first-bending frequency of the wings was encountered between a Mach number of 1.15 and a Mach number of 1.06 during both accelerating and coasting flight. The two drag models, which were 0.11-scale models of the North American F-100 airplane configuration, reached a maximum Mach number of 1.77. The wings of these models had bending and torsional frequencies which were 40 and 89 percent, respectively, of the calculated scaled frequencies of the full-scale 7-percent-thick wing. Both models experienced flutter of the same type as that experienced by the third set of wings.
NASA Astrophysics Data System (ADS)
Meissner, K. J.; Lippmann, T.; Sen Gupta, A.
2012-06-01
One-third of the world's coral reefs have disappeared over the last 30 years, and a further third is under threat today from various stress factors. The main global stress factors on coral reefs have been identified as changes in sea surface temperature (SST) and changes in surface seawater aragonite saturation (Ωarag). Here, we use a climate model of intermediate complexity, which includes an ocean general circulation model and a fully coupled carbon cycle, in conjunction with present-day observations of inter-annual SST variability to investigate three IPCC representative concentration pathways (RCP 3PD, RCP 4.5, and RCP 8.5), and their impact on the environmental stressors of coral reefs related to open ocean SST and open ocean Ωarag over the next 400 years. Our simulations show that for the RCP 4.5 and 8.5 scenarios, the threshold of 3.3 for zonal and annual mean Ωarag would be crossed in the first half of this century. By year 2030, 66-85% of the reef locations considered in this study would experience severe bleaching events at least once every 10 years. Regardless of the concentration pathway, virtually every reef considered in this study (>97%) would experience severe thermal stress by year 2050. In all our simulations, changes in surface seawater aragonite saturation lead changes in temperatures.
DR Tauri: Temporal variability of the brightness distribution in the potential planet-forming region
NASA Astrophysics Data System (ADS)
Brunngräber, R.; Wolf, S.; Ratzka, Th.; Ober, F.
2016-01-01
Aims: We investigate the variability of the brightness distribution and the changing density structure of the protoplanetary disk around DR Tau, a classical T Tauri star. DR Tau is known for its peculiar variations from the ultraviolet (UV) to the mid-infrared (MIR). Our goal is to constrain the temporal variation of the disk structure based on photometric and MIR interferometric data. Methods: We observed DR Tau with the MID-infrared Interferometric instrument (MIDI) at the Very Large Telescope Interferometer (VLTI) at three epochs, separated by about nine years and two months, respectively. We fit the spectral energy distribution and the MIR visibilities with radiative transfer simulations. Results: We are able to reproduce the spectral energy distribution as well as the MIR visibility for one of the three epochs (third epoch) with a basic disk model. We were able to reproduce the very different visibility curve obtained nine years earlier with a very similar baseline (first epoch), using the same disk model with a smaller scale height. The same density distribution also reproduces the observation made with a higher spatial resolution in the second epoch, i.e. only two months before the third epoch. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile, under the programs 074.C-0342(A) and 092.C-0726(A,B).
Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stander, Nielen; Basudhar, Anirban; Basu, Ushnish
2015-09-14
Ever-tightening regulations on fuel economy, and the likely future regulation of carbon emissions, demand persistent innovation in vehicle design to reduce vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials, by adding material diversity and composite materials, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing plate thickness while retaining sufficient strength and ductility required for durability and safety. A project to develop computational material models for advanced high strength steel is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP), funded by the US Department of Energy. Under this program, new Third Generation Advanced High Strength Steels (3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle Finite Element model. The objectives of the project are to integrate atomistic, microstructural, forming and performance models to create an integrated computational materials engineering (ICME) toolkit for 3GAHSS. The mechanical properties of Advanced High Strength Steels (AHSS) are controlled by many factors, including phase composition and distribution in the overall microstructure, volume fraction, size and morphology of phase constituents as well as stability of the metastable retained austenite phase. The complex phase transformation and deformation mechanisms in these steels make the well-established traditional techniques obsolete, and a multi-scale microstructure-based modeling approach following the ICME strategy was therefore chosen in this project.
Multi-scale modeling as a major area of research and development is an outgrowth of the Comprehensive Test Ban Treaty of 1996, which banned surface testing of nuclear devices [1]. This had the effect that experimental work was reduced from large scale tests to multiscale experiments to provide material models with validation at different length scales. In the subsequent years industry realized that multi-scale modeling and simulation-based design were transferable to the design optimization of any structural system. Horstemeyer [1] lists a number of advantages of the use of multiscale modeling. Among these are: the reduction of product development time by alleviating costly trial-and-error iterations as well as the reduction of product costs through innovations in material, product and process designs. Multi-scale modeling can reduce the number of costly large scale experiments and can increase product quality by providing more accurate predictions. Research tends to be focused on each particular length scale, which enhances accuracy in the long term. This paper serves as an introduction to the LS-OPT and LS-DYNA methodology for multi-scale modeling. It mainly focuses on an approach to integrate material identification using material models of different length scales. As an example, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a homogenized State Variable (SV) model, is discussed and the parameter identification of the individual material models of different length scales is demonstrated. The paper concludes with thoughts on integrating the multi-scale methodology into the overall vehicle design.
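The parameter-identification step can be illustrated in miniature: fit a constitutive law to (synthetic) test data by least squares. The power-law hardening form, the "true" parameters, and the noise level below are all illustrative assumptions, not the project's actual 3GAHSS models or the LS-OPT workflow:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical power-law hardening law: sigma = sigma0 + K * eps**n
# (an illustrative stand-in for a real material model)
def hardening(eps, sigma0, K, n):
    return sigma0 + K * eps**n

# Synthetic "experimental" stress-strain data (MPa vs. plastic strain)
rng = np.random.default_rng(1)
eps = np.linspace(0.005, 0.2, 40)
true_params = (350.0, 900.0, 0.45)
sigma = hardening(eps, *true_params) + rng.normal(0.0, 5.0, eps.size)

# Least-squares identification recovers the parameters from noisy data
popt, _ = curve_fit(hardening, eps, sigma, p0=(300.0, 800.0, 0.5))
print(np.round(popt, 2))
```

Real multi-scale identification nests this idea: parameters of a lower-length-scale model (e.g., crystal plasticity) are calibrated so that its homogenized response matches experiments, then passed up to the macro-scale model.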
Axion as a cold dark matter candidate: analysis to third order perturbation for classical axion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noh, Hyerim; Hwang, Jai-chan; Park, Chan-Gyung, E-mail: hr@kasi.re.kr, E-mail: jchan@knu.ac.kr, E-mail: park.chan.gyung@gmail.com
2015-12-01
We investigate aspects of axion as a coherently oscillating massive classical scalar field by analyzing third-order perturbations in Einstein's gravity in the axion-comoving gauge. The axion fluid has its characteristic pressure term leading to an axion Jeans scale which is cosmologically negligible for a canonical axion mass. Our classically derived axion pressure term in Einstein's gravity is identical to the one derived in the non-relativistic quantum mechanical context in the literature. We present the general relativistic continuity and Euler equations for an axion fluid valid up to third-order perturbation. Equations for axion are exactly the same as those of a zero-pressure fluid in Einstein's gravity except for an axion pressure term in the Euler equation. Our analysis includes the cosmological constant.
Draft Model Curriculum in Nursing Education for Alcohol and Other Drug Abuse.
ERIC Educational Resources Information Center
Naegle, Madeline A.; Burns, Elizabeth M.
This document contains three model curricula in nursing education for alcohol and other drug abuse, one graduate and one baccalaureate level from New York University's (NYU) Division of Nursing, and the third combining graduate and undergraduate level curricula for Ohio State University (OSU). The NYU undergraduate curriculum contains a pilot test…
Visualization and Analysis of Light Pollution: a Case Study in Hong Kong
NASA Astrophysics Data System (ADS)
Wu, B.; Wong, H.
2012-07-01
The effects of light pollution problems in metropolitan areas are investigated in this study. Areas of Hong Kong are used as the source of three typical study cases. One case represents the regional scale, a second represents the district scale, and a third represents the street scale. Two light pollution parameters, Night Sky Brightness (NSB) and Street Light Level (SLL), are the focus of the analyses. Light pollution visualization approaches in relation to the different scales include various light pollution maps. They provide straightforward presentations of the light pollution situations in the study areas. The relationships between light pollution and several socio-economic factors, such as land use, household income, and types of outdoor lighting, are examined at the given scales. Results show that: (1) Land use may be one factor affecting light pollution at the regional scale; (2) A relatively strong correlation exists between light pollution and household income at the district scale; (3) The heaviest light pollution at the street scale is created by spotlights and also the different types of lighting from shops. The impact of the latter depends on the shop's profile and size.
NASA Astrophysics Data System (ADS)
Davoine, X.; Bocquet, M.
2007-03-01
The reconstruction of the Chernobyl accident source term has been previously carried out using core inventories, but also back and forth confrontations between model simulations and activity concentration or deposited activity measurements. The approach presented in this paper is based on inverse modelling techniques. It relies both on the activity concentration measurements and on the adjoint of a chemistry-transport model. The location of the release is assumed to be known, and one is looking for a source term available for long-range transport that depends both on time and altitude. The method relies on the maximum entropy on the mean principle and exploits source positivity. The inversion results are mainly sensitive to two tuning parameters, a mass scale and the scale of the prior errors in the inversion. To overcome this difficulty, we resort to the statistical L-curve method to estimate balanced values for these two parameters. Once this is done, many of the retrieved features of the source are robust within a reasonable range of parameter values. Our results favour the acknowledged three-step scenario, with a strong initial release (26 to 27 April), followed by a weak emission period of four days (28 April-1 May) and again a release, longer but less intense than the initial one (2 May-6 May). The retrieved quantities of iodine-131, caesium-134 and caesium-137 that have been released are in good agreement with the latest reported estimations. Yet, a stronger apportionment of the total released activity is ascribed to the first period and less to the third one. Finer chronological details are obtained, such as a sequence of eruptive episodes in the first two days, likely related to the modulation of the boundary layer diurnal cycle. In addition, the first two-day release surges are found to have effectively reached an altitude up to the top of the domain (5000 m).
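The L-curve idea invoked above can be illustrated on a toy linear inverse problem; the smoothing kernel, noise level, and regularisation range below are illustrative assumptions, not the authors' chemistry-transport adjoint setup:

```python
import numpy as np

# Toy ill-posed problem: recover a "source term" x_true from smoothed,
# noisy observations b = A x + noise, via Tikhonov regularisation.
rng = np.random.default_rng(2)
n = 50
t = np.linspace(0.0, 1.0, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.01)  # smoothing kernel
x_true = np.exp(-((t - 0.4) ** 2) / 0.005)            # unknown source
b = A @ x_true + rng.normal(0.0, 1e-3, n)             # noisy observations

def tikhonov(lam):
    """Minimise ||Ax - b||^2 + lam^2 ||x||^2 via regularised normal equations."""
    x = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
    return x, np.linalg.norm(A @ x - b), np.linalg.norm(x)

# Sweeping lam traces the L-curve: the residual norm grows while the
# solution norm shrinks; the "corner" balances data fit and regularity.
for lam in np.logspace(-4, 0, 5):
    _, res, size = tikhonov(lam)
    print(f"lam={lam:.0e}  residual={res:.2e}  ||x||={size:.2e}")
```

In the paper's setting the two tuning parameters play the role of lam here: too little regularisation fits the measurement noise, too much oversmooths the retrieved emission history.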
Al-Hamed, Faez Saleh; Tawfik, Mohamed Abdel-Monem; Abdelfadil, Ehab; Al-Saleh, Mohammed A Q
2017-06-01
To assess the effect of platelet-rich fibrin (PRF) on the healing process of the alveolar socket after surgical extraction of the mandibular third molars. PubMed, the Cochrane Central Register of Controlled Trials, Scopus, and relevant journals were searched using a combination of specific keywords ("platelet-rich fibrin," "oral surgery," and "third molar"). The final search was conducted on November 2, 2015. Randomized controlled clinical trials, as well as controlled clinical trials, aimed at comparing the effect of PRF versus natural healing after extraction of mandibular third molars were included. Five randomized controlled trials and one controlled clinical trial were included. There were 335 extractions (168 with PRF and 167 controls) in 183 participants. Considerable heterogeneity in study characteristics, outcome variables, and estimated scales was observed. Positive results were generally recorded for pain, trismus, swelling, periodontal pocket depth, soft tissue healing, and incidence of localized osteitis, but not in all studies. However, no meta-analysis could be conducted for such variables because of the different measurement scales used. The qualitative and meta-analysis results showed no significant improvement in bone healing with PRF-treated sockets compared with the naturally healing sockets. Within the limitations of the available evidence, PRF seems to have no beneficial role in bone healing after extraction of the mandibular third molars. Future standardized randomized controlled clinical trials are required to estimate the effect of PRF on socket regeneration. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
A model of partial differential equations for HIV propagation in lymph nodes
NASA Astrophysics Data System (ADS)
Marinho, E. B. S.; Bacelar, F. S.; Andrade, R. F. S.
2012-01-01
A system of partial differential equations is used to model the dissemination of the Human Immunodeficiency Virus (HIV) in CD4+T cells within lymph nodes. Besides diffusion terms, the model also includes a time-delay dependence to describe the time lag required by the immunologic system to provide defenses to new virus strains. The resulting dynamics strongly depends on the properties of the invariant sets of the model, consisting of three fixed points related to the time-independent and spatially homogeneous tissue configurations in healthy and infected states. A region in the parameter space is considered, for which the time dependence of the space-averaged model variables follows the clinical pattern reported for infected patients: a primary infection on a short time scale, followed by a long latency period of almost complete recovery, and a third phase characterized by damped oscillations around a value with a large HIV count. Depending on the value of the diffusion coefficient, the latency time increases with respect to that obtained for the spatially homogeneous version of the model. It is found that the same initial conditions lead to quite different spatial patterns, which depend strongly on the latency interval.
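The qualitative structure described here (diffusion plus a delayed reaction, producing damped oscillations around a fixed point) can be sketched with a much simpler delayed reaction-diffusion equation. The Hutchinson-type delayed logistic used below, its parameters, and its boundary conditions are illustrative stand-ins, not the paper's three-component HIV model:

```python
import numpy as np

# Method-of-lines sketch: u_t = D u_xx + r u(t) (1 - u(t - tau)).
# All parameter values are illustrative.
D, r, tau = 0.01, 1.0, 0.5        # diffusion, growth rate, delay
L, nx, dt, T = 1.0, 51, 1e-3, 10.0
dx = L / (nx - 1)
lag = int(round(tau / dt))

x = np.linspace(0.0, L, nx)
u = 0.1 + 0.01 * np.sin(np.pi * x)         # small inhomogeneous initial state
history = [u.copy() for _ in range(lag)]   # constant pre-history on [-tau, 0)

for _ in range(int(T / dt)):
    u_lag = history[0]                     # state at t - tau
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]      # crude zero-flux boundaries
    u = u + dt * (D * lap + r * u * (1.0 - u_lag))
    history.append(u.copy())
    history.pop(0)

print(f"spatial mean at t={T}: {u.mean():.3f}")  # settles near carrying capacity 1
```

With r*tau = 0.5 the spatially averaged solution overshoots and then relaxes to the equilibrium through damped oscillations, mirroring the third clinical phase described in the abstract; larger delays destabilize the fixed point.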
Modelling the influence of total suspended solids on E. coli removal in river water.
Qian, Jueying; Walters, Evelyn; Rutschmann, Peter; Wagner, Michael; Horn, Harald
2016-01-01
Following sewer overflows, fecal indicator bacteria enter surface waters and may experience different lysis or growth processes. A 1D mathematical model was developed to predict total suspended solids (TSS) and Escherichia coli concentrations based on field measurements in a large-scale flume system simulating a combined sewer overflow. The removal mechanisms of natural inactivation, UV inactivation, and sedimentation were modelled. For the sedimentation process, one, two or three particle size classes were incorporated separately into the model. Moreover, the UV sensitivity coefficient α and natural inactivation coefficient kd were both formulated as functions of TSS concentration. It was observed that the E. coli removal was predicted more accurately by incorporating two particle size classes. However, addition of a third particle size class only improved the model slightly. When α and kd were allowed to vary with the TSS concentration, the model was able to predict E. coli fate and transport at different TSS concentrations accurately and flexibly. A sensitivity analysis revealed that the mechanisms of UV and natural inactivation were more influential at low TSS concentrations, whereas the sedimentation process became more important at elevated TSS concentrations.
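The loss structure described above (first-order natural inactivation kd, UV inactivation via a sensitivity coefficient α, and settling of particle-attached cells in discrete size classes, with kd and α varying with TSS) can be sketched for a well-mixed case. Every rate constant and the linear TSS dependences below are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def ecoli(t, c0, tss, I):
    """E. coli concentration after t days in a well-mixed water column.
    All coefficients are illustrative assumptions."""
    kd = 0.5 + 0.01 * tss               # natural inactivation, 1/d (assumed)
    alpha = 1.2 / (1.0 + 0.05 * tss)    # UV sensitivity shaded by TSS (assumed)
    v = np.array([0.1, 1.0])            # settling velocities, m/d, 2 classes (assumed)
    f = np.array([0.2, 0.1])            # attached fraction per class (assumed)
    h = 0.5                             # water depth, m
    k_total = kd + alpha * I + (f * v / h).sum()
    return c0 * np.exp(-k_total * t)

# Higher TSS: UV kill is shielded, while natural inactivation grows
print(ecoli(t=0.5, c0=1e4, tss=10.0, I=1.0),
      ecoli(t=0.5, c0=1e4, tss=200.0, I=1.0))
```

The paper's finding maps onto this sketch: at low TSS the kd and alpha*I terms dominate k_total, while at high TSS the settling terms (and their additional size classes) become the controlling mechanism.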
Context-Based Urban Terrain Reconstruction from Uav-Videos for Geoinformation Applications
NASA Astrophysics Data System (ADS)
Bulatov, D.; Solbrig, P.; Gross, H.; Wernerus, P.; Repasi, E.; Heipke, C.
2011-09-01
Urban terrain reconstruction has many applications in areas of civil engineering, urban planning, surveillance and defense research. Therefore the needs of covering ad-hoc demand and performing a close-range urban terrain reconstruction with miniaturized and relatively inexpensive sensor platforms are constantly growing. Using (miniaturized) unmanned aerial vehicles, (M)UAVs, represents one of the most attractive alternatives to conventional large-scale aerial imagery. We cover in this paper a four-step procedure of obtaining georeferenced 3D urban models from video sequences. The four steps of the procedure - orientation, dense reconstruction, urban terrain modeling and geo-referencing - are robust, straightforward, and nearly fully automatic. The last two steps - namely, urban terrain modeling from almost-nadir videos and co-registration of models - represent the main contribution of this work and will therefore be covered with more detail. The essential substeps of the third step include digital terrain model (DTM) extraction, segregation of buildings from vegetation, as well as instantiation of building and tree models. The last step is subdivided into quasi-intrasensorial registration of Euclidean reconstructions and intersensorial registration with a geo-referenced orthophoto. Finally, we present reconstruction results from a real data-set and outline ideas for future work.
Innovative mathematical modeling in environmental remediation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeh, Gour T.; National Central Univ.; Univ. of Central Florida
2013-05-01
There are two different ways to model reactive transport: ad hoc and innovative reaction-based approaches. The former, such as the Kd simplification of adsorption, has been widely employed by practitioners, while the latter has been mainly used in scientific communities for elucidating mechanisms of biogeochemical transport processes. It is believed that innovative mechanistic-based models could serve as protocols for environmental remediation as well. This paper reviews the development of a mechanistically coupled fluid flow, thermal transport, hydrologic transport, and reactive biogeochemical model and example applications to environmental remediation problems. Theoretical bases are sufficiently described. Four example problems previously carried out are used to demonstrate how numerical experimentation can be used to evaluate the feasibility of different remediation approaches. The first one involved the application of a 56-species uranium tailing problem to the Melton Branch Subwatershed at Oak Ridge National Laboratory (ORNL) using the parallel version of the model. Simulations were made to demonstrate the potential mobilization of uranium and other chelating agents in the proposed waste disposal site. The second problem simulated a laboratory-scale system to investigate the role of natural attenuation in potential off-site migration of uranium from uranium mill tailings after restoration. It showed the inadequacy of using a single Kd even for a homogeneous medium. The third example simulated laboratory experiments involving extremely high concentrations of uranium, technetium, aluminum, nitrate, and toxic metals (e.g., Ni, Cr, Co). The fourth example modeled microbially-mediated immobilization of uranium in an unconfined aquifer using acetate amendment in a field-scale experiment.
The purposes of these modeling studies were to simulate various mechanisms of mobilization and immobilization of radioactive wastes and to illustrate how to apply reactive transport models for environmental remediation.The second problem simulated laboratory-scale system to investigate the role of natural attenuation in potential off-site migration of uranium from uranium mill tailings after restoration. It showed inadequacy of using a single Kd even for a homogeneous medium.« less
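The Kd simplification critiqued in this abstract treats sorption as a linear, instantaneous equilibrium, so all the chemistry collapses into a single retardation factor. A minimal sketch of that simplification (the parameter values are hypothetical, for illustration only):

```python
# Linear-equilibrium (Kd) sorption folds all chemistry into one constant:
#   R = 1 + (rho_b * Kd) / theta
# where rho_b is bulk density, Kd the distribution coefficient, theta porosity.

def retardation_factor(kd_L_per_kg, bulk_density_kg_per_L, porosity):
    """Retardation of a sorbing solute relative to the pore-water velocity."""
    return 1.0 + bulk_density_kg_per_L * kd_L_per_kg / porosity

# Illustrative (hypothetical) values for a sandy aquifer:
R = retardation_factor(kd_L_per_kg=0.5, bulk_density_kg_per_L=1.6, porosity=0.35)
relative_velocity = 1.0 / R  # solute front moves at 1/R of the water velocity
```

The abstract's point is that a single constant like this cannot represent mechanistic, multi-species biogeochemistry, even in a homogeneous medium.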
The Development of Scientific Thinking in Elementary School: A Comprehensive Inventory
ERIC Educational Resources Information Center
Koerber, Susanne; Mayer, Daniela; Osterhaus, Christopher; Schwippert, Knut; Sodian, Beate
2015-01-01
The development of scientific thinking was assessed in 1,581 second, third, and fourth graders (8-, 9-, 10-year-olds) based on a conceptual model that posits developmental progression from naïve to more advanced conceptions. Using a 66-item scale, five components of scientific thinking were addressed, including experimental design, data…
The presentation builds on the work presented last year at the 14th CMAS meeting and it is applied to the work performed in the context of the AQMEII-HTAP collaboration. The analysis is conducted within the framework of the third phase of AQMEII (Air Quality Model Evaluation Inte...
Finite Element Simulation Methods for Dry Sliding Wear
2008-03-27
effects of wear only occur on a microscopic level (3; 14; 17). A third reason that wear is not well understood is that it involves many different...material or one with a higher coefficient of friction there will be more of a problem with high pressure points. A third possibility is to spread the...For the local model the rail is modeled as a deformable body, and a small, 1 mm, square is taken from the slipper as the submodel. 5.2 The Global
Getting to the Heart of Performance.
ERIC Educational Resources Information Center
Stock, Byron
1996-01-01
Human performance technology (HPT) models are compared. One model groups performance factors by their relation to the performer (internal or external). A second model categorizes factors by which organizational level has the most control over them (executive, managerial, or individual). A third model considers rational and emotional intelligences;…
Developing COMET-Farm and the DayCent Model for California Specialty Crops
NASA Astrophysics Data System (ADS)
Steenwerth, K. L.; Barker, X. Z.; Carlson, M.; Killian, K.; Easter, M.; Swan, A.; Thompson, L.; Williams, S.; Paustian, K.
2016-12-01
Specialty crops are hugely important to the agricultural economy of California, which grows over 400 specialty crops and produces at least a third of the nation's vegetables and more than two thirds of its fruit and nut tree crops. Since the passage of the AB32 Global Warming Solutions Act in 2006, the state has made strong investments in reducing greenhouse gas emissions and developing climate adaptation solutions. Most recently, Governor J. Brown (CA) has issued an executive order to establish reductions to 40% below 1990 levels. While agriculture in California is not regulated for greenhouse gas emissions under AB32, efforts are being made to develop tools to support practices that can enhance soil health and reduce greenhouse gas emissions. USDA-NRCS supports one such tool, known as COMET-Farm, which is intended for future use with incentive programs and soil conservation plans managed by the agency. The underlying model that simulates entity-scale greenhouse gas emissions in COMET-Farm is DayCent. Members of the California Climate Hub are collaborating with the Natural Resource Ecology Laboratory at Colorado State University in Fort Collins, CO to develop DayCent for 15 California specialty crops. These include woody perennials such as stone fruits (almonds and peaches), walnuts, citrus, wine grapes, and raisin and table grapes. Annual specialty crops include cool-season vegetables such as lettuce and broccoli, tomatoes, and strawberries. DayCent has been parameterized for these crops using existing published and unpublished studies. Practice-based information has also been gathered in consultation with growers. Aspects of the model have been developed for woody biomass production and competition between herbaceous vegetation and woody perennial crops. We will report on model performance for these crops and opportunities for model improvement.
Farmer, William H.; Over, Thomas M.; Vogel, Richard M.
2015-01-01
Understanding the spatial structure of daily streamflow is essential for managing freshwater resources, especially in poorly-gaged regions. Spatial scaling assumptions are common in flood frequency prediction (e.g., index-flood method) and the prediction of continuous streamflow at ungaged sites (e.g. drainage-area ratio), with simple scaling by drainage area being the most common assumption. In this study, scaling analyses of daily streamflow from 173 streamgages in the southeastern US resulted in three important findings. First, the use of only positive integer moment orders, as has been done in most previous studies, captures only the probabilistic and spatial scaling behavior of flows above an exceedance probability near the median; negative moment orders (inverse moments) are needed for lower streamflows. Second, assessing scaling by using drainage area alone is shown to result in a high degree of omitted-variable bias, masking the true spatial scaling behavior. Multiple regression is shown to mitigate this bias, controlling for regional heterogeneity of basin attributes, especially those correlated with drainage area. Previous univariate scaling analyses have neglected the scaling of low-flow events and may have produced biased estimates of the spatial scaling exponent. Third, the multiple regression results show that mean flows scale with an exponent of one, low flows scale with spatial scaling exponents greater than one, and high flows scale with exponents less than one. The relationship between scaling exponents and exceedance probabilities may be a fundamental signature of regional streamflow. This signature may improve our understanding of the physical processes generating streamflow at different exceedance probabilities.
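The moment-order scaling analysis described above amounts to a log-log regression of sample moments of daily flow against drainage area, including the negative (inverse) moment orders the study argues are needed for low flows. A minimal sketch under stated assumptions; the synthetic flows below are hypothetical stand-ins for observed records such as the 173-gauge data set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gauges: drainage areas spanning three decades, with daily
# flows generated so that the true scaling exponent theta(k) equals k.
areas = np.logspace(1, 4, 30)                           # km^2
flows = [a * np.exp(rng.normal(0.0, 0.5, 3650)) for a in areas]

def scaling_exponent(order):
    """Slope of log E[Q^k] vs log A -- the spatial scaling exponent theta(k).
    Negative orders (inverse moments) probe the low-flow tail."""
    log_moments = np.log([np.mean(q ** order) for q in flows])
    slope, _ = np.polyfit(np.log(areas), log_moments, 1)
    return slope

# Positive orders characterize high flows, negative orders low flows:
thetas = {k: scaling_exponent(k) for k in (-2, -1, 1, 2)}
```

In the study itself a multiple regression replaces this univariate fit, controlling for basin attributes that covary with drainage area and would otherwise bias the estimated exponents.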
Characteristics of HIV Care and Treatment in PEPFAR-Supported Sites
Filler, Scott; Berruti, Andres A.; Menzies, Nick; Berzon, Rick; Ellerbrock, Tedd V.; Ferris, Robert; Blandford, John M.
2011-01-01
Background The U.S. President’s Emergency Plan for AIDS Relief (PEPFAR) has supported the extension of HIV care and treatment to 2.4 million individuals by September 2009. With increasing resources targeted toward scale-up, it is important to understand the characteristics of current PEPFAR-supported HIV care and treatment sites. Methods Forty-five sites in Botswana, Ethiopia, Nigeria, Uganda, and Vietnam were sampled. Data were collected retrospectively from successive 6-month periods of site operations, through reviews of facility records and interviews with site personnel between April 2006 and March 2007. Facility size and scale-up rate, patient characteristics, staffing models, clinical and laboratory monitoring, and intervention mix were compared. Results Sites added a median of 293 patients per quarter. By the evaluation’s end, sites supported a median of 1,649 HIV patients, 922 of them receiving antiretroviral therapy (ART). Patients were predominantly adult (97.4%) and the majority (96.5%) were receiving regimens based on nonnucleoside reverse transcriptase inhibitors (NNRTIs). The ratios of physicians to patients dropped substantially as sites matured. ART patients were commonly seen monthly or quarterly for clinical and laboratory monitoring, with CD4 counts being taken at 6-month intervals. One-third of sites provided viral load testing. Cotrimoxazole prophylaxis was the most prevalent supportive service. Conclusions HIV treatment sites scaled up rapidly with the influx of resources and technical support through PEPFAR, providing complex health services to progressively expanding patient cohorts. Human resources are stretched thin, and delivery models and intervention mix differ widely between sites. Ongoing research is needed to identify best-practice service delivery models. PMID:21346585
Depressive symptoms in elderly participants of an open university for elderly
Batistoni, Samila Sathler Tavares; Ordonez, Tiago Nascimento; da Silva, Thaís Bento Lima; do Nascimento, Priscila Pascarelli Pedrico; Kissaki, Priscilla Tiemi; Cachioni, Meire
2011-01-01
Although the prevalence of depressive disorders among the elderly is lower than among the younger population, significant symptoms of depression are common in this group. Studies report that participation in social, educational and leisure activities is related to fewer depressive symptoms in this population. Objective The aim of this study was to examine the prevalence of depression among elderly participants of an Open University for the Third Age, in terms of time spent studying. Methods The study had a cross-sectional design and the participation of 95.2% (n=184) of total enrollees in the first half of 2010 in the activities of the Third Age Open University at the School of Arts, Sciences and Humanities of the University of São Paulo. All participants answered a socio-demographic questionnaire and the Geriatric Depression Scale (GDS-15). Results An association was observed between study time of over one semester at the University of the Third Age and a lower rate of depressive symptoms. Conclusion Study time of over one semester was associated with fewer depressive symptoms, acting as a possible protective factor against depression. PMID:29213728
Dimensions of assertiveness: factor analysis of five assertion inventories.
Henderson, M; Furnham, A
1983-09-01
Five self report assertiveness inventories were factor analyzed. In each case two major factors emerged, accounting for approximately one-quarter to a third of the variance. The findings emphasize the multidimensional nature of current measures of assertiveness, and suggest the construction of a more systematic and psychometrically evaluated scale that would yield subscale scores assessing the separate dimensions of assertiveness.
Relationship among Workplace Spirituality, Meaning in Life, and Psychological Well-Being of Teachers
ERIC Educational Resources Information Center
Liang, Jin-long; Peng, Lan-xiang; Zhao, Si-jie; Wu, Ho-tang
2017-01-01
This study set out to analyze the relationship among teachers' workplace spirituality, sense of meaning in life, and psychological well-being. Taking 610 teachers as its subjects, the study employed three scales: one to measure the subjects' sense of workplace spirituality, another to measure their sense of meaning in life, and a third to measure…
Teach For America Teachers: How Long Do They Teach? Why Do They Leave?
ERIC Educational Resources Information Center
Donaldson, Morgaen L.; Johnson, Susan Moore
2011-01-01
A large-scale, nationwide analysis of Teach For America teacher turnover presents a deeper picture of which TFAers stay, which ones leave the profession and some suggestions about why they leave. The authors learned that nearly two-thirds (60.5%) of TFA teachers continue as public school teachers beyond their two-year commitment; more than half…
Chopra, Sanjeev; Kataria, Rashim; Sinha, Virendra Deo
2017-01-01
Background Spinal instrumentation using rods and screws has become the procedure of choice for posterior fixation. Vertebral artery anatomy is highly variable in this region, posing challenges during surgery. Our study used a 3D-printed model to understand the anatomy and variations of the vertebral artery in live patients, thereby providing an accurate preoperative idea of vertebral artery injury risk in these patients and allowing the whole procedure to be rehearsed. Methods Ten patients with developmental craniovertebral junction (CVJ) anomalies who were planned for operative intervention in the Department of Neurosurgery at SMS Hospital from February 2016 to December 2016 were analysed using a 3D-printed model. Results Of the twenty vertebral arteries studied in ten patients, two were hypoplastic, and of these one could not be appreciated on the 3D-printed model. Of the remaining nineteen, thirteen arteries were found to lie outside the joint, three were in the lateral third, one traversed the middle third of the joint and one lay in the medial third. In one patient, the vertebral artery was stretched and traversed horizontally over the joint. Of the ten patients studied, nine had an occipitalised atlas, so the entry of these vertebral arteries into the cranium was classified into the four types given by Wang et al. Conclusions In our study, the 3D-printed model was extremely helpful in analyzing the joints and vertebral arteries preoperatively and acquainting the surgeon with the placement and trajectory of the screws. In our opinion, these models should be included as a basic investigation tool in these patients. PMID:29354734
Evaluation and error apportionment of an ensemble of ...
Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) helping to detect causes of model error, and iii) identifying the processes and scales most urgently requiring dedicated investigation. The analysis is conducted within the framework of the third phase of the Air Quality Model Evaluation International Initiative (AQMEII) and tackles model performance gauging through measurement-to-model comparison, error decomposition and time series analysis of the models' biases for several fields (ozone, CO, SO2, NO, NO2, PM10, PM2.5, wind speed, and temperature). The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while apportioning the error to its constituent parts (bias, variance and covariance) can help to assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the former phases of AQMEII. The application of the error apportionment method to the AQMEII Phase 3 simulations provides several key insights. In addition to reaffirming the strong impact
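The decomposition into bias, variance and covariance parts mentioned above can be sketched with the standard mean-square-error identity. This is a simplified stand-in for the AQMEII apportionment machinery (the spectral separation into long-scale, synoptic, diurnal and intra-day components is omitted):

```python
import numpy as np

def apportion_error(model, obs):
    """Decompose MSE into three parts that sum exactly to the total:
       MSE = (mean bias)^2 + (sigma_m - sigma_o)^2 + 2*sigma_m*sigma_o*(1 - r)
    The first term is systematic bias, the second an amplitude (variance)
    mismatch, and the third a covariance/phase error."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    bias2 = (model.mean() - obs.mean()) ** 2
    variance = (model.std() - obs.std()) ** 2
    r = np.corrcoef(model, obs)[0, 1]
    covariance = 2.0 * model.std() * obs.std() * (1.0 - r)
    return {"bias^2": bias2, "variance": variance, "covariance": covariance,
            "mse": np.mean((model - obs) ** 2)}
```

In the AQMEII workflow a decomposition of this kind is evaluated per timescale, so each error component can be attributed to the process acting on that scale.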
Host Immunity via Mutable Virtualized Large-Scale Network Containers
2016-07-25
and constrain the distributed persistent inside crawlers that have valid credentials to access the web services. The main idea is to add a marker...to each web page URL and use the URL path and user information contained in the marker to help accurately detect crawlers at its earliest stage...more than half of all website traffic, and malicious bots contribute almost one third of the traffic. As one type of bots, web crawlers have been
A road map for natural capitalism.
Lovins, A B; Lovins, L H; Hawken, P
1999-01-01
No one would run a business without accounting for its capital outlays. Yet most companies overlook one major capital component--the value of the earth's ecosystem services. It is a staggering omission; recent calculations place the value of the earth's total ecosystem services--water storage, atmosphere regulation, climate control, and so on--at $33 trillion a year. Not accounting for those costs has led to waste on a grand scale. But now a few farsighted companies are finding powerful business opportunities in conserving resources on a similarly grand scale. They are embarking on a journey toward "natural capitalism," a journey that comprises four major shifts in business practices. The first stage involves dramatically increasing the productivity of natural resources, stretching them as much as 100 times further than they go today. In the second stage, companies adopt closed-loop production systems that yield no waste or toxicity. The third stage requires a fundamental change of business model--from one of selling products to one of delivering services. For example, a manufacturer would sell lighting services rather than lightbulbs, giving both seller and customer an incentive to develop extremely efficient, durable lightbulbs. The last stage involves reinvesting in natural capital to restore, sustain, and expand the planet's ecosystem. Because natural capitalism is both necessary and profitable, it will subsume traditional industrialism, the authors argue, just as industrialism subsumed agrarianism. And the companies that are furthest down the road will have the competitive edge.
Rapid start-up of one-stage deammonification MBBR without addition of external inoculum.
Kanders, Linda; Ling, Daniel; Nehrenheim, Emma
2016-12-01
In recent years, the anammox process has emerged as a useful method for robust and efficient nitrogen removal in wastewater treatment plants (WWTPs). This paper evaluates a one-stage deammonification (nitritation and anammox) start-up using carrier material without using anammox inoculum. A continuous laboratory-scale process was followed by full-scale operation with reject water from the digesters at Bekkelaget WWTP in Oslo, Norway. A third laboratory reactor was run in operational mode to verify the suitability of reject water from thermophilic digestion for the deammonification process. The two start-ups presented were run with indigenous bacterial populations, intermittent aeration and dilution, to favour growth of the anammox bacterial branches. Evaluation was done by chemical and fluorescence in situ hybridization analyses. The results demonstrate that anammox culture can be set up in a one-stage process only using indigenous anammox bacteria and that a full-scale start-up process can be completed in less than 120 days.
Five regional scale models with a horizontal domain covering the European continent and its surrounding seas, one hemispheric and one global scale model participated in an atmospheric mercury modelling intercomparison study. Model-predicted concentrations in ambient air were comp...
Uyar, Asli; Bener, Ayse; Ciray, H Nadir
2015-08-01
Multiple embryo transfers in in vitro fertilization (IVF) treatment increase the number of successful pregnancies while elevating the risk of multiple gestations. IVF-associated multiple pregnancies exhibit significant financial, social, and medical implications. Clinicians need to decide the number of embryos to be transferred considering the tradeoff between successful outcomes and multiple pregnancies. To predict implantation outcome of individual embryos in an IVF cycle with the aim of providing decision support on the number of embryos transferred. Retrospective cohort study. Electronic health records of one of the largest IVF clinics in Turkey. The study data set included 2453 embryos transferred at day 2 or day 3 after intracytoplasmic sperm injection (ICSI). Each embryo was represented with 18 clinical features and a class label, +1 or -1, indicating positive and negative implantation outcomes, respectively. For each classifier tested, a model was developed using two-thirds of the data set, and prediction performance was evaluated on the remaining one-third of the samples using receiver operating characteristic (ROC) analysis. The training-testing procedure was repeated 10 times on randomly split (two-thirds to one-third) data. The relative predictive values of clinical input characteristics were assessed using information gain feature weighting and forward feature selection methods. The naïve Bayes model provided 80.4% accuracy, 63.7% sensitivity, and 17.6% false alarm rate in embryo-based implantation prediction. Multiple embryo implantations were predicted at a 63.8% sensitivity level. Predictions using the proposed model resulted in higher accuracy compared with expert judgment alone (on average, 75.7% and 60.1%, respectively). A machine learning-based decision support system would be useful in improving the success rates of IVF treatment. © The Author(s) 2014.
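The evaluation protocol described above (ten random two-thirds/one-third splits, a naïve Bayes classifier, ROC analysis) can be sketched as follows. The data are synthetic stand-ins, since the actual 18 clinical features are not listed in the abstract; this is an illustration of the protocol, not a reproduction of the study:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Synthetic stand-in for the 2453-embryo data set: 18 clinical features,
# labels +1 / -1 for positive and negative implantation outcomes.
X = rng.normal(size=(2453, 18))
y = np.where(X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2453) > 1.0, 1, -1)

# The paper's protocol: 10 random two-thirds / one-third splits,
# each evaluated by ROC analysis on the held-out third.
aucs = []
for seed in range(10):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=1 / 3, random_state=seed)
    clf = GaussianNB().fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

mean_auc = float(np.mean(aucs))
```

Ranking candidate embryos by the predicted probability of the +1 class is what turns such a classifier into the decision support the authors propose.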
NASA Technical Reports Server (NTRS)
Cheng, Anning; Xu, Kuan-Man
2006-01-01
The abilities of cloud-resolving models (CRMs) with the double-Gaussian based and the single-Gaussian based third-order closures (TOCs) to simulate the shallow cumuli and their transition to deep convective clouds are compared in this study. The single-Gaussian based TOC is fully prognostic (FP), while the double-Gaussian based TOC is partially prognostic (PP). The latter only predicts three important third-order moments while the former predicts all the third-order moments. A shallow cumulus case is simulated by single-column versions of the FP and PP TOC models. The PP TOC improves the simulation of shallow cumulus greatly over the FP TOC by producing more realistic cloud structures. Large differences between the FP and PP TOC simulations appear in the cloud layer of the second- and third-order moments, which are related mainly to the underestimate of the cloud height in the FP TOC simulation. Sensitivity experiments and analysis of probability density functions (PDFs) used in the TOCs show that both the turbulence-scale condensation and higher-order moments are important to realistic simulations of the boundary-layer shallow cumuli. A shallow to deep convective cloud transition case is also simulated by the 2-D versions of the FP and PP TOC models. Both CRMs can capture the transition from the shallow cumuli to deep convective clouds. The PP simulations produce more and deeper shallow cumuli than the FP simulations, but the FP simulations produce larger and wider convective clouds than the PP simulations. The temporal evolutions of cloud and precipitation are closely related to the turbulent transport, the cold pool and the cloud-scale circulation. The large amount of turbulent mixing associated with the shallow cumuli slows down the increase of the convective available potential energy and inhibits the early transition to deep convective clouds in the PP simulation.
When the deep convective clouds fully develop and the precipitation is produced, the cold pools produced by the evaporation of the precipitation are not favorable to the formation of shallow cumuli.
Third generation sfermion decays into Z and W gauge bosons: Full one-loop analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arhrib, Abdesslam; LPHEA, Departement de Physique, Faculte des Sciences-Semlalia, B.P. 2390 Marrakech; Benbrik, Rachid
2005-05-01
The complete one-loop radiative corrections to the decays of third-generation scalar fermions into the gauge bosons Z and W± are considered. We focus on f̃₂ → Z f̃₁ and f̃ᵢ → W± f̃′ⱼ, with f, f′ = t, b. We include SUSY-QCD, QED, and full electroweak corrections. It is found that the electroweak corrections can be of the same order as the SUSY-QCD corrections. The two sets of corrections interfere destructively in some regions of parameter space. The full one-loop correction can reach 10% in some supergravity scenarios, while in a model-independent analysis such as the general minimal supersymmetric standard model, the one-loop correction can reach 20% for large tan β and large trilinear soft breaking terms A_b.
Large-scale Activities Associated with the 2005 Sep. 7th Event
NASA Astrophysics Data System (ADS)
Zong, Weiguo
We present a multi-wavelength study of large-scale activities associated with a significant solar event. On 2005 September 7, a flare classified as bigger than X17 was observed. Combining Hα 6562.8 Å, He I 10830 Å and soft X-ray observations, three large-scale activities were found to propagate over a long distance on the solar surface. 1) The first large-scale activity emanated from the flare site, propagated westward around the solar equator and appeared as sequential brightenings. With the MDI longitudinal magnetic field map, the activity was found to propagate along the magnetic network. 2) The second large-scale activity could be well identified both in He I 10830 Å images and soft X-ray images and appeared as a diffuse emission enhancement propagating away. This activity started later than the first one and was not centered on the flare site. Moreover, a rotation was found along with the bright front propagating away. 3) The third activity was ahead of the second one and was identified as a "winking" filament. The three activities have different origins and were seldom observed in one event. Therefore this study is useful for understanding the mechanism of large-scale activities on the solar surface.
NASA Astrophysics Data System (ADS)
Žukovič, Milan; Kalagov, Georgii
2018-05-01
Critical properties of the two-dimensional XY model involving solely nematic-like terms of the second and third orders are investigated by spin-wave analysis and Monte Carlo simulation. It is found that, even though neither of the nematic-like terms alone can induce magnetic ordering, their coexistence and competition leads to an extended phase of magnetic quasi-long-range order, wedged between the two nematic-like phases induced by the respective couplings. Thus, except for the multicritical point, at which all the phases meet, for any finite value of the coupling parameter ratio there are two phase transitions: one from the paramagnetic phase to one of the two nematic-like phases, followed by another at lower temperatures to the magnetic phase. The finite-size scaling analysis indicates that the phase transitions between the magnetic and nematic-like phases belong to the Ising and three-state Potts universality classes. Inside the competition-induced algebraic magnetic phase, the spin-pair correlation function is found to decay much more slowly than in the standard XY model with purely magnetic interactions. Such a magnetic phase is characterized by an extremely low vortex-antivortex pair density, attaining a minimum close to the point at which the two couplings are of about equal strength.
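A minimal Metropolis sketch of the type of Hamiltonian studied here, with only second- and third-order nematic-like couplings, H = -Σ⟨ij⟩ [Δ cos 2(θᵢ-θⱼ) + (1-Δ) cos 3(θᵢ-θⱼ)]. The lattice size, temperature and coupling ratio Δ are illustrative toy values, far smaller than a production simulation:

```python
import numpy as np

rng = np.random.default_rng(2)

L, T, delta = 8, 0.4, 0.5   # lattice size, temperature, coupling ratio (toy values)
theta = rng.uniform(0, 2 * np.pi, size=(L, L))

def bond_energy(dtheta):
    # Second- plus third-order nematic-like couplings; note that neither term
    # contains cos(dtheta), i.e. there is no direct magnetic interaction.
    return -(delta * np.cos(2 * dtheta) + (1 - delta) * np.cos(3 * dtheta))

def site_energy(th, i, j, angle):
    e = 0.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # periodic neighbours
        e += bond_energy(angle - th[(i + di) % L, (j + dj) % L])
    return e

def metropolis_sweep(th):
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        new = rng.uniform(0, 2 * np.pi)
        dE = site_energy(th, i, j, new) - site_energy(th, i, j, th[i, j])
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            th[i, j] = new

for _ in range(200):
    metropolis_sweep(theta)

# Magnetization modulus; in the quasi-long-range-ordered phase the signature
# is the slow decay of the spin-pair correlation, not a bulk magnetization.
m = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
```

The paper's finite-size scaling analysis would repeat such runs over a range of L and T and examine correlation functions rather than m alone.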
NASA Astrophysics Data System (ADS)
Hohenstein, Edward G.; Parrish, Robert M.; Martínez, Todd J.
2012-07-01
Many approximations have been developed to help deal with the O(N^4) growth of the electron repulsion integral (ERI) tensor, where N is the number of one-electron basis functions used to represent the electronic wavefunction. Of these, the density fitting (DF) approximation is currently the most widely used despite the fact that it is often incapable of altering the underlying scaling of computational effort with respect to molecular size. We present a method for exploiting sparsity in three-center overlap integrals through tensor decomposition to obtain a low-rank approximation to density fitting (tensor hypercontraction density fitting or THC-DF). This new approximation reduces the 4th-order ERI tensor to a product of five matrices, simultaneously reducing the storage requirement as well as increasing the flexibility to regroup terms and reduce scaling behavior. As an example, we demonstrate such a scaling reduction for second- and third-order perturbation theory (MP2 and MP3), showing that both can be carried out in O(N^4) operations. This should be compared to the usual scaling behavior of O(N^5) and O(N^6) for MP2 and MP3, respectively. The THC-DF technique can also be applied to other methods in electronic structure theory, such as coupled-cluster and configuration interaction, promising significant gains in computational efficiency and storage reduction.
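The "product of five matrices" form can be written explicitly as (pq|rs) ≈ Σ_PQ X[p,P] X[q,P] Z[P,Q] X[r,Q] X[s,Q], with the collocation matrix X appearing four times around a small core matrix Z. The sketch below uses random matrices of illustrative size to show how a typical contraction can be performed without ever forming the 4-index tensor; it demonstrates the factorization's structure, not the fitting procedure of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n, M = 10, 20          # basis functions and THC grid points (illustrative sizes)

# THC factors: the 4th-order ERI tensor as a product of five matrices.
X = rng.normal(size=(n, M))   # collocation matrix (used four times)
Z = rng.normal(size=(M, M))   # core matrix

# Reconstruct (pq|rs) explicitly -- done here only to verify the contraction:
eri = np.einsum('pP,qP,PQ,rQ,sQ->pqrs', X, X, Z, X, X)

# A Coulomb-style contraction J[p,q] = sum_rs (pq|rs) D[r,s]
# via the full tensor...
D = rng.normal(size=(n, n))
J_full = np.einsum('pqrs,rs->pq', eri, D)

# ...and via the factors only, never materializing the 4-index object:
tmp = np.einsum('rQ,sQ,rs->Q', X, X, D)        # O(n^2 M)
J_thc = np.einsum('pP,qP,PQ,Q->pq', X, X, Z, tmp)
assert np.allclose(J_full, J_thc)
```

Regrouping contractions around the small core Z is exactly the flexibility that lets THC-DF reduce MP2/MP3 scaling.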
ERIC Educational Resources Information Center
Mayes, Susan Dickerson; Calhoun, Susan L.
2007-01-01
IQ and achievement scores were analyzed for 678 children with attention-deficit/hyperactivity disorder (ADHD; 6-16 years of age, IQ=80) administered the Wechsler Intelligence Scale for Children-Third Edition (WISC-III; n=586) and Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV, n=92). Approximately 76% of children in both samples…
Dynamics and Collapse in a Power System Model with Voltage Variation: The Damping Effect.
Ma, Jinpeng; Sun, Yong; Yuan, Xiaoming; Kurths, Jürgen; Zhan, Meng
2016-01-01
Complex nonlinear phenomena are investigated in a basic power system model of the single-machine-infinite-bus (SMIB) type with a synchronous generator modeled by a classical third-order differential equation including both angle dynamics and voltage dynamics, the so-called flux decay equation. In contrast, the second-order differential equation considering the angle dynamics only is the classical swing equation. Similarities and differences of the dynamics generated by the third-order model and the second-order one are studied. We mainly find that, for positive damping, these two models show quite similar behavior, namely, stable fixed point, stable limit cycle, and their coexistence for different parameters. However, for negative damping, the second-order system can only collapse, whereas for the third-order model more complicated behavior may occur, such as stable fixed point, limit cycle, quasi-periodicity, and chaos. Interesting partial collapse phenomena, with angle instability only and no voltage instability, are also found, including collapse from quasi-periodicity and from chaos. These findings not only provide a basic physical picture for power system dynamics in the third-order model incorporating voltage dynamics, but also enable a deeper understanding of the complex dynamical behavior, potentially leading to the design of oscillation damping in electric power systems.
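A minimal sketch of a third-order (flux decay) SMIB model of the kind described above. The parameter values and the simplified field-voltage equation are illustrative assumptions, not taken from the paper; dropping the third equation (holding E constant) recovers the second-order swing equation:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Third-order single-machine-infinite-bus ("flux decay") model:
#   delta' = omega
#   M omega' = P_m - (E V / x) sin(delta) - D omega
#   T E'    = E_f - E - (x_d - x_dp) (E - V cos(delta)) / x_dp   (simplified)
# All parameter values below are hypothetical, chosen for illustration.
M, D, Pm, V, x, T, Ef, xd, xdp = 1.0, 0.5, 0.8, 1.0, 0.5, 5.0, 1.2, 1.0, 0.3

def rhs(t, state):
    delta, omega, E = state
    ddelta = omega
    domega = (Pm - E * V * np.sin(delta) / x - D * omega) / M
    dE = (Ef - E - (xd - xdp) * (E - V * np.cos(delta)) / xdp) / T
    return [ddelta, domega, dE]

sol = solve_ivp(rhs, (0.0, 100.0), [0.1, 0.0, 1.0], rtol=1e-8)
delta_end, omega_end, E_end = sol.y[:, -1]
# With positive damping D the trajectory settles toward a stable fixed point,
# matching the paper's finding for D > 0; negative D admits richer behavior.
```

Re-running with D < 0 is where the third-order model departs from the swing equation, admitting quasi-periodicity and chaos rather than plain collapse.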
NASA Astrophysics Data System (ADS)
Altmoos, Michael; Henle, Klaus
2010-11-01
Habitat models for animal species are important tools in conservation planning. We assessed the need to consider several scales in a case study for three amphibian and two grasshopper species in the post-mining landscapes near Leipzig (Germany). The two species groups were selected because habitat analyses for grasshoppers are usually conducted on one scale only, whereas amphibians are thought to depend on more than one spatial scale. First, we analysed how the preference for single habitat variables changed across nested scales. Most environmental variables were only significant for a habitat model on one or two scales, with the smallest scale being particularly important. On larger scales, other variables became significant, which cannot be recognized on lower scales. Similar preferences across scales occurred in only 13 out of 79 cases, and in 3 out of 79 cases the preference and avoidance for the same variable were even reversed among scales. Second, we developed habitat models by using a logistic regression on every scale and for all combinations of scales and analysed how the quality of habitat models changed with the scales considered. To achieve a sufficient accuracy of the habitat models with a minimum number of variables, at least two scales were required for all species except Bufo viridis, for which a single scale, the microscale, was sufficient. Only for the European tree frog (Hyla arborea) were at least three scales required. The results indicate that the quality of habitat models increases with the number of surveyed variables and with the number of scales, but costs increase too. Searching for simplifications in multi-scaled habitat models, we suggest that 2 or 3 scales should be a suitable trade-off, when attempting to define a suitable microscale.
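The model-selection procedure described above (a logistic regression on every scale and on all combinations of scales) can be sketched as follows. The species, variables and data here are hypothetical stand-ins; the point is the combinatorial fitting scheme, not the study's actual results:

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)

# Hypothetical presence/absence data with habitat variables measured at
# three nested scales (micro, meso, macro), two variables per scale.
n = 300
X = {s: rng.normal(size=(n, 2)) for s in ("micro", "meso", "macro")}
y = (X["micro"][:, 0] + 0.5 * X["meso"][:, 1] + rng.normal(size=n) > 0).astype(int)

# Fit a logistic habitat model for every combination of scales and score it:
scores = {}
for k in (1, 2, 3):
    for combo in combinations(("micro", "meso", "macro"), k):
        Xc = np.hstack([X[s] for s in combo])
        model = LogisticRegression().fit(Xc, y)
        scores[combo] = roc_auc_score(y, model.predict_proba(Xc)[:, 1])
# Comparing scores across combinations is how the study judged whether
# one, two, or three scales were needed for each species.
```

In practice cross-validated scores (rather than the in-sample AUC used here for brevity) would guard against the larger combinations winning by overfitting alone.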
GLEAM version 3: Global Land Evaporation Datasets and Model
NASA Astrophysics Data System (ADS)
Martens, B.; Miralles, D. G.; Lievens, H.; van der Schalie, R.; de Jeu, R.; Fernandez-Prieto, D.; Verhoest, N.
2015-12-01
Terrestrial evaporation links energy, water and carbon cycles over land and is therefore a key variable of the climate system. However, the global-scale magnitude and variability of the flux, and the sensitivity of the underlying physical processes to changes in environmental factors, are still poorly understood due to limitations in in situ measurements. As a result, several methods have arisen to estimate global patterns of land evaporation from satellite observations. However, these algorithms generally differ in their approach to modelling evaporation, resulting in large differences in their estimates. One of these methods is GLEAM, the Global Land Evaporation: the Amsterdam Methodology. GLEAM estimates terrestrial evaporation based on daily satellite observations of meteorological variables, vegetation characteristics and soil moisture. Since the publication of the first version of the algorithm (2011), the model has been widely applied to analyse trends in the water cycle and land-atmospheric feedbacks during extreme hydrometeorological events. A third version of the GLEAM global datasets is foreseen by the end of 2015. Given the relevance of having a continuous and reliable record of global-scale evaporation estimates for climate and hydrological research, the establishment of an online data portal to host these data for the public is also foreseen. In this new release of the GLEAM datasets, different components of the model have been updated, the most significant change being the revision of the data assimilation algorithm. In this presentation, we will highlight the most important changes to the methodology and present three new GLEAM datasets and their validation against in situ observations and an alternative dataset of terrestrial evaporation (ERA-Land).
Results of the validation exercise indicate that the magnitude and the spatiotemporal variability of the modelled evaporation agree reasonably well with the estimates of ERA-Land and the in situ observations. The revised model also outperforms the original one.
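GLEAM-type models are built around an energy-driven potential-evaporation core, commonly the Priestley–Taylor equation. The sketch below shows only that core formula with illustrative inputs; it is not GLEAM's full stress-factor, interception, and data-assimilation machinery, and none of the values come from GLEAM forcing data.

```python
import math

# Priestley-Taylor potential evaporation (illustrative inputs only)
T = 20.0            # air temperature, deg C (assumed)
Rn_minus_G = 12.0   # available energy, MJ m-2 day-1 (assumed)
alpha_pt = 1.26     # Priestley-Taylor coefficient
gamma_psy = 0.066   # psychrometric constant, kPa K-1
lam_v = 2.45        # latent heat of vaporisation, MJ kg-1

es = 0.6108 * math.exp(17.27 * T / (T + 237.3))   # saturation vapour pressure, kPa
delta = 4098 * es / (T + 237.3) ** 2              # slope of the es curve, kPa K-1
Ep = alpha_pt * delta / (delta + gamma_psy) * Rn_minus_G / lam_v  # mm day-1
```

With these assumed mid-latitude values the potential evaporation comes out near 4 mm per day; GLEAM then scales such a potential rate by a soil-moisture-based stress factor.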
A Harder Rain is Going to Fall: Challenges for Actionable Projections of Extremes
NASA Astrophysics Data System (ADS)
Collins, W.
2014-12-01
Hydrometeorological extremes are projected to increase in both severity and frequency as the Earth's surface continues to warm in response to anthropogenic emissions of greenhouse gases. These extremes will directly affect the availability and reliability of water and other critical resources. The most comprehensive suite of multi-model projections has been assembled under the Coupled Model Intercomparison Project version 5 (CMIP5) and assessed in the Fifth Assessment (AR5) of the Intergovernmental Panel on Climate Change (IPCC). In order for these projections to be actionable, the projections should exhibit consistency and fidelity down to the local length and timescales required for operational resource planning, for example the scales relevant for water allocations from a major watershed. In this presentation, we summarize the length and timescales relevant for resource planning and then use downscaled versions of the IPCC simulations over the contiguous United States to address three questions. First, over what range of scales is there quantitative agreement between the simulated historical extremes and in situ measurements? Second, does this range of scales in the historical and future simulations overlap with the scales relevant for resource management and adaptation? Third, does downscaling enhance the degree of multi-model consistency at scales smaller than the typical global model resolution? We conclude by using these results to highlight requirements for further model development to make the next generation of models more useful for planning purposes.
Boredom proneness in pathological gambling.
Blaszczynski, A; McConaghy, N; Frankova, A
1990-08-01
To test the hypothesis that pathological gamblers seek stimulation as a means of reducing aversive under-aroused states of boredom and/or depression, the Beck Depression Inventory, Zuckerman's Sensation Seeking Scale and a Boredom Proneness Scale were administered to 48 diagnosed pathological gamblers and a control group of 40 family physician patients. Analyses of variance showed that pathological gamblers obtained significantly higher boredom proneness and depression scores than controls. The failure of the Boredom Proneness Scale to correlate with Zuckerman's Boredom Susceptibility subscale suggested that the two scales measure differing dimensions. Results indicated the possible existence of three subtypes of pathological gamblers: one group characterized by boredom, another by depression, and a third by a mixture of both.
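The group comparison reported here is a one-way analysis of variance. A minimal sketch of that computation, using invented scores (not the study's data), shows how the F statistic is formed from between-group and within-group sums of squares:

```python
# One-way ANOVA F statistic from scratch (scores below are invented
# for illustration; they are not the study's data)
def one_way_anova_F(groups):
    k = len(groups)                              # number of groups
    N = sum(len(g) for g in groups)              # total sample size
    grand = sum(sum(g) for g in groups) / N      # grand mean
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (N - k))     # F = MS_between / MS_within

gamblers = [14, 16, 15, 18, 17, 19, 16, 15]      # hypothetical boredom scores
controls = [10, 11, 9, 12, 10, 11, 13, 10]
F = one_way_anova_F([gamblers, controls])
```

With two groups the F statistic is the square of the usual t statistic; a large F here reflects the clearly separated group means in the toy data.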
Disease state fingerprint for fall risk assessment.
Similä, Heidi; Immonen, Milla
2014-01-01
Fall prevention is an important and complex multifactorial challenge, since one third of people over 65 years old fall at least once every year. A novel application of the Disease State Fingerprint (DSF) algorithm is presented for holistic visualization of fall risk factors and for identifying persons with a falls history or a decreased level of physical functioning based on fall risk assessment data. The algorithm is tested with data from 42 older adults who underwent a comprehensive fall risk assessment. Within the study population, the Activities-specific Balance Confidence (ABC) scale score, the Berg Balance Scale (BBS) score and the number of drugs in use were the three most relevant variables that differed between fallers and non-fallers. This study showed that the DSF visualization is beneficial for inspecting an individual's significant fall risk factors, since people have problems in different areas and no single assessment scale is enough to expose all the people at risk.
ERIC Educational Resources Information Center
Rivera, Natasha F.
The federally funded Model Development Program of Bilingual Education served 385 students at one elementary and one middle school in Manhattan (New York) in 1992-93, its third year of operation. Participants included 168 native Spanish-speaking, limited-English-proficient (LEP) students and 217 English-proficient (EP) students, both…
Using a Student-Manipulated Model to Enhance Student Learning in a Large Lecture Class
ERIC Educational Resources Information Center
Gray, Kyle; Steer, David; McConnell, David; Owens, Katharine
2010-01-01
Despite years of formal education, approximately one-third of all undergraduate students still cannot explain the causes of the seasons. Student manipulation of a handheld model is one approach to teaching this concept; however, the large number of students in many introductory classes can dissuade instructors from utilizing this teaching…
Introductory Biology Students’ Conceptual Models and Explanations of the Origin of Variation
Shaw, Neil; Momsen, Jennifer; Reinagel, Adam; Le, Paul; Taqieddin, Ranya; Long, Tammy
2014-01-01
Mutation is the key molecular mechanism generating phenotypic variation, which is the basis for evolution. In an introductory biology course, we used a model-based pedagogy that enabled students to integrate their understanding of genetics and evolution within multiple case studies. We used student-generated conceptual models to assess understanding of the origin of variation. By midterm, only a small percentage of students articulated complete and accurate representations of the origin of variation in their models. Targeted feedback was offered through activities requiring students to critically evaluate peers’ models. At semester's end, a substantial proportion of students significantly improved their representation of how variation arises (though one-third still did not include mutation in their models). Students’ written explanations of the origin of variation were mostly consistent with their models, although less effective than models in conveying mechanistic reasoning. This study contributes evidence that articulating the genetic origin of variation is particularly challenging for learners and may require multiple cycles of instruction, assessment, and feedback. To support meaningful learning of the origin of variation, we advocate instruction that explicitly integrates multiple scales of biological organization, assessment that promotes and reveals mechanistic and causal reasoning, and practice with explanatory models with formative feedback. PMID:25185235
Spatial scaling patterns and functional redundancies in a changing boreal lake landscape
Angeler, David G.; Allen, Craig R.; Uden, Daniel R.; Johnson, Richard K.
2015-01-01
Global transformations extend beyond local habitats; therefore, larger-scale approaches are needed to assess community-level responses and resilience to unfolding environmental changes. Using long-term data (1996–2011), we evaluated spatial patterns and functional redundancies in the littoral invertebrate communities of 85 Swedish lakes, with the objective of assessing their potential resilience to environmental change at regional scales (that is, spatial resilience). Multivariate spatial modeling was used to differentiate groups of invertebrate species exhibiting spatial patterns in composition and abundance (that is, deterministic species) from those lacking spatial patterns (that is, stochastic species). We then determined the functional feeding attributes of the deterministic and stochastic invertebrate species to infer resilience. Between one and three distinct spatial patterns in invertebrate composition and abundance were identified in approximately one-third of the species; the remainder were stochastic. We observed substantial differences in metrics between deterministic and stochastic species. Functional richness and diversity decreased over time in the deterministic group, suggesting a loss of resilience in regional invertebrate communities. However, taxon richness and redundancy increased monotonically in the stochastic group, indicating the capacity of regional invertebrate communities to adapt to change. Our results suggest that a refined picture of spatial resilience emerges if patterns of both the deterministic and stochastic species are accounted for. Spatially extensive monitoring may help increase our mechanistic understanding of community-level responses and resilience to regional environmental change, insights that are critical for developing management and conservation agendas in this current period of rapid environmental transformation.
Assessing the knowledge to practice gap: The asthma practices of community pharmacists
Guirguis, Lisa M.
2017-01-01
Background: Community pharmacists are well positioned to identify patients with poorly controlled asthma and trained to optimize asthma therapy. Yet, over 90% of patients with asthma live with uncontrolled disease. We sought to understand the current state of asthma management in practice in Alberta and explore the potential use of the Chat, Check and Chart (CCC) model to enhance pharmacists’ care for patients with asthma. Methods: An 18-question survey was used to examine pharmacists’ monitoring of asthma control and prior use of the CCC tools. Descriptive statistics were used to characterize the response rate, sample demographics, asthma management and CCC use. Survey validity and reliability were established. Results: One hundred randomly selected pharmacists completed the online survey with a 40% (100/250) response rate. A third of responding pharmacists reported talking to most patients about asthma symptoms and medication, with a greater focus on talking with patients on new prescriptions over those with ongoing therapies. Fewer than 1 in 10 pharmacists routinely talked to most patients about asthma action plans (AAPs). The majority of pharmacists (76%) were familiar with the CCC model, and 83% of those reported that the CCC model influenced their practice anywhere from somewhat (45%) to a great deal (38%). Both scales had good reliability, and factor analysis provided support for scale validity. Conclusions: There was considerable variability in pharmacists’ activities in monitoring asthma. Pharmacists rarely used AAPs. The CCC model had a high level of self-reported familiarity, use and influence among pharmacists. PMID:29317938
Functional reasoning in diagnostic problem solving
NASA Technical Reports Server (NTRS)
Sticklen, Jon; Bond, W. E.; Stclair, D. C.
1988-01-01
This work is one facet of an integrated approach to diagnostic problem solving for aircraft and space systems currently under development. The authors are applying a method of modeling and reasoning about deep knowledge based on a functional viewpoint. The approach recognizes a level of device understanding which is intermediate between the compiled level of typical expert systems and a deep level at which large-scale device behavior is derived from known properties of device structure and component behavior. At this intermediate functional level, a device is modeled in three steps. First, a component decomposition of the device is defined. Second, the functionality of each device/subdevice is abstractly identified. Third, the state sequences which implement each function are specified. Given a functional representation and a set of initial conditions, the functional reasoner acts as a consequence finder. The output of the consequence finder can be utilized in diagnostic problem solving. The paper also discusses ways in which this functional approach may find application in the aerospace field.
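The three modeling steps above (decomposition, functionality, state sequences) can be sketched as simple data structures with a trivial consequence finder. The device, function names, and states below are invented for illustration; the paper's representation language is considerably richer than this.

```python
# Toy sketch of the three-step functional device model plus a consequence
# finder (all names and states are hypothetical illustrations)
from dataclasses import dataclass, field

@dataclass
class Function:
    name: str
    states: list          # step 3: state sequence implementing the function

@dataclass
class Device:
    name: str
    subdevices: list = field(default_factory=list)   # step 1: decomposition
    functions: list = field(default_factory=list)    # step 2: functionality

pump = Device("pump", functions=[Function("pressurize",
                                          ["idle", "spin_up", "at_pressure"])])
valve = Device("valve", functions=[Function("route_flow", ["closed", "open"])])
fuel_system = Device("fuel_system", subdevices=[pump, valve])

def consequences(device, initial_state):
    """Collect states reachable after initial_state in any function's sequence."""
    found = []
    for sub in [device] + device.subdevices:
        for f in sub.functions:
            if initial_state in f.states:
                i = f.states.index(initial_state)
                found.extend(f.states[i + 1:])
    return found
```

Given an initial condition, the finder simply walks forward along each matching state sequence, which is the skeleton of consequence finding at the functional level.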
The third law of thermodynamics and the fractional entropies
NASA Astrophysics Data System (ADS)
Baris Bagci, G.
2016-08-01
We consider the fractional calculus based Ubriaco and Machado entropies and investigate whether they conform to the third law of thermodynamics. The Ubriaco entropy satisfies the third law of thermodynamics in the interval 0 < q ≤ 1, exactly where it is also thermodynamically stable. The Machado entropy, on the other hand, yields a diverging inverse temperature in the region 0 < q ≤ 1, albeit with non-vanishing negative entropy values. Therefore, despite the divergent inverse-temperature behavior, the Machado entropy fails the third law of thermodynamics. We also show that these results are supported by the one-dimensional Ising model with no external field.
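Ubriaco's fractional entropy is commonly written S_q = Σ_i p_i (−ln p_i)^q. A quick numerical check (illustrative only, not the paper's analytic argument) shows the third-law behavior: as the ground-state occupation saturates, S_q vanishes for q in (0, 1].

```python
import math

# Ubriaco fractional entropy S_q = sum_i p_i * (-ln p_i)**q for a two-level
# system; q and the probability values are illustrative choices
def S_q(p, q):
    return sum(pi * (-math.log(pi)) ** q for pi in p if pi > 0)

q = 0.5
# excited-state occupation eps -> 0 mimics T -> 0 for a gapped two-level system
vals = [S_q((1 - eps, eps), q) for eps in (1e-1, 1e-3, 1e-6, 1e-9)]
```

The entropy values decrease monotonically toward zero as the distribution concentrates on the ground state, consistent with the third-law conformity reported for the Ubriaco entropy.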
Properties of the Flight Model Gas Electron Multiplier for the GEMS Mission
NASA Technical Reports Server (NTRS)
Takeuchi, Yoko; Kitaguchi, Takao; Hayato, Asami; Tamagawa, Toru; Iwakiri, Wataru; Asami, Fumi; Yoshikawa, Akifumi; Kaneko, Kenta; Enoto, Teruaki; Black, Kevin;
2014-01-01
We present the gain properties of the gas electron multiplier (GEM) foil in pure dimethyl ether (DME) at 190 Torr. The GEM is one of the micro-pattern gas detectors, and it is adopted as a key part of the X-ray polarimeter for the GEMS mission. The X-ray polarimeter is a time projection chamber operating in pure DME gas at 190 Torr. We describe experimental results for (1) the maximum gain the GEM can achieve without any discharges, (2) the linearity of the energy scale for GEM operation, and (3) the two-dimensional gain variation over the active area. First, our experiment with 6.4 keV X-ray irradiation of the whole GEM area demonstrates that the maximum effective gain is 2 × 10^4 at an applied voltage of 580 V. Second, the measured energy scale is linear among three energies of 4.5, 6.4, and 8.0 keV. Third, the two-dimensional gain mapping test yields a standard deviation of the gain variability of 7% across the active area.
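The quoted effective gain implies a readily estimated anode current. The sketch below assumes a mean energy per ion pair (W-value) in DME of about 23 eV and an arbitrary photon rate; neither value is given in the abstract, so both are labeled assumptions.

```python
# Back-of-envelope anode current implied by the measured effective gain
e = 1.602e-19          # elementary charge, C
E_xray = 6.4e3         # photon energy, eV (from the abstract)
W_dme = 23.0           # ASSUMED mean energy per ion pair in DME, eV
n_primary = E_xray / W_dme            # primary electrons per photon (~278)
gain = 2e4             # maximum effective gain (from the abstract)
rate = 1e4             # ASSUMED photon rate, Hz
I_anode = e * rate * n_primary * gain  # expected anode current, A
```

Under these assumptions a 10 kHz beam of 6.4 keV photons at full gain drives an anode current of order 10 nA, a convenient cross-check when calibrating the readout chain.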
Extragalactic Sources and Propagation of UHECRs
NASA Astrophysics Data System (ADS)
van Vliet, Arjen; Alves Batista, Rafael; Sigl, Günter
With the publicly available astrophysical simulation framework for propagating extraterrestrial UHE particles, CRPropa 3, it is now possible to study realistic UHECR source scenarios including deflections in Galactic and extragalactic magnetic fields in an efficient way. Here we discuss three recent studies that have already been done in that direction. The first one investigates what can be expected in the case of maximum allowed intergalactic magnetic fields. Here is shown that, even if voids contain strong magnetic fields, deflections of protons with energies ≳ 60 EeV from nearby sources might be small enough to allow for UHECR astronomy. The second study looks into several scenarios with a smaller magnetization focusing on large-scale anisotropies. Here is shown that the local source distribution can have a more significant effect on the large-scale anisotropy than the EGMF model. A significant dipole component could, for instance, be explained by a dominant source within 5 Mpc distance. The third study looks into whether UHECRs can come from local radio galaxies. If this is the case it is difficult to reproduce the observed low level of anisotropy. Therefore is concluded that the magnetic field strength in voids in the EGMF model used here is too low and/or there are additional sources of UHECRs that were not taken into account in these simulations.
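The claim that protons above ~60 EeV from nearby sources suffer only mild deflection can be checked with a standard order-of-magnitude estimate. The field strength, coherence length, and source distance below are illustrative choices, and O(1) prefactors in the random-walk formula are dropped.

```python
import math

# Order-of-magnitude deflection of a UHECR proton in a turbulent
# extragalactic field (all input values are illustrative assumptions)
Mpc = 3.086e22                       # metres per megaparsec
E = 60e18 * 1.602e-19                # 60 EeV converted to joules
Z, B = 1, 1e-13                      # proton charge number; 1 nG in tesla
c, e = 2.998e8, 1.602e-19
r_L = E / (Z * e * B * c)            # Larmor radius, metres
D, lam = 10 * Mpc, 1 * Mpc           # source distance, field coherence length
theta = math.sqrt(D * lam) / r_L     # random-walk small-angle deflection, rad
theta_deg = math.degrees(theta)
```

For a 60 EeV proton in a 1 nG field the Larmor radius is tens of Mpc, and the accumulated deflection over 10 Mpc is a few degrees, which is the regime in which source identification remains possible.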
Egeland, Jens
2015-01-01
The Wechsler Adult Intelligence Scale (WAIS) is one of the most frequently used tests among psychologists. In the fourth edition of the test (WAIS-IV), the subtests Digit Span and Letter-Number Sequencing are expanded for better measurement of working memory (WM). However, it is not clear whether the new extended tasks contribute sufficient complexity to be sensitive measures of manipulation WM, nor do we know to what degree WM capacity differs between the visual and the auditory modality because the WAIS-IV only tests the auditory modality. Performance by a mixed sample of 226 patients referred for neuropsychological examination on the Digit Span and Letter-Number Sequencing subtests from the WAIS-IV and on Spatial Span from the Wechsler Memory Scale-Third Edition was analyzed in two confirmatory factor analyses to investigate whether a unitary WM model or divisions based on modality or level/complexity best fit the data. The modality model showed the best fit when analyzing summed scores for each task as well as scores for the longest span. The clinician is advised to apply tests with higher manipulation load and to consider testing visual span as well before drawing conclusions about impaired WM from the WAIS-IV.
What do we mean by the word “Shock”?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Runnels, Scott Robert
From one vantage point, a shock is a continuous but drastic change in state variables that occurs over very small time and length scales. These scales and the associated changes in state variables can be measured experimentally. From another vantage point, a shock is a mathematical singularity consisting of instantaneous changes in state variables. This more mathematical view gives rise to analytical solutions to idealized problems. And from a third vantage point, a shock is a structure in a hydrocode prediction, whose width depends on the simulation's grid resolution and artificial viscosity. These three vantage points can be in conflict when ideas from the associated fields are combined, and yet combining them is an important goal of an integrated modeling program. This presentation explores an example of how models for real materials in the presence of real shocks react to a hydrocode's numerical shocks of finite width. The presentation will include an introduction to plasticity for the novice, a historical view of plasticity algorithms, and a demonstration of how pursuing the meaning of "shock" has resulted in hydrocode improvements, and will conclude by answering some of the questions that arise from that pursuit. After the technical part of the presentation, a few slides advertising LANL's Computational Physics Student Summer Workshop will be shown.
Effects of spatial scale of sampling on food web structure
Wood, Spencer A; Russell, Roly; Hanson, Dieta; Williams, Richard J; Dunne, Jennifer A
2015-01-01
This study asks whether the spatial scale of sampling alters structural properties of food webs and whether any differences are attributable to changes in species richness and connectance with scale. Understanding how different aspects of sampling effort affect ecological network structure is important for both fundamental ecological knowledge and the application of network analysis in conservation and management. Using a highly resolved food web for the marine intertidal ecosystem of the Sanak Archipelago in the Eastern Aleutian Islands, Alaska, we assess how commonly studied properties of network structure differ for 281 versions of the food web sampled at five levels of spatial scale representing six orders of magnitude in area spread across the archipelago. Species (S) and link (L) richness both increased by approximately one order of magnitude across the five spatial scales. Links per species (L/S) more than doubled, while connectance (C) decreased by approximately two-thirds. Fourteen commonly studied properties of network structure varied systematically with spatial scale of sampling, some increasing and others decreasing. While ecological network properties varied systematically with sampling extent, analyses using the niche model and a power-law scaling relationship indicate that for many properties this apparent sensitivity is attributable to the increasing S and decreasing C of webs with increasing spatial scale. As long as the effects of S and C are accounted for, areal sampling bias does not have a special impact on our understanding of many aspects of network structure. However, attention does need to be paid to some properties, such as the fraction of species in loops, which increases more than expected at greater spatial scales of sampling. PMID:26380704
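The scale dependence of connectance described here is easy to reproduce with a toy web. In the sketch below, species names and links are invented for illustration (not Sanak data), and spatial sub-sampling is mimicked by keeping only links whose two endpoints both occur in the sampled species set:

```python
# Toy food web: links are (consumer, resource) pairs; invented for illustration
links = {("orca", "seal"), ("seal", "cod"), ("cod", "herring"),
         ("herring", "zoopl"), ("zoopl", "phyto"), ("cod", "zoopl"),
         ("seal", "herring"), ("orca", "cod"), ("gull", "herring"),
         ("gull", "cod"), ("starfish", "mussel"), ("mussel", "phyto")}

def web_stats(web):
    """Return species richness S, link count L, links per species, connectance."""
    species = {s for pair in web for s in pair}
    S, L = len(species), len(web)
    return S, L, L / S, L / S ** 2            # connectance C = L / S^2

def subweb(web, keep):
    """Spatially restricted sample: both endpoints must be in the sampled set."""
    return {(c, r) for c, r in web if c in keep and r in keep}

full = web_stats(links)
local = web_stats(subweb(links, {"cod", "herring", "zoopl", "phyto"}))
```

Even in this tiny example the spatially restricted sub-web has higher connectance than the full web, mirroring the reported decrease of C with increasing sampling area.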
Vectorlike particles, Z′ and Yukawa unification in F-theory inspired E6
NASA Astrophysics Data System (ADS)
Karozas, Athanasios; Leontaris, George K.; Shafi, Qaisar
2018-03-01
We explore the low energy implications of an F-theory inspired E6 model whose breaking yields, in addition to the MSSM gauge symmetry, a Z′ gauge boson associated with a U(1) symmetry broken at the TeV scale. The zero mode spectrum of the effective low energy theory is derived from the decomposition of the 27 and 27-bar representations of E6, and we parametrise their multiplicities in terms of a minimum number of flux parameters. We perform a two-loop renormalisation group analysis of the gauge and Yukawa couplings of the effective theory and estimate lower bounds on the new vectorlike particles predicted in the model. We compute the third generation Yukawa couplings in an F-theory context assuming an E8 point of enhancement and express our results in terms of the local flux densities associated with the gauge symmetry breaking. We find that their values are compatible with the ones computed by the renormalisation group equations, and we identify points in the parameter space of the flux densities where the t-b-τ Yukawa couplings unify.
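As rough orientation for the renormalisation-group analysis, the familiar one-loop MSSM running of the inverse gauge couplings is sketched below. This is a standard textbook estimate, not the paper's computation: the paper works at two loops and adds vectorlike multiplets, which shift these beta coefficients and introduce thresholds.

```python
import math

# One-loop MSSM gauge coupling running (illustrative textbook sketch)
b = {1: 33 / 5, 2: 1, 3: -3}                 # one-loop beta coefficients
alpha_inv_mz = {1: 59.0, 2: 29.6, 3: 8.5}    # approximate inverse couplings at MZ
MZ = 91.19                                   # Z boson mass, GeV

def alpha_inv(i, mu):
    """Inverse coupling alpha_i^{-1} at scale mu (GeV), one loop."""
    return alpha_inv_mz[i] - b[i] / (2 * math.pi) * math.log(mu / MZ)

# evaluate near the conventional GUT scale ~ 2e16 GeV
vals = [alpha_inv(i, 2e16) for i in (1, 2, 3)]
```

At one loop the three inverse couplings meet near alpha^{-1} ≈ 24 around 2 × 10^16 GeV, which is the unification picture that the extra vectorlike matter of the E6 model must preserve.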
Frederick, David A; Daniels, Elizabeth A; Bates, Morgan E; Tylka, Tracy L
2017-12-01
Findings conflict as to whether thin-ideal media affect women's body satisfaction. Meta-analyses of experimental studies reveal small or null effects, but many women endorse appearance-related media pressure in surveys. Using a novel approach, two samples of women (Ns=656, 770) were exposed to bikini models, fashion models, or control conditions and reported the effects of the images on their body image. Many women reported the fashion/bikini models made them feel worse about their stomachs (57%, 64%), weight (50%, 56%), waist (50%, 56%), overall appearance (50%, 56%), muscle tone (46%, 52%), legs (45%, 48%), thighs (40%, 49%), buttocks (40%, 43%), and hips (40%, 46%). In contrast, few women (1-6%) reported negative effects of control images. In open-ended responses, approximately one-third of women explicitly described negative media effects on their body image. Findings revealed that many women perceive negative effects of thin-ideal media in the immediate aftermath of exposure in experimental settings. Copyright © 2017 Elsevier Ltd. All rights reserved.
Modulational instability and discrete breathers in a nonlinear helicoidal lattice model
NASA Astrophysics Data System (ADS)
Ding, Jinmin; Wu, Tianle; Chang, Xia; Tang, Bing
2018-06-01
We investigate the discrete modulational instability of plane waves and discrete breather modes in a nonlinear helicoidal lattice model, which is described by a discrete nonlinear Schrödinger equation with first-, second-, and third-neighbor coupling. By means of linear stability analysis, we present an analytical expression for the instability growth rate and identify the regions of modulational instability of plane waves. It is shown that the introduction of the third-neighbor coupling significantly affects the shape of the regions of modulational instability. Based on the results of the modulational instability analysis, we predict the existence conditions for stationary breather modes. Furthermore, by making use of the semidiscrete multiple-scale method, we obtain analytical solutions for discrete breather modes and analyze their properties for different types of nonlinearities. Our results show that the discrete breathers obtained are stable for a long time only when the system exhibits repulsive nonlinearity. In addition, it is found that the existence of the stable bright discrete breather is closely related to the presence of the third-neighbor coupling.
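The modulational-instability mechanism can be checked numerically. The sketch below integrates a generic DNLS-type lattice with first-, second-, and third-neighbor coupling and watches a small modulation of a plane wave grow; the coupling constants, nonlinearity strength, and lattice size are illustrative choices, not the paper's parameters or its analytical growth-rate expression.

```python
import math

# DNLS-type lattice with up to third-neighbor coupling (illustrative values)
N, C, gamma = 32, (1.0, 0.2, 0.05), 2.0   # sites, couplings (C1,C2,C3), focusing nonlinearity

def rhs(psi):
    """Right-hand side of i*dpsi/dt + sum_m C_m (psi_{n+m}+psi_{n-m}) + gamma|psi|^2 psi = 0."""
    out = []
    for n in range(N):
        lin = sum(c * (psi[(n + m) % N] + psi[(n - m) % N])
                  for m, c in enumerate(C, 1))
        out.append(1j * (lin + gamma * abs(psi[n]) ** 2 * psi[n]))
    return out

# plane wave of unit amplitude plus a tiny long-wavelength modulation
eps0 = 1e-4
psi = [1.0 + eps0 * math.cos(2 * math.pi * n / N) + 0j for n in range(N)]

dt, steps = 0.01, 800
for _ in range(steps):                    # classic fixed-step RK4
    k1 = rhs(psi)
    k2 = rhs([p + 0.5 * dt * k for p, k in zip(psi, k1)])
    k3 = rhs([p + 0.5 * dt * k for p, k in zip(psi, k2)])
    k4 = rhs([p + dt * k for p, k in zip(psi, k3)])
    psi = [p + dt / 6 * (a + 2 * b + 2 * c + d)
           for p, a, b, c, d in zip(psi, k1, k2, k3, k4)]

growth = max(abs(abs(p) - 1.0) for p in psi) / eps0  # modulation amplification
```

With these focusing-type parameters the modulation grows by well over an order of magnitude during the run; flipping the sign of gamma (the defocusing case) leaves the plane wave stable, consistent with the standard instability criterion.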
Üstün, Yakup; Erdogan, Ozgür; Esen, Emin; Karsli, Ebru Deniz
2003-11-01
The aim of this study was to compare the effects of intravenous administration of 1.5 mg/kg and 3 mg/kg of methylprednisolone sodium succinate (MP) on pain, swelling, and trismus after third molar surgery. Twenty-six healthy patients with symmetrically impacted mandibular third molars were included in this double-blind, cross-over study. Either 1.5 mg/kg or 3 mg/kg of MP was administered intravenously one hour prior to the first operation; at the second operation the other dose was applied. Trismus was determined by measuring maximum interincisal opening, and facial swelling was evaluated using a tape-measure method. Pain was determined using a visual analogue scale and by recording the number of pain pills taken. There was no statistically significant difference in trismus, facial swelling, or pain between the two groups. No clinical benefit of the higher dose of MP was demonstrated.
Nuclear constraints on the age of the universe
NASA Technical Reports Server (NTRS)
Schramm, D. N.
1983-01-01
A review is made of how one can use nuclear physics to put rather stringent limits on the age of the universe and thus the cosmic distance scale. The age can be estimated to a fair degree of accuracy. No single measurement of the time since the Big Bang gives a specific, unambiguous age. There are several methods that together fix the age with surprising precision. In particular, there are three totally independent techniques for estimating an age and a fourth technique which involves finding consistency of the other three in the framework of the standard Big Bang cosmological model. The three independent methods are: cosmological dynamics, the age of the oldest stars, and radioactive dating. This paper concentrates on the third of the three methods, and the consistency technique. Previously announced in STAR as N83-34868
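Radioactive dating, the third method, can be illustrated with the classic U-235/U-238 chronometer. The sketch below assumes a single production event and an r-process production ratio near 1.35; both are illustrative simplifications (realistic chronology uses continuous-production models), and the result is best read as a lower bound on the age of the elements.

```python
import math

# Single-production-event U-235/U-238 chronometer (illustrative inputs:
# production ratio from r-process models, present ratio from measurement)
half_life = {"U235": 0.704, "U238": 4.468}           # half-lives, Gyr
lam = {k: math.log(2) / v for k, v in half_life.items()}  # decay constants, 1/Gyr
ratio_produced = 1.35    # ASSUMED r-process production ratio N235/N238
ratio_now = 0.00725      # present-day natural abundance ratio

# N235/N238 evolves as exp(-(lam235 - lam238) * t); invert for t
t = math.log(ratio_produced / ratio_now) / (lam["U235"] - lam["U238"])
```

Under these assumptions the elapsed time since production is roughly 6 Gyr; spreading the production over Galactic history, as realistic chronometry does, pushes the inferred age of the Galaxy's heavy elements higher, toward the 10 Gyr-plus range consistent with the other two methods.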
Pietarila Graham, Jonathan; Holm, Darryl D; Mininni, Pablo D; Pouquet, Annick
2007-11-01
We compute solutions of the Lagrangian-averaged Navier-Stokes alpha (LANS-α) model for significantly higher Reynolds numbers (up to Re ≈ 8300) than have previously been accomplished. This allows sufficient separation of scales to observe a Navier-Stokes inertial range followed by a second inertial range specific to the LANS-α model. Both fully helical and nonhelical flows are examined, up to Reynolds numbers of approximately 1300. Analysis of the third-order structure function scaling supports the predicted l^3 scaling; it corresponds to a k^(-1) scaling of the energy spectrum for scales smaller than α. The energy spectrum itself shows a different scaling, which goes as k^(+1). This latter spectrum is consistent with the absence of stretching in the subfilter scales due to the Taylor frozen-in hypothesis employed as a closure in the derivation of the LANS-α model. These two scalings are conjectured to coexist in different spatial portions of the flow. The l^3 [E(k) ~ k^(-1)] scaling is subdominant to k^(+1) in the energy spectrum, but the l^3 scaling is responsible for the direct energy cascade, as no cascade can result from motions with no internal degrees of freedom. We demonstrate verification of the prediction for the size of the LANS-α attractor resulting from this scaling. From this, we give a methodology either for arriving at grid-independent solutions for the LANS-α model, or for obtaining a formulation of a large eddy simulation optimal in the context of the alpha models. The fully converged grid-independent LANS-α model may not be the best approximation to a direct numerical simulation of the Navier-Stokes equations, since the minimum error is a balance between truncation errors and the approximation error due to using the LANS-α instead of the primitive equations. Furthermore, the small-scale behavior of the LANS-α model contributes to a reduction of flux at constant energy, leading to a shallower energy spectrum for large α. These small-scale features, however, do not preclude the LANS-α model from reproducing correctly the intermittency properties of high-Reynolds-number flow.
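For orientation, the LANS-α (Camassa-Holm) equations are commonly written in terms of a smoothed velocity u obtained from the transported velocity v; this is the standard form from the literature, sketched here rather than quoted from the abstract:

```latex
\partial_t v + (u \cdot \nabla)\, v + (\nabla u)^{\mathsf{T}} \cdot v
   = -\nabla p + \nu \nabla^2 v ,
\qquad
\nabla \cdot u = 0 ,
\qquad
v = \left(1 - \alpha^2 \nabla^2\right) u .
```

At scales much larger than α the smoothing is negligible, u ≈ v, and Navier-Stokes dynamics is recovered; below α the filtered advection suppresses subfilter stretching, which is the mechanism invoked above for the k^(+1) subfilter spectrum.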
Innovative mathematical modeling in environmental remediation.
Yeh, Gour-Tsyh; Gwo, Jin-Ping; Siegel, Malcolm D; Li, Ming-Hsu; Fang, Yilin; Zhang, Fan; Luo, Wensui; Yabusaki, Steve B
2013-05-01
There are two different ways to model reactive transport: ad hoc and innovative reaction-based approaches. The former, such as the Kd simplification of adsorption, has been widely employed by practitioners, while the latter has been used mainly in scientific communities for elucidating mechanisms of biogeochemical transport processes. It is believed that innovative mechanistic-based models could serve as protocols for environmental remediation as well. This paper reviews the development of a mechanistically coupled fluid flow, thermal transport, hydrologic transport, and reactive biogeochemical model, and example applications to environmental remediation problems. Theoretical bases are sufficiently described. Four example problems carried out previously are used to demonstrate how numerical experimentation can be used to evaluate the feasibility of different remediation approaches. The first involved the application of a 56-species uranium tailing problem to the Melton Branch Subwatershed at Oak Ridge National Laboratory (ORNL) using the parallel version of the model. Simulations were made to demonstrate the potential mobilization of uranium and other chelating agents in the proposed waste disposal site. The second problem simulated a laboratory-scale system to investigate the role of natural attenuation in potential off-site migration of uranium from uranium mill tailings after restoration. It showed the inadequacy of using a single Kd even for a homogeneous medium. The third example simulated laboratory experiments involving extremely high concentrations of uranium, technetium, aluminum, nitrate, and toxic metals (e.g., Ni, Cr, Co). The fourth example modeled microbially mediated immobilization of uranium in an unconfined aquifer using acetate amendment in a field-scale experiment.
The purposes of these modeling studies were to simulate various mechanisms of mobilization and immobilization of radioactive wastes and to illustrate how to apply reactive transport models for environmental remediation. Copyright © 2011 Elsevier Ltd. All rights reserved.
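The Kd simplification criticized above amounts, for a linear sorption isotherm, to a single retardation factor applied to the transport velocity. A minimal sketch (all parameter values are illustrative) shows exactly what that one-parameter description asserts:

```python
# Linear-isotherm (Kd) retardation: what the single-Kd simplification asserts
# (parameter values below are illustrative, not from the reviewed studies)
rho_b = 1.6    # soil bulk density, g/cm^3
theta = 0.35   # volumetric water content (porosity fraction carrying flow)
Kd = 2.0       # distribution coefficient, mL/g: sorbed = Kd * dissolved

R = 1 + rho_b * Kd / theta      # retardation factor
v_water = 0.5                   # pore-water velocity, m/day
v_solute = v_water / R          # retarded solute migration velocity
```

Here a Kd of 2 mL/g slows the solute front by a factor of about 10. The paper's point is that a single constant R cannot represent competitive, pH-dependent, or microbially mediated chemistry, which is why the reaction-based formulation is needed for remediation design.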
Testing higher-order Lagrangian perturbation theory against numerical simulation. 1: Pancake models
NASA Technical Reports Server (NTRS)
Buchert, T.; Melott, A. L.; Weiss, A. G.
1993-01-01
We present results showing an improvement in the accuracy of perturbation theory as applied to cosmological structure formation for a useful range of quasi-linear scales. The Lagrangian theory of gravitational instability of an Einstein-de Sitter dust cosmogony, investigated and solved up to the third order, is compared with numerical simulations. In this paper we study the dynamics of pancake models as a first step. In previous work the accuracy of several analytical approximations for the modeling of large-scale structure in the mildly non-linear regime was analyzed in the same way, allowing for direct comparison of the accuracy of various approximations. In particular, the Zel'dovich approximation (hereafter ZA), as a subclass of the first-order Lagrangian perturbation solutions, was found to provide an excellent approximation to the density field in the mildly non-linear regime (i.e. up to a linear r.m.s. density contrast of sigma approximately 2). The performance of ZA in hierarchical clustering models can be greatly improved by truncating the initial power spectrum (smoothing the initial data). We here explore whether this approximation can be further improved with higher-order corrections in the displacement mapping from homogeneity. We study a single pancake model (truncated power-spectrum with power-index n = -1) using cross-correlation statistics employed in previous work. We found that for all statistical methods used the higher-order corrections improve the results obtained for the first-order solution up to the stage when sigma (linear theory) is approximately 1. While this improvement can be seen at all spatial scales, later stages retain this feature only above a certain scale, which increases with time. However, third order is not much of an improvement over second order at any stage.
The total breakdown of the perturbation approach is observed at the stage where sigma (linear theory) is approximately 2, which corresponds to the onset of hierarchical clustering. This success is found at a considerably higher non-linearity than is usual for perturbation theory. Whether a truncation of the initial power spectrum in hierarchical models retains this improvement will be analyzed in forthcoming work.
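As an illustrative aside, the first-order (Zel'dovich) displacement mapping that the higher-order corrections build on can be sketched in a few lines. The single sine-wave displacement field, its amplitude, and the growth-factor value below are hypothetical choices for illustration, not the paper's actual pancake initial conditions.

```python
import numpy as np

# 1-D sketch of the Zel'dovich approximation (ZA): particles move on
# straight lines from Lagrangian position q to Eulerian position
#   x(q, t) = q + D(t) * S(q),
# where D is the linear growth factor and S the initial displacement
# field. Pancakes (caustics) form where 1 + D * dS/dq -> 0.

def zeldovich_density(q, S, dSdq, D):
    """Eulerian positions and density from the ZA mapping."""
    x = q + D * S            # displacement mapping from homogeneity
    jac = 1.0 + D * dSdq     # 1-D Jacobian of the map q -> x
    rho = 1.0 / np.abs(jac)  # mass conservation: rho * dx = dq
    return x, rho

# Hypothetical single-mode displacement S(q) = A * sin(k q)
L, N = 1.0, 1024
q = np.linspace(0.0, L, N, endpoint=False)
k, A = 2 * np.pi / L, 0.1
S = A * np.sin(k * q)
dSdq = A * k * np.cos(k * q)

x, rho = zeldovich_density(q, S, dSdq, D=0.5)
print(rho.max())  # density is highest where the flow is most compressed
```

Running the mapping at increasing D shows the density peak sharpening until the Jacobian vanishes and a pancake forms.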
Hong, Hailong; Liu, Qiang; Huang, Lei; Gong, Mali
2013-03-25
We demonstrate the improvement of LBO crystal output-surface lifetime and the formation of UV-induced damage during long-term (130 h), high-power (20 W), high-repetition-rate (80 kHz) third-harmonic generation. The output surface was super-polished to sub-nanometer roughness (RMS surface roughness < 0.6 nm). The surface lifetime was improved more than 20-fold compared with as-polished surfaces (RMS surface roughness 4.0-8.0 nm). The damage is attributed to thermal effects resulting from impurity absorption. It was also verified that the impurities originated in part from UV-induced deposition.
Exploration of bounded motion near binary systems comprised of small irregular bodies
NASA Astrophysics Data System (ADS)
Chappaz, Loic; Howell, Kathleen C.
2015-10-01
To investigate the behavior of a spacecraft near a pair of irregular bodies, consider a three-body configuration (one massless). Two massive bodies, P_1 and P_2, form the primary system; each primary is modeled as a sphere or an ellipsoid. Two primary configurations are addressed: `synchronous' and `non-synchronous'. Concepts and tools similar to those applied in the circular restricted three-body problem are exploited to construct periodic trajectories for a third body in synchronous systems. In non-synchronous systems, however, the search for third body periodic orbits is complicated by several factors. The mathematical model for the third-body motion is now time-variant and the motion of P_2 is not trivial.
ERIC Educational Resources Information Center
Scrivener, Susan; Coghlan, Erin
2011-01-01
Only one-third of all students who enter community colleges with the intent to earn a degree or certificate actually meet this goal within six years. MDRC launched the Opening Doors Demonstration in 2003--the first large-scale random assignment study in a community college setting--to tackle this problem. Partnering with six community colleges,…
Noise generated by quiet engine fans. 1: Fan B
NASA Technical Reports Server (NTRS)
Montegani, F. J.
1972-01-01
Acoustical tests of full-scale fans for jet engines are presented. The fans are described and some aerodynamic operating data are given. Far-field noise around the fan was measured for a variety of configurations over a range of operating conditions. Complete results of one-third octave band analysis are presented in tabular form. Power spectra and sideline perceived noise levels are included.
Neural-Network Object-Recognition Program
NASA Technical Reports Server (NTRS)
Spirkovska, L.; Reid, M. B.
1993-01-01
HONTIOR computer program implements third-order neural network exhibiting invariance under translation, change of scale, and in-plane rotation. Invariance incorporated directly into architecture of network. Only one view of each object needed to train network for two-dimensional-translation-invariant recognition of object. Also used for three-dimensional-transformation-invariant recognition by training network on only one set of out-of-plane rotated views. Written in C language.
Bazo-Alvarez, Juan Carlos; Bazo-Alvarez, Oscar Alfredo; Aguila, Jeins; Peralta, Frank; Mormontoy, Wilfredo; Bennett, Ian M
2016-01-01
Our aim was to evaluate the psychometric properties of the FACES-III among Peruvian high school students. This is a psychometric cross-sectional study. Probabilistic sampling was applied, defined by three stages: stratum one (school), stratum two (grade) and cluster (section). The participants were 910 adolescent students of both sexes, between 11 and 18 years of age. The instrument under study was Olson's FACES-III. The analysis included a review of the structure/construct validity of the measure by factor analysis and an assessment of internal consistency (reliability). The real-cohesion scale had moderately high reliability (Ω=.85) while the real-flexibility scale had moderate reliability (Ω=.74). The reliability of the ideal-cohesion scale was moderately high (Ω=.89), as was that of the ideal-flexibility scale (Ω=.86). Construct validity was confirmed by the goodness of fit of a two-factor model (cohesion and flexibility) with 10 items each [Adjusted goodness of fit index (AGFI) = 0.96; Expected Cross Validation Index (ECVI) = 0.87; Normed fit index (NFI) = 0.93; Goodness of fit index (GFI) = 0.97; Root mean square error of approximation (RMSEA) = 0.06]. FACES-III has sufficient reliability and validity to be used with Peruvian adolescents for the purpose of group or individual assessment.
Upscaling: Effective Medium Theory, Numerical Methods and the Fractal Dream
NASA Astrophysics Data System (ADS)
Guéguen, Y.; Ravalec, M. Le; Ricard, L.
2006-06-01
Upscaling is a major issue regarding mechanical and transport properties of rocks. This paper examines three issues relative to upscaling. The first one is a brief overview of Effective Medium Theory (EMT), which is a key tool to predict average rock properties at a macroscopic scale in the case of a statistically homogeneous medium. EMT is of particular interest in the calculation of elastic properties. As discussed in this paper, EMT can thus provide a possible way to perform upscaling, although it is by no means the only one, and in particular it is irrelevant if the medium does not adhere to statistical homogeneity. This last circumstance is examined in part two of the paper. We focus on the example of constructing a hydrocarbon reservoir model. Such a construction is a required step in the process of making reasonable predictions for oil production. Taking into account rock permeability, lithological units and various structural discontinuities at different scales is part of this construction. The result is that stochastic reservoir models are built that rely on various numerical upscaling methods. These methods are reviewed. They provide techniques which make it possible to deal with upscaling on a general basis. Finally, a last case in which upscaling is trivial is considered in the third part of the paper. This is the fractal case. Fractal models have become popular precisely because they are free of the assumption of statistical homogeneity and yet do not involve numerical methods. It is suggested that using a physical criterion as a means to discriminate whether fractality is a dream or reality would be more satisfactory than relying on a limited data set alone.
Chamberlain, Diane; Williams, Allison; Stanley, David; Mellor, Peter; Cross, Wendy; Siegloff, Lesley
2016-10-01
Nursing students will graduate into stressful workplace environments, and resilience is an essential acquired ability for surviving the workplace. Few studies have explored the relationship between resilience and the degree of innate dispositional mindfulness, compassion, compassion fatigue and burnout in nursing students, including those who find themselves needing to work in addition to their academic responsibilities. This paper investigates the predictors of resilience, including dispositional mindfulness and employment status, of third-year nursing students from three Australian universities. Participants were 240 undergraduate, third-year nursing students. Participants completed a resilience measure (Connor-Davidson Resilience Scale, CD-RISC), a measure of dispositional mindfulness (Cognitive and Affective Mindfulness Scale Revised, CAMS-R) and a measure of professional quality of life (The Professional Quality of Life Scale version 5, PROQOL5), covering compassion satisfaction, compassion fatigue and burnout. An observational quantitative successive independent samples survey design was employed. A stepwise linear regression was used to evaluate the extent to which each predictive variable was related to resilience. The predictive model explained 57% of the variance in resilience. The dispositional mindfulness subset acceptance made the strongest contribution, followed by the expectation of graduate nurse transition programme acceptance, with the dispositional mindfulness total score and employment greater than 20 hours per week making the smallest contributions. This was a resilient group of nursing students who rated highly on dispositional mindfulness and exhibited hopeful and positive aspirations for obtaining a position in a competitive graduate nurse transition programme after graduation.
Overby, Megan; Carrell, Thomas; Bernthal, John
2007-10-01
This study examined 2nd-grade teachers' perceptions of the academic, social, and behavioral competence of students with speech sound disorders (SSDs). Forty-eight 2nd-grade teachers listened to 2 groups of sentences differing by intelligibility and pitch but spoken by a single 2nd grader. For each sentence group, teachers rated the speaker's academic, social, and behavioral competence using an adapted version of the Teacher Rating Scale of the Self-Perception Profile for Children (S. Harter, 1985) and completed 3 open-ended questions. The matched-guise design controlled for confounding speaker and stimuli variables that were inherent in prior studies. Statistically significant differences in teachers' expectations of children's academic, social, and behavioral performances were found between moderately intelligible and normal intelligibility speech. Teachers associated moderately intelligible low-pitched speech with more behavior problems than moderately intelligible high-pitched speech or either pitch with normal intelligibility. One third of the teachers reported that they could not accurately predict a child's school performance based on the child's speech skills, one third of the teachers causally related school difficulty to SSD, and one third of the teachers made no comment. Intelligibility and speaker pitch appear to be speech variables that influence teachers' perceptions of children's school performance.
PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deelman, Ewa; Carothers, Christopher; Mandal, Anirban
Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.
Parallel Computation of the Regional Ocean Modeling System (ROMS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, P; Song, Y T; Chao, Y
2005-04-05
The Regional Ocean Modeling System (ROMS) is a regional ocean general circulation modeling system solving the free-surface, hydrostatic, primitive equations over varying topography. It is free software distributed world-wide for studying both complex coastal ocean problems and the basin-to-global scale ocean circulation. The original ROMS code could only be run on shared-memory systems. With the increasing need to simulate larger model domains with finer resolutions and on a variety of computer platforms, there is a need in the ocean-modeling community to have a ROMS code that can be run on any parallel computer ranging from 10 to hundreds of processors. Recently, we have explored parallelization for ROMS using the MPI programming model. In this paper, an efficient parallelization strategy for such a large-scale scientific software package, based on an existing shared-memory computing model, is presented. In addition, scientific applications and data-performance issues on a couple of SGI systems, including Columbia, the world's third-fastest supercomputer, are discussed.
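As an illustrative aside, the kind of 2-D block (tile) domain decomposition that underlies an MPI parallelization of a structured-grid ocean model can be sketched as follows. This is a generic sketch, not ROMS's actual tiling code; function names and grid sizes are hypothetical.

```python
# The global (ny, nx) grid is split into ntile_y x ntile_x tiles, one per
# MPI process; in a real code each tile would also carry a halo of ghost
# points exchanged with its neighbours every time step.

def tile_bounds(n, ntiles, rank):
    """Start/stop indices of tile `rank` when n points are split evenly."""
    base, extra = divmod(n, ntiles)
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    return start, stop

def decompose(ny, nx, ntile_y, ntile_x):
    """Interior index ranges for every tile in an ntile_y x ntile_x layout."""
    tiles = []
    for j in range(ntile_y):
        for i in range(ntile_x):
            tiles.append((tile_bounds(ny, ntile_y, j),
                          tile_bounds(nx, ntile_x, i)))
    return tiles

# A 100 x 130 grid on a 2 x 3 process layout (hypothetical sizes)
tiles = decompose(100, 130, 2, 3)
for (y0, y1), (x0, x1) in tiles:
    print(f"rows {y0}:{y1}  cols {x0}:{x1}")
```

Because the tiles partition the grid exactly, each process owns a disjoint interior region and communication is confined to the halo exchange.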
Equivalence principle implications of modified gravity models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hui, Lam; Nicolis, Alberto; Stubbs, Christopher W.
2009-11-15
Theories that attempt to explain the observed cosmic acceleration by modifying general relativity all introduce a new scalar degree of freedom that is active on large scales, but is screened on small scales to match experiments. We demonstrate that if such screening occurs via the chameleon mechanism, such as in f(R) theory, it is possible to have order-unity violation of the equivalence principle, despite the absence of explicit violation in the microscopic action. Namely, extended objects such as galaxies or constituents thereof do not all fall at the same rate. The chameleon mechanism can screen the scalar charge for large objects but not for small ones (large/small is defined by the depth of the gravitational potential and is controlled by the scalar coupling). This leads to order-one fluctuations in the ratio of the inertial mass to gravitational mass. We provide derivations in both Einstein and Jordan frames. In Jordan frame, it is no longer true that all objects move on geodesics; only unscreened ones, such as test particles, do. In contrast, if the scalar screening occurs via strong coupling, such as in the Dvali-Gabadadze-Porrati braneworld model, equivalence principle violation occurs at a much reduced level. We propose several observational tests of the chameleon mechanism: 1. small galaxies should accelerate faster than large galaxies, even in environments where dynamical friction is negligible; 2. voids defined by small galaxies would appear larger compared to standard expectations; 3. stars and diffuse gas in small galaxies should have different velocities, even if they are on the same orbits; 4. lensing and dynamical mass estimates should agree for large galaxies but disagree for small ones. We discuss possible pitfalls in some of these tests. The cleanest is the third one, where the mass estimate from HI rotational velocity could exceed that from stars by 30% or more.
To avoid blanket screening of all objects, the most promising place to look is in voids.
Measurement of the Top Quark Mass in the All Hadronic Channel at the Tevatron
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lungu, Gheorghe
2007-01-01
This study presents a measurement of the top quark mass in the all-hadronic channel of the top quark pair production mechanism, using 1 fb⁻¹ of pp̄ collisions at √s = 1.96 TeV collected at the Collider Detector at Fermilab (CDF). A few novel techniques were used in this measurement. A template technique was used to simultaneously determine the mass of the top quark and the energy scale of the jets. Two sets of distributions were parameterized as a function of the top quark mass and the jet energy scale. One set of distributions is built from the event-by-event reconstructed top masses, determined using the Standard Model matrix element for the tt̄ all-hadronic process. This set is sensitive to changes in the value of the top quark mass. The other set of distributions is sensitive to changes in the scale of jet energies and is built from the invariant mass of pairs of light-flavor jets, providing an in situ calibration of the jet energy scale. The energy scale of the measured jets in the final state is expressed in units of its uncertainty, sigma_c. The measured mass of the top quark is 171.1 ± 3.7 (stat.) ± 2.1 (syst.) GeV/c², which to date represents the most precise mass measurement in the all-hadronic channel and the third best overall.
Advances in understanding, models and parameterisations of biosphere-atmosphere ammonia exchange
NASA Astrophysics Data System (ADS)
Flechard, C. R.; Massad, R.-S.; Loubet, B.; Personne, E.; Simpson, D.; Bash, J. O.; Cooter, E. J.; Nemitz, E.; Sutton, M. A.
2013-03-01
Atmospheric ammonia (NH3) dominates global emissions of total reactive nitrogen (Nr), while emissions from agricultural production systems contribute about two-thirds of global NH3 emissions; the remaining third emanates from oceans, natural vegetation, humans, wild animals and biomass burning. On land, NH3 emitted from the various sources eventually returns to the biosphere by dry deposition to sink areas, predominantly semi-natural vegetation, and by wet and dry deposition as ammonium (NH4+) to all surfaces. However, the land/atmosphere exchange of gaseous NH3 is in fact bi-directional over unfertilized as well as fertilized ecosystems, with periods and areas of emission and deposition alternating in time (diurnal, seasonal) and space (patchwork landscapes). The exchange is controlled by a range of environmental factors, including meteorology, surface layer turbulence, thermodynamics, air and surface heterogeneous-phase chemistry, canopy geometry, plant development stage, leaf age, organic matter decomposition, soil microbial turnover, and, in agricultural systems, by fertilizer application rate, fertilizer type, soil type, crop type, and agricultural management practices. We review the range of processes controlling NH3 emission and uptake in the different parts of the soil-canopy-atmosphere continuum, with NH3 emission potentials defined at the substrate and leaf levels by different [NH4+] / [H+] ratios (Γ). Surface/atmosphere exchange models for NH3 are necessary to compute the temporal and spatial patterns of emissions and deposition at the soil, plant, field, landscape, regional and global scales, in order to assess the multiple environmental impacts of air-borne and deposited NH3 and NH4+. Models of soil/vegetation/atmosphere NH3 exchange are reviewed from the substrate and leaf scales to the global scale. They range from simple steady-state, "big leaf" canopy resistance models, to dynamic, multi-layer, multi-process, multi-chemical species schemes.
Their level of complexity depends on their purpose, the spatial scale at which they are applied, the current level of parameterisation, and the availability of the input data they require. State-of-the-art solutions for determining the emission/sink Γ potentials through the soil/canopy system include coupled, interactive chemical transport models (CTM) and soil/ecosystem modelling at the regional scale. However, it remains a matter for debate to what extent realistic options for future regional and global models should be based on process-based mechanistic versus empirical and regression-type models. Further discussion is needed on the extent and timescale by which new approaches can be used, such as integration with ecosystem models and satellite observations.
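As an illustrative aside, the bi-directional exchange reviewed above is often summarized in a compensation-point form: the net flux follows the gradient between the air concentration and a canopy compensation point, which itself derives from the emission potential Γ = [NH4+]/[H+] through temperature-dependent Henry/dissociation equilibria. The sketch below uses that generic picture with illustrative numbers, not any specific published parameterisation.

```python
# Minimal compensation-point flux sketch. Concentrations chi_a (air) and
# chi_c (canopy compensation point) are in ug m-3; Ra and Rb are the
# aerodynamic and quasi-laminar boundary-layer resistances in s m-1.
# Sign convention: positive flux = emission, negative = deposition.

def nh3_flux(chi_a, chi_c, Ra, Rb):
    """Net NH3 flux (ug m-2 s-1) from a two-resistance gradient model."""
    return (chi_c - chi_a) / (Ra + Rb)

# Low compensation point (e.g. semi-natural vegetation): net deposition
print(nh3_flux(chi_a=1.0, chi_c=0.0, Ra=30.0, Rb=20.0))

# High compensation point (e.g. shortly after fertilization): net emission
print(nh3_flux(chi_a=0.5, chi_c=2.0, Ra=30.0, Rb=20.0))
```

The sign of the flux flips as chi_c crosses chi_a, which is exactly the diurnal and seasonal alternation between emission and deposition described in the abstract.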
Advances in understanding, models and parameterizations of biosphere-atmosphere ammonia exchange
NASA Astrophysics Data System (ADS)
Flechard, C. R.; Massad, R.-S.; Loubet, B.; Personne, E.; Simpson, D.; Bash, J. O.; Cooter, E. J.; Nemitz, E.; Sutton, M. A.
2013-07-01
Atmospheric ammonia (NH3) dominates global emissions of total reactive nitrogen (Nr), while emissions from agricultural production systems contribute about two-thirds of global NH3 emissions; the remaining third emanates from oceans, natural vegetation, humans, wild animals and biomass burning. On land, NH3 emitted from the various sources eventually returns to the biosphere by dry deposition to sink areas, predominantly semi-natural vegetation, and by wet and dry deposition as ammonium (NH4+) to all surfaces. However, the land/atmosphere exchange of gaseous NH3 is in fact bi-directional over unfertilized as well as fertilized ecosystems, with periods and areas of emission and deposition alternating in time (diurnal, seasonal) and space (patchwork landscapes). The exchange is controlled by a range of environmental factors, including meteorology, surface layer turbulence, thermodynamics, air and surface heterogeneous-phase chemistry, canopy geometry, plant development stage, leaf age, organic matter decomposition, soil microbial turnover, and, in agricultural systems, by fertilizer application rate, fertilizer type, soil type, crop type, and agricultural management practices. We review the range of processes controlling NH3 emission and uptake in the different parts of the soil-canopy-atmosphere continuum, with NH3 emission potentials defined at the substrate and leaf levels by different [NH4+] / [H+] ratios (Γ). Surface/atmosphere exchange models for NH3 are necessary to compute the temporal and spatial patterns of emissions and deposition at the soil, plant, field, landscape, regional and global scales, in order to assess the multiple environmental impacts of airborne and deposited NH3 and NH4+. Models of soil/vegetation/atmosphere NH3 exchange are reviewed from the substrate and leaf scales to the global scale. They range from simple steady-state, "big leaf" canopy resistance models, to dynamic, multi-layer, multi-process, multi-chemical species schemes. 
Their level of complexity depends on their purpose, the spatial scale at which they are applied, the current level of parameterization, and the availability of the input data they require. State-of-the-art solutions for determining the emission/sink Γ potentials through the soil/canopy system include coupled, interactive chemical transport models (CTM) and soil/ecosystem modelling at the regional scale. However, it remains a matter for debate to what extent realistic options for future regional and global models should be based on process-based mechanistic versus empirical and regression-type models. Further discussion is needed on the extent and timescale by which new approaches can be used, such as integration with ecosystem models and satellite observations.
Localized Scale Coupling and New Educational Paradigms in Multiscale Mathematics and Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
LEAL, L. GARY
2013-06-30
One of the most challenging multi-scale simulation problems in the area of multi-phase materials is to develop effective computational techniques for the prediction of coalescence and related phenomena involving rupture of a thin liquid film due to the onset of instability driven by van der Waals or other micro-scale attractive forces. Accurate modeling of this process is critical to prediction of the outcome of milling processes for immiscible polymer blends, one of the most important routes to new advanced polymeric materials. In typical situations, the blend evolves into an "emulsion" of dispersed-phase drops in a continuous matrix fluid. Coalescence is then a critical factor in determining the size distribution of the dispersed phase, but is extremely difficult to predict from first principles. The thin film separating two drops may only achieve rupture at dimensions of approximately 10 nm, while the drop sizes are O(10 μm). It is essential to achieve very accurate solutions for the flow and for the interface shape both at the macroscale of the full drops and within the thin film (where the destabilizing disjoining pressure due to van der Waals forces is approximately proportional to the inverse third power of the local film thickness, h⁻³). Furthermore, the fluids of interest are polymeric (though Newtonian) and the classical continuum description begins to fail as the film thins, requiring incorporation of molecular effects, for example via a hybrid code that incorporates a version of coarse-grain molecular dynamics within the thin film coupled with a classical continuum description elsewhere in the flow domain.
Finally, the presence of surface-active additives, either surfactants (in the form of di-block copolymers) or surface-functionalized micro- or nano-scale particles, adds an additional level of complexity, requiring development of a distinct numerical method to predict the nonuniform concentration gradients of these additives that are responsible for Marangoni stresses at the interface. Again, the physical dimensions of these additives may become comparable to the thin-film dimensions, requiring an additional layer of multi-scale modeling.
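As an illustrative aside, the inverse-third-power scaling of the destabilizing film pressure can be sketched numerically. The standard flat-film van der Waals form Π(h) = A/(6πh³) is used; the Hamaker constant below is a typical order of magnitude, not a measured value for any specific blend.

```python
from math import pi

# Van der Waals disjoining pressure in a thin flat film:
#   Pi(h) = A / (6 * pi * h**3)
# The h^-3 scaling is what makes the instability accelerate sharply as
# the film drains toward rupture thickness.

A_HAMAKER = 1e-20  # J, illustrative order of magnitude

def disjoining_pressure(h):
    """Attractive vdW disjoining pressure (Pa) for film thickness h (m)."""
    return A_HAMAKER / (6.0 * pi * h ** 3)

# Halving the film thickness from 20 nm to 10 nm raises Pi eightfold
p_20nm = disjoining_pressure(20e-9)
p_10nm = disjoining_pressure(10e-9)
print(p_10nm / p_20nm)
```

This eightfold jump per halving of h is why the macroscale drop solver and the thin-film solver must both be highly accurate near rupture.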
A potential-energy scaling model to simulate the initial stages of thin-film growth
NASA Technical Reports Server (NTRS)
Heinbockel, J. H.; Outlaw, R. A.; Walker, G. H.
1983-01-01
A solid-on-solid (SOS) Monte Carlo computer simulation employing a potential-energy scaling technique was used to model the initial stages of thin-film growth. The model monitors variations in the vertical interaction potential that occur due to the arrival or departure of selected adatoms or impurities at all sites in the 400-site array. Boltzmann ordered statistics are used to simulate fluctuations in vibrational energy at each site in the array, and the resulting site energy is compared with threshold levels of possible atomic events. In addition to adsorption, desorption, and surface migration, adatom incorporation and diffusion of a substrate atom to the surface are also included. The lateral interaction of nearest, second-nearest, and third-nearest neighbors is also considered. A series of computer experiments is conducted to illustrate the behavior of the model.
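As an illustrative aside, the event-threshold logic described above (a thermal fluctuation drawn from a Boltzmann-type distribution compared against activation thresholds) can be sketched in a toy form. This is not the original SOS code; all energies, the temperature, and the event set are illustrative.

```python
import math
import random

# Toy threshold sketch: each adatom draws a thermal fluctuation from an
# exponential (Boltzmann tail) distribution; the fluctuation is compared
# with activation thresholds that grow with the number of lateral bonds.

KB = 8.617e-5  # Boltzmann constant, eV/K

def pick_event(n_neighbors, E_bond=0.05, E_mig=0.1, T=800.0):
    """Choose an event for one adatom from its lateral neighbor count."""
    binding = n_neighbors * E_bond                     # bonds raise barriers
    fluctuation = -KB * T * math.log(random.random())  # exponential tail
    if fluctuation > binding + E_mig:
        return "desorb"    # enough energy to leave the surface
    if fluctuation > binding:
        return "migrate"   # enough energy to hop to a neighboring site
    return "stay"

random.seed(1)
events = [pick_event(n_neighbors=2) for _ in range(10_000)]
print(events.count("stay"), events.count("migrate"), events.count("desorb"))
```

With these illustrative barriers, staying is most likely, migration next, and desorption rarest, mirroring the qualitative hierarchy of atomic events in the model.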
Positive mental health and well-being among a third level student population.
Davoren, Martin P; Fitzgerald, Eimear; Shiely, Frances; Perry, Ivan J
2013-01-01
Much research on the health and well-being of third level students is focused on poor mental health leading to a dearth of information on positive mental health and well-being. Recently, the Warwick Edinburgh Mental Well-being scale (WEMWBS) was developed as a measurement of positive mental health and well-being. The aim of this research is to investigate the distribution and determinants of positive mental health and well-being in a large, broadly representative sample of third level students using WEMWBS. Undergraduate students from one large third level institution were sampled using probability proportional to size sampling. Questionnaires were distributed to students attending lectures in the randomly selected degrees. A total of 2,332 self-completed questionnaires were obtained, yielding a response rate of 51% based on students registered to relevant modules and 84% based on attendance. One-way ANOVAs and multivariate logistic regression were utilised to investigate factors associated with positive mental health and well-being. The sample was predominantly female (62.66%), in first year (46.9%) and living in their parents' house (42.4%) or in a rented house or flat (40.8%). In multivariate analysis adjusted for age and stratified by gender, no significant differences in WEMWBS score were observed by area of study, alcohol, smoking or drug use. WEMWBS scores were higher among male students with low levels of physical activity (p=0.04). Men and women reporting one or more sexual partners (p<0.001) were also more likely to report above average mental health and well-being. This is the first study to examine positive mental health and well-being scores in a third level student sample using WEMWBS. The findings suggest that students with a relatively adverse health and lifestyle profile have higher than average mental health and well-being. To confirm these results, this work needs to be replicated across other third level institutions.
NASA Astrophysics Data System (ADS)
Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong
2017-11-01
Multi-scale modeling of localized groundwater flow problems in a large-scale aquifer has been extensively investigated in the context of the trade-off between computational cost and accuracy. An alternative is to couple parent and child models with different spatial and temporal scales, which may introduce non-trivial sub-model errors in the local areas of interest. Such errors in the child models originate from deficiencies in the coupling method, as well as from inadequacies in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme, chosen for its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at the parent scale is delivered downward onto the child boundary nodes by means of spatial and temporal head interpolation. The efficiency of the coupled model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by an adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising for handling multi-scale groundwater flow problems with complex stresses and heterogeneity.
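As an illustrative aside, the temporal part of the one-way coupling step (delivering parent-scale heads onto child boundary nodes at finer time steps) can be sketched as a linear-in-time interpolation. The function name, step sizes, and head values below are hypothetical.

```python
# Parent-model heads, stored at coarse time levels, are interpolated
# linearly in time to supply Dirichlet boundary heads at every (finer)
# child time step at one boundary node.

def child_boundary_heads(t_parent, h_parent, t_child):
    """Linear-in-time interpolation of parent heads onto child time steps."""
    heads = []
    for t in t_child:
        # locate the parent interval containing t
        for i in range(len(t_parent) - 1):
            t0, t1 = t_parent[i], t_parent[i + 1]
            if t0 <= t <= t1:
                w = (t - t0) / (t1 - t0)
                heads.append((1 - w) * h_parent[i] + w * h_parent[i + 1])
                break
    return heads

# Parent solved daily; child stepping every 0.25 day (hypothetical heads, m)
t_parent = [0.0, 1.0, 2.0]
h_parent = [10.0, 9.2, 9.0]
t_child = [0.0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5]
print(child_boundary_heads(t_parent, h_parent, t_child))
```

The temporal truncation error of this step shrinks as the parent time step is refined, which is one of the error-reduction levers discussed in the abstract.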
Use of one-third octave-band spectral data in community noise models
DOT National Transportation Integrated Search
2003-08-25
Airport noise planning models typically use guidance contained in the Society of Automotive Engineers (SAE) Aerospace Information Report (AIR) SAE-1845, titled "Procedure for the Calculation of Airplane Noise in the Vicinity of Airports" [1]. T...
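As an illustrative aside, the one-third octave band structure such spectral data use can be sketched numerically. The base-10 exact-frequency convention (ANSI S1.11 style) is assumed here; band numbering with n = 0 at 1 kHz is a common but not universal choice.

```python
# One-third octave bands: ten bands span a decade, with exact (base-10)
# center frequencies f_c = 1000 * 10**(n/10) Hz and band edges a factor
# of 10**(1/20) either side of the center.

def third_octave_band(n):
    """Exact lower edge, center, and upper edge (Hz) of band n (n=0 -> 1 kHz)."""
    fc = 1000.0 * 10 ** (n / 10.0)
    half_band = 10 ** (1.0 / 20.0)
    return fc / half_band, fc, fc * half_band

for n in (-10, 0, 10):  # the 100 Hz, 1 kHz and 10 kHz bands
    lo, fc, hi = third_octave_band(n)
    print(f"{fc:8.1f} Hz  [{lo:8.1f}, {hi:8.1f}]")
```

Summing the mean-square pressure in each such band is what produces the one-third octave spectra that models like SAE-1845 consume.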
NASA Technical Reports Server (NTRS)
Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn
2002-01-01
One of the most significant challenges in large-scale climate modeling, as well as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task, both in the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden for the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document proposes a strawman framework design for the climate community based on the integration of Cactus, from the relativistic physics community, and the UCLA/UCB Distributed Data Broker (DDB) from the climate community. This design is the result of an extensive survey of climate models and frameworks in the climate community as well as frameworks from many other scientific communities. The design addresses fundamental development and runtime needs using Cactus, a framework with interfaces for FORTRAN and C-based languages, and high-performance model communication needs using DDB. This document also specifically explores object-oriented design issues in the context of climate modeling as well as climate modeling issues in terms of object-oriented design.
NASA Astrophysics Data System (ADS)
Fan, X.; Chen, L.; Ma, Z.
2010-12-01
Climate downscaling has been an active research and application area over the past several decades, focusing on regional climate studies. Dynamical downscaling, in addition to statistical methods, has been widely used as advanced numerical weather and regional climate models have emerged. The use of numerical models ensures that a full set of climate variables is generated in the downscaling process, dynamically consistent because of the constraints of physical laws. While generating high-resolution regional climate, the large-scale climate patterns should be retained. To serve this purpose, nudging techniques, including grid analysis nudging and spectral nudging, have been used in different models. There are studies demonstrating the benefits and advantages of each nudging technique; however, the results are sensitive to many factors, such as the nudging coefficients and the amount of information nudged to, and the conclusions are thus controversial. In a companion work we develop approaches for quantitative assessment of the downscaled climate; in this study, the two nudging techniques are subjected to extensive experiments in the Weather Research and Forecasting (WRF) model. Using the same model provides fair comparability; applying the quantitative assessments provides objectivity in the comparison. Three types of downscaling experiments were performed for a selected month. The first type serves as a baseline, in which the large-scale information is communicated through lateral boundary conditions only; the second uses grid analysis nudging; and the third uses spectral nudging. In the grid analysis nudging experiments, emphasis is given to different nudging coefficients and to nudging different variables; in spectral nudging, we focus on testing the nudging coefficients and different wave numbers on different model levels.
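Both nudging techniques add a Newtonian relaxation term to the model tendencies; grid analysis nudging, in its simplest form, relaxes each grid-point value toward the driving analysis. A schematic sketch with hypothetical names (the actual WRF implementation is considerably more involved):

```python
def nudged_step(x, x_analysis, physics_tendency, g_nudge, dt):
    """One explicit step of dx/dt = F(x) + G * (x_analysis - x).

    g_nudge is the nudging coefficient G [1/s]; a larger G pulls the
    downscaled solution more strongly toward the large-scale analysis,
    at the cost of suppressing model-generated fine-scale variability.
    """
    return x + dt * (physics_tendency + g_nudge * (x_analysis - x))
```

Spectral nudging applies the same relaxation, but only to the low-wavenumber components of the field, which is why the choice of wave numbers and model levels is a tuning focus in the experiments above.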
Skobel, Erik; Kamke, Wolfram; Bönner, Gerd; Alt, Bernd; Purucker, Hans-Christian; Schwaab, Bernhard; Einwang, Hans-Peter; Schröder, Klaus; Langheim, Eike; Völler, Heinz; Brandenburg, Alexandra; Graml, Andrea; Woehrle, Holger; Krüger, Stefan
2015-07-01
To determine the prevalence of, and the risk factors for, sleep apnoea in cardiac rehabilitation (CR) facilities in Germany. 1152 patients presenting for CR were screened for sleep-disordered breathing (SDB) with 2-channel polygraphy (ApneaLink™; ResMed). Parameters recorded included the apnoea-hypopnoea index (AHI), number of desaturations per hour of recording (ODI), mean and minimum nocturnal oxygen saturation and number of snoring episodes. Patients rated subjective sleep quality on a scale from 1 (poor) to 10 (best) and completed the Epworth Sleepiness Scale (ESS). Clinically significant sleep apnoea (AHI ≥15/h) was documented in 33% of patients. Mean AHI was 14 ± 16/h (range 0-106/h). Sleep apnoea was of moderate severity in 18% of patients (AHI 15-29/h) and severe in 15% (AHI ≥30/h). There were small, but statistically significant, differences in ESS score and subjective sleep quality between patients with and without sleep apnoea. Logistic regression analysis identified the following as risk factors for sleep apnoea in CR patients: age (per 10 years) (odds ratio (OR) 1.51; p<0.001), body mass index (per 5 units) (OR 1.31; p=0.001), male gender (OR 2.19; p<0.001), type 2 diabetes mellitus (OR 1.45; p=0.040), haemoglobin level (OR 0.91; p=0.012) and witnessed apnoeas (OR 1.99; p<0.001). The findings of this study indicate that more than one-third of patients undergoing cardiac rehabilitation in Germany have sleep apnoea, with one-third having moderate-to-severe SDB that requires further evaluation or intervention. Inclusion of sleep apnoea screening as part of cardiac rehabilitation appears to be appropriate.
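The per-10-year and per-5-unit odds ratios quoted above are simple rescalings of the fitted logistic regression coefficients: the odds ratio over an increment Δ in a predictor is exp(β·Δ), where β is the per-unit log-odds coefficient. A minimal sketch (function name is ours):

```python
import math

def odds_ratio(beta, delta=1.0):
    """Odds ratio for a `delta`-unit increase in a logistic regression
    predictor whose fitted per-unit log-odds coefficient is `beta`."""
    return math.exp(beta * delta)
```

For example, a per-year coefficient of ln(1.51)/10 ≈ 0.041 corresponds exactly to the reported OR of 1.51 per 10 years of age.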
The EMEP MSC-W chemical transport model - Part 1: Model description
NASA Astrophysics Data System (ADS)
Simpson, D.; Benedictow, A.; Berge, H.; Bergström, R.; Emberson, L. D.; Fagerli, H.; Hayman, G. D.; Gauss, M.; Jonson, J. E.; Jenkin, M. E.; Nyíri, A.; Richter, C.; Semeena, V. S.; Tsyro, S.; Tuovinen, J.-P.; Valdebenito, Á.; Wind, P.
2012-02-01
The Meteorological Synthesizing Centre-West (MSC-W) of the European Monitoring and Evaluation Programme (EMEP) has been performing model calculations in support of the Convention on Long Range Transboundary Air Pollution (CLRTAP) for more than 30 years. The EMEP MSC-W chemical transport model is still one of the key tools within European air pollution policy assessments. Traditionally, the EMEP model has covered all of Europe with a resolution of about 50 × 50 km2, extending vertically from ground level to the tropopause (100 hPa). The model has undergone substantial development in recent years, and is now applied on scales ranging from local (ca. 5 km grid size) to global (1 degree resolution). The model is used to simulate photo-oxidants and both inorganic and organic aerosols. In 2008 the EMEP model was released for the first time as public domain code, along with all input data required for one year of model runs. Since then, many changes have been made to the model physics and input data. The second release of the EMEP MSC-W model became available in mid 2011, and a new release is targeted for early 2012. This publication is intended to document this third release of the EMEP MSC-W model. The model formulations are given, along with details of the input datasets used, and brief background on some of the choices made in the formulation is presented. The model code itself is available at www.emep.int, along with the data required to run for a full year over Europe.
AQMEII3 evaluation of regional NA/EU simulations and ...
Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and North American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) helping to detect the causes of model error, and iii) identifying the processes and scales most urgently requiring dedicated investigation. The analysis is conducted within the framework of the third phase of the Air Quality Model Evaluation International Initiative (AQMEII) and tackles model performance gauging through measurement-to-model comparison, error decomposition and time-series analysis of the models' biases for several fields (ozone, CO, SO2, NO, NO2, PM10, PM2.5, wind speed, and temperature). The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while apportioning the error to its constituent parts (bias, variance and covariance) can help to assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the earlier phases of AQMEII. The application of the error apportionment method to the AQMEII Phase 3 simulations provides several key insights. In addition to reaffirming the strong impac
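The bias/variance/covariance apportionment referred to above follows the standard decomposition of mean-square error, MSE = (mean bias)² + (σ_m − σ_o)² + 2·σ_m·σ_o·(1 − r), where σ_m and σ_o are the model and observation standard deviations and r is their correlation. A minimal sketch (hypothetical function name; the operational AQMEII implementation differs in detail):

```python
import numpy as np

def mse_components(model, obs):
    """Decompose mean-square error into three additive parts:
    MSE = (mean bias)^2 + (sigma_m - sigma_o)^2 + 2*sigma_m*sigma_o*(1 - r).
    Uses population statistics (ddof=0), for which the identity is exact.
    """
    bias2 = (model.mean() - obs.mean()) ** 2
    sm, so = model.std(), obs.std()
    r = np.corrcoef(model, obs)[0, 1]
    return bias2, (sm - so) ** 2, 2.0 * sm * so * (1.0 - r)
```

The three parts isolate, respectively, a systematic offset, a mismatch in amplitude of variability, and an imperfect phasing between model and observations, which is what allows each part to be attributed to different processes.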
NASA Technical Reports Server (NTRS)
Hartle, R. E.; Ogilvie, K. W.; Scudder, J. D.; Bridge, H. S.; Siscoe, G. L.; Lazarus, A. J.; Vasyliunas, V. M.; Yeates, C. M.
1975-01-01
Plasma electron count observations made during the first and third encounters of Mariner 10 with Mercury (i.e., during Mercury I and III) are reported. They provide detailed information on the magnetosphere of Mercury, especially those from Mercury III. A low-flux region was observed about closest approach (CA) of Mercury III, whereas no such region was detected by the lower-latitude Mercury I; a hot plasma sheet was measured on the outgoing (and near-equator) trajectory of Mercury I, while only cool plasma sheets were observed in the magnetosphere by Mercury III. Findings are similar, on a reduced scale, to models of the earth's magnetosphere and magnetosheath.
NASA Technical Reports Server (NTRS)
Kent, James; Holdaway, Daniel
2015-01-01
A number of geophysical applications require the use of the linearized version of the full model. One such example is numerical weather prediction, where the tangent linear and adjoint versions of the atmospheric model are required for the 4DVAR inverse problem. The part of the model that represents the resolved-scale processes of the atmosphere is known as the dynamical core. Advection, or transport, is performed by the dynamical core. It is a central process in many geophysical applications and often has quasi-linear underlying behavior. However, over the decades since the advent of numerical modelling, significant effort has gone into developing many flavors of high-order, shape-preserving, nonoscillatory, positive-definite advection schemes. These schemes are excellent at transporting the quantities of interest in the dynamical core, but they introduce nonlinearity through the use of nonlinear limiters. The linearity of the transport schemes used in the Goddard Earth Observing System version 5 (GEOS-5), as well as a number of other schemes, is analyzed using a simple 1D setup. The linearized version of GEOS-5 is then tested using a linear third-order scheme in the tangent linear version.
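The linearity analysis described above can be reproduced in miniature: apply a scheme to two random fields and test whether it commutes with linear combination. The sketch below uses a (linear) first-order upwind scheme, which passes the test; a scheme with a nonlinear limiter would fail it. Names are illustrative, not GEOS-5 code.

```python
import numpy as np

def upwind_step(q, c):
    """One step of first-order upwind advection on a periodic 1D grid,
    with CFL number c (0 < c <= 1). This operator is linear in q."""
    return q - c * (q - np.roll(q, 1))

def is_linear(step, n=32, c=0.5, tol=1e-12):
    """Numerically test linearity: step(a*x + b*y) == a*step(x) + b*step(y)."""
    rng = np.random.default_rng(0)
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    a, b = 2.0, -3.0
    return bool(np.allclose(step(a * x + b * y, c),
                            a * step(x, c) + b * step(y, c), atol=tol))
```

This is exactly the property a tangent linear model relies on: for a linear scheme, the scheme is its own tangent linear version.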
Toward Improved Fidelity of Thermal Explosion Simulations
NASA Astrophysics Data System (ADS)
Nichols, Albert; Becker, Richard; Burnham, Alan; Howard, W. Michael; Knap, Jarek; Wemhoff, Aaron
2009-06-01
We present results of an improved thermal/chemical/mechanical model of HMX-based explosives such as LX04 and LX10 for thermal cook-off. The original HMX model and analysis scheme were developed by Yoh et al. for use in the ALE3D modeling framework. The improvements were concentrated in four areas. First, we added porosity to the chemical material model framework in ALE3D used to model HMX explosive formulations, to handle the roughly 2% porosity in solid explosives. Second, we improved the HMX reaction network, including the addition of a reactive phase-change model based on work by Henson et al. Third, we added early decomposition gas species to the CHEETAH material database to improve equations of state for gaseous intermediates and products. Finally, we improved the implicit mechanics module in ALE3D to more naturally handle the long time scales associated with thermal cook-off. The application of the resulting framework to the analysis of the Scaled Thermal Explosion (STEX) experiments will be discussed.
A test-bed modeling study for wave resource assessment
NASA Astrophysics Data System (ADS)
Yang, Z.; Neary, V. S.; Wang, T.; Gunawan, B.; Dallman, A.
2016-02-01
Hindcasts from phase-averaged wave models are commonly used to estimate the standard statistics used in wave energy resource assessments. However, the research community and the wave energy converter (WEC) industry lack a well-documented and consistent modeling approach for conducting these resource assessments at different phases of WEC project development and at different spatial scales, e.g., from small-scale pilot studies to large-scale commercial deployments. It is therefore necessary to evaluate current wave model codes, as well as the limitations and knowledge gaps in predicting sea states, in order to establish best wave modeling practices and to identify future research needs to improve wave prediction for resource assessment. This paper presents the first phase of an ongoing modeling study to address these concerns. The modeling study is being conducted at a test-bed site off the central Oregon coast using two of the most widely used third-generation wave models, WaveWatchIII and SWAN. A nested-grid modeling approach, with domain dimensions ranging from global to regional scales, was used to provide the wave spectral boundary condition to a local-scale model domain, which has a spatial dimension of around 60 km by 60 km and a grid resolution of 250-300 m. Model results simulated by WaveWatchIII and SWAN in a structured-grid framework are compared to NOAA wave buoy data for six wave parameters: omnidirectional wave power, significant wave height, energy period, spectral width, direction of maximum directionally resolved wave power, and directionality coefficient. Model performance and computational efficiency are evaluated, and best practices for wave resource assessments are discussed, based on a set of standard error statistics and model run times.
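Several of the six parameters above follow directly from spectral moments of the 1D frequency spectrum: m_n = ∫ fⁿ S(f) df, with Hs = 4√m₀, energy period Te = m₋₁/m₀, and, in the deep-water approximation, omnidirectional wave power J = (ρg²/64π)·Hs²·Te. A sketch under those assumptions (function names are ours, not from either model):

```python
import numpy as np

RHO, G = 1025.0, 9.81  # seawater density [kg/m^3], gravity [m/s^2]

def spectral_moment(f, S, n):
    """n-th spectral moment m_n = integral of f**n * S(f) df (trapezoidal)."""
    w = S * f ** n
    return float(np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(f)))

def wave_parameters(f, S):
    """Significant wave height Hs [m], energy period Te [s], and deep-water
    omnidirectional wave power J [W/m] from a frequency spectrum S(f) [m^2/Hz]."""
    m0 = spectral_moment(f, S, 0)
    te = spectral_moment(f, S, -1) / m0          # Te = m_-1 / m_0
    hs = 4.0 * np.sqrt(m0)                        # Hs = 4 * sqrt(m0)
    J = RHO * G ** 2 / (64.0 * np.pi) * hs ** 2 * te
    return hs, te, J
```

In intermediate or shallow water, J would instead be computed from the group velocity at each frequency, which is one reason buoy-model comparisons are sensitive to local depth.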
Columnar organization of orientation domains in V1
NASA Astrophysics Data System (ADS)
Liedtke, Joscha; Wolf, Fred
In the primary visual cortex (V1) of primates and carnivores, the functional architecture of basic stimulus selectivities appears similar across cortical layers (Hubel & Wiesel, 1962), justifying the use of two-dimensional cortical models that disregard organization in the third dimension. Here we show theoretically that even small deviations from an exact columnar organization lead to non-trivial three-dimensional functional structures. We extend two-dimensional random field models (Schnabel et al., 2007) to a three-dimensional cortex by keeping a typical scale in each layer and introducing a correlation length in the third, columnar dimension. We examine in detail the three-dimensional functional architecture for different cortical geometries with different columnar correlation lengths. We find that (i) topological defect lines are generally curved and (ii) for large cortical curvatures, closed loops and reconnecting topological defect lines appear. This theory extends the class of random field models by introducing a columnar dimension and provides a systematic statistical assessment of the three-dimensional functional architecture of V1 (see also Tanaka et al., 2011).
Climate change induced transformations of agricultural systems: insights from a global model
NASA Astrophysics Data System (ADS)
Leclère, D.; Havlík, P.; Fuss, S.; Schmid, E.; Mosnier, A.; Walsh, B.; Valin, H.; Herrero, M.; Khabarov, N.; Obersteiner, M.
2014-12-01
Climate change might impact crop yields considerably, and anticipated transformations of agricultural systems are needed in the coming decades to sustain affordable food provision. However, decision-making on transformational shifts in agricultural systems is plagued by uncertainties concerning the nature and geography of climate change, its impacts, and adequate responses. Locking agricultural systems into inadequate transformations that are costly to adjust is a significant risk, and this acts as an incentive to delay action. It is crucial to gain insight into how much transformation is required of agricultural systems, how robust such strategies are, and how we can defuse the associated challenge for decision-making. Implementing a definition based on large changes in resource use in a global impact assessment modelling framework, we find transformational adaptations to be required of agricultural systems in most regions by the 2050s in order to cope with climate change. However, these transformations differ widely across climate change scenarios: uncertainties in large-scale development of irrigation span all continents from the 2030s on, and affect two-thirds of regions by the 2050s. Meanwhile, significant but uncertain reductions of major agricultural areas affect the Northern Hemisphere's temperate latitudes, while increases in non-agricultural zones could be large but uncertain in one-third of regions. To help reduce the associated challenge for decision-making, we propose a methodology for exploring which, when, where and why transformations could be required and uncertain, by means of scenario analysis.
López-Cepero, Javier; Fabelo, Humberto Eduardo; Rodríguez-Franco, Luis; Rodríguez-Díaz, F Javier
2016-01-01
This study provides psychometric information for the Dating Violence Questionnaire (DVQ), an instrument developed to assess intimate partner victimization among adolescents and youths. This instrument, an English version of the Cuestionario de Violencia de Novios, assesses both the frequency of and the discomfort associated with 8 types of abuse (detachment, humiliation, sexual, coercion, physical, gender-based, emotional punishment, and instrumental). Participants included 859 U.S. students enrolled in undergraduate psychology courses at a mid-Atlantic university (M = 19 years; SD = 1.5 years). One-third of the participants were male, and two-thirds were female. Regarding racial identity, around 55% of participants identified themselves as White, 22% as African American and 12% as Asian, whereas 11% selected other identities. Around 9% of participants identified themselves as Hispanic. Confirmatory factor analysis shows that the DVQ achieved adequate goodness-of-fit indexes for the original eight-factor model (χ²/df < 5; root mean square error of approximation [RMSEA] < .080), as well as higher parsimony when compared to simpler alternative models. The 8 scales demonstrated acceptable internal consistency indexes (α > .700), surpassing those found in the original Spanish validation. Descriptive analysis suggests higher victimization experience for subtle aggressions (detachment, coercion, and emotional punishment), with overt abuses (physical, instrumental) obtaining the smallest means; these findings were similar across gender, racial identity, and ethnicity. The results of this validation study encourage the inclusion of the DVQ in both research and applied contexts.
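The RMSEA cutoff quoted above (< .080) is computed from the model chi-square, its degrees of freedom, and the sample size via the usual formula RMSEA = sqrt(max(χ² − df, 0) / (df·(N − 1))). A minimal sketch (function name is ours):

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation for a fitted CFA/SEM model.

    chi2 : model chi-square statistic
    df   : model degrees of freedom
    n    : sample size
    """
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
```

A model that fits no worse than its degrees of freedom (χ² ≤ df) yields RMSEA = 0, which is why RMSEA is read as a per-degree-of-freedom misfit corrected for sample size.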
McCall-Hosenfeld, Jennifer S; Phiri, Kristen; Schaefer, Eric; Zhu, Junjia; Kjerulff, Kristen
2016-11-01
Postpartum depression (PPD) is a common complication of childbearing, but the course of PPD is not well understood. We analyze trajectories of depression and the key risk factors associated with these trajectories in the peripartum and postpartum period. Women in The First Baby Study, a cohort of 3006 women pregnant with their first baby, completed telephone surveys measuring depression during the mother's third trimester and at 1, 6, and 12 months postpartum. Depression was assessed using the Edinburgh Postnatal Depression Scale. A semiparametric mixture model was used to estimate distinct group-based developmental trajectories of depression and to determine whether trajectory group membership varied according to maternal characteristics. A total of 2802 mothers (93%) completed interviews through 12 months. The mixture model indicated six distinct depression trajectories. A history of anxiety or depression, unattached marital status, and inadequate social support were significantly associated with higher odds of belonging to trajectory groups with greater depression. Most of the depression trajectories were stable or decreased slightly over time, but one trajectory, encompassing 1.7% of the mothers, comprised women who were not depressed in the third trimester but became depressed at 6 months postpartum and were increasingly depressed at 12 months after birth. This trajectory study indicates that women who are depressed during pregnancy tend to remain depressed during the first year postpartum or improve slightly, but an important minority of women become newly and increasingly depressed over the course of the first year after first childbirth.
Katwate, U; Jadhav, S; Kumkar, P; Raghavan, R; Dahanukar, N
2016-05-01
Pethia sanjaymoluri, a new cyprinid, is described from the Pavana and Nira tributaries of Bhima River, Krishna drainage, Maharashtra, India. It can be distinguished from congeners by a combination of characteristics that includes an incomplete lateral line, absence of barbels, upper lip thick and fleshy, 23-25 lateral series scales, 7-12 lateral-line pored scales, 10 predorsal scales, 11-14 prepelvic scales, 17-20 pre-anal scales, 4½ scales between dorsal-fin origin and lateral line, four scales between lateral line and pelvic-fin origin, 8-15 pairs of serrae on distal half of dorsal-fin spine, 12-14 branched pectoral-fin rays, 4 + 26 total vertebrae, 4 + 5 predorsal vertebrae, 4 + 13 abdominal vertebrae, 13 caudal vertebrae and a unique colour pattern comprising a humeral spot positioned below the lateral line and encompassing the third and fourth lateral-line scales and one scale below, one caudal spot on 17th-21st lateral-line scales with a yellow hue on its anterior side and apical half of dorsal fin studded with melanophores making the fin tip appear black. Genetic analysis based on the mitochondrial cytochrome b gene sequence suggests that the species is distinct from other known species of Pethia for which data are available.
Chapter 4. Predicting post-fire erosion and sedimentation risk on a landscape scale
MacDonald, L.H.; Sampson, R.; Brady, D.; Juarros, L.; Martin, Deborah
2000-01-01
Historic fire suppression efforts have increased the likelihood of large wildfires in much of the western U.S. Post-fire soil erosion and sedimentation risks are important concerns for resource managers. In this paper we develop and apply procedures to predict post-fire erosion and sedimentation risks at the pixel, catchment, and landscape scales in central and western Colorado. Our model for predicting post-fire surface erosion risk is conceptually similar to the Revised Universal Soil Loss Equation (RUSLE). One key addition is the incorporation of a hydrophobicity risk index (HYRISK) based on vegetation type, predicted fire severity, and soil texture. Post-fire surface erosion risk was assessed for each 90-m pixel by combining HYRISK, slope, soil erodibility, and a factor representing the likely increase in soil wetness due to removal of the vegetation. Sedimentation risk was a simple function of stream gradient. Composite surface erosion and sedimentation risk indices were calculated and compared across the 72 catchments in the study area. When evaluated at the catchment scale, two-thirds of the catchments had relatively little post-fire erosion risk. Steeper catchments with higher fuel loadings typically had the highest post-fire surface erosion risk. These were generally located along the major north-south mountain chains and, to a lesser extent, in west-central Colorado. Sedimentation risks were usually highest in the eastern part of the study area, where a higher proportion of streams had lower gradients. While data to validate the predicted erosion and sedimentation risks are lacking, the results appear reasonable and are consistent with our limited field observations. The models and analytic procedures can be readily adapted to other locations and should provide useful tools for planning and management at both the catchment and landscape scales.
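The per-pixel combination of HYRISK, slope, erodibility, and a wetness factor is not given in functional form here; the multiplicative RUSLE-style product below is therefore an assumption for illustration only, and all names are hypothetical.

```python
import numpy as np

def surface_erosion_risk(hyrisk, slope_factor, erodibility, wetness_factor):
    """Pixel-wise post-fire surface erosion risk index.

    Assumes a multiplicative combination of factors, in the spirit of
    RUSLE's product of factor layers; any factor equal to zero (e.g. no
    hydrophobicity risk) drives the pixel's risk to zero.
    """
    return (np.asarray(hyrisk, dtype=float) * slope_factor
            * erodibility * wetness_factor)
```

In a raster workflow each argument would be a 90-m gridded layer, and catchment-scale indices would then be aggregates (e.g. means) of the pixel values within each catchment boundary.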
Recent transonic unsteady pressure measurements at the NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Sandford, M. C.; Ricketts, R. H.; Hess, R. W.
1985-01-01
Four semispan wing model configurations were studied in the Transonic Dynamics Tunnel (TDT). The first model had a clipped delta planform with a circular-arc airfoil, the second model had a high-aspect-ratio planform with a supercritical airfoil, the third model had a rectangular planform with a supercritical airfoil, and the fourth model had a high-aspect-ratio planform with a supercritical airfoil. To generate unsteady flow, the first and third models were equipped with pitch oscillation mechanisms, and the first, second and fourth models were equipped with control-surface oscillation mechanisms. The fourth model was similar in planform and airfoil shape to the second model, but it is the only one of the four models that has an elastic wing structure. The unsteady pressure studies of the four models are described, and some typical results for each model are presented. Comparisons of selected experimental data with analytical results are also included.
Differentiating G-inflation from string gas cosmology using the effective field theory approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Minxi; Liu, Junyu; Lu, Shiyun
A characteristic signature of String Gas Cosmology is primordial power spectra for scalar and tensor modes that are almost scale-invariant, but with a red tilt for scalar modes and a blue tilt for tensor modes. This feature, however, can also be realized in the so-called G-inflation model, in which Horndeski operators are introduced that lead to a blue tensor tilt by softly breaking the Null Energy Condition. In this article we search for potential observational differences between these two cosmologies by performing detailed perturbation analyses based on the Effective Field Theory approach. Our results show that, although both models produce blue-tilted tensor perturbations, they behave differently in three aspects. First, String Gas Cosmology predicts a specific consistency relation between the index of the scalar modes n_s and that of the tensor modes n_t, which is hard to reproduce in G-inflation. Second, String Gas Cosmology typically predicts non-Gaussianities that are highly suppressed on observable scales, while G-inflation gives rise to observationally large non-Gaussianities because the kinetic terms in the action become important during inflation. However, after finely tuning the model parameters of G-inflation it is possible to obtain a blue tensor spectrum and negligible non-Gaussianities, with a degeneracy between the two models. This degeneracy can be broken by a third observable, namely the scale dependence of the nonlinearity parameter, which vanishes for G-inflation but has a blue tilt in the case of String Gas Cosmology. Therefore, we conclude that String Gas Cosmology is in principle observationally distinguishable from single-field inflationary cosmology, even allowing for modifications such as G-inflation.
A community college model to support nursing workforce diversity.
Colville, Janet; Cottom, Sherry; Robinette, Teresa; Wald, Holly; Waters, Tomi
2015-02-01
Community College of Allegheny County (CCAC), Allegheny Campus, is situated on the North Side of Pittsburgh. The neighborhood is 60% African American. At the time of the Health Resources and Services Administration (HRSA) application, approximately one third of the students admitted to the program were African American, less than one third of whom successfully completed it. With the aid of HRSA funding, CCAC developed a model that significantly improved the success rate of disadvantaged students. Through the formation of a viable cohort, the nursing faculty nurtured success among the most at-risk students. The cohort was supported by a social worker, case managers who were nursing faculty, and tutors. Students formed study groups, actively participated in community activities, and developed leadership skills through participation in the Student Nurse Association of Pennsylvania. This article provides the rationale for the Registered Nurse (RN) Achievement Model, describes the components of RN Achievement, and discusses the outcomes of the initiative.
Biomechanical analysis of suture locations of the distal plantar fascia in partial foot.
Guo, Jun-Chao; Wang, Li-Zhen; Mo, Zhong-Jun; Chen, Wei; Fan, Yu-Bo
2015-12-01
The aim of this study was to evaluate the rationality of the suture locations of the distal plantar fascia (DPF) after foot amputation, to avoid the risk factors of re-amputation or plantar fasciitis. The tensile strain of the plantar fascia (PF) in different regions was measured by uni-axial tensile experiments. A three-dimensional (3D) finite element model was also developed to simulate the tensile behaviour of the PF under weight-bearing conditions. The model includes 12 bones, ligaments, the PF, cartilage and soft tissues. Four suture-location models for the DPF were considered: the fourth and fifth DPF sutured on the third metatarsal, on the cuboid, and on both the third metatarsal and the cuboid, plus one un-sutured model. The peak tensile strains of the first, second and third PF were 0.134, 0.128 and 0.138, respectively, based on the mechanical test. The fourth and fifth DPF sutured at the cuboid and the third metatarsal offered the most favourable outcomes, with the lowest peak strains in the stance phase of 4.859 × 10^-2, 2.347 × 10^-2 and 1.364 × 10^-2 in the first, second and third PF. Also, the peak strain and stress of the residual PF reduced to 4.859 × 10^-2 and 1.834 MPa, respectively. The stress region was redistributed on the mid-shaft of the first and third PF, and the peak stress of the medial cuneiform bone evidently decreased. Suture of the fourth and fifth DPF at the third metatarsal and cuboid was appropriate for the partial foot. The findings are expected to suggest an optimal surgical plan for the DPF suture and to guide further therapeutic planning for partial foot patients.
Centrifuge impact cratering experiments: Scaling laws for non-porous targets
NASA Technical Reports Server (NTRS)
Schmidt, Robert M.
1987-01-01
A geotechnical centrifuge was used to investigate large body impacts onto planetary surfaces. At elevated gravity, it is possible to match various dimensionless similarity parameters which were shown to govern large scale impacts. Observations of crater growth and target flow fields have provided detailed and critical tests of a complete and unified scaling theory for impact cratering. Scaling estimates were determined for nonporous targets. Scaling estimates for large scale cratering in rock proposed previously by others have assumed that the crater radius is proportional to powers of the impactor energy and gravity, with no additional dependence on impact velocity. The size scaling laws determined from ongoing centrifuge experiments differ from earlier ones in three respects. First, a distinct dependence of impact velocity is recognized, even for constant impactor energy. Second, the present energy exponent for low porosity targets, like competent rock, is lower than earlier estimates. Third, the gravity exponent is recognized here as being related to both the energy and the velocity exponents.
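Point-source ("pi-group") scaling expresses the crater radius through the gravity-scaled size π₂ = ga/U² and a power law π_R = K·π₂^(−β), so the radius retains an explicit velocity dependence even at fixed impactor energy, which is the first of the three differences noted above. The sketch below uses illustrative placeholder constants K and β, not fitted values from the centrifuge experiments.

```python
import math

def crater_radius(a, U, g, rho_t, rho_i, K=1.0, beta=0.17):
    """Gravity-regime crater radius from pi-group scaling (illustrative).

    a : impactor radius [m], U : impact speed [m/s], g : gravity [m/s^2],
    rho_t, rho_i : target and impactor densities [kg/m^3].
    pi_R = R * (rho_t / m)**(1/3) = K * pi_2**(-beta), pi_2 = g*a/U**2.
    """
    m = rho_i * (4.0 / 3.0) * math.pi * a ** 3   # impactor mass
    pi2 = g * a / U ** 2                          # gravity-scaled size
    pi_R = K * pi2 ** (-beta)
    return pi_R * (m / rho_t) ** (1.0 / 3.0)
```

The form makes the qualitative findings explicit: raising g (as the centrifuge does) shrinks the scaled crater, and two impactors with equal kinetic energy but different speeds produce different craters.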
Liao, Chenlong; Yang, Min; Liu, Pengfei; Zhong, Wenxiang; Zhang, Wenchuan
2018-05-01
Preclinical studies involving animal models are essential for understanding the underlying mechanisms of diabetic neuropathic pain. Rats were divided into four groups: two control and two experimental. Diabetes mellitus was induced by streptozotocin (STZ) injection in the two experimental groups. The first experimental group underwent a sham operation; the second had a latex tube encircling the sciatic nerve. Vehicle-injected rats were used as the two corresponding control groups: sham operation and encircled nerves. By the third week, STZ-injected rats with encircled nerves were further divided into three subgroups: one for continued observation and the other two for decompression (removal of the latex tube) at different time points (third week and fifth week). Weight and blood glucose were monitored, and behavioral analysis, including paw withdrawal threshold (PWT) and latency, was performed every week during the experimental period (7 weeks). Hyperglycemia was induced in all STZ-injected rats. A significant increase in weight was observed in the control groups compared with the experimental groups. By the third week, more STZ-injected rats with encircled nerves developed mechanical allodynia than those without encircled nerves (P < 0.05), while no significant difference was noted (P > 0.05) in the incidence of thermal hyperalgesia. Mechanical allodynia, but not thermal hyperalgesia, could be ameliorated by removal of the latex tube at an early stage (third week). With the combined use of a latex tube and STZ injection, a stable rat model of painful diabetic peripheral neuropathy (DPN) manifesting both thermal hyperalgesia and mechanical allodynia has been established.
Breimer, Gerben E; Haji, Faizal A; Bodani, Vivek; Cunningham, Melissa S; Lopez-Rios, Adriana-Lucia; Okrainec, Allan; Drake, James M
2017-02-01
The relative educational benefits of virtual reality (VR) and physical simulation models for endoscopic third ventriculostomy (ETV) have not been evaluated "head to head." To compare and identify the relative utility of a physical and VR ETV simulation model for use in neurosurgical training. Twenty-three neurosurgical residents and 3 fellows performed an ETV on both a physical and VR simulation model. Trainees rated the models using 5-point Likert scales evaluating the domains of anatomy, instrument handling, procedural content, and the overall fidelity of the simulation. Paired t tests were performed for each domain's mean overall score and individual items. The VR model had relative benefits compared with the physical model with respect to realistic representation of intraventricular anatomy at the foramen of Monro (4.5, standard deviation [SD] = 0.7 vs 4.1, SD = 0.6; P = .04) and the third ventricle floor (4.4, SD = 0.6 vs 4.0, SD = 0.9; P = .03), although the overall anatomy score was similar (4.2, SD = 0.6 vs 4.0, SD = 0.6; P = .11). For overall instrument handling and procedural content, the physical simulator outperformed the VR model (3.7, SD = 0.8 vs 4.5, SD = 0.5; P < .001 and 3.9, SD = 0.8 vs 4.2, SD = 0.6; P = .02, respectively). Overall task fidelity across the 2 simulators was not perceived as significantly different. Simulation model selection should be based on educational objectives. Training focused on learning anatomy or decision-making for anatomic cues may be aided with the VR simulation model. A focus on developing manual dexterity and technical skills using endoscopic equipment in the operating room may be better learned on the physical simulation model. Copyright © 2016 by the Congress of Neurological Surgeons
Yoshimatsu, Katsunori
2012-06-01
The four-fifths law for third-order longitudinal moments is examined, using direct numerical simulation (DNS) data on three-dimensional (3D) forced incompressible magnetohydrodynamic (MHD) turbulence without a uniformly imposed magnetic field in a periodic box. The magnetic Prandtl number is set to one, and the number of grid points is 512^3. A generalized Kármán-Howarth-Kolmogorov equation for second-order velocity moments in isotropic MHD turbulence is extended to anisotropic MHD turbulence by means of a spherical average over the direction of the separation vector r. The viscous, forcing, anisotropic and nonstationary terms in the generalized equation are quantified. It is found that the influence of the anisotropic terms on the four-fifths law is negligible at small scales, compared to that of the viscous term. However, the influence of the directional anisotropy, which is measured by the departure of the third-order moments in a particular direction of r from the spherically averaged ones, on the four-fifths law is suggested to be substantial, at least in the case studied here.
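For context, the hydrodynamic four-fifths law referred to above is the exact inertial-range relation (the MHD case examined in the paper generalizes the right-hand side with velocity-magnetic cross terms; this is the standard Kolmogorov form, quoted here for orientation):

```latex
\langle \delta u_\parallel^3(r) \rangle = -\tfrac{4}{5}\,\varepsilon\, r,
```

where δu∥ is the longitudinal velocity increment across a separation r and ε is the mean energy dissipation rate.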
Mesoscale Models of Fluid Dynamics
NASA Astrophysics Data System (ADS)
Boghosian, Bruce M.; Hadjiconstantinou, Nicolas G.
During the last half century, enormous progress has been made in the field of computational materials modeling, to the extent that in many cases computational approaches are used in a predictive fashion. Despite this progress, modeling of general hydrodynamic behavior remains a challenging task. One of the main challenges stems from the fact that hydrodynamics manifests itself over a very wide range of length and time scales. On one end of the spectrum, one finds the fluid's "internal" scale characteristic of its molecular structure (in the absence of quantum effects, which we omit in this chapter). On the other end, the "outer" scale is set by the characteristic sizes of the problem's domain. The resulting scale separation or lack thereof, as well as the existence of intermediate scales, is key to determining the optimal approach. Successful treatments require a judicious choice of the level of description, which is a delicate balancing act between the conflicting requirements of fidelity and manageable computational cost: a coarse description typically requires models for underlying processes occurring at smaller length and time scales; on the other hand, a fine-scale model will incur a significantly larger computational cost.
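The choice of description level sketched above is often organized around the degree of scale separation between the molecular ("internal") and domain ("outer") scales. A minimal illustration using the Knudsen number; the thresholds are common rules of thumb, not values from the chapter:

```python
def suggested_model(mean_free_path, domain_length):
    """Pick a hydrodynamic modeling level from the Knudsen number Kn = lambda / L.

    The cutoffs below are illustrative rules of thumb only: small Kn means
    strong scale separation (continuum description adequate); intermediate
    Kn calls for a mesoscale/kinetic treatment; large Kn for a molecular one.
    """
    kn = mean_free_path / domain_length
    if kn < 0.01:
        return "continuum (Navier-Stokes)"
    if kn < 10:
        return "mesoscale / kinetic (e.g. Boltzmann, DSMC, lattice methods)"
    return "free-molecular / molecular dynamics"
```
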
The EMEP MSC-W chemical transport model - technical description
NASA Astrophysics Data System (ADS)
Simpson, D.; Benedictow, A.; Berge, H.; Bergström, R.; Emberson, L. D.; Fagerli, H.; Flechard, C. R.; Hayman, G. D.; Gauss, M.; Jonson, J. E.; Jenkin, M. E.; Nyíri, A.; Richter, C.; Semeena, V. S.; Tsyro, S.; Tuovinen, J.-P.; Valdebenito, Á.; Wind, P.
2012-08-01
The Meteorological Synthesizing Centre-West (MSC-W) of the European Monitoring and Evaluation Programme (EMEP) has been performing model calculations in support of the Convention on Long Range Transboundary Air Pollution (CLRTAP) for more than 30 years. The EMEP MSC-W chemical transport model is still one of the key tools within European air pollution policy assessments. Traditionally, the model has covered all of Europe with a resolution of about 50 km × 50 km, and extending vertically from ground level to the tropopause (100 hPa). The model has changed extensively over the last ten years, however, with flexible processing of chemical schemes, meteorological inputs, and with nesting capability: the code is now applied on scales ranging from local (ca. 5 km grid size) to global (with 1 degree resolution). The model is used to simulate photo-oxidants and both inorganic and organic aerosols. In 2008 the EMEP model was released for the first time as public domain code, along with all required input data for model runs for one year. The second release of the EMEP MSC-W model became available in mid 2011, and a new release is targeted for summer 2012. This publication is intended to document this third release of the EMEP MSC-W model. The model formulations are given, along with details of input data-sets which are used, and a brief background on some of the choices made in the formulation is presented. The model code itself is available at www.emep.int, along with the data required to run for a full year over Europe.
242-16H 2H EVAPORATOR POT SAMPLING FINAL REPORT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krementz, D; William Cheng, W
2008-06-11
Due to the materials that are processed through the 2H Evaporator, scale is constantly being deposited on the surfaces of the evaporator pot. In order to meet the requirements of the Nuclear Criticality Safety Analysis/Evaluation (NCSA/NCSE) for the 2H Evaporator, inspections of the pot are performed to determine the extent of scaling. Once the volume of scale reaches a certain threshold, the pot must be chemically cleaned to remove the scale. Prior to cleaning the pot, samples of the scale are obtained to determine the concentration of uranium and plutonium and also to provide information to assist with pot cleaning. Savannah River National Laboratory (SRNL) was requested by Liquid Waste Organization (LWO) Engineering to obtain these samples from two locations within the evaporator. Past experience has proven the difficulty of successfully obtaining solids samples from the 2H Evaporator pot. To mitigate this risk, a total of four samplers were designed and fabricated to ensure that two samples could be obtained. Samples had previously been obtained from the cone surface directly below the vertical access riser using a custom scraping tool. This tool was fabricated and deployed successfully. A second scraper was designed to obtain sample from the nearby vertical thermowell, and a third scraper was designed to obtain sample from the vertical pot wall. The newly developed scrapers both employed a pneumatically actuated elbow. The scrapers were designed to be easily attached to and removed from the elbow assembly. These tools were fabricated and deployed successfully. A fourth tool was designed to obtain sample from the opposite side of the pot under the tube bundle. This tool was fabricated and tested, but the additional modifications required to make the tool field-ready could not be completed in time to meet the aggressive deployment schedule. Two samples were obtained near the pot entry location, one from the pot wall and the other from the evaporator feed pipe.
Since a third sampler was available and all of the radiological controls were in place, the decision was made to obtain a third sample. The third sampler dropped directly below the riser to obtain a scrape sample from the evaporator cone. Samples were obtained from all of these locations in sufficient quantities to perform the required analysis.
Clay, Water, and Salt: Controls on the Permeability of Fine-Grained Sedimentary Rocks.
Bourg, Ian C; Ajo-Franklin, Jonathan B
2017-09-19
The ability to predict the permeability of fine-grained soils, sediments, and sedimentary rocks is a fundamental challenge in the geosciences with potentially transformative implications in subsurface hydrology. In particular, fine-grained sedimentary rocks (shale, mudstone) constitute about two-thirds of the sedimentary rock mass and play important roles in three energy technologies: petroleum geology, geologic carbon sequestration, and radioactive waste management. The problem is a challenging one that requires understanding the properties of complex natural porous media on several length scales. One inherent length scale, referred to hereafter as the mesoscale, is associated with the assemblages of large grains of quartz, feldspar, and carbonates over distances of tens of micrometers. Its importance is highlighted by the existence of a threshold in the core-scale mechanical properties and regional-scale energy uses of shale formations at a clay content X_clay ≈ 1/3, as predicted by an ideal packing model where a fine-grained clay matrix fills the gaps between the larger grains. A second important length scale, referred to hereafter as the nanoscale, is associated with the aggregation and swelling of clay particles (in particular, smectite clay minerals) over distances of tens of nanometers. Mesoscale phenomena that influence permeability are primarily mechanical and include, for example, the ability of contacts between large grains to prevent the compaction of the clay matrix. Nanoscale phenomena that influence permeability tend to be chemomechanical in nature, because they involve strong impacts of aqueous chemistry on clay swelling. The second length scale remains much less well characterized than the first, because of the inherent challenges associated with the study of strongly coupled nanoscale phenomena.
Advanced models of the nanoscale properties of fine-grained media rely predominantly on the Derjaguin-Landau-Verwey-Overbeek (DLVO) theory, a mean field theory of colloidal interactions that accurately predicts clay swelling in a narrow range of conditions (low salinity, low compaction, Na⁺ counterions). An important feature of clay swelling that is not predicted by these models is the coexistence, in most conditions of aqueous chemistry and dry bulk density, of two types of pores between parallel smectite particles: mesopores with a pore width of >3 nm that are controlled by long-range interactions (the osmotic swelling regime) and nanopores with a pore width of <1 nm that are controlled by short-range interactions (the crystalline swelling regime). Nanogeochemical characterization and simulation techniques, including coarse-grained and all-atom molecular dynamics simulations, hold significant promise for the development of advanced constitutive relations that predict this coexistence and its dependence on aqueous chemistry.
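The mesoscale threshold at X_clay ≈ 1/3 mentioned above follows from the ideal packing picture, in which clay fills the pore space between load-bearing grains below the threshold and the grains float in a clay matrix above it. A minimal sketch of that mixture-porosity model; phi_sand and phi_clay are illustrative assumptions, not values from the paper:

```python
def ideal_packing_porosity(x_clay, phi_sand=1/3, phi_clay=0.4):
    """Porosity of a sand-clay mixture under the ideal packing model.

    Below x_clay ~ phi_sand the rock is grain-supported and clay (with its
    own internal porosity) fills the gaps between large grains; above it,
    the large grains are dispersed in a clay matrix. The porosity minimum
    sits at the threshold, which is the origin of the X_clay ~ 1/3 break.
    """
    if x_clay < phi_sand:
        # grain-supported: clay progressively fills the sand pore space
        return phi_sand - x_clay * (1.0 - phi_clay)
    # clay-supported: total porosity is that of the clay matrix fraction
    return x_clay * phi_clay
```

Note that the two branches meet continuously at x_clay = phi_sand, where the porosity (and, broadly, the permeability) is lowest.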
Khachatryan, Vardan
2015-06-17
A search for third-generation squarks in fully hadronic final states is presented, using data samples corresponding to integrated luminosities of 19.4 or 19.7 fb⁻¹, collected at a centre-of-mass energy of 8 TeV with the CMS detector at the LHC. Three mutually exclusive searches are presented, each optimized for a different decay topology. They include a multijet search requiring one fully reconstructed top quark, a dijet search requiring one or two jets originating from b quarks, and a monojet search. No excesses above the standard model expectations are seen, and limits are set on top and bottom squark production in the context of simplified models of supersymmetry.
ERIC Educational Resources Information Center
Christensen, Bruce K.; Girard, Todd A.; Bagby, R. Michael
2007-01-01
An eight-subtest short form (SF8) of the Wechsler Adult Intelligence Scale, Third Edition (WAIS-III), maintaining equal representation of each index factor, was developed for use with psychiatric populations. Data were collected from a mixed inpatient/outpatient sample (99 men and 101 women) referred for neuropsychological assessment. Psychometric…
Local Navajo Norms for the Wechsler Intelligence Scale for Children: Third Edition.
ERIC Educational Resources Information Center
Tempest, Phyllis
1998-01-01
A project developed Navajo norms for the Wechsler Intelligence Scale for Children, Third Edition (WISC-III). Urban Navajo students and those who were proficient in English had higher WISC-III verbal scores than rural Navajo students and those who were functional in English. English-language proficiency did not affect scores on nonverbal…
A stochastic differential equation model for the foraging behavior of fish schools.
Tạ, Tôn Việt; Nguyen, Linh Thi Hoai
2018-03-15
Constructing models of living organisms locating food sources has important implications for understanding animal behavior and for the development of distribution technologies. This paper presents a novel simple model of stochastic differential equations for the foraging behavior of fish schools in a space including obstacles. The model is studied numerically. Three configurations of space with various food locations are considered. In the first configuration, fish swim in free but limited space. All individuals can find food with large probability while keeping their school structure. In the second and third configurations, they move in limited space with one and two obstacles, respectively. Our results reveal that the probability of foraging success is highest in the first configuration, and smallest in the third one. Furthermore, when school size increases up to an optimal value, the probability of foraging success tends to increase. When it exceeds an optimal value, the probability tends to decrease. The results agree with experimental observations.
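The paper's specific drift and school-interaction terms are not given in the abstract, but stochastic differential equation models of this kind are typically integrated with Euler-Maruyama time stepping. A generic one-dimensional sketch with a hypothetical attraction-to-food drift, not the paper's model:

```python
import math
import random

def euler_maruyama(x0, food, sigma=0.5, dt=0.01, steps=2000, seed=1):
    """Integrate dX = (food - X) dt + sigma dW, a toy drift toward a food source.

    Euler-Maruyama update: X <- X + f(X) dt + sigma * sqrt(dt) * N(0, 1).
    The drift term here is a placeholder; the actual foraging/interaction
    terms of the paper are not reproduced.
    """
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        drift = food - x                                  # attraction toward food
        x += drift * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x
```

With these parameters the process relaxes to fluctuate around the food location, which is the qualitative behavior the configurations in the study compare.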
NASA Astrophysics Data System (ADS)
von Boetticher, Albrecht; Turowski, Jens M.; McArdell, Brian; Rickenmann, Dieter
2016-04-01
Debris flows are frequent natural hazards that cause massive damage. A wide range of debris flow models try to cover the complex flow behavior that arises from the inhomogeneous material mixture of water with clay, silt, sand, and gravel. The energy dissipation between moving grains depends on grain collisions and tangential friction, and the viscosity of the interstitial fine-material suspension depends on the shear gradient. Thus a rheology description needs to be sensitive to the local pressure and shear rate, making the three-dimensional flow structure a key issue for flows in complex terrain. Furthermore, the momentum exchange between the granular and fluid phases should account for the presence of larger particles. We model the fine-material suspension with a Herschel-Bulkley rheology law, and represent the gravel with the Coulomb-viscoplastic rheology of Domnik & Pudasaini (Domnik et al. 2013). Both composites are described by two phases that can mix; a third phase representing the air is kept separate to capture the free surface. The fluid dynamics are solved in three dimensions using the finite volume open-source code OpenFOAM. Computational costs are kept reasonable by using the Volume of Fluid method to solve only one phase-averaged system of Navier-Stokes equations. The Herschel-Bulkley parameters are modeled as a function of water content, volumetric solid concentration of the mixture, clay content and its mineral composition (Coussot et al. 1989, Yu et al. 2013). The gravel phase properties needed for the Coulomb-viscoplastic rheology are defined by the angle of repose of the gravel. In addition to this basic setup, larger grains and the corresponding grain collisions can be introduced by a coupled Lagrangian particle simulation. Based on the local Savage number, a diffusive term in the gravel phase can activate phase separation.
The resulting model can reproduce the sensitivity of the debris flow to water content and channel bed roughness, as illustrated with lab-scale and large-scale experiments. A large-scale natural landslide event down a curved channel is presented to show the model performance at such a scale, calibrated based on the observed surface super-elevation.
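For reference, the Herschel-Bulkley law used above for the fine-material suspension relates shear stress to shear rate as

```latex
\tau = \tau_y + K\,\dot{\gamma}^{\,n},
```

with yield stress τ_y, consistency K, and flow index n (n < 1 for shear-thinning suspensions); per the abstract, these parameters are modeled as functions of water content, volumetric solid concentration, and clay content and mineralogy.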
Computing chemical organizations in biological networks.
Centler, Florian; Kaleta, Christoph; di Fenizio, Pietro Speroni; Dittrich, Peter
2008-07-15
Novel techniques are required to analyze computational models of intracellular processes as they increase steadily in size and complexity. The theory of chemical organizations has recently been introduced as such a technique that links the topology of biochemical reaction network models to their dynamical repertoire. The network is decomposed into algebraically closed and self-maintaining subnetworks called organizations. They form a hierarchy representing all feasible system states, including all steady states. We present three algorithms to compute the hierarchy of organizations for network models provided in SBML format. Two of them compute the complete organization hierarchy, while the third one uses heuristics to obtain a subset of all organizations for large models. While the constructive approach computes the hierarchy starting from the smallest organization in a bottom-up fashion, the flux-based approach employs self-maintaining flux distributions to determine organizations. A runtime comparison on 16 different network models of natural systems showed that neither of the two exhaustive algorithms is superior in all cases. Studying a 'genome-scale' network model with 762 species and 1193 reactions, we demonstrate how the organization hierarchy helps to uncover the model structure and allows the model's quality to be evaluated, for example by detecting components and subsystems of the model whose maintenance is not explained by the model. All data and a Java implementation that plugs into the Systems Biology Workbench are available from http://www.minet.uni-jena.de/csb/prj/ot/tools.
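The algebraic closure underlying organizations can be computed by a simple fixed-point iteration: keep adding the products of any reaction whose reactants are already in the set. A minimal sketch (the self-maintenance test, which requires a flux computation, is deliberately omitted):

```python
def closure(species, reactions):
    """Smallest closed superset of `species`.

    `reactions` is a list of (reactants, products) pairs of frozensets.
    A set is closed when no reaction firable from within it produces a
    species outside it; we iterate until no reaction adds anything new.
    """
    closed = set(species)
    changed = True
    while changed:
        changed = False
        for reactants, products in reactions:
            # reaction can fire inside `closed` but produces new species
            if reactants <= closed and not products <= closed:
                closed |= products
                changed = True
    return closed
```

This corresponds to the "closed" half of an organization; the constructive algorithm in the paper additionally filters closed sets for self-maintenance.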
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolinger, Mark; Weaver, Samantha; Zuboy, Jarett
2015-05-22
Recently announced low-priced power purchase agreements (PPAs) for US utility-scale photovoltaic (PV) projects suggest $50/MWh solar might be viable under certain conditions. To explore this possibility, this paper draws on an increasing wealth of empirical data to analyze trends in three of the most important PPA price drivers: upfront installed project prices, operations and maintenance (O&M) costs, and capacity factors. Average installed prices among a sample of utility-scale PV projects declined by more than one third (from $5.8/W_AC to $3.7/W_AC) from the 2007-2009 period through 2013, even as costlier systems with crystalline-silicon modules, sun tracking, and higher inverter loading ratios (ILRs) have constituted an increasing proportion of total utility-scale PV capacity (all values shown here are in 2013 dollars). Actual and projected O&M costs from a very small sample of projects appear to range from $20-$40/kW_AC-year. Furthermore, the average net capacity factor is 30% for projects installed in 2012, up from 24% for projects installed in 2010, owing to better solar resources, higher ILRs, and greater use of tracking among the more recent projects. Based on these trends, a pro-forma financial model suggests that $50/MWh utility-scale PV is achievable using a combination of aggressive-but-achievable technical and financial input parameters (including receipt of the 30% federal investment tax credit). Although the US utility-scale PV market is still young, the rapid progress in the key metrics documented in this paper has made PV a viable competitor against other utility-scale renewable generators, and even conventional peaking generators, in certain regions of the country.
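The pro-forma logic linking the three price drivers to a $/MWh figure can be sketched with a bare capital-recovery calculation. The discount rate and lifetime below are illustrative assumptions, and the investment tax credit, depreciation, and financing structure of the paper's actual model are not included:

```python
def lcoe_per_mwh(capex_per_w, om_per_kw_yr, capacity_factor,
                 discount_rate=0.07, life_yr=25):
    """Rough levelized cost of energy in $/MWh for a PV project (per kW basis).

    A minimal capital-recovery-factor sketch: annualize the installed cost,
    add O&M, divide by annual energy. All financing detail (tax credit,
    depreciation, debt/equity split) is omitted.
    """
    crf = (discount_rate * (1 + discount_rate) ** life_yr /
           ((1 + discount_rate) ** life_yr - 1))        # capital recovery factor
    annual_cost = capex_per_w * 1000.0 * crf + om_per_kw_yr   # $/kW-year
    annual_mwh = 8760.0 * capacity_factor / 1000.0            # MWh per kW-year
    return annual_cost / annual_mwh
```

Plugging in the paper's 2013-era figures ($3.7/W, $30/kW-yr O&M, 30% capacity factor) gives a value well above $50/MWh, illustrating why the tax credit and aggressive financing assumptions matter for reaching that target.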
White, Casey B; Kumagai, Arno K; Ross, Paula T; Fantone, Joseph C
2009-05-01
The third-year students at one medical school told the authors that values core to patient-centered care were impossible to practice in clerkships, in a culture where supervisors role modeled behaviors in direct conflict with patient-centered care. As they developed a new medical student curriculum, the authors designed the Family Centered Experience (FCE) to help students achieve developmental goals and understand the importance of and provide a foundation for patient-centered care. The authors solicited members of the first cohort to complete the FCE (the class of 2007) to participate in this focus-group-based study halfway through the third year. They explored the influence of the FCE on students' experiences in the third-year clerkships, and how conflicts between the two learning experiences shaped these students' values and behaviors. Students reported that during clerkships they experienced strong feelings of powerlessness and conflict between what they had learned about patient-centered care in the first two years and what they saw role modeled in the third year. Based on students' comments, the authors categorized students into one of three groups: those whose patient-centered values were maintained, compromised, or transformed. Students revealed that their conflict was connected to feelings of powerlessness, along with exacerbating factors including limited time, concerns about expectations for their behavior, and pessimism about change. Role modeling had a significant influence on consequences related to students' patient-centered values.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, Claire L.; Bond-Lamberty, Ben; Desai, Ankur R.
A recent acceleration of model-data synthesis activities has leveraged many terrestrial carbon (C) datasets, but utilization of soil respiration (R_S) data has not kept pace with other types such as eddy covariance (EC) fluxes and soil C stocks. Here we argue that R_S data, including non-continuous measurements from survey sampling campaigns, have unrealized value and should be utilized more extensively and creatively in data synthesis and modeling activities. We identify three major challenges in interpreting R_S data, and discuss opportunities to address them. The first challenge is that when R_S is compared to ecosystem respiration (R_ECO) measured from EC towers, it is not uncommon to find substantial mismatch, indicating one or both flux methodologies are unreliable. We argue the most likely cause of mismatch is unreliable EC data, and there is an unrecognized opportunity to utilize R_S for EC quality control. The second challenge is that R_S integrates belowground heterotrophic (R_H) and autotrophic (R_A) activity, whereas modelers generally prefer partitioned fluxes, and few models include an explicit R_S output. Opportunities exist to use the total R_S flux for data assimilation and model benchmarking methods rather than less-certain partitioned fluxes. Pushing for more experiments that not only partition R_S but also monitor the age of R_A and R_H, as well as for the development of belowground R_A components in models, would allow for more direct comparison between measured and modeled values. The third challenge is that soil respiration is generally measured at a very different resolution than that needed for comparison to EC or ecosystem- to global-scale models. Measuring soil fluxes with finer spatial resolution and more extensive coverage, and downscaling EC fluxes to match the scale of R_S, will improve chamber and tower comparisons. Opportunities also exist to estimate R_H at regional scales by implementing decomposition functional types, akin to plant functional types. We conclude by discussing the benefits that wide use of R_S data will bring to model development, and database developments that will make R_S data more robust, useful, and broadly available to the research community.
Tubbs-Cooley, Heather L; Mara, Constance A; Carle, Adam C; Gurses, Ayse P
2018-02-12
The NASA Task Load Index (NASA-TLX) is a subjective workload assessment scale developed for use in aviation and increasingly applied to healthcare. The scale purports to measure overall workload as a single variable calculated by summing responses to six items. Since no data address the validity of this scoring approach in health care, we evaluated the single-factor structure of the NASA-TLX as a measure of overall workload among intensive care nurses. Confirmatory factor analysis was performed on data from two studies of nurse workload in neonatal, paediatric, and adult intensive care units. Study 1 data were obtained from 136 nurses in one neonatal intensive care unit. Study 2 data were collected from 300 nurses in 17 adult, paediatric and neonatal units. Nurses rated their workload using the NASA-TLX's paper version. A single-factor model testing whether all six items measured a single overall workload variable fit least well (RMSEA = 0.14; CFI = 0.91; TLI = 0.85). A second model that specified two items as outcomes of overall workload had acceptable fit (RMSEA = 0.08; CFI = 0.97; TLI = 0.95), while a third model of four items fit best (RMSEA = 0.06; CFI > 0.99; TLI = 0.99). A summed score from four of the six NASA-TLX items appears to most reliably measure a single overall workload variable among intensive care nurses. Copyright © 2018 Elsevier Ltd. All rights reserved.
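For orientation, the raw (unweighted) NASA-TLX overall score is the mean of the six item ratings. A sketch that also allows scoring a subset, since the study finds a four-item subset fits a single-factor model best (which four items were retained is not stated in the abstract, so none are named here):

```python
# The six standard NASA-TLX subscales, each rated on a 0-100 scale.
TLX_ITEMS = ("mental", "physical", "temporal", "performance",
             "effort", "frustration")

def raw_tlx(ratings, items=TLX_ITEMS):
    """Raw (unweighted) NASA-TLX overall workload: mean of the item ratings.

    `ratings` maps item name -> 0-100 rating. Pass a subset of item names
    via `items` to score a reduced-item variant like the one the study
    finds most reliable.
    """
    return sum(ratings[i] for i in items) / len(items)
```
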
Altered perceptual sensitivity to kinematic invariants in Parkinson's disease.
Dayan, Eran; Inzelberg, Rivka; Flash, Tamar
2012-01-01
Ample evidence exists for coupling between action and perception in neurologically healthy individuals, yet the precise nature of the internal representations shared between these domains remains unclear. One experimentally derived view is that the invariant properties and constraints characterizing movement generation are also manifested during motion perception. One prominent motor invariant is the "two-thirds power law," describing the strong relation between the kinematics of motion and the geometrical features of the path followed by the hand during planar drawing movements. The two-thirds power law not only characterizes various movement generation tasks but also seems to constrain visual perception of motion. The present study aimed to assess whether motor invariants, such as the two-thirds power law, also constrain motion perception in patients with Parkinson's disease (PD). Patients with PD and age-matched controls were asked to observe the movement of a light spot rotating on an elliptical path and to modify its velocity until it appeared to move most uniformly. As in previous reports, controls tended to choose those movements close to obeying the two-thirds power law as most uniform. Patients with PD displayed a more variable behavior, choosing on average movements closer but not equal to a constant velocity. Our results thus demonstrate impairments in how the two-thirds power law constrains motion perception in patients with PD, where this relationship between velocity and curvature appears to be preserved but scaled down. Recent hypotheses on the role of the basal ganglia in motor timing may explain these irregularities. Alternatively, these impairments in perception of movement may reflect similar deficits in motor production.
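The velocity-curvature relation at issue can be written as v = γ·κ^(−1/3) (equivalently, angular speed proportional to curvature to the two-thirds power, hence the law's name). A minimal sketch with the exponent exposed, so that β = 1/3 gives the classic law and β = 0 gives the constant-velocity motion that patients' choices tended toward:

```python
def power_law_speed(curvature, gain=1.0, beta=1/3):
    """Tangential speed under a generalized power law: v = gain * curvature**(-beta).

    beta = 1/3 reproduces the two-thirds power law; beta = 0 yields constant
    speed regardless of curvature. `gain` is the velocity gain factor.
    """
    return gain * curvature ** (-beta)
```
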
NASA Astrophysics Data System (ADS)
Dickson, N.
2009-12-01
The global observation, assimilation and prediction in numerical models of ice super-saturated (ISS) regions (ISSR) are crucial if the climate impact of aircraft condensations trails (contrails) is to be fully understood, and if, for example, contrail formation is to be avoided through aircraft operational measures. A robust assessment of the global distribution of ISSR will further this debate, and ISS event occurrence, frequency and spatial scales have recently attracted significant attention. The mean horizontal size of ISSR is 150 km (±250km) although 12-14% of ISS events occur on horizontal scales of less than 5km. The average vertical thickness of ISS layers is 600-800m (±575m) but layers ranging from 25m to 3000m have been observed, with up to one third of ISS layers thought to be less than 100m deep. Given their small scales compared to typical atmospheric model grid sizes, statistical representations of the spatial scales of ISSR are required, in both horizontal and vertical dimensions, if global occurrence of ISSR is to be adequately represented in climate models. This paper uses radiosonde launches made by the UK Meteorological Office, from the British Isles, Gibraltar, St. Helena and the Falkland Islands between January 2002 and December 2006, to investigate the probabilistic occurrence of ISSR. Specifically each radiosonde profile is divided into 50 and 100 hPa pressure layers, to emulate the coarse vertical resolution of some atmospheric models. Then the high resolution observations contained within each thick pressure layer are used to calculate an average relative humidity and an ISS fraction for each individual thick pressure layer. These relative humidity pressure layer descriptions are then linked through a probability function to produce an s-shaped curve describing the ISS fraction in any average relative humidity pressure layer. 
An empirical investigation has shown that this one curve is statistically valid for mid-latitude locations, irrespective of season and altitude; pressure layer depth, however, is an important variable. Using this empirical understanding of the s-shaped relationship, a mathematical model was developed to represent the ISS fraction within any arbitrary thick pressure layer. Here the statistical distributions of actual high-resolution RHi observations in any thick pressure layer, along with an error function, are used to mathematically describe the s-shape. Two models were developed, for the 50- and 100-hPa pressure layers, each reconstructing its respective s-shape to within 8-10% of the empirical curve. These new models can be used to represent the small-scale structure of ISS events in modelled data where only low vertical resolution is available. This will be useful in understanding, and improving, the global distribution, both observed and forecast, of ice super-saturation.
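The s-shaped link between a layer's mean relative humidity and its ISS fraction can be sketched with an error function, as the abstract describes. The midpoint and width parameters below are hypothetical placeholders for illustration only, not the fitted values from the radiosonde analysis.

```python
import math

def iss_fraction(mean_rhi, midpoint=100.0, width=15.0):
    """Map a thick pressure layer's mean RHi (%) to an ISS fraction in [0, 1].

    An erf-based s-curve: near 0 for dry layers, near 1 for strongly
    ice-super-saturated layers.  midpoint and width are illustrative.
    """
    return 0.5 * (1.0 + math.erf((mean_rhi - midpoint) / (width * math.sqrt(2.0))))
```

In use, a climate model would evaluate such a curve on each coarse layer's mean RHi to recover a sub-grid ISS fraction that the layer average alone cannot resolve.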
On the Interactions Between Planetary and Mesoscale Dynamics in the Oceans
NASA Astrophysics Data System (ADS)
Grooms, I.; Julien, K. A.; Fox-Kemper, B.
2011-12-01
Multiple-scales asymptotic methods are used to investigate the interaction of planetary and mesoscale dynamics in the oceans. We find three regimes. In the first, the slow, large-scale planetary flow sets up a baroclinically unstable background which leads to vigorous mesoscale eddy generation, but the eddy dynamics do not affect the planetary dynamics. In the second, the planetary flow feels the effects of the eddies, but appears to be unable to generate them. The first two regimes rely on horizontally isotropic large-scale dynamics. In the third regime, large-scale anisotropy, as exists for example in the Antarctic Circumpolar Current and in western boundary currents, allows the large-scale dynamics to both generate and respond to mesoscale eddies. We also discuss how the investigation may be brought to bear on the problem of parameterization of unresolved mesoscale dynamics in ocean general circulation models.
NASA Astrophysics Data System (ADS)
Donkov, Sava; Stefanov, Ivan Z.
2018-03-01
We have set ourselves the task of obtaining the probability distribution function of the mass density of a self-gravitating, isothermal, compressible turbulent fluid from its physics. We have done this in the context of a new notion: the molecular clouds ensemble. We have applied a new approach that takes into account the fractal nature of the fluid. Using the equations of the medium, under the assumption of steady state, we show that the total energy per unit mass is an invariant with respect to the fractal scales. As a next step we obtain a non-linear integral equation for the dimensionless scale Q, which is the cube root of the integral of the probability distribution function. It is solved approximately up to the leading-order term in the series expansion. We obtain two solutions. They are power-law distributions with different slopes: the first one is -1.5 at low densities, corresponding to an equilibrium between all energies at a given scale, and the second one is -2 at high densities, corresponding to free fall at small scales.
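The two limiting solutions can be stated compactly as power laws (notation assumed here: rho the mass density and P(rho) its probability distribution function; the exponents are those quoted in the abstract):

```latex
P(\rho) \;\propto\; \rho^{-1.5}
  \quad \text{(low densities: equilibrium between all energies at a given scale)},
\qquad
P(\rho) \;\propto\; \rho^{-2}
  \quad \text{(high densities: free fall at small scales)}.
```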
Sun, Yang; Shevell, Steven K
2008-01-01
The mother or daughter of a male with an X-chromosome-linked red/green color defect is an obligate carrier of the color deficient gene array. According to the Lyonization hypothesis, a female carrier's defective gene is expressed and thus carriers may have more than two types of pigments in the L/M photopigment range. An open question is how a carrier's third cone pigment in the L/M range affects the postreceptoral neural signals encoding color. Here, a model considered how the signal from the third pigment pools with signals from the normal's two pigments in the L/M range. Three alternative assumptions were considered for the signal from the third cone pigment: it pools with the signal from (1) L cones, (2) M cones, or (3) both types of cones. Spectral-sensitivity peak, optical density, and the relative number of each cone type were factors in the model. The model showed that differences in Rayleigh matches among carriers can be due to individual differences in the number of the third type of L/M cone, and the spectral sensitivity peak and optical density of the third L/M pigment; surprisingly, however, individual differences in the cone ratio of the other two cone types (one L and the other M) did not affect the match. The predicted matches were compared to Schmidt's (1934/1955) report of carriers' Rayleigh matches. For carriers of either protanomaly or deuteranomaly, these matches were not consistent with the signal from the third L/M pigment combining with only the signal from M cones. The matches could be accounted for by pooling the third-pigment's response with L-cone signals, either exclusively or randomly with M-cone responses as well.
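A minimal sketch of the pooling idea described above: the third pigment's signal combines with the signal of one cone class in proportion to relative cone numbers. Function and parameter names are illustrative assumptions; a full treatment would also include spectral-sensitivity peaks and optical densities, as the model in the study does.

```python
def pooled_channel(base_signal, third_signal, n_base, n_third):
    """Number-weighted pooling of a cone class's signal with the carrier's
    third L/M pigment signal (illustrative; weights are cone counts)."""
    return (n_base * base_signal + n_third * third_signal) / (n_base + n_third)

# Example: third pigment pooled with the L-cone channel, 3 L cones per
# third-pigment cone.  With third_signal = 0 the pooled response is simply
# diluted relative to the pure L-cone response.
pooled_l = pooled_channel(base_signal=1.0, third_signal=0.0, n_base=3, n_third=1)
```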
NASA Astrophysics Data System (ADS)
Dickson, N. C.; Gierens, K. M.; Rogers, H. L.; Jones, R. L.
2010-02-01
The global observation, assimilation and prediction in numerical models of ice super-saturated (ISS) regions (ISSR) are crucial if the climate impact of aircraft condensation trails (contrails) is to be fully understood, and if, for example, contrail formation is to be avoided through aircraft operational measures. A robust assessment of the global distribution of ISSR will further this debate, and ISS event occurrence, frequency and spatial scales have recently attracted significant attention. The mean horizontal path length through ISSR as observed by MOZAIC aircraft is 150 km (±250 km). The average vertical thickness of ISS layers is 600-800 m (±575 m), but layers ranging from 25 m to 3000 m have been observed, with up to one third of ISS layers thought to be less than 100 m deep. Given their small scales compared to typical atmospheric model grid sizes, statistical representations of the spatial scales of ISSR are required, in both horizontal and vertical dimensions, if the global occurrence of ISSR is to be adequately represented in climate models. This paper uses radiosonde launches made by the UK Meteorological Office, from the British Isles, Gibraltar, St. Helena and the Falkland Islands between January 2002 and December 2006, to investigate the probabilistic occurrence of ISSR. Specifically, each radiosonde profile is divided into 50- and 100-hPa pressure layers, to emulate the coarse vertical resolution of some atmospheric models. Then the high-resolution observations contained within each thick pressure layer are used to calculate an average relative humidity and an ISS fraction for each individual thick pressure layer. These relative humidity pressure layer descriptions are then linked through a probability function to produce an s-shaped curve describing the ISS fraction in any average relative humidity pressure layer.
An empirical investigation has shown that this one curve is statistically valid for mid-latitude locations, irrespective of season and altitude; pressure layer depth, however, is an important variable. Using this empirical understanding of the s-shaped relationship, a mathematical model was developed to represent the ISS fraction within any arbitrary thick pressure layer. Here the statistical distributions of actual high-resolution RHi observations in any thick pressure layer, along with an error function, are used to mathematically describe the s-shape. Two models were developed, for the 50- and 100-hPa pressure layers, each reconstructing its respective s-shape to within 8-10% of the empirical curve. These new models can be used to represent the small-scale structure of ISS events in modelled data where only low vertical resolution is available. This will be useful in understanding, and improving, the global distribution, both observed and forecast, of ice super-saturation.
NASA Technical Reports Server (NTRS)
Foreman, J. W., Jr.; Cardone, J. M.
1973-01-01
The mathematical design of the aspheric third mirror for the three-mirror X-ray telescope (TMXRT) is presented, along with the imaging characteristics of the telescope obtained by a ray-trace analysis. The present design effort has been directed entirely toward obtaining an aspheric third mirror that will be compatible with the existing S-056 paraboloidal-hyperboloidal mirrors. This compatibility will facilitate the construction of a prototype model of the TMXRT, since only one new mirror will need to be fabricated in order to obtain a working model.
The More the Merrier? Entropy and Statistics of Asexual Reproduction in Freshwater Planarians
NASA Astrophysics Data System (ADS)
Quinodoz, Sofia; Thomas, Michael A.; Dunkel, Jörn; Schötz, Eva-Maria
2011-04-01
The trade-off between traits in life-history strategies has been widely studied for sexual and parthenogenetic organisms, but relatively little is known about the reproduction strategies of asexual animals. Here, we investigate clonal reproduction in the freshwater planarian Schmidtea mediterranea, an important model organism for regeneration and stem cell research. We find that these flatworms adopt a randomized reproduction strategy that comprises both asymmetric binary fission and fragmentation (generation of multiple offspring during a reproduction cycle). Fragmentation in planarians has primarily been regarded as an abnormal behavior in the past; using a large-scale experimental approach, we now show that about one third of the reproduction events in S. mediterranea are fragmentations, implying that fragmentation is part of their normal reproductive behavior. Our analysis further suggests that certain characteristic aspects of the reproduction statistics can be explained in terms of a maximum relative entropy principle.
The Delicate Analysis of Short-Term Load Forecasting
NASA Astrophysics Data System (ADS)
Song, Changwei; Zheng, Yuan
2017-05-01
This paper proposes a new method for short-term load forecasting based on the similar-day method, the correlation coefficient and the Fast Fourier Transform (FFT), to achieve a precise analysis of load variation from three aspects (typical day, correlation coefficient, spectral analysis) and three dimensions (time, industry, and the main factors influencing the load characteristic, such as national policies, regional economy, holidays, electricity consumption and so on). First, the one-class SVM algorithm is adopted to select the typical day. Second, the correlation coefficient method is used to obtain the direction and strength of the linear relationship between two random variables, which can reflect the influence of macro policy and the scale of production on the electricity price. Third, a Fourier transform residual error correction model is proposed to capture the nature of the load extracted from the residual error. Finally, simulation results indicate the validity and engineering practicability of the proposed method.
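The correlation step presumably uses Pearson's correlation coefficient, which measures exactly the direction and strength of a linear relationship; a minimal sketch (function name and the toy series are illustrative, not the paper's data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series:
    +1 for a perfect increasing linear relation, -1 for a decreasing one."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))
```

In a similar-day scheme, such a coefficient could rank candidate historical days by how closely their load curves track the target day's known partial curve.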
Adaptive evolution of complex innovations through stepwise metabolic niche expansion.
Szappanos, Balázs; Fritzemeier, Jonathan; Csörgő, Bálint; Lázár, Viktória; Lu, Xiaowen; Fekete, Gergely; Bálint, Balázs; Herczeg, Róbert; Nagy, István; Notebaart, Richard A; Lercher, Martin J; Pál, Csaba; Papp, Balázs
2016-05-20
A central challenge in evolutionary biology concerns the mechanisms by which complex metabolic innovations requiring multiple mutations arise. Here, we propose that metabolic innovations accessible through the addition of a single reaction serve as stepping stones towards the later establishment of complex metabolic features in another environment. We demonstrate the feasibility of this hypothesis through three complementary analyses. First, using genome-scale metabolic modelling, we show that complex metabolic innovations in Escherichia coli can arise via changing nutrient conditions. Second, using phylogenetic approaches, we demonstrate that the acquisition patterns of complex metabolic pathways during the evolutionary history of bacterial genomes support the hypothesis. Third, we show how adaptation of laboratory populations of E. coli to one carbon source facilitates the later adaptation to another carbon source. Our work demonstrates how complex innovations can evolve through series of adaptive steps without the need to invoke non-adaptive processes.
Adaptive evolution of complex innovations through stepwise metabolic niche expansion
Szappanos, Balázs; Fritzemeier, Jonathan; Csörgő, Bálint; Lázár, Viktória; Lu, Xiaowen; Fekete, Gergely; Bálint, Balázs; Herczeg, Róbert; Nagy, István; Notebaart, Richard A.; Lercher, Martin J.; Pál, Csaba; Papp, Balázs
2016-01-01
A central challenge in evolutionary biology concerns the mechanisms by which complex metabolic innovations requiring multiple mutations arise. Here, we propose that metabolic innovations accessible through the addition of a single reaction serve as stepping stones towards the later establishment of complex metabolic features in another environment. We demonstrate the feasibility of this hypothesis through three complementary analyses. First, using genome-scale metabolic modelling, we show that complex metabolic innovations in Escherichia coli can arise via changing nutrient conditions. Second, using phylogenetic approaches, we demonstrate that the acquisition patterns of complex metabolic pathways during the evolutionary history of bacterial genomes support the hypothesis. Third, we show how adaptation of laboratory populations of E. coli to one carbon source facilitates the later adaptation to another carbon source. Our work demonstrates how complex innovations can evolve through series of adaptive steps without the need to invoke non-adaptive processes. PMID:27197754
Neath, Ian; Saint-Aubin, Jean
2011-06-01
The serial position function, with its characteristic primacy and recency effects, is one of the most ubiquitous findings in episodic memory tasks. In contrast, there are only two demonstrations of such functions in tasks thought to tap semantic memory. Here, we provide a third demonstration, showing that free recall of the prime ministers of Canada also results in a serial position function. Scale Independent Memory, Perception, and Learning (SIMPLE), a local distinctiveness model of memory that was designed to account for serial position effects in episodic memory, fit the data. According to SIMPLE, serial position functions observed in episodic and semantic memory all reflect the relative distinctiveness principle: items will be well remembered to the extent that they are more distinct than competing items at the time of retrieval.
NASA Technical Reports Server (NTRS)
Stoll, F.; Tremback, J. W.; Arnaiz, H. H.
1979-01-01
A study was performed to determine the effects of the number and position of total pressure probes on the calculation of five compressor face distortion descriptors. This study used three sets of 320 steady state total pressure measurements that were obtained with a special rotating rake apparatus in wind tunnel tests of a mixed-compression inlet. The inlet was a one third scale model of the inlet on a YF-12 airplane, and it was tested in the wind tunnel at representative flight conditions at Mach numbers above 2.0. The study shows that large errors resulted in the calculation of the distortion descriptors even with a number of probes that were considered adequate in the past. There were errors as large as 30 and -50 percent in several distortion descriptors for a configuration consisting of eight rakes with five equal-area-weighted probes on each rake.
McCrary, Hilary C; Krate, Jonida; Savilo, Christine E; Tran, Melissa H; Ho, Hang T; Adamas-Rappaport, William J; Viscusi, Rebecca K
2016-11-01
The aim of our study was to determine if a fresh cadaver model is a viable method for teaching ultrasound (US)-guided breast biopsy of palpable breast lesions. Third-year medical students were assessed both preinstruction and postinstruction on their ability to perform US-guided needle aspiration or biopsy of artificially created masses using a 10-item checklist. Forty-one third-year medical students completed the cadaver laboratory as part of the surgery clerkship. Eight items on the checklist were found to be significantly different between pre-testing and post-testing. The mean preinstruction score was 2.4, whereas the mean postinstruction score was 7.10 (P < .001). Fresh cadaver models have been widely used in medical education. However, there are few fresh cadaver models that provide instruction on procedures done in the outpatient setting. Our model was found to be an effective method for the instruction of US-guided breast biopsy among medical students.
Marsteller, Sara J; Zolotova, Natalya; Knudson, Kelly J
2017-02-01
Hypothetical models of socioeconomic organization in pre-Columbian societies generated from the rich ethnohistoric record in the New World require testing against the archaeological and bioarchaeological record. Here, we test ethnohistorian Maria Rostworowski's horizontality model of socioeconomic specialization for the Central Andean coast by reconstructing dietary practices in the Late Intermediate Period (c. AD 900-1470) Ychsma polity to evaluate complexities in social behaviors prior to Inka imperial influence. Stable carbon and nitrogen isotope analysis of archaeological human bone collagen and apatite (δ13Ccol[VPDB], δ15Ncol[AIR], δ13Cap[VPDB]) and locally available foods is used to reconstruct the diets of individuals from Armatambo (n = 67), associated ethnohistorically with fishing, and Rinconada Alta (n = 46), associated ethnohistorically with agriculture. Overall, mean δ15Ncol[AIR] is significantly greater at Armatambo, while mean δ13Ccol[VPDB] and mean δ13Cap[VPDB] are not significantly different between the two sites. Within large-scale trends, adult mean δ13Cap[VPDB] is significantly greater at Armatambo. In addition, nearly one-third of Armatambo adults and adolescents show divergent δ15Ncol[AIR] values. These results indicate greater reliance on marine resources at Armatambo versus Rinconada Alta, supporting the ethnohistoric model of socioeconomic specialization for the Central Andean coast. Deviations from large-scale dietary trends suggest complexities not accounted for by the ethnohistoric model, including intra-community subsistence specialization and/or variation in resource access.
Carbon Nanotubules: Building Blocks for Nanometer-Scale Engineering
NASA Technical Reports Server (NTRS)
Sinnott, Susan B.
1999-01-01
The proposed work consisted of two projects: the investigation of fluid permeation and diffusion through ultrafiltration membranes composed of carbon nanotubules, and the design and study of molecular transistors composed of nanotubules. The progress made on each project is summarized, and discussion of additional projects, one of which is a continuation of work supported by another grant, is also included. The first project was Liquid Interactions within a Nanotubule Membrane. The second was the design of nanometer-scale hydrocarbon electronic devices. The third was the investigation of the mechanical properties of nanotubules and nanotubule bundles. The fourth project was to investigate the growth mechanisms of carbon nanotubules.
Phase behavior of the modified-Yukawa fluid and its sticky limit.
Schöll-Paschinger, Elisabeth; Valadez-Pérez, Néstor E; Benavides, Ana L; Castañeda-Priego, Ramón
2013-11-14
Simple model systems with short-range attractive potentials have turned out to play a crucial role in determining theoretically the phase behavior of proteins or colloids. However, as pointed out by D. Gazzillo [J. Chem. Phys. 134, 124504 (2011)], one of these widely used model potentials, namely, the attractive hard-core Yukawa potential, shows an unphysical behavior when one approaches its sticky limit, since the second virial coefficient is diverging. However, it is exactly this second virial coefficient that is typically used to depict the experimental phase diagram for a large variety of complex fluids and that, in addition, plays an important role in the Noro-Frenkel scaling law [J. Chem. Phys. 113, 2941 (2000)], which is thus not applicable to the Yukawa fluid. To overcome this deficiency of the attractive Yukawa potential, D. Gazzillo has proposed the so-called modified hard-core attractive Yukawa fluid, which allows one to correctly obtain the second and third virial coefficients of adhesive hard-spheres starting from a system with an attractive logarithmic Yukawa-like interaction. In this work we present liquid-vapor coexistence curves for this system and investigate its behavior close to the sticky limit. Results have been obtained with the self-consistent Ornstein-Zernike approximation (SCOZA) for values of the reduced inverse screening length parameter up to 18. The accuracy of SCOZA has been assessed by comparison with Monte Carlo simulations.
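For reference, the unmodified hard-core attractive Yukawa potential discussed above has the standard form below, with sigma the hard-core diameter, epsilon the attraction strength at contact, and z the reduced inverse screening length (the parameter pushed up to 18 in this study). Gazzillo's modified version replaces the exponential attraction with a logarithmic Yukawa-like tail, which is not reproduced here.

```latex
u(r) =
\begin{cases}
  \infty, & r < \sigma, \\[4pt]
  -\,\varepsilon \,\dfrac{\sigma}{r}\, e^{-z (r - \sigma)/\sigma}, & r \ge \sigma.
\end{cases}
```

The sticky limit corresponds to letting the attraction range shrink (z to infinity) while its strength grows, which is where the second virial coefficient of this unmodified form diverges.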
Modelling soil water repellency at the daily scale in Portuguese burnt and unburnt eucalypt stands
NASA Astrophysics Data System (ADS)
Nunes, João Pedro; van der Slik, Bart; Marisa Santos, Juliana; Malvar Cortizo, Maruxa; Keizer, Jan Jacob
2014-05-01
Soil water repellency can impact soil hydrology, especially soil wetting. This creates a challenge for hydrological modelling in repellency-prone regions, since current models are generally unable to take it into account. This communication focuses on the development and evaluation of a daily water balance model that takes repellency into account, adapted for eucalypt forest plantations in the north-western Iberian Peninsula. The model was developed and tested using data from three eucalypt stands. Two were burnt in 2005, and the data included bi-weekly measurements of soil moisture and water repellency along a transect, during two years. The third was not burnt, and the data included both weekly measurements of soil water repellency and soil moisture along transects, and continuous measurements of soil moisture at one point, performed for one year between 2011 and 2012. All sites showed low repellency during the wet winter season (although less so in the unburnt site, as the winter of 2011/12 was comparatively dry) and high repellency during the dry summer season; this seasonal pattern was strongly related to soil moisture fluctuations. The water balance model was based on the Thornthwaite-Mather method. Interception and tree potential evapotranspiration were estimated using satellite imagery (MODIS NDVI), the first by estimating LAI and applying the Gash interception model, and the second using the SAMIR approach. The model itself was modified by first estimating soil water repellency from soil moisture, using an empirical relation taking into account repellent and non-repellent moisture thresholds for each site, and afterwards using soil water repellency as a limiting factor on soil wettability, by limiting the fraction of infiltration that could replenish soil moisture. Results indicate that this simple approach to simulating repellency can provide adequate model performance and can be easily included in hydrological models.
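A highly simplified sketch of the modification described above: repellency, estimated from current soil moisture between two site-specific thresholds, limits the fraction of daily infiltration that can replenish soil storage. All names, threshold values and the linear form of the relation are illustrative assumptions, not the paper's calibrated relations.

```python
def wettability_factor(theta, theta_repellent=0.05, theta_wettable=0.20):
    """0 = fully repellent, 1 = fully wettable; linear between the two
    (hypothetical) site-specific soil-moisture thresholds."""
    if theta <= theta_repellent:
        return 0.0
    if theta >= theta_wettable:
        return 1.0
    return (theta - theta_repellent) / (theta_wettable - theta_repellent)

def daily_soil_moisture(theta, rain, et, capacity):
    """One daily step: only the wettable fraction of rain infiltrates,
    then evapotranspiration is removed, bounded by [0, capacity]."""
    infiltration = rain * wettability_factor(theta)
    return max(0.0, min(capacity, theta + infiltration - et))
```

The key behaviour is the feedback: a dry, repellent soil admits little infiltration, so it rewets slowly even under rain, matching the observed summer persistence of repellency.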
NASA Astrophysics Data System (ADS)
Guerrero, César; Pedrosa, Elisabete T.; Pérez-Bejarano, Andrea; Keizer, Jan Jacob
2014-05-01
The temperature reached on soils is an important parameter needed to describe wildfire effects. However, methods for measuring the temperature reached on burned soils have been poorly developed. Recently, near-infrared (NIR) spectroscopy has been proposed as a valuable tool for this purpose. The NIR spectrum of a soil sample contains information on the organic matter (quantity and quality), clay (quantity and quality), minerals (such as carbonates and iron oxides) and water content. Some of these components are modified by heat, and each temperature causes a distinct group of changes, leaving a characteristic fingerprint on the NIR spectrum. This technique requires a model (or calibration) in which the changes in the NIR spectra are related to the temperature reached. For the development of the model, several aliquots are heated at known temperatures and used as standards in the calibration set. This model offers the possibility of estimating the temperature reached on a burned sample from its NIR spectrum. However, the estimation of the temperature reached using NIR spectroscopy is due to changes in several components and cannot be attributed to changes in a single soil component. Thus, we can estimate the temperature reached through the interaction between temperature and the thermo-sensitive soil components. In addition, we cannot expect these components to be uniformly distributed, even at small scales. Consequently, the proportion of these soil components can vary spatially across the site. This variation will be present in the samples used to construct the model and also in the samples affected by the wildfire. Therefore, the strategies followed to develop robust models should focus on managing this expected variation. In this work we compared the prediction accuracy of models constructed with different approaches.
These approaches were designed to provide insight into how to distribute the efforts needed for the development of robust models, since this step is the bottleneck of this technique. In the first approach, a plot-scale model was used to predict the temperature reached in samples collected in other plots from the same site. In a plot-scale model, all the heated aliquots come from a single plot-scale sample. As expected, the results obtained with this approach were disappointing, because it assumed that a plot-scale model would be enough to represent the whole variability of the site. The accuracy (measured as the root mean square error of prediction, hereinafter RMSEP) was 86 °C, and the bias was also high (>30 °C). In the second approach, the temperatures predicted by several plot-scale models were averaged. The accuracy improved (RMSEP = 65 °C) with respect to the first approach, because the variability from several plots was considered and biased predictions were partially counterbalanced. However, this approach implies more effort, since several plot-scale models are needed. In the third approach, the predictions were obtained with site-scale models, constructed with aliquots from several plots. In this case, the results were accurate: the RMSEP was around 40 °C, the bias was very small (<1 °C) and the R2 was 0.92. As expected, this approach clearly outperformed the second, despite requiring the same effort. In a plot-scale model, only one interaction between temperature and soil components was modelled, whereas several different interactions between temperature and soil components were present in the calibration matrix of a site-scale model. Consequently, the site-scale models were able to model the temperature reached while excluding the influence of differences in soil composition, resulting in models more robust to that variation.
In summary, the results highlight the importance of an adequate strategy for developing robust and accurate models with moderate effort, and how a wrong strategy can result in misleading predictions.
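The two accuracy metrics quoted above have standard definitions; a minimal sketch, assuming plain sequences of predicted and observed temperatures (variable names are illustrative):

```python
import math

def rmsep(predicted, observed):
    """Root mean square error of prediction."""
    n = len(observed)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)

def bias(predicted, observed):
    """Mean signed error: positive when predictions run hot on average."""
    n = len(observed)
    return sum(p - o for p, o in zip(predicted, observed)) / n
```

Averaging several plot-scale models reduces RMSEP precisely because opposing biases partially cancel in the mean, which is the mechanism the second approach exploits.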
Clow, David W.; Mast, M. Alisa
2010-01-01
Concentrations of weathering products in streams often show relatively little variation compared to changes in discharge, both at event and annual scales. In this study, several hypothesized mechanisms for this “chemostatic behavior” were evaluated, and the potential for those mechanisms to influence relations between climate, weathering fluxes, and CO2 consumption via mineral weathering was assessed. Data from Loch Vale, an alpine catchment in the Colorado Rocky Mountains, indicates that cation exchange and seasonal precipitation and dissolution of amorphous or poorly crystalline aluminosilicates are important processes that help regulate solute concentrations in the stream; however, those processes have no direct effect on CO2 consumption in catchments. Hydrograph separation analyses indicate that old water stored in the subsurface over the winter accounts for about one-quarter of annual streamflow, and almost one-half of annual fluxes of Na and SiO2 in the stream; thus, flushing of old water by new water (snowmelt) is an important component of chemostatic behavior. Hydrologic flushing of subsurface materials further induces chemostatic behavior by reducing mineral saturation indices and increasing reactive mineral surface area, which stimulate mineral weathering rates. CO2 consumption by carbonic acid mediated mineral weathering was quantified using mass-balance calculations; results indicated that silicate mineral weathering was responsible for approximately two-thirds of annual CO2 consumption, and carbonate weathering was responsible for the remaining one-third. CO2 consumption was strongly dependent on annual precipitation and temperature; these relations were captured in a simple statistical model that accounted for 71% of the annual variation in CO2 consumption via mineral weathering in Loch Vale.
NASA Astrophysics Data System (ADS)
Ercan, Mehmet Bulent
Watershed-scale hydrologic models are used for a variety of applications, from flood prediction, to drought analysis, to water quality assessments. A particular challenge in applying these models is the calibration of the model parameters, many of which are difficult to measure at the watershed scale. A primary goal of this dissertation is to contribute new computational methods and tools for the calibration of watershed-scale hydrologic models, and the Soil and Water Assessment Tool (SWAT) model in particular. SWAT is a physically based, watershed-scale hydrologic model developed to predict the impact of land management practices on water quality and quantity. The dissertation follows a manuscript format, meaning it comprises three separate but interrelated research studies. The first two research studies focus on SWAT model calibration, and the third presents an application of the new calibration methods and tools to study climate change impacts on water resources in the Upper Neuse Watershed of North Carolina using SWAT. The objective of the first two studies is to overcome computational challenges associated with the calibration of SWAT models. The first study evaluates a parallel SWAT calibration tool built using the Windows Azure cloud environment and a parallel version of the Dynamically Dimensioned Search (DDS) calibration method modified to run in Azure. The calibration tool was tested for six model scenarios constructed using three watersheds of increasing size (the Eno, Upper Neuse, and Neuse) for both 2-year and 10-year simulation durations. Leveraging the cloud as an on-demand computing resource allowed for a significantly reduced calibration time, such that calibration of the Neuse watershed went from taking 207 hours on a personal computer to only 3.4 hours using 256 cores in the Azure cloud.
The second study aims at increasing SWAT model calibration efficiency by creating an open source, multi-objective calibration tool using the Non-Dominated Sorting Genetic Algorithm II (NSGA-II). This tool was demonstrated through an application for the Upper Neuse Watershed in North Carolina, USA. The objective functions used for the calibration were Nash-Sutcliffe (E) and Percent Bias (PB), and the objective sites were the Flat, Little, and Eno watershed outlets. The results show that the use of multi-objective calibration algorithms for SWAT calibration improved model performance especially in terms of minimizing PB compared to the single objective model calibration. The third study builds upon the first two studies by leveraging the new calibration methods and tools to study future climate impacts on the Upper Neuse watershed. Statistically downscaled outputs from eight Global Circulation Models (GCMs) were used for both low and high emission scenarios to drive a well calibrated SWAT model of the Upper Neuse watershed. The objective of the study was to understand the potential hydrologic response of the watershed, which serves as a public water supply for the growing Research Triangle Park region of North Carolina, under projected climate change scenarios. The future climate change scenarios, in general, indicate an increase in precipitation and temperature for the watershed in coming decades. The SWAT simulations using the future climate scenarios, in general, suggest an increase in soil water and water yield, and a decrease in evapotranspiration within the Upper Neuse watershed. 
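The two objective functions named above, Nash-Sutcliffe efficiency (E) and Percent Bias (PB), have standard forms in hydrologic calibration; a minimal sketch (sign conventions for PB vary across the literature; here positive values mean over-prediction):

```python
def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean the
    model is no better than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def percent_bias(sim, obs):
    """Percent bias of simulated vs. observed totals; 0 is ideal."""
    return 100.0 * sum(s - o for s, o in zip(sim, obs)) / sum(obs)
```

A multi-objective algorithm such as NSGA-II would search for parameter sets that trade off these two criteria across the three objective sites, returning a Pareto front rather than a single best fit.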
In summary, this dissertation advances the field of watershed-scale hydrologic modeling by (i) providing some of the first work to apply cloud computing for the computationally demanding task of model calibration; (ii) providing a new, open source library that can be used by SWAT modelers to perform multi-objective calibration of their models; and (iii) advancing understanding of climate change impacts on water resources for an important watershed in the Research Triangle Park region of North Carolina. The third study leveraged the methodological advances presented in the first two studies. Therefore, the dissertation contains three independent but interrelated studies that collectively advance the field of watershed-scale hydrologic modeling and analysis.
Water and the city (Henry Darcy Medal Lecture)
NASA Astrophysics Data System (ADS)
Rosso, Renzo
2010-05-01
Total world population is about six billion, half living in cities and one third living in slums. This figure has doubled since 1960, when the urban population was less than one billion out of a total of 3 billion, and no more than one fifth was estimated to live in slums. Demographers predict that the population will be around 9 billion in 2050, with two thirds (6 billion) living in urban areas; no reasonable prediction is available for slums. History shows that water is a key factor of urbanization: springs and rivers played a fundamental role in determining where one could settle, and where we are settled now. Water availability is expected to be a major control on human life in the near future of planet Earth. Daily municipal water withdrawal ranges from 80 to 150 liters per person in the cities of China, India and Brazil; can their residents expect to draw more than 600 liters, as a US citizen currently does? The impact of natural disasters such as storms and floods is strongly linked to the increasing vulnerability associated with urbanization. Are state-of-the-art mitigation policies effective in reducing this impact, in terms of both human casualties and economic damage? These and similar questions are fundamental to addressing hydrological science and engineering hydrology in the coming years. This talk approaches some open problems arising from the impact of increasing urbanization on the water cycle and, mostly, the associated feedback on human life. These include the need for insight into nonstationarity, transients and feedback control of hydrological processes; the merging of the space-time scales of hydrological processes with the spatial scales of the city and the temporal scale of lifestyles; and the way for water scientists and engineers to be involved in the design of cities and the search for lifestyles coherent with a sustainable development approach.
NAS (Numerical Aerodynamic Simulation Program) technical summaries, March 1989 - February 1990
NASA Technical Reports Server (NTRS)
1990-01-01
Given here are selected scientific results from the Numerical Aerodynamic Simulation (NAS) Program's third year of operation. During this year, the scientific community was given access to a Cray-2 and a Cray Y-MP supercomputer. Topics covered include flow field analysis of fighter wing configurations, large-scale ocean modeling, the Space Shuttle flow field, advanced computational fluid dynamics (CFD) codes for rotary-wing airloads and performance prediction, turbulence modeling of separated flows, airloads and acoustics of rotorcraft, vortex-induced nonlinearities on submarines, and standing oblique detonation waves.
Third party involvement in barroom conflicts.
Parks, Michael J; Osgood, D Wayne; Felson, Richard B; Wells, Samantha; Graham, Kathryn
2013-01-01
This study examines the effect of situational variables on whether third parties intervene in conflicts in barroom settings, and whether they are aggressive or not when they intervene. Based on research on bystander intervention in emergencies, we hypothesized that third parties would be most likely to become involved in incidents with features that convey greater danger of serious harm. The situational variables indicative of danger were severity of aggression, whether the aggression was one-sided or mutual, gender, and level of intoxication of the initial participants in the conflict. Analyses consist of cross-tabulations and three-level hierarchical logistic models (with bar, evening, and incident as levels) for 860 incidents of verbal and physical aggression from 503 nights of observation in 87 large bars and clubs in Toronto, Canada. Third party involvement was more likely during incidents in which: (1) the aggression was more severe; (2) the aggression was mutual (vs. one-sided); (3) only males (vs. mixed gender) were involved; and (4) participants were more intoxicated. These incident characteristics were stronger predictors of non-aggressive third party involvement than aggressive third party involvement. The findings suggest that third parties are indeed responding to the perceived danger of serious harm. Improving our knowledge about this aspect of aggressive incidents is valuable for developing prevention and intervention approaches designed to reduce aggression in bars and other locations. © 2013 Wiley Periodicals, Inc.
Third Party Involvement in Barroom Conflicts
Parks, Michael J.; Osgood, D. Wayne; Felson, Richard B.; Wells, Samantha; Graham, Kathryn
2014-01-01
This study examines the effect of situational variables on whether third parties intervene in conflicts in barroom settings, and whether they are aggressive or not when they intervene. Based on research on bystander intervention in emergencies, we hypothesized that third parties would be most likely to become involved in incidents with features that convey greater danger of serious harm. The situational variables indicative of danger were severity of aggression, whether the aggression was one-sided or mutual, gender, and level of intoxication of the initial participants in the conflict. Analyses consist of cross-tabulations and three-level hierarchical logistic models (with bar, evening, and incident as levels) for 860 incidents of verbal and physical aggression from 503 nights of observation in 87 large bars and clubs in Toronto, Canada. Third party involvement was more likely during incidents in which: (1) the aggression was more severe; (2) the aggression was mutual (vs. one-sided); (3) only males (vs. mixed gender) were involved; and (4) participants were more intoxicated. These incident characteristics were stronger predictors of nonaggressive third party involvement than aggressive third party involvement. The findings suggest that third parties are indeed responding to the perceived danger of serious harm. Improving our knowledge about this aspect of aggressive incidents is valuable for developing prevention and intervention approaches designed to reduce aggression in bars and other locations. PMID:23494773
Improved scar quality following primary and secondary healing of cutaneous wounds.
Atiyeh, Bishara S; Amm, Christian A; El Musa, Kusai A
2003-01-01
Poor wound healing remains a critical problem in our daily practice of surgery, exerting a heavy toll on our patients as well as on the health care system. In susceptible individuals, scars can become raised, reddish, and rigid, may cause itching and pain, and might even lead to serious cosmetic and functional problems. Hypertrophic scars do not occur spontaneously in animals, which explains the lack of experimental models for the study of pathologic scar modulation. We present the results of three clinical comparative prospective studies that we have conducted. In the first study, secondary healing and cosmetic appearance following healing of partial thickness skin graft donor sites under dry (semi-open Sofra-Tulle dressing) and moist (moist exposed burn ointment, MEBO) conditions were assessed. In the second study, secondary healing of the donor sites was evaluated following treatment with Tegaderm or MEBO, two different types of moisture-retentive dressings. In the third study, 3 comparable groups of primarily healed wounds were evaluated: one group was treated with a topical antibiotic ointment, the second group with MEBO, and the third group received no topical treatment. In the second and third studies, healed wounds were evaluated with the quantitative scale for scar assessment described by Beausang et al. Statistical analysis revealed that for both types of wound healing, scar quality was significantly superior in those wounds treated with MEBO.
Riese, Alison; Rappaport, Leah; Alverson, Brian; Park, Sangshin; Rockney, Randal M
2017-06-01
Clinical performance evaluations are major components of medical school clerkship grades. But are they sufficiently objective? This study aimed to determine whether student and evaluator gender is associated with assessment of overall clinical performance. This was a retrospective analysis of 4,272 core clerkship clinical performance evaluations by 829 evaluators of 155 third-year students, within the Alpert Medical School grading database for the 2013-2014 academic year. Overall clinical performance, assessed on a three-point scale (meets expectations, above expectations, exceptional), was extracted from each evaluation, as well as evaluator gender, age, training level, department, student gender and age, and length of observation time. Hierarchical ordinal regression modeling was conducted to account for clustering of evaluations. Female students were more likely to receive a better grade than males (adjusted odds ratio [AOR] 1.30, 95% confidence interval [CI] 1.13-1.50), and female evaluators awarded lower grades than males (AOR 0.72, 95% CI 0.55-0.93), adjusting for department, observation time, and student and evaluator age. The interaction between student and evaluator gender was significant (P = .03), with female evaluators assigning higher grades to female students, while male evaluators' grading did not differ by student gender. Students who spent a short time with evaluators were also more likely to get a lower grade. A one-year examination of all third-year clerkship clinical performance evaluations at a single institution revealed that male and female evaluators rated male and female students differently, even when accounting for other measured variables.
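Since only the adjusted odds ratio and its confidence interval are reported, the underlying log-odds coefficient and its standard error can be recovered from them. The sketch below does this for the female-student AOR (our own illustration, assuming the usual Wald interval exp(β ± 1.96·SE), symmetric on the log scale):

```python
import math

# Reported: AOR 1.30, 95% CI 1.13-1.50 (female vs. male students).
aor, lo, hi = 1.30, 1.13, 1.50

beta = math.log(aor)                              # log-odds coefficient
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # standard error from CI width
z = beta / se                                     # Wald z statistic

print(round(beta, 3), round(se, 3), round(z, 2))  # → 0.262 0.072 3.63
```

Exponentiating beta ± 1.96·se reproduces the published interval, confirming the CI is symmetric on the log scale as assumed.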
ACME-III and ACME-IV Final Campaign Reports
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biraud, S. C.
2016-01-01
The goals of the Atmospheric Radiation Measurement (ARM) Climate Research Facility’s third and fourth Airborne Carbon Measurements (ACME) field campaigns, ACME-III and ACME-IV, are: 1) to measure and model the exchange of CO2, water vapor, and other greenhouse gases by the natural, agricultural, and industrial ecosystems of the Southern Great Plains (SGP) region; 2) to develop quantitative approaches to relate these local fluxes to the concentration of greenhouse gases measured at the Central Facility tower and in the atmospheric column above the ARM SGP Central Facility; 3) to develop and test bottom-up measurement and modeling approaches to estimate regional-scale carbon balances; and 4) to develop and test inverse modeling approaches to estimate regional-scale carbon balance and anthropogenic sources over continental regions. Regular soundings of the atmosphere from near the surface into the mid-troposphere are essential for this research.
Multi-scale modeling of irradiation effects in spallation neutron source materials
NASA Astrophysics Data System (ADS)
Yoshiie, T.; Ito, T.; Iwase, H.; Kaneko, Y.; Kawai, M.; Kishida, I.; Kunieda, S.; Sato, K.; Shimakawa, S.; Shimizu, F.; Hashimoto, S.; Hashimoto, N.; Fukahori, T.; Watanabe, Y.; Xu, Q.; Ishino, S.
2011-07-01
Changes in the mechanical properties of Ni under irradiation by 3 GeV protons were estimated by multi-scale modeling. The code consisted of four parts. The first part was based on the Particle and Heavy-Ion Transport code System (PHITS) code for nuclear reactions, and modeled the interactions between high-energy protons and nuclei in the target. The second part covered atomic collisions by particles without nuclear reactions. Because the energy of the particles was high, subcascade analysis was employed. The direct formation of clusters and the number of mobile defects were estimated using molecular dynamics (MD) and kinetic Monte Carlo (kMC) methods in each subcascade. The third part considered damage structure evolution estimated by reaction kinetic analysis. The fourth part involved the estimation of mechanical property change using three-dimensional discrete dislocation dynamics (DDD). Using the above four-part code, stress-strain curves for high-energy proton-irradiated Ni were obtained.
A Distribution-Free Description of Fragmentation by Blasting Based on Dimensional Analysis
NASA Astrophysics Data System (ADS)
Sanchidrián, José A.; Ouchterlony, Finn
2017-04-01
A model for fragmentation in bench blasting is developed from dimensional analysis adapted from asteroid collision theory, to which two factors have been added: one describing the discontinuities spacing and orientation and another the delay between successive contiguous shots. The formulae are calibrated by nonlinear fits to 169 bench blasts in different sites and rock types, bench geometries and delay times, for which the blast design data and the size distributions of the muckpile obtained by sieving were available. Percentile sizes of the fragments distribution are obtained as the product of a rock mass structural factor, a rock strength-to-explosive energy ratio, a bench shape factor, a scale factor or characteristic size and a function of the in-row delay. The rock structure is described by means of the joints' mean spacing and orientation with respect to the free face. The strength property chosen is the strain energy at rupture that, together with the explosive energy density, forms a combined rock strength/explosive energy factor. The model is applicable from 5 to 100 percentile sizes, with all parameters determined from the fits significant to a 0.05 level. The expected error of the prediction is below 25% at any percentile. These errors are half to one-third of the errors expected with the best prediction models available to date.
Auditing the multiply-related concepts within the UMLS
Mougin, Fleur; Grabar, Natalia
2014-01-01
Objective This work focuses on multiply-related Unified Medical Language System (UMLS) concepts, that is, concepts associated through multiple relations. The relations involved in such situations are audited to determine whether they are provided by source vocabularies or result from the integration of these vocabularies within the UMLS. Methods We study the compatibility of the multiple relations which associate the concepts under investigation and try to explain the reason why they co-occur. Towards this end, we analyze the relations both at the concept and term levels. In addition, we randomly select 288 concepts associated through contradictory relations and manually analyze them. Results At the UMLS scale, only 0.7% of combinations of relations are contradictory, while homogeneous combinations are observed in one-third of situations. At the scale of source vocabularies, one-third do not contain more than one relation between the concepts under investigation. Among the remaining source vocabularies, seven of them mainly present multiple non-homogeneous relations between terms. Analysis at the term level also shows that only in a quarter of cases are the source vocabularies responsible for the presence of multiply-related concepts in the UMLS. These results are available at: http://www.isped.u-bordeaux2.fr/ArticleJAMIA/results_multiply_related_concepts.aspx. Discussion Manual analysis was useful to explain the conceptualization difference in relations between terms across source vocabularies. The exploitation of source relations was helpful for understanding why some source vocabularies describe multiple relations between a given pair of terms. PMID:24464853
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fleck, J.A. Jr.; Morris, J.R.; Thompson, P.F.
1976-10-01
The FLAC code (Fourier Laser Amplifier Code) was used to simulate the CYCLOPS laser system up to the third B-module and to calculate the maximum ripple gain spectrum. The model of this portion of CYCLOPS consists of 33 segments that correspond to 20 optical elements (simulation of the cell requires 2 segments and 12 external air spaces). (MHR)
BURNOUT SYNDROME AMONG EDUCATORS IN PRE-SCHOOL INSTITUTIONS.
Hozo, Endica Radic; Sucic, Goran; Zaja, Ivan
2015-12-01
The occurrence of burnout syndrome (BS) has been recognized in many professions (pilots, firefighters, police officers, doctors…) whose members are subjected to high levels of stress during their work. For educators in preschool institutions the stress level is very high, creating the possibility of developing BS. For this research the preschool institution kindergarten "Radost" (Joy) in Split was selected; a survey on the frequency of burnout syndrome was conducted there among educators (100 respondents) during 2014, using questionnaires (a modified scale by Freudenberger and modified scales by Girdin, Everly and Dusek). According to the questionnaires by Girdin, Everly and Dusek, there is no statistically significant difference between the number of educators who feel good and those under significant stress (χ2=1.04; p=0.307). According to the questionnaire by Freudenberger, educators fall into 3 categories, and their distribution across the groups is almost uniform (χ2=2.76; p=0.250): one third of the educators are in good condition, one third are in the risk area for burnout syndrome, and one third are candidates for developing the syndrome. The number of educators at risk or candidates for burnout syndrome is up to 1.5 times the number in good condition (χ2=4.5; p=0.033). The occurrence of burnout syndrome among educators in pre-school institutions is very high (half of the educators!), which should be taken into account by the institutions' management. For this purpose it is necessary to organize regular medical check-ups, with particular attention to signs of burnout syndrome, in order to prevent its further development.
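The abstract does not give the exact group counts, so the sketch below uses hypothetical counts for the three categories to illustrate the goodness-of-fit test being reported; for 2 degrees of freedom the chi-square survival function has the closed form exp(−χ²/2), so no statistics library is needed:

```python
import math

counts = [40, 34, 26]                 # hypothetical: good / at risk / candidates
expected = sum(counts) / len(counts)  # uniform expectation, 100/3 per group
stat = sum((o - expected) ** 2 / expected for o in counts)
p = math.exp(-stat / 2)               # exact survival function for df = 2

print(round(stat, 2), round(p, 3))    # → 2.96 0.228
```

Any split this close to uniform stays well under the 5% critical value of 5.99 for two degrees of freedom, consistent with the reported χ2=2.76, p=0.250.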
Long-term pain, fatigue, and impairment in neuralgic amyotrophy.
van Alfen, Nens; van der Werf, Sieberen P; van Engelen, Baziel G
2009-03-01
Recently, it has become clear that neuralgic amyotrophy (NA; idiopathic and hereditary brachial plexus neuropathy) has a less optimistic prognosis than usually assumed. To optimize treatment and management of these patients, one needs to know the residual symptoms and impairments they suffer. Therefore, the objective of this study was to describe the prevalence of pain, psychologic symptoms, fatigue, functional status, and quality of life in patients with NA. Neurology outpatient department of an academic teaching hospital. NA patients (N=89) were studied, and clinical details were recorded. Self-report data were on average collected 2 years after the onset of the last NA episode. Pain was assessed with the McGill Pain Questionnaire, fatigue with the Checklist Individual Strength, and psychologic distress with the Symptom Checklist 90. Functional status and handicap were assessed with the modified Rankin Scale and Medical Outcomes Study 36-Item Short-Form Health Survey. Pain was usually localized in the right shoulder and upper arm, matching the clinical predilection site for paresis in NA. About a quarter to a third of the patients reported significant long-term pain and fatigue, and half to two thirds still experienced impairments in daily life. Over one third of the individual patients suffered from severe fatigue. The group did not fulfill the criteria of chronic fatigue or major psychologic distress. There was no correlation of pain or fatigue with the level of residual paresis on a Medical Research Council scale, but patients with a comorbid condition fared worse than patients without. A significant number of NA patients suffer from persistent pain and fatigue, leading to impairment. Symptoms were not correlated with psychologic distress. This makes it likely that they are caused by residual shoulder or arm dysfunction but not as part of a chronic pain or fatigue syndrome in these patients.
ERIC Educational Resources Information Center
Penketh, Victoria; Hare, Dougal Julian; Flood, Andrea; Walker, Samantha
2014-01-01
Background: The Manchester Attachment Scale-Third party observational measure (MAST) was developed to assess secure attachment style for adults with intellectual disabilities. The psychometric properties of the MAST were examined. Materials and Methods: Professional carers (N = 40) completed the MAST and measures related to the construct of…
ERIC Educational Resources Information Center
Cockshott, Felicity C.; Marsh, Nigel V.; Hine, Donald W.
2006-01-01
A confirmatory factor analysis was conducted on the Wechsler Intelligence Scale for Children-Third Edition (WISC-III; D. Wechsler, 1991) with a sample of 579 Australian children referred for assessment because of academic difficulties in the classroom. The children were administered the WISC-III as part of the initial eligibility determination…
Lin, Chao-Cheng; Bai, Ya-Mei; Chen, Jen-Yeu; Hwang, Tzung-Jeng; Chen, Tzu-Ting; Chiu, Hung-Wen; Li, Yu-Chuan
2010-03-01
Metabolic syndrome (MetS) is an important side effect of second-generation antipsychotics (SGAs). However, many SGA-treated patients with MetS remain undetected. In this study, we trained and validated artificial neural network (ANN) and multiple logistic regression models without biochemical parameters to rapidly identify MetS in patients with SGA treatment. A total of 383 patients with a diagnosis of schizophrenia or schizoaffective disorder (DSM-IV criteria) with SGA treatment for more than 6 months were investigated to determine whether they met the MetS criteria according to the International Diabetes Federation. The data for these patients were collected between March 2005 and September 2005. The input variables of ANN and logistic regression were limited to demographic and anthropometric data only. All models were trained by randomly selecting two-thirds of the patient data and were internally validated with the remaining one-third of the data. The models were then externally validated with data from 69 patients from another hospital, collected between March 2008 and June 2008. The area under the receiver operating characteristic curve (AUC) was used to measure the performance of all models. Both the final ANN and logistic regression models had high accuracy (88.3% vs 83.6%), sensitivity (93.1% vs 86.2%), and specificity (86.9% vs 83.8%) to identify MetS in the internal validation set. The mean +/- SD AUC was high for both the ANN and logistic regression models (0.934 +/- 0.033 vs 0.922 +/- 0.035, P = .63). During external validation, high AUC was still obtained for both models. Waist circumference and diastolic blood pressure were the common variables that were left in the final ANN and logistic regression models. Our study developed accurate ANN and logistic regression models to detect MetS in patients with SGA treatment. The models are likely to provide a noninvasive tool for large-scale screening of MetS in this group of patients. 
(c) 2010 Physicians Postgraduate Press, Inc.
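The validation design above can be sketched end to end with synthetic stand-in data (the real cohort is not available here): fit a logistic model on a random two-thirds of the sample using only anthropometric inputs, then compute AUC on the held-out third. The coefficients, cohort parameters, and helper names are all our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the cohort: waist circumference (cm) and diastolic
# blood pressure (mmHg), the two predictors retained in both final models.
n = 383
waist = rng.normal(95, 12, n)
dbp = rng.normal(82, 10, n)
logit_true = 0.08 * (waist - 95) + 0.05 * (dbp - 82) - 0.4   # assumed effects
y = (rng.random(n) < 1 / (1 + np.exp(-logit_true))).astype(float)

X = np.column_stack([np.ones(n),
                     (waist - waist.mean()) / waist.std(),
                     (dbp - dbp.mean()) / dbp.std()])

# Two-thirds train / one-third internal validation, as in the study.
idx = rng.permutation(n)
tr, va = idx[:2 * n // 3], idx[2 * n // 3:]

w = np.zeros(3)
for _ in range(2000):                       # plain gradient-descent fit
    p = 1 / (1 + np.exp(-X[tr] @ w))
    w -= 0.1 * X[tr].T @ (p - y[tr]) / len(tr)

def auc(y_true, scores):
    """AUC via the rank (Mann-Whitney) formulation; assumes no tied scores."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n1, n0 = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

print(round(auc(y[va], X[va] @ w), 3))
```

External validation would repeat the last step on a cohort from another site, exactly as the study did with the 69 patients from a second hospital.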
Models for electricity market efficiency and bidding strategy analysis
NASA Astrophysics Data System (ADS)
Niu, Hui
This dissertation studies models for the analysis of market efficiency and bidding behaviors of market participants in electricity markets. Simulation models are developed to estimate how transmission and operational constraints affect the competitive benchmark and market prices based on submitted bids. This research contributes to the literature in three aspects. First, transmission and operational constraints, which have been neglected in most empirical literature, are considered in the competitive benchmark estimation model. Second, the effects of operational and transmission constraints on market prices are estimated through two models based on the submitted bids of market participants. Third, these models are applied to analyze the efficiency of the Electric Reliability Council Of Texas (ERCOT) real-time energy market by simulating its operations for the time period from January 2002 to April 2003. The characteristics and available information for the ERCOT market are considered. In electricity markets, electric firms compete through both spot market bidding and bilateral contract trading. A linear asymmetric supply function equilibrium (SFE) model with transmission constraints is proposed in this dissertation to analyze the bidding strategies with forward contracts. The research contributes to the literature in several aspects. First, we combine forward contracts, transmission constraints, and multi-period strategy (an obligation for firms to bid consistently over an extended time horizon such as a day or an hour) into the linear asymmetric supply function equilibrium framework. As an ex-ante model, it can provide qualitative insights into firms' behaviors. Second, the bidding strategies related to Transmission Congestion Rights (TCRs) are discussed by interpreting TCRs as linear combination of forwards. 
Third, the model is a general one in the sense that there is no limitation on the number of firms and scale of the transmission network, which can have asymmetric linear marginal cost structures. In addition to theoretical analysis, we apply our model to simulate the ERCOT real-time market from January 2002 to April 2003. The effects of forward contracts on the ERCOT market are evaluated through the results. It is shown that the model is able to capture features of bidding behavior in the market.
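A toy illustration of the supply function equilibrium setting described above (our simplification: linear bids, inelastic demand, no transmission constraints or forward contracts, hypothetical numbers). With bids S_i(p) = β_i(p − α_i), market clearing reduces to solving Σ S_i(p) = D for the price:

```python
# Market clearing with linear supply-function bids S_i(p) = beta_i*(p - alpha_i).
betas = [10.0, 15.0, 20.0]     # hypothetical bid slopes (MW per $/MWh)
alphas = [20.0, 25.0, 22.0]    # hypothetical bid intercepts ($/MWh)
D = 500.0                      # inelastic demand (MW)

# sum_i beta_i*(p - alpha_i) = D  =>  p* = (D + sum beta_i*alpha_i) / sum beta_i
p_star = (D + sum(b * a for b, a in zip(betas, alphas))) / sum(betas)
dispatch = [b * (p_star - a) for b, a in zip(betas, alphas)]

print(round(p_star, 2))                    # → 33.67
print([round(q, 1) for q in dispatch])     # quantities summing to D
```

In the dissertation's richer setting, forward contract positions shift each firm's effective bid and transmission constraints can decouple prices across nodes; this sketch shows only the unconstrained single-node building block.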
Economics of Third-Party Central Heating Plants to Supply the Army
1992-01-01
Third-Party Gas-Fired Boiler Economics, 52
APPENDIX C: Third-Party Gas Turbine Cogeneration Economics (PURPA), 58
APPENDIX D: Government Gas Turbine...
...Turbine Cogeneration Economics (Installation and PURPA Purchase), 76
APPENDIX G: Checklist for Identifying Optimal Third-Party Projects and Bidders, 82
...of scale, 37
4 Relative costs of thermal energy from third-party cogeneration plants (@ 4¢/kWh PURPA payment), 38
5 Comparison of life-cycle costs for...
SMEs, IT, and the Third Space: Colonization and Creativity in the Theatre Industry
NASA Astrophysics Data System (ADS)
Kendall, Julie E.; Kendall, Kenneth E.
We examine how small and medium-sized, professional, nonprofit performing arts theatres in the US can improve the strategic use of information technology (IT), as well as other aspects of theatre management for large, commercial theatre productions in the West End of London and on Broadway in New York City. In this article we use the epistemology of the third space developed by Bhabha (1994) and extended by Frenkel (2008). Although both authors were discussing knowledge transfer, we use their conceptualizations to characterize and explore more deeply the transfer process of culture (and thereby useful practices and worthwhile lessons) from small and medium-sized professional, nonprofit theatres to large-scale commercial theatres. We include a discussion of Nonaka’s (1991) concept of ba, and how it relates to the third space. We specifically employ the metaphor of the third space developed by Bhabha (1994) to critique and understand the verbal and nonverbal cultural transmissions between small and large theatres. One of our contributions is to use the conceptualization and metaphor of the third space to understand the complex exchanges and relationships between small to medium-sized nonprofit professional theatres and large commercial theatres, and to identify what large commercial productions can learn from nonprofit theatres from these exchanges.
The Solar System Ballet: A Kinesthetic Spatial Astronomy Activity
NASA Astrophysics Data System (ADS)
Heyer, Inge; Slater, T. F.; Slater, S. J.; Astronomy, Center; Education ResearchCAPER, Physics
2011-05-01
The Solar System Ballet was developed in order for students of all ages to learn about the planets, their motions, their distances, and their individual characteristics. Teaching people about the structure of our Solar System can be revealing and rewarding for students and teachers alike. Little ones (and some bigger ones, too) often cannot yet grasp theoretical and spatial ideas purely with their minds. Showing a video is better, but being able to learn with their bodies, essentially being what they learn about, helps them understand and remember difficult concepts much more easily. There are three segments to this activity, which can be done together or separately, depending on time limits and the age of the students. Part one involves a short introductory discussion about what students know about the planets. Then students act out the orbital motions of the planets (and also moons, for the older ones) while holding a physical model. During the second phase we look at the structure of the Solar System as well as the relative distances of the planets from the Sun, first by sketching it on paper, then by recreating a scaled version in the classroom. Again the students act out the parts of the Solar System bodies with their models. The third segment concentrates on recreating historical measurements of the Earth-Moon-Sun system. The Solar System Ballet activity is suitable for grades K-12+ as well as general-public informal learning activities.
USA: Economics, Politics, Ideology, Number 7, July 1977.
1977-08-01
viewpoint of one of its domestic political goals. Americans’ attention is artificially distracted from both the real socioeconomic problems and the real... "third basket" cannot be artificially singled out of the broad complex of questions considered in the final act. The questions of war and peace which... scientific advisory staff was stimulated even more by the successful launching of the first Soviet artificial earth satellite, which evoked mass-scale
Multiscale analysis of the invariants of the velocity gradient tensor in isotropic turbulence
NASA Astrophysics Data System (ADS)
Danish, Mohammad; Meneveau, Charles
2018-04-01
Knowledge of local flow-topology, the patterns of streamlines around a moving fluid element as described by the velocity-gradient tensor, is useful for developing insights into turbulence processes, such as energy cascade, material element deformation, or scalar mixing. Much has been learned in the recent past about flow topology at the smallest (viscous) scales of turbulence. However, less is known at larger scales, for instance, at the inertial scales of turbulence. In this work, we present a detailed study on the scale dependence of various quantities of interest, such as the population fraction of different types of flow-topologies, the joint probability distribution of the second and third invariants of the velocity gradient tensor, and the geometrical alignment of vorticity with strain-rate eigenvectors. We perform the analysis on a simulation dataset of isotropic turbulence at Reλ=433 . While quantities appear close to scale invariant in the inertial range, we observe a "bump" in several quantities at length scales between the inertial and viscous ranges. For instance, the population fraction of unstable node-saddle-saddle flow topology shows an increase as the scale is reduced from the inertial range into the viscous range. A similar bump is observed for the vorticity-strain-rate alignment. In order to document possible dynamical causes for the different trends in the viscous and inertial ranges, we examine the probability fluxes appearing in the Fokker-Planck equation governing the velocity gradient invariants. Specifically, we aim to understand whether the differences observed between the viscous and inertial range statistics are due to effects caused by pressure, subgrid-scale, or viscous stresses or various combinations of these terms. To decompose the flow into small and large scales, we mainly use a spectrally compact non-negative filter with good spatial localization properties (Eyink-Aluie filter).
The analysis shows that when going from the inertial range into the viscous range, the subgrid-stress effect decreases more rapidly as a function of scale than the viscous effects increase. To make up for the difference, the pressure Hessian also behaves somewhat differently in the viscous range than in the inertial range. The results have implications for models of the velocity-gradient tensor, showing that the effects of subgrid scales may not be simply modeled via a constant eddy viscosity in the inertial range if one wishes to reproduce the observed trends.
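The second and third invariants referred to above have a standard definition for incompressible (trace-free) velocity-gradient tensors; the following minimal sketch is illustrative only, not the authors' analysis code, and the function names are assumptions:

```python
import numpy as np

def invariants(A):
    """Second and third invariants Q, R of a trace-free velocity-gradient tensor A."""
    Q = -0.5 * np.trace(A @ A)
    R = -np.linalg.det(A)  # for trace-free A this equals -(1/3) tr(A^3)
    return Q, R

def topology(Q, R):
    """Coarse flow-topology class from the (Q, R) plane (incompressible flow)."""
    disc = 27.0 / 4.0 * R**2 + Q**3  # discriminant of lambda^3 + Q*lambda + R = 0
    if disc > 0:  # complex eigenvalue pair: focal (vortical) topology
        return "stable focus/stretching" if R < 0 else "unstable focus/compressing"
    # three real eigenvalues: node-saddle-saddle topology
    return "stable node/saddle/saddle" if R < 0 else "unstable node/saddle/saddle"

# pure-strain example with real eigenvalues (2, 1, -3), so R > 0
A = np.diag([2.0, 1.0, -3.0])
Q, R = invariants(A)
print(topology(Q, R))  # -> unstable node/saddle/saddle
```

The classification follows the usual (Q, R) phase-plane picture; the population fractions discussed in the abstract are obtained by evaluating such invariants over the whole filtered field.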
Life satisfaction and social support received by women in the perinatal period.
Gebuza, Grażyna; Kaźmierczak, Marzena; Mieczkowska, Estera; Gierszewska, Małgorzata; Kotzbach, Roman
2014-01-01
The birth of a baby has a major impact on a woman's life. The presence and help of loved ones favour wellbeing, health, and coping with difficult situations. The aim of this study was to determine whether women's satisfaction with life changes during pregnancy and after delivery, and to identify correlates of life satisfaction. Life satisfaction was measured using the Satisfaction with Life Scale (SWLS), and received social support was assessed using the Berlin Social Support Scales (BSSS). The study was conducted in the third trimester of pregnancy and during the postpartum period, before discharge from the hospital. The research sample included a total of 199 women in the third trimester of pregnancy and 188 of the initially participating women, who had physiological births or caesarean sections. The results clearly show a significant increase in life satisfaction in the postpartum period (p < 0.0001). An important correlate of life satisfaction in the third trimester of pregnancy is received social support (p < 0.0001). During pregnancy this correlate is received emotional support, and in the postnatal period, received instrumental support. An increase in received instrumental (p = 0.031) and informational (p = 0.013) support was observed in the postpartum period. The assessment of life satisfaction and received social support appears necessary to gain a full picture of women's situation around birth, which will allow for planning and implementing maternity care appropriate to women's needs.
Dirac cones in isogonal hexagonal metallic structures
NASA Astrophysics Data System (ADS)
Wang, Kang
2018-03-01
A honeycomb hexagonal metallic lattice is equivalent to a triangular atomic one and cannot create Dirac cones in its electromagnetic wave spectrum. In this work we study the low-frequency electromagnetic band structures of isogonal hexagonal metallic lattices that are directly related to the honeycomb one, and show that such structures can create Dirac cones. The band formation can be described by a tight-binding model that allows us to investigate, in terms of correlations between local resonance modes, the condition for the Dirac cones to form. It also captures the consequence of the third structure tile sustaining an extra resonance mode in the unit cell, which induces band shifts and thus nonlinear deformation of the Dirac cones as the wave vector departs from the Dirac points. We show further that, under structure deformation, the deformations of the Dirac cones result from two different correlation mechanisms, both reinforced by the lattice's metallic nature, which directly affects the resonance mode correlations. The isogonal structures provide new degrees of freedom for tuning the Dirac cones, allowing the cone shape to be adjusted by modulating the structure tiles at the local scale without modifying the lattice periodicity or symmetry.
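The tight-binding mechanism invoked above can be illustrated with the textbook honeycomb dispersion, whose two bands touch at the Dirac points of the hexagonal Brillouin zone. This is a generic sketch under the standard nearest-neighbour model, not the paper's metallic-lattice calculation:

```python
import numpy as np

def honeycomb_bands(kx, ky, t=1.0, a=1.0):
    """Two nearest-neighbour tight-binding bands of a honeycomb lattice."""
    # lattice vectors of the underlying triangular Bravais lattice
    a1 = a * np.array([1.5, np.sqrt(3) / 2])
    a2 = a * np.array([1.5, -np.sqrt(3) / 2])
    k = np.array([kx, ky])
    f = 1 + np.exp(1j * k @ a1) + np.exp(1j * k @ a2)
    return -t * abs(f), t * abs(f)  # the two bands touch where f(k) = 0

# Dirac point K of the hexagonal Brillouin zone: the bands become degenerate
K = (2 * np.pi / 3, 2 * np.pi / (3 * np.sqrt(3)))
lo, hi = honeycomb_bands(*K)
print(round(hi - lo, 10))  # -> 0.0 (the gap closes at the Dirac point)
```

Near K the gap reopens linearly in |k - K|, which is the cone geometry that the isogonal structures deform and tune.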
Opportunities for Breakthroughs in Large-Scale Computational Simulation and Design
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Alter, Stephen J.; Atkins, Harold L.; Bey, Kim S.; Bibb, Karen L.; Biedron, Robert T.; Carpenter, Mark H.; Cheatwood, F. McNeil; Drummond, Philip J.; Gnoffo, Peter A.
2002-01-01
Opportunities for breakthroughs in the large-scale computational simulation and design of aerospace vehicles are presented. Computational fluid dynamics tools to be used within multidisciplinary analysis and design methods are emphasized. The opportunities stem from speedups and robustness improvements in the underlying unit operations associated with simulation (geometry modeling, grid generation, physical modeling, analysis, etc.). Further, an improved programming environment can synergistically integrate these unit operations to leverage the gains. The speedups result from reducing the problem setup time through geometry modeling and grid generation operations, and reducing the solution time through the operation counts associated with solving the discretized equations to a sufficient accuracy. The opportunities are addressed only at a general level here, but an extensive list of references containing further details is included. The opportunities discussed are being addressed through the Fast Adaptive Aerospace Tools (FAAST) element of the Advanced Systems Concept to Test (ASCoT) and the third Generation Reusable Launch Vehicles (RLV) projects at NASA Langley Research Center. The overall goal is to enable greater inroads into the design process with large-scale simulations.
Bayesian multi-scale smoothing of photon-limited images with applications to astronomy and medicine
NASA Astrophysics Data System (ADS)
White, John
Multi-scale models for smoothing Poisson signals or images have gained much attention over the past decade. A new Bayesian model is developed using the concept of the Chinese restaurant process to find structures in two-dimensional images when performing image reconstruction or smoothing. This new model performs very well when compared to other leading methodologies for the same problem. It is developed and evaluated theoretically and empirically throughout Chapter 2. The newly developed Bayesian model is extended to three-dimensional images in Chapter 3. The third dimension can represent different energy spectra, another spatial index, or possibly a temporal dimension. In simulation studies, this method shows promise in reducing error. A further development removes background noise from the image; this removal can further reduce the error and is done using a modeling adjustment and post-processing techniques. These details are given in Chapter 4. Applications to real-world problems are given throughout. Photon-based images are common in astronomy because of the collection of different types of energy such as X-rays. Applications to real astronomical images are given, consisting of X-ray images from the Chandra X-ray Observatory. Diagnostic medicine uses many types of imaging, such as magnetic resonance imaging and computed tomography, that can also benefit from smoothing techniques such as the one developed here. Reducing the amount of radiation a patient receives makes images noisier, but this can be mitigated through the use of image-smoothing techniques. Both types of images represent potential real-world uses for these methods.
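The Chinese restaurant process mentioned above induces a random partition through a "rich get richer" seating rule: customer i joins an occupied table k with probability proportional to its occupancy n_k, or opens a new table with probability proportional to a concentration parameter alpha. A minimal sampler (illustrative only, not the dissertation's image model):

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample a partition of n items from a Chinese restaurant process.

    Customer i joins table k with probability n_k / (i + alpha) and
    opens a new table with probability alpha / (i + alpha).
    """
    rng = random.Random(seed)
    tables = []  # tables[k] = number of customers at table k
    labels = []  # labels[i] = table index assigned to customer i
    for i in range(n):
        r = rng.uniform(0.0, i + alpha)
        acc = 0.0
        for k, nk in enumerate(tables):
            acc += nk
            if r < acc:
                tables[k] += 1
                labels.append(k)
                break
        else:  # r fell in the remaining alpha-sized slice: open a new table
            tables.append(1)
            labels.append(len(tables) - 1)
    return labels

print(crp_partition(10, 1.0))
```

In the smoothing context, each "table" plays the role of a cluster of pixels sharing an intensity, with alpha controlling how readily new structures are introduced.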
NASA Astrophysics Data System (ADS)
Zelazowski, Przemyslaw; Huntingford, Chris; Mercado, Lina M.; Schaller, Nathalie
2018-02-01
Global circulation models (GCMs) are the best tool to understand climate change, as they attempt to represent all the important Earth system processes, including anthropogenic perturbation through fossil fuel burning. However, GCMs are computationally very expensive, which limits the number of simulations that can be made. Pattern scaling is an emulation technique that takes advantage of the fact that local and seasonal changes in surface climate are often approximately linear in the rate of warming over land and across the globe. This allows interpolation away from a limited number of available GCM simulations, to assess alternative future emissions scenarios. In this paper, we present a climate pattern-scaling set consisting of spatial climate change patterns along with parameters for an energy-balance model that calculates the amount of global warming. The set, available for download, is derived from 22 GCMs of the WCRP CMIP3 database, laying the basis for the eventual development of similar patterns for the CMIP5 and forthcoming CMIP6 ensembles. Critically, it extends the use of the IMOGEN (Integrated Model Of Global Effects of climatic aNomalies) framework to enable scanning across the full uncertainty in GCMs for impact studies. Across models, the presented climate patterns represent consistent global mean trends, with at most 4 of the 22 GCMs exhibiting a sign opposite to the global trend for any one variable (relative humidity). The described new climate regimes are generally warmer, wetter (but with less snowfall), cloudier and windier, and have decreased relative humidity. Overall, when averaging individual performance across all variables, and without considering co-variance, the patterns explain one-third of the regional change in decadal averages (mean percentage variance explained, PVE, 34.25 ± 5.21), but the signal in some models exhibits much more linearity (e.g. MIROC3.2(hires): 41.53) than in others (GISS_ER: 22.67).
The two most often considered variables, near-surface temperature and precipitation, have a PVE of 85.44 ± 4.37 and 14.98 ± 4.61, respectively. We also provide an example assessment of a terrestrial impact (changes in mean runoff) and compare projections by the IMOGEN system, which has one land surface model, against direct GCM outputs, which all have alternative representations of land functioning. The latter is noted as an additional source of uncertainty. Finally, current and potential future applications of the IMOGEN version 2.0 modelling system in the areas of ecosystem modelling and climate change impact assessment are presented and discussed.
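Pattern scaling as described above amounts to a per-grid-cell regression of local change on global-mean warming, with PVE measuring how much of the variance that linear fit captures. A minimal sketch with synthetic data (the function and variable names are assumptions, not IMOGEN code):

```python
import numpy as np

def fit_pattern(global_dT, local_change):
    """Least-squares pattern: local change per degree of global-mean warming.

    global_dT:    global-mean warming time series, shape (time,)
    local_change: local anomalies, shape (time, cells)
    Returns the per-cell pattern and the percentage variance explained (PVE).
    """
    g = np.asarray(global_dT)
    y = np.asarray(local_change)
    p = (g @ y) / (g @ g)  # regression through the origin, one slope per cell
    resid = y - np.outer(g, p)
    pve = 100.0 * (1.0 - resid.var(axis=0) / y.var(axis=0))
    return p, pve

# synthetic example: two grid cells warming at 1.5 and 0.8 K per K of global warming
g = np.array([0.2, 0.5, 0.9, 1.4, 2.0])
y = np.outer(g, [1.5, 0.8])
p, pve = fit_pattern(g, y)
print(p)  # recovers the prescribed patterns [1.5, 0.8]
```

With perfectly linear synthetic data the PVE is 100 per cent; the 34.25 ± 5.21 figure above reflects how far real GCM output departs from this linearity.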
Postglacial migration supplements climate in determining plant species ranges in Europe
Normand, Signe; Ricklefs, Robert E.; Skov, Flemming; Bladt, Jesper; Tackenberg, Oliver; Svenning, Jens-Christian
2011-01-01
The influence of dispersal limitation on species ranges remains controversial. Considering the dramatic impacts of the last glaciation in Europe, species might not have tracked climate changes through time and, as a consequence, their present-day ranges might be in disequilibrium with current climate. For 1016 European plant species, we assessed the relative importance of current climate and limited postglacial migration in determining species ranges using regression modelling and explanatory variables representing climate, and a novel species-specific hind-casting-based measure of accessibility to postglacial colonization. Climate was important for all species, while postglacial colonization also constrained the ranges of more than 50 per cent of the species. On average, climate explained five times more variation in species ranges than accessibility, but accessibility was the strongest determinant for one-sixth of the species. Accessibility was particularly important for species with limited long-distance dispersal ability, with southern glacial ranges, seed plants compared with ferns, and small-range species in southern Europe. In addition, accessibility explained one-third of the variation in species' disequilibrium with climate as measured by the realized/potential range size ratio computed with niche modelling. In conclusion, we show that although climate is the dominant broad-scale determinant of European plant species ranges, constrained dispersal plays an important supplementary role. PMID:21543356
Validation of the Turkish Cervical Cancer and Human Papilloma Virus Awareness Questionnaire.
Özdemir, E; Kısa, S
2016-09-01
The aim of this study was to determine the validity and reliability of the 'Cervical Cancer and Human Papilloma Virus Awareness Questionnaire' among women of fertile age by adapting the scale into Turkish. Cervical cancer is the fourth most common form of cancer seen among women. Cervical cancer ranks third among causes of cancer death in women and is one of the most preventable forms of cancer. This cross-sectional study included 360 women from three family health centres between January 5 and June 25, 2014. Internal consistency analysis showed a Kuder-Richardson 21 reliability coefficient of 0.60 for the first part and a Cronbach's alpha reliability coefficient of 0.61 for the second part. The Kaiser-Meyer-Olkin value of the items on the scale was 0.712, and Bartlett's test was significant. The confirmatory factor analysis indicated that the model matched the data adequately. This study shows that the Turkish version of the instrument is a valid and reliable tool to evaluate the knowledge, perceptions and preventive behaviours of women regarding human papilloma virus and cervical cancer. Nurses who work in clinical and primary care settings need to screen, detect and refer women who may be at risk of cervical cancer. © 2016 International Council of Nurses.
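For reference, the Cronbach's alpha reported above is computed from the item variances and the variance of the total score; a minimal implementation (illustrative only, not the study's analysis script):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)    # variance of the total score
    return k / (k - 1) * (1 - item_var / total_var)

# perfectly consistent items yield alpha = 1
scores = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
print(round(cronbach_alpha(scores), 3))  # -> 1.0
```

Values around 0.6, as reported here, indicate modest internal consistency, which is why such instruments are usually paired with factor-analytic evidence of validity.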
Wanted: A Positive Control for Anomalous Subdiffusion
Saxton, Michael J.
2012-01-01
Anomalous subdiffusion in cells and model systems is an active area of research. The main questions are whether diffusion is anomalous or normal, and if it is anomalous, its mechanism. The subject is controversial, especially the hypothesis that crowding causes anomalous subdiffusion. Anomalous subdiffusion measurements would be strengthened by an experimental standard, particularly one able to cross-calibrate the different types of measurements. Criteria for a calibration standard are proposed. First, diffusion must be anomalous over the length and timescales of the different measurements. The length-scale is fundamental; the time scale can be adjusted through the viscosity of the medium. Second, the standard must be theoretically well understood, with a known anomalous subdiffusion exponent, ideally readily tunable. Third, the standard must be simple, reproducible, and independently characterizable (by, for example, electron microscopy for nanostructures). Candidate experimental standards are evaluated, including obstructed lipid bilayers; aqueous systems obstructed by nanopillars; a continuum percolation system in which a prescribed fraction of randomly chosen obstacles in a regular array is ablated; single-file diffusion in pores; transient anomalous subdiffusion due to binding of particles in arrays such as transcription factors in randomized DNA arrays; and computer-generated physical trajectories. PMID:23260043
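Anomalous subdiffusion is conventionally quantified by the exponent alpha in MSD ∝ t^alpha (alpha < 1 for subdiffusion), which any candidate standard would need to reproduce; a minimal estimator on synthetic data (illustrative only):

```python
import numpy as np

def anomalous_exponent(t, msd):
    """Estimate alpha in MSD ~ t**alpha from a log-log least-squares slope."""
    slope, _ = np.polyfit(np.log(t), np.log(msd), 1)
    return slope

t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
msd = 0.5 * t ** 0.7  # synthetic subdiffusive data with alpha = 0.7
print(round(anomalous_exponent(t, msd), 3))  # -> 0.7
```

A useful calibration standard, in the sense of the criteria above, would yield the same alpha from this kind of fit regardless of which measurement technique produced the MSD curve.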
NASA Technical Reports Server (NTRS)
Shie, C.-L.; Tao, W.-K.; Hou, A.; Lin, X.
2006-01-01
The GCE (Goddard Cumulus Ensemble) model, which has been developed and improved at NASA Goddard Space Flight Center over the past two decades, is considered one of the finest, state-of-the-art CRMs (Cloud Resolving Models) in the research community. As the chosen CRM for a NASA Interdisciplinary Science (IDS) Project, the GCE has recently been successfully upgraded to an MPI (Message Passing Interface) version, with which great improvement has been achieved in computational efficiency, scalability, and portability. Using the large-scale temperature and moisture advective forcing, as well as the temperature, water vapor and wind fields obtained from TRMM (Tropical Rainfall Measuring Mission) field experiments such as SCSMEX (South China Sea Monsoon Experiment) and KWAJEX (Kwajalein Experiment), our recent 2-D and 3-D GCE simulations were able to capture detailed convective systems typical of the targeted (simulated) regions. The GEOS-3 [Goddard EOS (Earth Observing System) Version-3] reanalysis data have also been proposed and successfully implemented for use in the long-term GCE simulations (aimed at producing a massive simulated cloud dataset, a Cloud Library) to compensate for the scarcity of real field-experiment data in both time and space (location). Preliminary 2-D and 3-D pilot results using GEOS-3 data have generally shown good qualitative agreement (with some quantitative differences) with the respective numerical results using the SCSMEX observations. The first objective of this paper is to assess the GEOS-3 data quality by comparing the model results obtained from several pairs of simulations using the real observations and the GEOS-3 reanalysis data. The different large-scale advective forcing obtained from these two sources (i.e., sounding observations and GEOS-3 reanalysis) has been identified as a major critical factor in producing the various model results.
The second objective of this paper is therefore to investigate and present the impact of large-scale forcing on various modeled quantities (such as hydrometeors and rainfall). A third objective is to validate the overall GCE 3-D model performance by comparing the numerical results with sounding observations, as well as available satellite retrievals.
Downscaling scheme to drive soil-vegetation-atmosphere transfer models
NASA Astrophysics Data System (ADS)
Schomburg, Annika; Venema, Victor; Lindau, Ralf; Ament, Felix; Simmer, Clemens
2010-05-01
The earth's surface is characterized by heterogeneity at a broad range of scales. Weather forecast models and climate models are not able to resolve this heterogeneity at the smaller scales. Many processes in the soil or at the surface, however, are highly nonlinear. This holds, for example, for evaporation processes, where stomatal or aerodynamic resistances are nonlinear functions of the local micro-climate. Other examples are threshold-dependent processes, e.g., the generation of runoff or the melting of snow. It has been shown that using averaged parameters in the computation of these processes leads to errors, and especially biases, due to the involved nonlinearities. Thus it is necessary to account for the sub-grid scale surface heterogeneities in atmospheric modeling. One approach to take the variability of the earth's surface into account is the mosaic approach. Here the soil-vegetation-atmosphere transfer (SVAT) model is run at an explicitly higher resolution than the atmospheric part of a coupled model, which is feasible due to the generally lower computational cost of a SVAT model compared to the atmospheric part. The question arises of how to deal with the scale differences at the interface between the two resolutions. Usually the assumption of a homogeneous forcing for all sub-pixels is made. However, over a heterogeneous surface, the boundary layer is usually also heterogeneous. Thus, assuming a constant atmospheric forcing again introduces biases in the turbulent heat fluxes due to the neglected variability of the forcing. Therefore we have developed and tested a downscaling scheme to disaggregate the atmospheric variables of the lower atmosphere that are used as input to force a SVAT model.
Our downscaling scheme consists of three steps: 1) a bi-quadratic spline interpolation of the coarse-resolution field; 2) a "deterministic" part, where relationships between surface and near-surface variables are exploited; and 3) a noise-generation step, in which the still-missing, unexplained variance is added as noise. The scheme has been developed and tested based on high-resolution (400 m) model output of the weather forecast (and regional climate) COSMO model. Downscaling steps 1 and 2 reduce the error made by the homogeneity assumption considerably, whereas the third step leads to close agreement of the sub-grid scale variance with the reference. This is, however, achieved at the cost of higher root mean square errors. Thus, before applying the downscaling system to atmospheric data, a decision should be made whether the lowest possible errors (apply only downscaling steps 1 and 2) or the most realistic sub-grid scale variability (apply also step 3) is desired. This downscaling scheme is currently being implemented into the COSMO model, where it will be used in combination with the mosaic approach. However, it can also be applied to drive stand-alone SVAT models or hydrological models, which usually also need high-resolution atmospheric forcing data.
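The three downscaling steps can be sketched in one dimension. This is an illustrative toy, with linear rather than bi-quadratic spline interpolation, and the regression slope and noise level are assumed inputs rather than quantities the scheme itself derives:

```python
import numpy as np

def downscale(coarse, surface_fine, slope, resid_std, seed=0):
    """Three-step disaggregation of a coarse atmospheric field (1-D sketch).

    1) interpolate the coarse field to the fine grid,
    2) add a deterministic part predicted from a fine-scale surface field,
    3) add noise restoring the unexplained sub-grid variance.
    """
    rng = np.random.default_rng(seed)
    n_fine = len(surface_fine)
    x_coarse = np.linspace(0.0, 1.0, len(coarse))
    x_fine = np.linspace(0.0, 1.0, n_fine)
    step1 = np.interp(x_fine, x_coarse, coarse)            # interpolation
    anomaly = surface_fine - surface_fine.mean()
    step2 = step1 + slope * anomaly                        # surface/near-surface link
    step3 = step2 + rng.normal(0.0, resid_std, n_fine)     # stochastic variance
    return step3
```

Setting resid_std > 0 restores realistic sub-grid variability at the cost of a higher root mean square error, mirroring the trade-off described for step 3.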
NASA Technical Reports Server (NTRS)
Myhre, Gunnar; Aas, Wenche; Ribu, Cherian; Collins, William; Faluvegi, Gregory S.; Flanner, Mark; Forster, Piers; Hodnebrog, Oivind; Klimont, Zbigniew; Lund, Marianne T.
2017-01-01
Over the past few decades, the geographical distribution of emissions of substances that alter the atmospheric energy balance has changed due to economic growth and air pollution regulations. Here, we show the resulting changes to aerosol and ozone abundances and their radiative forcing using recently updated emission data for the period 1990-2015, as simulated by seven global atmospheric composition models. The models broadly reproduce large-scale changes in surface aerosol and ozone based on observations (e.g. 1 to 3 percent per year in aerosols over the USA and Europe). The global mean radiative forcing due to ozone and aerosol changes over the 1990-2015 period increased by 0.17 plus or minus 0.08 watts per square meter, with approximately one-third due to ozone. This increase is more strongly positive than that reported in IPCC AR5 (Intergovernmental Panel on Climate Change Fifth Assessment Report). The main reasons for the increased positive radiative forcing of aerosols over this period are the substantial reduction of global mean SO2 emissions, which is stronger in the new emission inventory compared to that used in the IPCC analysis, and higher black carbon emissions.
NASA Astrophysics Data System (ADS)
Ivashchuk, V. D.; Kobtsev, A. A.
2018-02-01
A D-dimensional gravitational model with a Gauss-Bonnet term and the cosmological term Λ is studied. We assume the metrics to be diagonal cosmological ones. For certain fine-tuned Λ, we find a class of solutions with exponential time dependence of two scale factors, governed by two Hubble-like parameters H > 0 and h, corresponding to factor spaces of dimensions 3 and l > 2, respectively, with D = 1 + 3 + l. The fine-tuned Λ = Λ(x, l, α) depends upon the ratio h/H = x, l, and the ratio α = α_2/α_1 of the two constants (α_2 and α_1) of the model. For fixed Λ, α, and l > 2, the equation Λ(x, l, α) = Λ is equivalent to a polynomial equation of either fourth or third order and may be solved in radicals (the example l = 3 is presented). For certain restrictions on x we prove the stability of the solutions in a class of cosmological solutions with diagonal metrics. A subclass of solutions with small enough variation of the effective gravitational constant G is considered. It is shown that all solutions from this subclass are stable.
A study of 35-ghz radar-assisted orbital maneuvering vehicle/space telescope docking
NASA Technical Reports Server (NTRS)
Mcdonald, M. W.
1986-01-01
An experiment was conducted to study the measurement of range and range-rate information from a complex radar target (a one-third scale model of the Edwin P. Hubble Space Telescope). The radar ranging system was a 35-GHz frequency-modulated continuous wave (FM-CW) unit developed in the Communication Systems Branch of the Information and Electronic Systems Laboratory at Marshall Space Flight Center. Measurements were made over radar-to-target distances of 5 meters to 15 meters to simulate the close distances realized in the final stages of space vehicle docking. The Space Telescope model target was driven by an antenna positioner through a range of azimuth and elevation (pitch) angles to present a variety of visual aspects of the aft end to the radar. Measurements were obtained with and without a cube corner reflector mounted in the center of the aft end of the model. The results indicate that range and range-rate measurements are performed significantly more accurately with the cooperative radar reflector affixed. The results further reveal that range rate (velocity) can be measured accurately enough to support the required soft docking with the Space Telescope.
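In an FM-CW radar of the kind described, range follows from the beat frequency between the transmitted and received sweeps via R = c f_b T / (2B). The abstract gives no radar parameters, so the sweep bandwidth, sweep time, and beat frequency below are purely assumed example values:

```python
def fmcw_range(beat_freq_hz, sweep_bw_hz, sweep_time_s):
    """Target range for a linear-FMCW radar: R = c * f_b * T / (2 * B)."""
    c = 299_792_458.0  # speed of light, m/s
    return c * beat_freq_hz * sweep_time_s / (2.0 * sweep_bw_hz)

# assumed sweep: 150 MHz bandwidth over 1 ms; a 10 kHz beat gives roughly 10 m
print(round(fmcw_range(10e3, 150e6, 1e-3), 2))  # -> 9.99
```

The 5 m to 15 m distances in the experiment fall exactly in the regime where such beat frequencies are small, which is part of what makes close-range docking measurements demanding.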
Wakeley, Heather L; Hendrickson, Chris T; Griffin, W Michael; Matthews, H Scott
2009-04-01
The combination of current and planned 2007 U.S. ethanol production capacity is 50 billion L/yr, one-third of the Energy Independence and Security Act of 2007 (EISA) target of 136 billion L of biofuels by 2022. In this study, we evaluate transportation impacts and infrastructure requirements for the use of E85 (85% ethanol, 15% gasoline) in light-duty vehicles using a combination of corn and cellulosic ethanol. Ethanol distribution is modeled using a linear optimization model. Estimated average delivered ethanol costs, in 2005 dollars, range from $0.29 to $0.62 per liter ($1.3-2.8 per gallon), depending on transportation distance and mode. Emissions from ethanol transport estimated in this work are up to 2 times those in previous ethanol LCA studies and thus lead to larger total life cycle effects. Long-distance transport of ethanol to the end user can negate ethanol's potential economic and environmental benefits relative to gasoline. To reduce costs, we recommend regional concentration of E85 blends for future ethanol production and use.
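The delivered-cost structure described above (a production cost plus a mode- and distance-dependent transport cost) can be sketched as follows. The transport rates here are illustrative placeholders, not the study's optimization results:

```python
def delivered_cost(production_cost, distance_km, mode):
    """Delivered ethanol cost ($/L): production plus mode-dependent transport.

    The per-km rates are assumed placeholder values for illustration only.
    """
    rates = {"rail": 2e-5, "barge": 1e-5, "truck": 8e-5}  # $/L per km (assumed)
    return production_cost + rates[mode] * distance_km

print(round(delivered_cost(0.30, 1500, "rail"), 3))  # -> 0.33
```

Even with placeholder numbers, the sketch shows why long-distance trucking can erode ethanol's cost advantage while regional distribution by rail or barge preserves it, which motivates the study's recommendation of regional concentration.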
Yamada, Takuji; Waller, Alison S; Raes, Jeroen; Zelezniak, Aleksej; Perchat, Nadia; Perret, Alain; Salanoubat, Marcel; Patil, Kiran R; Weissenbach, Jean; Bork, Peer
2012-01-01
Despite the current wealth of sequencing data, one-third of all biochemically characterized metabolic enzymes lack a corresponding gene or protein sequence, and as such can be considered orphan enzymes. They represent a major gap between our molecular and biochemical knowledge, and consequently are not amenable to modern systemic analyses. As 555 of these orphan enzymes have metabolic pathway neighbours, we developed a global framework that utilizes the pathway and (meta)genomic neighbour information to assign candidate sequences to orphan enzymes. For 131 orphan enzymes (37% of those for which (meta)genomic neighbours are available), we associate sequences to them using scoring parameters with an estimated accuracy of 70%, implying functional annotation of 16 345 gene sequences in numerous (meta)genomes. As a case in point, two of these candidate sequences were experimentally validated to encode the predicted activity. In addition, we augmented the currently available genome-scale metabolic models with these new sequence–function associations and were able to expand the models by on average 8%, with a considerable change in the flux connectivity patterns and improved essentiality prediction. PMID:22569339
Mass and momentum turbulent transport experiments with confined swirling coaxial jets
NASA Technical Reports Server (NTRS)
Roback, R.; Johnson, B. V.
1983-01-01
An experiment on the mixing of swirling coaxial jets discharging into an expanded duct was conducted to obtain data for the evaluation and improvement of the turbulent transport models currently used in a variety of computational procedures throughout the combustion community. A combination of laser velocimeter (LV) and laser-induced fluorescence (LIF) techniques was employed to obtain mean and fluctuating velocity and concentration distributions, which were used to derive the mass and momentum turbulent transport parameters currently incorporated into various combustor flow models. Flow visualization techniques were also employed to determine qualitatively the time-dependent characteristics of the flow and the scale of turbulence. The results of these measurements indicated that the largest momentum turbulent transport was in the r-z plane. Peak momentum turbulent transport rates were approximately the same as those for the nonswirling flow condition. The mass turbulent transport process for swirling flow was more complicated: mixing occurred in several steps of axial and radial mass transport and was coupled with a large radial mean convective flux. Mixing for swirling flow was completed in one-third the length required for nonswirling flow.
NASA Astrophysics Data System (ADS)
Langot, P.; Montant, S.; Freysz, E.
2000-04-01
In the Born-Oppenheimer approximation, and considering a Debye nuclear motion, a theoretical computation of pump-probe two-beam coupling in liquids using femtosecond chirped pulses is proposed. This technique makes it possible to specifically evidence the non-instantaneous contribution to the third-order susceptibility χ(3). Our model, which is an extension to the femtosecond scale of the one proposed by Dogariu et al., describes the temporal evolution of the probe signal as a function of different parameters, such as the linear laser chirp and the ratio between the pulse duration and the nuclear response time. Experimentally, this method is applied to characterize the non-instantaneous χ(3) contribution in transparent liquids such as CS2, benzene and toluene. Time-resolved pump-probe coupling data using parallel and perpendicular linear polarizations fit well with the model developed. The experimental ratio R between the fast and slow non-instantaneous χ(3)_XXXX and χ(3)_XYYX elements of the tensor is equal to 1.33 ± 0.01 in all the liquids studied, and is in good agreement with the expected liquid nuclear symmetry.
One-Dimensional Fokker-Planck Equation with Quadratically Nonlinear Quasilocal Drift
NASA Astrophysics Data System (ADS)
Shapovalov, A. V.
2018-04-01
The Fokker-Planck equation in one-dimensional spacetime with quadratically nonlinear nonlocal drift in the quasilocal approximation is reduced with the help of scaling of the coordinates and time to a partial differential equation with a third derivative in the spatial variable. Determining equations for the symmetries of the reduced equation are derived and the Lie symmetries are found. A group invariant solution having the form of a traveling wave is found. Within the framework of Adomian's iterative method, the first iterations of an approximate solution of the Cauchy problem are obtained. Two illustrative examples of exact solutions are found.
Wilbourn, Mark; Salamonson, Yenna; Ramjan, Lucie; Chang, Sungwon
2018-02-01
The aim of the present study was to develop and test the psychometric properties of the Attitudes, Subjective Norms, Perceived Behavioural Control, and Intention to Pursue a Career in Mental Health Nursing (ASPIRE) scale, an instrument to assess nursing students' intention to work in mental health nursing. Understanding the factors influencing undergraduate nursing students' career intentions might lead to improved recruitment strategies. However, there are no standardized tools to measure and assess students' intention to pursue a career in mental health nursing. The present study used a cross-sectional survey design undertaken at a large tertiary institution in Western Sydney (Australia) between May and August 2013. It comprised three distinct and sequential phases: (i) items were generated representing the four dimensions of the theory of planned behaviour; (ii) face and content validity were tested by a representative reference group and panel of experts; and (iii) survey data from 1109 first- and second-year and 619 third-year students were used in exploratory and confirmatory factor analyses to test the factorial validity of the scale. Internal consistency was measured using Cronbach's alpha. Items generated for the ASPIRE scale were subject to face and content validity testing. Results showed good factorial validity and reliability for the final 14-item scale. Principal axis factoring revealed a one-factor solution, the hypothesized model being supported by confirmatory factor analysis. The ASPIRE scale is a valid and reliable instrument for measuring intention to pursue a career in mental health nursing among Bachelor of Nursing students. © 2017 Australian College of Mental Health Nurses Inc.
Scaling, Similarity, and the Fourth Paradigm for Hydrology
NASA Technical Reports Server (NTRS)
Peters-Lidard, Christa D.; Clark, Martyn; Samaniego, Luis; Verhoest, Niko E. C.; van Emmerik, Tim; Uijlenhoet, Remko; Achieng, Kevin; Franz, Trenton E.; Woods, Ross
2017-01-01
In this synthesis paper addressing hydrologic scaling and similarity, we posit that the search for universal laws of hydrology is hindered by our focus on computational simulation (the third paradigm), and assert that it is time for hydrology to embrace a fourth paradigm of data-intensive science. Advances in information-based hydrologic science, coupled with an explosion of hydrologic data and advances in parameter estimation and modelling, have laid the foundation for a data-driven framework for scrutinizing hydrological scaling and similarity hypotheses. We summarize important scaling and similarity concepts (hypotheses) that require testing, describe a mutual information framework for testing these hypotheses, describe boundary condition, state flux, and parameter data requirements across scales to support testing these hypotheses, and discuss some challenges to overcome while pursuing the fourth hydrological paradigm. We call upon the hydrologic sciences community to develop a focused effort towards adopting the fourth paradigm and apply this to outstanding challenges in scaling and similarity.
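The mutual information framework proposed here can be illustrated with a simple histogram-based estimator. The rainfall-runoff series below are synthetic stand-ins, not hydrologic observations from the paper:

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Histogram estimate of mutual information (in nats) between two series."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0) on empty cells
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
rainfall = rng.gamma(2.0, 5.0, 5000)               # synthetic forcing
runoff = 0.6 * rainfall + rng.normal(0, 1, 5000)   # strongly dependent response
noise = rng.normal(0, 1, 5000)                     # independent series

mi_dep = mutual_information(rainfall, runoff)  # large: shared information
mi_ind = mutual_information(rainfall, noise)   # near zero (small positive bias)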
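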
Assessing Individual Weather Risk-Taking and Its Role in Modeling Likelihood of Hurricane Evacuation
NASA Astrophysics Data System (ADS)
Stewart, A. E.
2017-12-01
This research focuses upon measuring an individual's level of perceived risk of different severe and extreme weather conditions using a new self-report measure, the Weather Risk-Taking Scale (WRTS). For 32 severe and extreme situations in which people could perform an unsafe behavior (e.g., remaining outside with lightning striking close by, driving over roadways covered with water, not evacuating ahead of an approaching hurricane), people rated: (1) their likelihood of performing the behavior; (2) the perceived risk of performing the behavior; (3) the expected benefits of performing the behavior; and (4) whether the behavior had actually been performed in the past. Initial development research with the measure using 246 undergraduate students examined its psychometric properties and found that it was internally consistent (Cronbach's α ranged from .87 to .93 for the four scales) and that the scales possessed good temporal (test-retest) reliability (r's ranged from .84 to .91). A second regression study involving 86 undergraduate students found that taking weather risks was associated with having taken similar risks in one's past and with the personality trait of sensation-seeking. Being more attentive to the weather and perceiving its risks when it became extreme was associated with lower likelihoods of taking weather risks (overall regression model, R²adj = 0.60). A third study involving 334 people examined the contributions of weather risk perceptions and risk-taking in modeling the self-reported likelihood of complying with a recommended evacuation ahead of a hurricane. Here, higher perceptions of hurricane risks and lower perceived benefits of risk-taking, along with fear of severe weather and hurricane personal self-efficacy ratings, were all statistically significant contributors to the likelihood of evacuating ahead of a hurricane. Psychological rootedness and attachment to one's home also tended to predict lack of evacuation.
This research highlights the contributions that a psychological approach can offer in understanding preparations for severe weather. This approach also suggests that a great deal of individual variation exists in weather-protective behaviors, which may explain in part why some people take weather-related risks despite receiving warnings for severe weather.
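The adjusted R² reported in the regression study penalizes explained variance for model complexity relative to sample size. A minimal sketch of the standard adjustment (the example numbers are hypothetical, not the study's fitted model):

```python
def adjusted_r2(r2, n, p):
    """Adjusted R-squared for n observations and p predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Hypothetically, a raw R^2 of 0.62 from 4 predictors with n = 86 respondents
# shrinks only slightly, because n is large relative to p:
adj = adjusted_r2(0.62, 86, 4)
```

The adjustment always lowers R² when p > 0, guarding against overfitting small samples with many predictors.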
ERIC Educational Resources Information Center
Karren, Benjamin C.
2017-01-01
The Gilliam Autism Rating Scale-Third Edition (GARS-3) is a norm-referenced tool designed to screen for autism spectrum disorders (ASD) in individuals between the ages of 3 and 22 (Gilliam, 2014). The GARS-3 test kit consists of three different components and includes an "Examiner's Manual," summary/response forms (50), and the…
A synoptic view of the Third Uniform California Earthquake Rupture Forecast (UCERF3)
Field, Edward; Jordan, Thomas H.; Page, Morgan T.; Milner, Kevin R.; Shaw, Bruce E.; Dawson, Timothy E.; Biasi, Glenn; Parsons, Thomas E.; Hardebeck, Jeanne L.; Michael, Andrew J.; Weldon, Ray; Powers, Peter; Johnson, Kaj M.; Zeng, Yuehua; Bird, Peter; Felzer, Karen; van der Elst, Nicholas; Madden, Christopher; Arrowsmith, Ramon; Werner, Maximillan J.; Thatcher, Wayne R.
2017-01-01
Probabilistic forecasting of earthquake‐producing fault ruptures informs all major decisions aimed at reducing seismic risk and improving earthquake resilience. Earthquake forecasting models rely on two scales of hazard evolution: long‐term (decades to centuries) probabilities of fault rupture, constrained by stress renewal statistics, and short‐term (hours to years) probabilities of distributed seismicity, constrained by earthquake‐clustering statistics. Comprehensive datasets on both hazard scales have been integrated into the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3). UCERF3 is the first model to provide self‐consistent rupture probabilities over forecasting intervals from less than an hour to more than a century, and it is the first capable of evaluating the short‐term hazards that result from multievent sequences of complex faulting. This article gives an overview of UCERF3, illustrates the short‐term probabilities with aftershock scenarios, and draws some valuable scientific conclusions from the modeling results. In particular, seismic, geologic, and geodetic data, when combined in the UCERF3 framework, reject two types of fault‐based models: long‐term forecasts constrained to have local Gutenberg–Richter scaling, and short‐term forecasts that lack stress relaxation by elastic rebound.
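The local Gutenberg-Richter scaling that UCERF3 tests fault-based models against is the log-linear magnitude-frequency law log10 N = a − bM. A minimal sketch with illustrative a and b values (not UCERF3 parameters):

```python
def gr_rate(m, a=4.0, b=1.0):
    """Cumulative annual rate of events with magnitude >= m, from
    the Gutenberg-Richter law log10 N = a - b*m (a, b illustrative)."""
    return 10 ** (a - b * m)

# With b = 1, each unit increase in magnitude cuts the rate tenfold:
ratio = gr_rate(5.0) / gr_rate(6.0)
```

UCERF3's finding is that forecasts constrained to obey this scaling on every individual fault are rejected by the combined seismic, geologic, and geodetic data, even though the law holds regionally.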
Modeling non-equilibrium mass transport in biologically reactive porous media
NASA Astrophysics Data System (ADS)
Davit, Yohan; Debenest, Gérald; Wood, Brian D.; Quintard, Michel
2010-09-01
We develop a one-equation non-equilibrium model to describe the Darcy-scale transport of a solute undergoing biodegradation in porous media. Most of the mathematical models that describe the macroscale transport in such systems have been developed intuitively on the basis of simple conceptual schemes. There are two problems with such a heuristic analysis. First, it is unclear how much information these models are able to capture; that is, it is not clear what the model's domain of validity is. Second, there is no obvious connection between the macroscale effective parameters and the microscopic processes and parameters. As an alternative, a number of upscaling techniques have been developed to derive the appropriate macroscale equations that are used to describe mass transport and reactions in multiphase media. These approaches have been adapted to the problem of biodegradation in porous media with biofilms, but most of the work has focused on systems that are restricted to small concentration gradients at the microscale. This assumption, referred to as the local mass equilibrium approximation, generally has constraints that are overly restrictive. In this article, we devise a model that does not require the assumption of local mass equilibrium to be valid. In this approach, one instead requires only that, at sufficiently long times, anomalous behaviors of the third and higher spatial moments can be neglected; this, in turn, implies that the macroscopic model is well represented by a convection-dispersion-reaction type equation. This strategy is very much in the spirit of the developments for Taylor dispersion presented by Aris (1956). On the basis of our numerical results, we carefully describe the domain of validity of the model and show that the time-asymptotic constraint may be adhered to even for systems that are not at local mass equilibrium.
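The moment condition behind the model's validity, in the spirit of Aris (1956), can be sketched numerically: for a plume well described by a convection-dispersion equation, the first spatial moment recovers the effective velocity and the second central moment grows as 2·D_eff·t. The Gaussian profile below is a synthetic illustration, not output of the paper's upscaled model:

```python
import numpy as np

# Synthetic plume: analytic solution of a 1-D convection-dispersion equation
# with velocity v and dispersion D, sampled at time t.
x = np.linspace(-50.0, 150.0, 4001)
dx = x[1] - x[0]
v, D, t = 1.0, 0.5, 20.0
c = np.exp(-(x - v * t) ** 2 / (4 * D * t))

# Method of moments: zeroth moment normalizes; first and second central
# moments recover the effective transport parameters.
m0 = c.sum() * dx
mean = (x * c).sum() * dx / m0
var = ((x - mean) ** 2 * c).sum() * dx / m0

v_est = mean / t       # first moment / t  -> effective velocity
D_est = var / (2 * t)  # second moment / 2t -> effective dispersion
```

When third and higher moments deviate persistently from Gaussian behavior, as the paper discusses, this closure fails and a convection-dispersion-reaction description is no longer adequate.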
1983-01-01
Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants (U). Miller and Miller Inc., Orlando FL; H. D. Miller et al... Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants. Report 1: Baseline Studies, Volume I... Boyd, J. 1983. "Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants; Report 4, Third Year Poststocking
Impact of Scattering Model on Disdrometer Derived Attenuation Scaling
NASA Technical Reports Server (NTRS)
Zemba, Michael; Luini, Lorenzo; Nessel, James; Riva, Carlo (Compiler)
2016-01-01
NASA Glenn Research Center (GRC), the Air Force Research Laboratory (AFRL), and the Politecnico di Milano (POLIMI) are currently entering the third year of a joint propagation study in Milan, Italy utilizing the 20 and 40 GHz beacons of the Alphasat TDP5 Aldo Paraboni scientific payload. The Ka- and Q-band beacon receivers were installed at the POLIMI campus in June of 2014 and provide direct measurements of signal attenuation at each frequency. Collocated weather instrumentation provides concurrent measurement of atmospheric conditions at the receiver; included among these weather instruments is a Thies Clima Laser Precipitation Monitor (optical disdrometer) which records droplet size distributions (DSD) and droplet velocity distributions (DVD) during precipitation events. This information can be used to derive the specific attenuation at frequencies of interest and thereby scale measured attenuation data from one frequency to another. Given the ability both to predict the 40 GHz attenuation from the disdrometer and the 20 GHz time series and to directly measure the 40 GHz attenuation with the beacon receiver, the Milan terminal is uniquely able to assess these scaling techniques and refine the methods used to infer attenuation from disdrometer data. In order to derive specific attenuation from the DSD, the forward scattering coefficient must be computed. In previous work, this has been done using the Mie scattering model; however, this assumes a spherical droplet shape. The primary goal of this analysis is to assess the impact of the scattering model and droplet shape on disdrometer-derived attenuation predictions by comparing the use of the Mie scattering model to the use of the T-matrix method, which does not assume a spherical droplet. In particular, this paper will investigate the impact of these two scattering approaches on the error of the resulting predictions as well as on the relationship between prediction error and rain rate.
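The DSD-to-attenuation step described above amounts to integrating an extinction cross-section against the measured drop size distribution, k ≈ 4.343×10³ ∫ σ_ext(D) N(D) dD. The sketch below substitutes a toy power-law cross-section and a Marshall-Palmer DSD for the full Mie or T-matrix computation, so the coefficients are illustrative only:

```python
import numpy as np

def marshall_palmer(D_mm, rain_rate):
    """Marshall-Palmer DSD: N(D) in mm^-1 m^-3, D in mm, rain rate in mm/h."""
    N0 = 8000.0
    lam = 4.1 * rain_rate ** -0.21  # slope parameter, mm^-1
    return N0 * np.exp(-lam * D_mm)

def specific_attenuation(rain_rate, c=2e-10, n=4.0):
    """Approximate specific attenuation k (dB/km) by numerically integrating
    sigma_ext(D) * N(D) over drop diameter. sigma_ext = c * D**n is a toy
    power-law stand-in for a Mie/T-matrix cross-section (c, n illustrative)."""
    D = np.linspace(0.1, 7.0, 500)   # drop diameters, mm
    sigma_ext = c * D ** n           # toy extinction cross-section, m^2
    dD = D[1] - D[0]
    return 4.343e3 * np.sum(sigma_ext * marshall_palmer(D, rain_rate)) * dD

k_heavy = specific_attenuation(20.0)  # heavier rain -> larger attenuation
k_light = specific_attenuation(5.0)
```

Computing k this way at both 20 and 40 GHz (via frequency-dependent cross-sections) is what enables the attenuation scaling the paper evaluates; swapping the cross-section model is exactly the Mie-versus-T-matrix comparison.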
Kindergarten Predictors of Third Grade Writing
Kim, Young-Suk; Al Otaiba, Stephanie; Wanzek, Jeanne
2015-01-01
The primary goal of the present study was to examine the relations of kindergarten transcription, oral language, word reading, and attention skills to writing skills in third grade. Children (N = 157) were assessed on their letter writing automaticity, spelling, oral language, word reading, and attention in kindergarten. Then, they were assessed on writing in third grade using three writing tasks – one narrative and two expository prompts. Children’s written compositions were evaluated in terms of writing quality (the extent to which ideas were developed and presented in an organized manner). Structural equation modeling showed that kindergarten oral language and lexical literacy skills (i.e., word reading and spelling) independently predicted third grade narrative writing quality, and kindergarten literacy skill uniquely predicted third grade expository writing quality. In contrast, attention and letter writing automaticity were not directly related to writing quality in either the narrative or expository genre. These results are discussed in light of theoretical and practical implications. PMID:25642118
Psychoacoustic Factors in Musical Intonation: Beats, Interval Tuning, and Inharmonicity.
NASA Astrophysics Data System (ADS)
Keislar, Douglas Fleming
Three psychoacoustic experiments were conducted using musically experienced subjects. In the first two experiments, the interval tested was the perfect fifth F4-C5; in the final one it was the major third F4-A4. The beat rate was controlled by two different methods: (1) simply retuning the interval, and (2) frequency-shifting one partial of each pair of beating partials without changing the overall interval tuning. The second method introduces inharmonicity. In addition, two levels of beat amplitude were introduced by using either a complete spectrum of 16 equal-amplitude partials per note, or by deleting one partial from each pair of beating partials. The results of all three experiments indicate that, for these stimuli, beating does not contribute significantly to the percept of "out-of-tuneness," because it made no difference statistically whether the beat amplitude was maximal or minimal. By contrast, mistuning the interval was highly significant. For the fifths, frequency-shifting the appropriate partials had about as much effect on the perceived intonation as mistuning the interval. For thirds, this effect was weaker, presumably since there were fewer inharmonic partials and they were higher in the harmonic series. Subjects were less consistent in their judgments of thirds than of fifths, perhaps because the equal-tempered and just thirds differ noticeably, unlike fifths. Since it is unlikely that beats would be more audible in real musical situations than under these laboratory conditions, these results suggest that the perception of intonation in music is dependent on the actual interval tuning rather than the concomitant beat rate. If beating partials are unimportant vis-a-vis interval tuning, this strengthens the argument for a cultural basis for musical intonation and scales, as opposed to the acoustical basis set forth by Helmholtz and others.
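The beat-rate manipulation in these experiments rests on a simple relation: for a fifth, the third harmonic of the lower note nearly coincides with the second harmonic of the upper note, and their difference frequency is the beat rate. A sketch (F4 frequency taken from the A4 = 440 Hz equal-tempered standard):

```python
# Beat rate between the nearly coincident partials of the fifth F4-C5:
# the 3rd harmonic of F4 against the 2nd harmonic of the upper note.
F4 = 349.228  # Hz

def fifth_beat_rate(ratio):
    """Beat rate (Hz) of a fifth tuned to the given frequency ratio."""
    upper = F4 * ratio
    return abs(3 * F4 - 2 * upper)

just_beats = fifth_beat_rate(3 / 2)            # just fifth: partials coincide
et_beats = fifth_beat_rate(2 ** (7 / 12))      # equal-tempered fifth: slow beats
```

The experiments decouple this beat rate from the interval tuning by frequency-shifting individual partials; the finding that tuning, not beating, drives "out-of-tuneness" judgments is what undercuts the beat-based account of intonation.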
Estimating home-range size: when to include a third dimension?
Monterroso, Pedro; Sillero, Neftalí; Rosalino, Luís Miguel; Loureiro, Filipa; Alves, Paulo Célio
2013-01-01
Most studies dealing with home ranges consider the study areas as if they were totally flat, working only in two dimensions, when in reality they are irregular surfaces displayed in three dimensions. By disregarding the third dimension (i.e., topography), home-range size underestimates the surface actually occupied by the animal, potentially leading to misinterpretations of the animals' ecological needs. We explored the influence of considering the third dimension in the estimation of home-range size by modeling the variation between the planimetric and topographic estimates at several spatial scales. Our results revealed that planimetric approaches underestimate home-range size, with underestimates ranging from nearly zero up to 22%. The difference between planimetric and topographic estimates of home-range sizes produced highly robust models using the average slope as the sole independent factor. Moreover, our models suggest that planimetric estimates in areas with an average slope of 16.3° (±0.4) or more will incur errors ≥5%. Alternatively, the altitudinal range can be used as an indicator of the need to include topography in home-range estimates. Our results confirmed that home-range estimates can be significantly biased when topography is disregarded. We suggest that study areas where home-range studies will be performed should first be scoped for their altitudinal range, which can serve as an indicator of the need for posterior use of average slope values to model the surface area used and/or available for the studied animals. PMID:23919170
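The planimetric bias has a simple first-order form: a cell of planimetric area A on terrain of uniform slope θ has true surface area A/cos θ. The sketch below uses this trigonometric approximation only to illustrate the direction and rough size of the bias; the paper's thresholds come from fitted empirical models, not this formula:

```python
import math

def surface_area(planimetric_area, slope_deg):
    """True surface area of a planimetric cell on uniformly sloped terrain:
    a first-order trigonometric correction, A / cos(theta)."""
    return planimetric_area / math.cos(math.radians(slope_deg))

flat = surface_area(100.0, 0.0)     # no correction on flat ground
steep = surface_area(100.0, 16.3)   # the paper's ~5% error threshold slope
```

Note that pure trigonometry at 16.3° gives roughly a 4% correction; the paper's empirically fitted models place the ≥5% error threshold near that slope, so the simple formula captures the trend but not the exact magnitude.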
NASA Technical Reports Server (NTRS)
Newell, Reginald E. (Principal Investigator)
2003-01-01
During the first year we focused on the analysis of data collected on over 7600 commercial aircraft flights (the MOZAIC program). The aim was to further our understanding of the fundamental dynamical processes that drive mesoscale phenomena in the upper troposphere and lower stratosphere, and their effects on the advection of passive scalars. Through these studies we made the following findings [2001]: We derived the Kolmogorov equation for the third-order velocity structure function on an f-plane. We showed how the sign of the function yields the direction of the energy cascade. The remarkable linearity of the measured off-diagonal third-order structure function was studied. We suggested that the Coriolis term, which appears explicitly in this equation, may be crucial in understanding the observed kinetic energy spectra at scales larger than 100 km, instead of the nonlinear advection term as previously assumed. Also, we showed that
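For context, the classical result that the f-plane derivation generalizes is Kolmogorov's four-fifths law for the third-order longitudinal structure function in homogeneous, isotropic three-dimensional turbulence:

```latex
S_3(r) \equiv \left\langle \left[ u_L(\mathbf{x}+\mathbf{r}) - u_L(\mathbf{x}) \right]^3 \right\rangle
       = -\tfrac{4}{5}\,\varepsilon\, r ,
```

so the sign of S_3 diagnoses the cascade direction: negative for a forward (downscale) energy cascade, positive for an inverse cascade, which is the diagnostic the abstract applies to the MOZAIC data.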
Weight-teasing and emotional well-being in adolescents: longitudinal findings from Project EAT.
Eisenberg, Marla E; Neumark-Sztainer, Dianne; Haines, Jess; Wall, Melanie
2006-06-01
To determine if weight-teasing predicts subsequent low self-esteem, poor body image, and depressive symptoms; and to examine two mechanisms through which early teasing may influence later emotional health. A racially and socio-economically diverse sample of 2516 adolescents completed surveys for both Wave 1 (1998-99) and Wave 2 (2003-04) of the Project EAT study. Approximately one third of these were early adolescents who transitioned into middle adolescence, and two thirds were middle adolescents who transitioned into young adulthood. Multiple linear regression analysis was conducted in three stages to test Model A: the total effect of Time 1 teasing on Time 2 emotional health; Model B: Model A, mediated by Time 2 teasing and body mass index (BMI); and Model C: Model B, also mediated by Time 1 emotional health. Approximately one third of males and slightly under half of females reported that they had been teased about their weight at Time 1. Time 1 teasing predicted lower self-esteem, lower body image, and higher depressive symptoms at Time 2 for males and females in the older and younger age groups. This relationship was fully mediated, however, by Time 2 teasing and BMI, and by Time 1 emotional health. Adjusted R² statistics for the final models ranged from .11 to .36. Weight-teasing in adolescence affects emotional well-being at 5-year follow-up, and appears to function through two mechanisms. Reducing early teasing and its concurrent damage to emotional health may prevent longer-term emotional health consequences.
NASA Astrophysics Data System (ADS)
Chattopadhyay, Utpal; Das, Debottam
2009-02-01
A nonuniversal scalar mass supergravity type of model is explored where the first two generations of scalars and the third generation of sleptons may be very massive. The lighter or vanishing third generation of squarks, as well as Higgs scalars at the unification scale, causes the radiative electroweak symmetry breaking constraint to be less prohibitive. Thus, both the flavor-changing neutral-current/CP-violation problems and the naturalness problem are within control. We identify a large slepton mass effect in the renormalization group equations of m²_HD (the soft mass-squared of the down-type Higgs) that may turn the latter negative at the electroweak scale even for a small tanβ. A hyperbolic branch/focus pointlike effect is found for m²_A that may result in very light Higgs spectra. The lightest stable particle is dominantly a b-ino that pair annihilates via Higgs exchange, giving rise to a relic-density region consistent with Wilkinson Microwave Anisotropy Probe data for all tanβ. Detection prospects of such lightest stable particles in the upcoming dark matter experiments, both of direct and indirect types (photon flux), are interesting. The Higgs bosons and the third generation of squarks are light in this scenario, and these may be easily probed, besides charginos and neutralinos, in the early runs of the Large Hadron Collider.
Nishino, Tomofumi; Ishii, Tomoo; Chang, Fei; Yanai, Takaji; Watanabe, Arata; Ogawa, Takeshi; Mishima, Hajime; Nakai, Kenjiro; Ochiai, Naoyuki
2010-05-01
The purpose of this study was to clarify the effect of gradual weight bearing (GWB) on regenerating cartilage. We developed a novel external fixation device (EFD) with a controllable weight-bearing system and continuous passive motion (CPM). A full-thickness defect was created by resection of the entire articular surface of the tibial plateau after the EFD was fixed in the rabbit's left knee. In the GWB group (n=6), GWB was started 6 weeks after surgery. In the CPM group (n=6), CPM with EFD was applied in the same manner without GWB. The control group (n=5) received only joint distraction. All rabbits were sacrificed 9 weeks after surgery. The central one-third of the regenerated tissue was assessed and scored blindly using a grading scale modified from the International Cartilage Repair Society visual histological assessment scale. The areas stained by Safranin-O and type II collagen antibody were measured, and the percentage of each area was calculated. There was no significant difference in the histological assessment scale among the groups. The percentage of the type II collagen-positive area was significantly larger in the GWB group than in the CPM group. The present study suggests that optimal mechanical stress, such as GWB, may affect regeneration of cartilage, in vivo. Copyright (c) 2009 Orthopaedic Research Society.
NASA Astrophysics Data System (ADS)
Fanget, Alain
2009-06-01
Many authors claim that to understand the response of a propellant, specifically under quasi-static and dynamic loading, the mesostructural morphology and the mechanical behaviour of each of its components have to be known. However, the scale of the mechanical description of the behaviour of a propellant is relative to its heterogeneities and the wavelength of loading: the shorter the wavelength, the more important the topological description of the material. In our problems, involving the safety of energetic materials, the propellant can be subjected to a large spectrum of loadings. This presentation is divided into five parts. The first part describes the processes used to extract the information about the morphology of the meso-structure of the material and presents some results; the results, difficulties, and perspectives for this part will be recalled. The second part determines the physical processes involved at this scale from experimental results. Taking into account the knowledge of the morphology, two ways have been chosen to describe the response of the material. One concerns the quasi-static loading, the object of the third part, in which we show how we use the mesoscopic scale as a base of development to build constitutive models. The fourth part presents, for low but dynamic loading, the comparison between numerical analysis and experiments.
Proceedings of the Conference on the Design of Experiments (23rd)
1978-07-01
of Statistics, Carnegie-Mellon University. [12] Duran, B. S. (1976). A survey of nonparametric tests for scale. Communications in Statistics A5, 1287... the twenty-third Design of Experiments Conference was the U.S. Army Combat Development Experimentation Command, Fort Ord, California. Excellent... Availability: Prof. G. E. P. Box, Time Series Modelling, University of Wisconsin. Dr. Churchill Eisenhart was recipient this year of the Samuel S. Wilks Memorial
Cosmography of f(R)-brane cosmology
NASA Astrophysics Data System (ADS)
Bouhmadi-López, Mariam; Capozziello, Salvatore; Cardone, Vincenzo F.
2010-11-01
Cosmography is a useful tool to constrain cosmological models, in particular, dark energy models. In the case of modified theories of gravity, where the equations of motion are generally quite complicated, cosmography can contribute to select realistic models without imposing arbitrary choices a priori. Indeed, its reliability is based on the assumptions that the universe is homogeneous and isotropic on large scale and luminosity distance can be “tracked” by the derivative series of the scale factor a(t). We apply this approach to induced gravity brane-world models where an f(R) term is present in the brane effective action. The virtue of the model is to self-accelerate the normal and healthy Dvali-Gabadadze-Porrati branch once the f(R) term deviates from the Hilbert-Einstein action. We show that the model, coming from a fundamental theory, is consistent with the ΛCDM scenario at low redshift. We finally estimate the cosmographic parameters fitting the Union2 Type Ia Supernovae data set and the distance priors from baryon acoustic oscillations and then provide constraints on the present day values of f(R) and its second and third derivatives.
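The cosmographic approach described here rests on the standard Taylor expansion of the scale factor about the present epoch, which is what lets luminosity distance be "tracked" by derivatives of a(t):

```latex
\frac{a(t)}{a_0} = 1 + H_0\,\Delta t - \frac{q_0}{2} H_0^2\,\Delta t^2
                 + \frac{j_0}{6} H_0^3\,\Delta t^3 + \frac{s_0}{24} H_0^4\,\Delta t^4 + \dots,
\qquad \Delta t = t - t_0,
```

with the Hubble, deceleration, jerk, and snap parameters defined as

```latex
H = \frac{\dot a}{a}, \qquad
q = -\frac{\ddot a}{a H^2}, \qquad
j = \frac{\dddot a}{a H^3}, \qquad
s = \frac{a^{(4)}}{a H^4}.
```

Fitting these parameters to the Union2 supernovae and baryon-acoustic-oscillation priors, as the abstract describes, then constrains f(R) and its derivatives without assuming a specific dark energy model a priori.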
Li, Huan; Shang, Xiao-Jun; Dong, Qi-Rong
2015-10-01
To investigate the analgesic and anti-inflammatory effects of transcutaneous electrical nerve stimulation (TENS) at local or distant acupuncture points in a rat model of the third lumbar vertebrae transverse process syndrome. Forty Sprague-Dawley rats were randomly divided into control, model, model plus local acupuncture point stimulation at BL23 (model+LAS) and model plus distant acupuncture point stimulation at ST36 (model+DAS) groups. All rats except controls underwent surgical third lumbar vertebrae transverse process syndrome modelling on day 2. Thereafter, rats in the model+LAS and model+DAS groups were treated daily with TENS for a total of six treatments (2/100 Hz, 30 min/day) from day 16 to day 29. Thermal pain thresholds were measured once a week during treatment and were continued until day 57, when local muscle tissue was sampled for RT-PCR and histopathological examination after haematoxylin and eosin staining. mRNA expression of interleukin-1 β (IL-1β), tumour necrosis factor-α (TNF-α) and inducible nitric oxide synthase (iNOS) was determined. Thermal pain thresholds of all model rats decreased relative to the control group. Both LAS and DAS significantly increased the thermal pain threshold at all but one point during the treatment period. Histopathological assessment revealed that the local muscle tissues around the third lumbar vertebrae transverse process recovered to some degree in both the model+LAS and model+DAS groups; however, LAS appeared to have a greater effect. mRNA expression of IL-1β, TNF-α and iNOS in the local muscle tissues was increased after modelling and attenuated in both model+LAS and model+DAS groups. The beneficial effect was greater after LAS than after DAS. TENS at both local (BL23) and distant (ST36) acupuncture points had a pain-relieving effect in rats with the third lumbar vertebrae transverse process syndrome, and LAS appeared to have greater anti-inflammatory and analgesic effects than DAS.
Published by the BMJ Publishing Group Limited.
1987-03-01
model is one in which words or numerical descriptions are used to represent an entity or process. An example of a symbolic model is a mathematical... are the third type of model used in modeling combat attrition. Analytical models are symbolic models which use mathematical symbols and equations to... simplicity and the ease of tracing through the mathematical computations. In this section I will discuss some of the shortcomings which have been
Wong, Risa Liang; Fahs, Deborah Bain; Talwalkar, Jaideep S; Colson, Eve R; Desai, Mayur M; Kayingo, Gerald; Balanda, Matthew; Luczak, Anthony G; Rosenthal, Marjorie S
2016-01-01
Efforts to improve interprofessional education (IPE) are informed by attitudes of health professional students, yet there are limited US data on student characteristics and experiences associated with positive attitudes towards IPE. A cohort of US medical, nursing, and physician associate students was surveyed in their first and third years, using the Readiness for Interprofessional Learning Scale and Interdisciplinary Education Perception Scale. Information was also collected on demographics and experiences during training. Health professional students differed in their attitudes towards IPE; characteristics associated with having more positive attitudes at both time points included being a nursing student, female, older, and having more previous healthcare experience. Students who participated in interprofessional extracurricular activities (particularly patient-based activities) during training reported more positive attitudes in the third year than those who did not participate in such activities. Based on these findings, schools may consider how student characteristics and participation in interprofessional extracurricular activities can affect attitudes regarding IPE. Building on the positive elements of this interprofessional extracurricular experience, schools may also want to consider service-learning models of IPE where students work together on shared goals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volkas, R. R.; Foot, R.; He, X.
The universal QCD color theory is extended to an SU(3)₁ ⊗ SU(3)₂ ⊗ SU(3)₃ gauge theory, where quarks of the i-th generation transform as triplets under SU(3)ᵢ and singlets under the other two factors. The usual color group is then identified with the diagonal subgroup, which remains exact after symmetry breaking. The gauge bosons associated with the 16 broken generators then form two massive octets under ordinary color. The interactions between quarks and these heavy gluonlike particles are explicitly nonuniversal, and thus an exploration of their physical implications allows us to shed light on the fundamental issue of strong-interaction universality. Nonuniversality and weak flavor mixing are shown to generate heavy-gluon-induced flavor-changing neutral currents. The phenomenology of these processes is studied, as they provide the major experimental constraint on the extended theory. Three symmetry-breaking scenarios are presented. The first has color breaking occurring at the weak scale, while the second and third divorce the two scales. The third model has the interesting feature of radiatively induced off-diagonal Kobayashi-Maskawa matrix elements.
A Generalized Decision Framework Using Multi-objective Optimization for Water Resources Planning
NASA Astrophysics Data System (ADS)
Basdekas, L.; Stewart, N.; Triana, E.
2013-12-01
Colorado Springs Utilities (CSU) is currently engaged in an Integrated Water Resource Plan (IWRP) to address the complex planning scenarios, across multiple time scales, currently faced by CSU. The modeling framework developed for the IWRP uses a flexible, data-centered Decision Support System (DSS) with a MODSIM-based modeling system to represent the operation of the current CSU raw water system, coupled with a state-of-the-art multi-objective optimization algorithm. Three basic components are required for the framework, which can be implemented for planning horizons ranging from seasonal to interdecadal. First, a water resources system model is required that is capable of reasonable system simulation to resolve performance metrics at the appropriate temporal and spatial scales of interest. The system model should be an existing simulation model, or one developed during the planning process with stakeholders, so that 'buy-in' has already been achieved. Second, one or more hydrologic scenario tools capable of generating a range of plausible inflows for the planning period of interest are required. These may include paleo-informed or climate-change-informed sequences. Third, a multi-objective optimization model that can be wrapped around the system simulation model is required. The new generation of multi-objective optimization models does not require parameterization, which greatly reduces problem complexity. Bridging the gap between research and practice will be evident as we use a case study from CSU's planning process to demonstrate this framework with specific competing water management objectives. Careful formulation of objective functions, choice of decision variables, and system constraints will be discussed. Rather than treating results as theoretically Pareto optimal in a planning process, we use the powerful multi-objective optimization models as tools to more efficiently and effectively move out of the inferior decision space.
The use of this framework will help CSU evaluate tradeoffs in a continually changing world.
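The screening step such frameworks rely on is Pareto dominance: a candidate plan is kept only if no other plan is at least as good in every objective and strictly better in one. A minimal sketch of that filter, assuming all objectives are minimized; the `pareto_filter` helper and the sample portfolio scores are illustrative, not part of CSU's actual MODSIM-based system:

```python
def pareto_filter(solutions):
    """Keep only the non-dominated objective vectors (all objectives minimized)."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in at least one
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Hypothetical (cost, shortage-risk) scores for four candidate supply portfolios:
candidates = [(4, 1), (3, 3), (5, 5), (2, 4)]
front = pareto_filter(candidates)  # (5, 5) is dominated and drops out
```

In practice the optimization wrapper evaluates each candidate by running the system simulation, then applies a filter of this kind to steer the search out of the inferior decision space.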
NASA Astrophysics Data System (ADS)
Niestegge, Gerd
2014-09-01
In quantum mechanics, the self-adjoint Hilbert space operators play a triple role as observables, generators of the dynamical groups and statistical operators defining the mixed states. One might expect that this is typical of Hilbert space quantum mechanics, but it is not. The same triple role occurs for the elements of a certain ordered Banach space in a much more general theory based upon quantum logics and a conditional probability calculus (which is a quantum logical model of the Lüders-von Neumann measurement process). It is shown how positive groups, automorphism groups, Lie algebras and statistical operators emerge from one major postulate: the non-existence of third-order interference (third-order interference and its impossibility in quantum mechanics were discovered by R. Sorkin in 1994). This again underlines the power of the combination of the conditional probability calculus with the postulate that there is no third-order interference. In two earlier papers, its impact on contextuality and nonlocality had already been revealed.
A Life-Cycle Model of Human Social Groups Produces a U-Shaped Distribution in Group Size.
Salali, Gul Deniz; Whitehouse, Harvey; Hochberg, Michael E
2015-01-01
One of the central puzzles in the study of sociocultural evolution is how and why transitions from small-scale human groups to large-scale, hierarchically more complex ones occurred. Here we develop a spatially explicit agent-based model as a first step towards understanding the ecological dynamics of small and large-scale human groups. By analogy with the interactions between single-celled and multicellular organisms, we build a theory of group lifecycles as an emergent property of single cell demographic and expansion behaviours. We find that once the transition from small-scale to large-scale groups occurs, a few large-scale groups continue expanding while small-scale groups gradually become scarcer, and large-scale groups become larger in size and fewer in number over time. Demographic and expansion behaviours of groups are largely influenced by the distribution and availability of resources. Our results conform to a pattern of human political change in which religions and nation states come to be represented by a few large units and many smaller ones. Future enhancements of the model should include decision-making rules and probabilities of fragmentation for large-scale societies. We suggest that the synthesis of population ecology and social evolution will generate increasingly plausible models of human group dynamics.
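The dynamics described above can be caricatured in a few lines of code: groups occupy cells on a resource landscape, grow logistically on the resources they hold, and expand into free neighbouring cells once large enough. The sketch below is a toy illustration in that spirit only; its 1D landscape, parameters, and rules are assumptions of this example, not the authors' actual agent-based model:

```python
import random

def simulate(n_cells=100, steps=200, growth=0.05, expand_threshold=10.0, seed=1):
    """Toy spatially explicit group model: cells hold resources, groups grow
    logistically on their cells' resources and claim a free neighbouring cell
    once their population exceeds expand_threshold."""
    rng = random.Random(seed)
    resources = [rng.uniform(0.8, 1.5) for _ in range(n_cells)]
    # five founding groups, each starting on one randomly chosen cell
    groups = [{"cells": {c}, "size": 1.0} for c in rng.sample(range(n_cells), 5)]
    occupied = {c for g in groups for c in g["cells"]}
    for _ in range(steps):
        for g in groups:
            # carrying capacity scales with the resources of the held territory
            capacity = 20.0 * sum(resources[c] for c in g["cells"])
            g["size"] += growth * g["size"] * (1.0 - g["size"] / capacity)
            if g["size"] > expand_threshold:
                frontier = {c + d for c in g["cells"] for d in (-1, 1)}
                free = [c for c in sorted(frontier) if 0 <= c < n_cells and c not in occupied]
                if free:
                    new_cell = rng.choice(free)
                    g["cells"].add(new_cell)
                    occupied.add(new_cell)
    return groups

groups = simulate()
```

Even this caricature reproduces the qualitative point of the abstract: expansion feeds back on capacity, so groups that cross the size threshold keep enlarging their territory while resource-poor groups stay small.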
Larocque, Guy R.; Bhatti, Jagtar S.; Liu, Jinxun; Ascough, James C.; Gordon, Andrew M.
2008-01-01
Many process-based models of carbon (C) and nitrogen (N) cycles have been developed for terrestrial ecosystems, including forest ecosystems. They address many basic issues of ecosystem structure and functioning, such as the role of internal feedback in ecosystem dynamics. The critical factor in these phenomena is scale, as these processes operate at scales from the minute (e.g. particulate pollution impacts on trees and other organisms) to the global (e.g. climate change). Research efforts remain important to improve the capability of such models to better represent the dynamics of terrestrial ecosystems, including the C, nutrient (e.g. N), and water cycles. Existing models are sufficiently well advanced to help decision makers develop sustainable management policies and planning of terrestrial ecosystems, as they make realistic predictions when used appropriately. However, decision makers must be aware of their limitations by having the opportunity to evaluate the uncertainty associated with process-based models (Smith and Heath, 2001; Allen et al., 2004). The variation in scale of issues currently being addressed by modelling efforts makes the evaluation of uncertainty a daunting task.
Metabolic rate determines haematopoietic stem cell self-renewal.
Sastry, P S R K
2004-01-01
The number of haematopoietic stem cells (HSCs) per animal is conserved across species. This means that HSCs need to maintain haematopoiesis over a longer period in larger animals, which implies a requirement for stem cell self-renewal. Three models currently exist: the stochastic model, the instructive model, and the more recently proposed chiaro-scuro model. It is a well-known allometric law that metabolic rate scales with body mass to the three-quarter power; larger animals therefore have a lower mass-specific metabolic rate than smaller animals. Here it is hypothesized that metabolic rate determines haematopoietic stem cell self-renewal: at lower metabolic rates the stem cells commit to self-renewal, whereas at higher metabolic rates they become committed to different lineages. The present hypothesis can explain the salient features of the different models. Recent findings regarding stem cell self-renewal suggest an important role for Wnt proteins and their receptors, known as frizzleds, which are an important component of the cell signaling pathway. The role of cGMP in the action of Wnts provides further justification for the present hypothesis, as cGMP is intricately linked to metabolic rate. The present hypothesis can also explain telomere homeostasis. One prediction concerns the limit on cell divisions known as the Hayflick limit: it is suggested here that this limit reflects the metabolic rate under laboratory conditions, and that a higher number of cell divisions is possible in vivo if the metabolic rate is lower. Copyright 2004 Elsevier Ltd.
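The three-quarter-power scaling invoked here is Kleiber's law, B ∝ M^(3/4), from which the mass-specific rate falls as M^(-1/4). A small illustrative computation of both consequences (the reference-mass normalization is an assumption of this sketch, not taken from the paper):

```python
def relative_metabolic_rate(mass_kg, ref_mass_kg=1.0):
    """Whole-body metabolic rate relative to a reference animal,
    under Kleiber's three-quarter-power law (B ∝ M^0.75)."""
    return (mass_kg / ref_mass_kg) ** 0.75

def mass_specific_rate(mass_kg, ref_mass_kg=1.0):
    """Per-kilogram rate falls as M^-0.25: larger animals run 'cooler'."""
    return (mass_kg / ref_mass_kg) ** -0.25

# A 10,000x heavier animal has only ~1,000x the total metabolic rate...
total = relative_metabolic_rate(10_000)
# ...so each kilogram of its tissue metabolizes ~10x slower.
per_kg = mass_specific_rate(10_000)
```

This is the quantitative sense in which the hypothesis links body size to self-renewal: the per-cell metabolic environment differs systematically between large and small animals.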
ERIC Educational Resources Information Center
Morrison, James L.; Renfro, William L.
The concepts of long-range planning and strategic planning are explained, and a planning model is proposed. Attention is directed to an environmental scanning model that is congruent with the concept of strategic planning and that emerges from one portion of the futures research community, issues management. A third planning model, the strategic…
NASA Astrophysics Data System (ADS)
Simon, P.; Semboloni, E.; van Waerbeke, L.; Hoekstra, H.; Erben, T.; Fu, L.; Harnois-Déraps, J.; Heymans, C.; Hildebrandt, H.; Kilbinger, M.; Kitching, T. D.; Miller, L.; Schrabback, T.
2015-05-01
We study the correlations of the shear signal between triplets of sources in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) to probe cosmological parameters via the matter bispectrum. In contrast to previous studies, we adopt a non-Gaussian model of the data likelihood which is supported by our simulations of the survey. We find that for state-of-the-art surveys, similar to CFHTLenS, a Gaussian likelihood analysis is a reasonable approximation, albeit small differences in the parameter constraints are already visible. For future surveys we expect that a Gaussian model becomes inaccurate. Our algorithm for a refined non-Gaussian analysis and data compression is then of great utility especially because it is not much more elaborate if simulated data are available. Applying this algorithm to the third-order correlations of shear alone in a blind analysis, we find a good agreement with the standard cosmological model: Σ_8 = σ_8(Ω_m/0.27)^{0.64} = 0.79^{+0.08}_{-0.11} for a flat Λ cold dark matter cosmology with h = 0.7 ± 0.04 (68 per cent credible interval). Nevertheless our models provide only moderately good fits as indicated by χ²/dof = 2.9, including a 20 per cent rms uncertainty in the predicted signal amplitude. The models cannot explain a signal drop on scales around 15 arcmin, which may be caused by systematics. It is unclear whether the discrepancy can be fully explained by residual point spread function systematics of which we find evidence at least on scales of a few arcmin. Therefore we need a better understanding of higher order correlations of cosmic shear and their systematics to confidently apply them as cosmological probes.
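The quoted constraint bundles σ_8 and Ω_m into the single best-constrained lensing combination Σ_8 = σ_8 (Ω_m/0.27)^0.64. A quick arithmetic sketch of how to move between the two (function names are illustrative; the pivot 0.27 and exponent 0.64 come from the abstract):

```python
def sigma8_omega_combination(sigma_8, omega_m, pivot=0.27, alpha=0.64):
    """Degenerate lensing amplitude Sigma_8 = sigma_8 * (Omega_m / pivot)^alpha."""
    return sigma_8 * (omega_m / pivot) ** alpha

def sigma8_from_Sigma8(Sigma_8, omega_m, pivot=0.27, alpha=0.64):
    """Invert the relation to read off sigma_8 for an assumed Omega_m."""
    return Sigma_8 / (omega_m / pivot) ** alpha

# At the pivot Omega_m = 0.27 the combination reduces to sigma_8 itself;
# for a higher assumed Omega_m = 0.30 the same Sigma_8 implies a lower sigma_8.
s8_at_pivot = sigma8_omega_combination(0.79, 0.27)
s8_at_030 = sigma8_from_Sigma8(0.79, 0.30)
```

The exponent 0.64 encodes the direction of the σ_8-Ω_m degeneracy for this third-order statistic; along that curve the data constrain the combination much more tightly than either parameter alone.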
A Model for Analyzing Precepting in the Clinical Setting.
ERIC Educational Resources Information Center
Edelstein, Ronald A.
Teaching strategies used by precepters at a hospital-based family medicine center were investigated with seven preceptors who had previous teaching experiences and were board certified (six in family medicine). A third-year senior resident presented and discussed two patient cases to the preceptors in separate one-to-one teaching sessions, and the…
Large-eddy simulation/Reynolds-averaged Navier-Stokes hybrid schemes for high speed flows
NASA Astrophysics Data System (ADS)
Xiao, Xudong
Three LES/RANS hybrid schemes have been proposed for the prediction of high-speed separated flows. Each method couples the k-zeta (enstrophy) RANS model with an LES subgrid-scale one-equation model by using a blending function that is coordinate-system independent. Two of these functions are based on the turbulence dissipation length scale and grid size, while the third has no explicit dependence on the grid. To implement the LES/RANS hybrid schemes, a new rescaling-reintroducing method is used to generate time-dependent turbulent inflow conditions. The hybrid schemes have been tested on a Mach 2.88 flow over a 25-degree compression-expansion ramp and a Mach 2.79 flow over a 20-degree compression ramp. A special computation procedure has been designed to prevent the separation zone from expanding upstream to the recycle plane. The code is parallelized using the Message Passing Interface (MPI) and is optimized for running on an IBM SP3 parallel machine. The scheme was validated first for a flat plate, where it was shown that the blending function has to be monotonic to prevent the RANS region from appearing within the LES region. In the 25-degree ramp case, the hybrid schemes provided better agreement with experiment in the recovery region. Grid refinement studies demonstrated the importance of using a grid-independent blending function and yielded further improvement in agreement with experiment in the recovery region. In the 20-degree ramp case, with a relatively finer grid, the hybrid scheme characterized by the grid-independent blending function predicted the flow field well in both the separation region and the recovery region. Therefore, with an appropriately fine grid, the current hybrid schemes are promising for the simulation of shock wave/boundary layer interaction problems.
Pesticide fate on catchment scale: conceptual modelling of stream CSIA data
NASA Astrophysics Data System (ADS)
Lutz, Stefanie R.; van der Velde, Ype; Elsayed, Omniea F.; Imfeld, Gwenaël; Lefrancq, Marie; Payraudeau, Sylvain; van Breukelen, Boris M.
2017-10-01
Compound-specific stable isotope analysis (CSIA) has proven beneficial in the characterization of contaminant degradation in groundwater, but it has never been used to assess pesticide transformation at the catchment scale. This study presents concentration and carbon CSIA data for the herbicides S-metolachlor and acetochlor from three locations (plot, drain, and catchment outlets) in a 47 ha agricultural catchment (Bas-Rhin, France). Herbicide concentrations at the catchment outlet were highest (62 µg L-1) in response to an intense rainfall event following herbicide application. Increasing δ13C values of S-metolachlor and acetochlor by more than 2 ‰ during the study period indicated herbicide degradation. To assist the interpretation of these data, discharge, concentrations, and δ13C values of S-metolachlor were modelled with a conceptual mathematical model using the transport formulation by travel-time distributions. Testing of different model setups supported the assumption that degradation half-lives (DT50) increase with increasing soil depth, which can be straightforwardly implemented in conceptual models using travel-time distributions. Moreover, model calibration yielded an estimate of a field-integrated isotopic enrichment factor, as opposed to laboratory-based assessments of enrichment factors in closed systems. Third, the Rayleigh equation commonly applied in groundwater studies was tested by our model for its potential to quantify degradation at the catchment scale. It provided conservative estimates of the extent of degradation in stream samples; however, because these estimates largely exceeded the simulated degradation within the entire catchment, they were not representative of overall degradation at the catchment scale. The conceptual modelling approach thus enabled us to upscale sample-based CSIA information on degradation to the catchment scale.
Overall, this study demonstrates the benefit of combining monitoring and conceptual modelling of concentration and CSIA data and advocates the use of travel-time distributions for assessing pesticide fate and transport on catchment scale.
Phiri, Kristen; Schaefer, Eric; Zhu, Junjia; Kjerulff, Kristen
2016-01-01
Background: Postpartum depression (PPD) is a common complication of childbearing, but the course of PPD is not well understood. We analyzed trajectories of depression and key risk factors associated with these trajectories in the peripartum and postpartum period. Methods: Women in The First Baby Study, a cohort of 3006 women pregnant with their first baby, completed telephone surveys measuring depression during the mother's third trimester and at 1, 6, and 12 months postpartum. Depression was assessed using the Edinburgh Postnatal Depression Scale. A semiparametric mixture model was used to estimate distinct group-based developmental trajectories of depression and determine whether trajectory group membership varied according to maternal characteristics. Results: A total of 2802 (93%) of mothers completed interviews through 12 months. The mixture model indicated six distinct depression trajectories. A history of anxiety or depression, unattached marital status, and inadequate social support were significantly associated with higher odds of belonging to trajectory groups with greater depression. Most of the depression trajectories were stable or slightly decreased over time, but one depression trajectory, encompassing 1.7% of the mothers, comprised women who were nondepressed in the third trimester but became depressed at 6 months postpartum and were increasingly depressed at 12 months after birth. Conclusions: This trajectory study indicates that women who are depressed during pregnancy tend to remain depressed during the first year postpartum or improve slightly, but an important minority of women become newly and increasingly depressed over the course of the first year after first childbirth.
Marn, Nina; Klanjscek, Tin; Stokes, Lesley; Jusup, Marko
2015-01-01
Sea turtles face threats globally and are protected by national and international laws. Allometry and scaling models greatly aid sea turtle conservation and research, and help to better understand the biology of sea turtles. Scaling, however, may differ between regions and/or life stages. We analyze differences between (i) two different regional subsets and (ii) three different life stage subsets of the western North Atlantic loggerhead turtles by comparing the relative growth of body width and depth in relation to body length, and discuss the implications. Results suggest that the differences between scaling relationships of different regional subsets are negligible, and models fitted on data from one region of the western North Atlantic can safely be used on data for the same life stage from another North Atlantic region. On the other hand, using models fitted on data for one life stage to describe other life stages is not recommended if accuracy is of paramount importance. In particular, young loggerhead turtles that have not recruited to neritic habitats should be studied and modeled separately whenever practical, while neritic juveniles and adults can be modeled together as one group. Even though morphometric scaling varies among life stages, a common model for all life stages can be used as a general description of scaling, and assuming isometric growth as a simplification is justified. In addition to linear models traditionally used for scaling on log-log axes, we test the performance of a saturating (curvilinear) model. The saturating model is statistically preferred in some cases, but the accuracy gained by the saturating model is marginal.
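The scaling relationships discussed here follow the standard allometric approach: fit a power law W = aL^b as a straight line on log-log axes, where a slope b ≈ 1 indicates isometric growth. A minimal sketch of that fit on synthetic data (the numbers are fabricated for illustration, not loggerhead measurements):

```python
import math

def fit_loglog(lengths, widths):
    """Least-squares fit of log(width) = log(a) + b * log(length).
    The slope b is the allometric exponent; b ≈ 1 means isometry."""
    xs = [math.log(v) for v in lengths]
    ys = [math.log(v) for v in widths]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic, exactly isometric data: width = 0.8 * length
lengths = [10.0, 20.0, 40.0, 60.0, 80.0]
widths = [0.8 * v for v in lengths]
a, b = fit_loglog(lengths, widths)  # recovers a ≈ 0.8, b ≈ 1.0
```

The saturating (curvilinear) alternative the authors test replaces the straight line in log-log space with a curve that flattens at large sizes; comparing the two amounts to comparing goodness of fit of the two functional forms on the same axes.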
Efficacy of endoscopic third ventriculostomy in non-communicating hydrocephalus.
ul Haq, Mian Iftikhar; Khan, Shahbaz Ali; Raja, Riaz A; Ahmed, Ehtisham
2012-01-01
Hydrocephalus is a common problem requiring either extracranial (shunt) or intracranial (ventriculostomy) diversion of cerebrospinal fluid. Endoscopic third ventriculostomy obviates all the complications of shunts and has been accepted as the procedure of choice for the treatment of obstructive hydrocephalus in adults and children because of its minimally invasive nature. This study was conducted to determine the efficacy of endoscopic third ventriculostomy in the treatment of non-communicating hydrocephalus. This cross-sectional descriptive study was done in the neurosurgery department of Hayatabad Medical Complex, Peshawar, from 2nd February 2011 to 1st March 2012. A total of 171 patients with non-communicating hydrocephalus, irrespective of gender, with a Glasgow coma scale score of 10 or above, were included. Patients below one year of age, those with a lesion in the floor of the third ventricle or near the basilar artery, and those with infected CSF or haemorrhage were excluded. Hydrocephalus was diagnosed on CT scan of the brain. All patients were followed up to 72 hours post-operatively to determine effectiveness, defined as improvement in Glasgow coma scale score by at least 2 points. All of the above information, including name, age, gender and address, was recorded in a predesigned proforma. The data were analysed using SPSS-17. Frequency and percentage were calculated for categorical variables; mean +/- SD was calculated for age. Of the 171 patients, 104 (60.8%) were male and 67 (39.2%) were female. Age ranged from 1-70 years, with the majority of patients below 10 years of age. Most patients (39.2%) had hydrocephalus due to tuberculous meningitis. In 134 (78.4%) patients the procedure was effective, and it was more effective in hydrocephalus due to space-occupying lesions.
Endoscopic third ventriculostomy is a very effective procedure for the treatment of non-communicating hydrocephalus.
Numerical study of particle deposition and scaling in dust exhaust of cyclone separator
NASA Astrophysics Data System (ADS)
Xu, W. W.; Li, Q.; Zhao, Y. L.; Wang, J. J.; Jin, Y. H.
2016-05-01
The accumulation of solid particles in the dust exhaust cone area of the cyclone separator can cause wall wear, which undoubtedly prevents the long-period, safe operation of the flue gas turbine. It is therefore important to study the mechanism by which particles deposit and scale on the dust exhaust cone area of the cyclone separator. Numerical simulations of the gas-solid flow field have been carried out in a single tube of the third cyclone separator. Three-dimensionally coupled computational fluid dynamics (CFD) and a modified Discrete Phase Model (DPM) are adopted to model the gas-solid two-phase flow. The results show that the particle sticking probability near the cone area rises with increasing operating temperature and processing capacity, and that sticking rates decrease as the particle diameter becomes bigger.
Lichtenberg, Peter A; Ficker, Lisa J; Rahman-Filipiak, Annalise
2016-01-01
This study examines preliminary evidence for the Lichtenberg Financial Decision Rating Scale (LFDRS), a new person-centered approach to assessing capacity to make financial decisions, and its relationship to self-reported cases of financial exploitation in 69 older African Americans. More than one third of individuals reporting financial exploitation also had questionable decisional abilities. Overall, decisional ability score and current decision total were significantly associated with cognitive screening test and financial ability scores, demonstrating good criterion validity. Study findings suggest that impaired decisional abilities may render older adults more vulnerable to financial exploitation, and that the LFDRS is a valid tool.
Disturbance to desert soil ecosystems contributes to dust-mediated impacts at regional scales
Pointing, Stephen B.; Belnap, Jayne
2014-01-01
This review considers the regional scale of impacts arising from disturbance to desert soil ecosystems. Deserts occupy over one-third of the Earth’s terrestrial surface, and biological soil covers are critical to stabilization of desert soils. Disturbance to these can contribute to massive destabilization and mobilization of dust. This results in dust storms that are transported across inter-continental distances where they have profound negative impacts. Dust deposition at high altitudes causes radiative forcing of snowpack that leads directly to altered hydrological regimes and changes to freshwater biogeochemistry. In marine environments dust deposition impacts phytoplankton diazotrophy, and causes coral reef senescence. Increasingly dust is also recognized as a threat to human health.
ERIC Educational Resources Information Center
Chang, Mei; Paulson, Sharon E.; Finch, W. Holmes; Mcintosh, David E.; Rothlisberg, Barbara A.
2014-01-01
This study examined the underlying constructs measured by the Woodcock-Johnson Tests of Cognitive Abilities, Third Edition (WJ-III COG) and the Stanford-Binet Intelligence Scales, Fifth Edition (SB5), based on the Cattell-Horn-Carroll (CHC) theory of cognitive abilities. This study reports the results of the first joint confirmatory factor analysis…
Links between Children's Clay Models and School Achievement.
ERIC Educational Resources Information Center
Bezruczko, Nikolaus
Two studies examined the relationship between children's clay models and the children's concurrent school achievement, and compared a 6-year longitudinal record of achievement test scores for one cohort of students at schools that did or did not provide visual arts instruction. Participating in Study 1 were 201 kindergartners and third graders…
Two-dimensional vocal tracts with three-dimensional behavior in the numerical generation of vowels.
Arnela, Marc; Guasch, Oriol
2014-01-01
Two-dimensional (2D) numerical simulations of vocal tract acoustics may provide a good balance between the high quality of three-dimensional (3D) finite element approaches and the low computational cost of one-dimensional (1D) techniques. However, 2D models are usually generated by considering the 2D vocal tract as a midsagittal cut of a 3D version, i.e., using the same radius function, wall impedance, glottal flow, and radiation losses as in 3D, which leads to strong discrepancies in the resulting vocal tract transfer functions. In this work, a four-step methodology is proposed to match the behavior of 2D simulations with that of 3D vocal tracts with circular cross-sections. First, the 2D vocal tract profile is modified to tune the formant locations. Second, the 2D wall impedance is adjusted to fit the formant bandwidths. Third, the 2D glottal flow is scaled to recover 3D pressure levels. Fourth and last, the 2D radiation model is tuned to match the 3D model following an optimization process. The procedure is tested for vowels /a/, /i/, and /u/ and the obtained results are compared with those of a full 3D simulation, a conventional 2D approach, and a 1D chain matrix model.
Modeling vegetation and carbon dynamics of managed grasslands at the global scale with LPJmL 3.6
NASA Astrophysics Data System (ADS)
Rolinski, Susanne; Müller, Christoph; Heinke, Jens; Weindl, Isabelle; Biewald, Anne; Bodirsky, Benjamin Leon; Bondeau, Alberte; Boons-Prins, Eltje R.; Bouwman, Alexander F.; Leffelaar, Peter A.; te Roller, Johnny A.; Schaphoff, Sibyll; Thonicke, Kirsten
2018-02-01
Grassland management affects the carbon fluxes of one-third of the global land area and is thus an important factor for the global carbon budget. Nonetheless, this aspect has been largely neglected or underrepresented in global carbon cycle models. We investigate four harvesting schemes for the managed grassland implementation of the dynamic global vegetation model (DGVM) Lund-Potsdam-Jena managed Land (LPJmL) that facilitate a better representation of actual management systems globally. We describe the model implementation and analyze simulation results with respect to harvest, net primary productivity and soil carbon content and by evaluating them against reported grass yields in Europe. We demonstrate the importance of accounting for differences in grassland management by assessing potential livestock grazing densities as well as the impacts of grazing, grazing intensities and mowing systems on soil carbon stocks. Grazing leads to soil carbon losses in polar or arid regions even at moderate livestock densities (< 0.4 livestock units per hectare - LSU ha-1) but not in temperate regions even at much higher densities (0.4 to 1.2 LSU ha-1). Applying LPJmL with the new grassland management options enables assessments of the global grassland production and its impact on the terrestrial biogeochemical cycles but requires a global data set on current grassland management.
Facilitation may not be an adequate mechanism of community succession on carrion.
Michaud, Jean-Philippe; Moreau, Gaétan
2017-04-01
The facilitation model of ecological succession was advanced by plant ecologists in the late 1970s and was then introduced to carrion ecology in the late 1980s, without empirical evidence of its applicability. Ecologists in both disciplines proposed removing early colonists, in this case fly eggs and larvae, from the substrate to determine whether other species could still colonize, which to our knowledge has never been attempted. Here, we tested the facilitation model in a carrion system by removing fly eggs and larvae from carcasses that were exposed in agricultural fields and assigned to one of the following treatment levels of removal intensity: 0, <5, 50, and 100%. Subsequent patterns of colonisation did not provide support for the applicability of the facilitation model in carrion systems. Although results showed, in part, that the removal of fly eggs and larvae decreased the decomposition rate of carcasses, the removal did not prevent colonization by secondary colonizers. Finally, we discuss future studies and make recommendations as to how the facilitation model could be improved, firstly by being more specific about the scale where facilitation is believed to be occurring, secondly by clearly stating what environmental modification is believed to be involved, and thirdly by disentangling facilitation from priority effects.
Exact Extremal Statistics in the Classical 1D Coulomb Gas
NASA Astrophysics Data System (ADS)
Dhar, Abhishek; Kundu, Anupam; Majumdar, Satya N.; Sabhapandit, Sanjib; Schehr, Grégory
2017-08-01
We consider a one-dimensional classical Coulomb gas of N like charges in a harmonic potential, also known as the one-dimensional one-component plasma. We compute, analytically, the probability distribution of the position xmax of the rightmost charge in the limit of large N. We show that the typical fluctuations of xmax around its mean are described by a nontrivial scaling function with asymmetric tails. This distribution is different from the Tracy-Widom distribution of xmax for Dyson's log gas. We also compute the large deviation functions of xmax explicitly and show that the system exhibits a third-order phase transition, as in the log gas. Our theoretical predictions are verified numerically.
Khorshidi Khiavi, Reza; Pourallahverdi, Maghsood; Pourallahverdi, Ayda; Ghorani Khiavi, Saadat; Ghertasi Oskouei, Sina; Mokhtari, Hadi
2010-01-01
The surgical removal of the lower third molars is a procedure generally followed by side effects such as postoperative pain. The aim of this study was to evaluate the efficacy of socket irrigation with an anesthetic solution in relieving pain following impacted third molar surgery. Thirty-four patients (17 males and 17 females), aged 18-24 years, with bilateral impacted lower third molars were selected. Both third molars were extracted in one surgical session. Tooth sockets in each patient were rinsed randomly either with 4 mL of 0.5% plain bupivacaine hydrochloride (without vasoconstrictor) anesthetic solution or with 4 mL of normal saline as control. Patients were instructed to avoid analgesics for as long as possible and, if needed, to take an analgesic and record the time. Pain severity was assessed using a visual analogue pain scale (VAPS) at 1-, 6-, 12-, and 24-hour intervals post-operatively. Data were analyzed using Pearson's chi-square test, and P < 0.05 was considered statistically significant. The post-operative pain difference between the two groups was statistically significant at the 1-, 6-, 12- and 24-hour post-operative intervals (P < 0.05). Post-operative pain increased in both groups to a maximum 12 hours after surgery, with significant improvement thereafter. Based on these results, irrigation of the surgical site with bupivacaine after third molar surgery significantly reduces post-operative pain.