Liu, Jing-Dong; Chung, Pak-Kwong
2017-08-01
The purpose of the current study was to examine the factor structure and measurement invariance of a scale measuring students' perceptions of need-supportive teaching (Need-Supportive Teaching Style Scale in Physical Education; NSTSSPE). We sampled 615 secondary school students in Hong Kong, 200 of whom also completed a follow-up assessment two months later. The factor structure of the scale was examined through exploratory structural equation modeling (ESEM). Further, nomological validity of the NSTSSPE was evaluated by examining the relationships between need-supportive teaching style and student satisfaction of psychological needs. Finally, four measurement models (configural, metric invariance, scalar invariance, and item uniqueness invariance) were assessed using multiple-group ESEM to test the measurement invariance of the scale across gender, grade, and time. ESEM results suggested a three-factor structure for the NSTSSPE. Nomological validity was supported, and weak, strong, and strict measurement invariance of the NSTSSPE was evidenced across gender, grade, and time. The current study provides initial psychometric support for the NSTSSPE to assess student perceptions of teachers' need-supportive teaching style in physical education classes.
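For readers unfamiliar with the invariance hierarchy tested above, the nested constraints can be sketched in standard multiple-group factor-model notation (a generic summary in my notation, not taken from the paper itself):

```latex
% Group-g measurement model: x_g = \tau_g + \Lambda_g \xi + \epsilon_g,
% with loadings \Lambda, intercepts \tau, and uniquenesses \Theta.
\begin{aligned}
\text{configural:} &\quad \text{same pattern of nonzero loadings across groups } g, h \\
\text{metric (weak):} &\quad \Lambda_g = \Lambda_h \\
\text{scalar (strong):} &\quad \Lambda_g = \Lambda_h,\; \tau_g = \tau_h \\
\text{strict (uniqueness):} &\quad \Lambda_g = \Lambda_h,\; \tau_g = \tau_h,\; \Theta_g = \Theta_h
\end{aligned}
```

Each level is tested against the one before it; support for the strict model is what licenses comparing observed scale scores across gender, grade, and time.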
Engineering-Scale Demonstration of DuraLith and Ceramicrete Waste Forms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Josephson, Gary B.; Westsik, Joseph H.; Pires, Richard P.
2011-09-23
To support the selection of a waste form for the liquid secondary wastes from the Hanford Waste Immobilization and Treatment Plant, Washington River Protection Solutions (WRPS) has initiated secondary waste form testing on four candidate waste forms. Two of the candidate waste forms have not been developed to the same scale as the more mature waste forms. This work describes engineering-scale demonstrations conducted on Ceramicrete and DuraLith candidate waste forms. Both candidate waste forms were successfully demonstrated at an engineering scale. A preliminary conceptual design could be prepared for full-scale production of the candidate waste forms. However, both waste forms are still too immature to support a detailed design. Formulations for each candidate waste form need to be developed so that the material has a longer working time after mixing the liquid and solid constituents together. Formulations optimized based on previous lab studies did not have sufficient working time to support large-scale testing. The engineering-scale testing was successfully completed using modified formulations. Further lab development and parametric studies are needed to optimize formulations with adequate working time and to assess the effects of changes in raw materials and process parameters on the final product performance. Studies on the effects of mixing intensity on the initial set time of the waste forms are also needed.
Predicting Regional Drought on Sub-Seasonal to Decadal Time Scales
NASA Technical Reports Server (NTRS)
Schubert, Siegfried; Wang, Hailan; Suarez, Max; Koster, Randal
2011-01-01
Drought occurs on a wide range of time scales, and within a variety of different types of regional climates. It is driven foremost by an extended period of reduced precipitation, but it is the impacts on such quantities as soil moisture, streamflow and crop yields that are often most important from a user's perspective. While recognizing that different users have different needs for drought information, it is nevertheless important to understand that progress in predicting drought and satisfying such user needs largely hinges on our ability to improve predictions of precipitation. This talk reviews our current understanding of the physical mechanisms that drive precipitation variations on subseasonal to decadal time scales, and the implications for predictability and prediction skill. Examples are given highlighting the phenomena and mechanisms controlling precipitation on monthly (e.g., stationary Rossby waves, soil moisture), seasonal (ENSO) and decadal (e.g., PDO and AMO) time scales.
Davoren, Mary; Byrne, Orla; O'Connell, Paul; O'Neill, Helen; O'Reilly, Ken; Kennedy, Harry G
2015-11-23
Patients admitted to a secure forensic hospital are at risk of a long hospital stay. Forensic hospital beds are a scarce and expensive resource, and the ability to identify the factors predicting length of stay at the time of admission would be beneficial. The DUNDRUM-1 triage security scale and DUNDRUM-2 triage urgency scale are designed to assess need for therapeutic security and urgency of that need, while the HCR-20 predicts risk of violence. We hypothesized that items on the DUNDRUM-1 and DUNDRUM-2 scales, rated at the time of pre-admission assessment, would predict length of stay in a medium secure forensic hospital setting. This is a prospective study. All admissions to a medium secure forensic hospital setting were collated over a 54-month period (n = 279) and followed up for a total of 66 months. Each patient was rated using the DUNDRUM-1 triage security scale and DUNDRUM-2 triage urgency scale as part of a pre-admission assessment (n = 279) and on the HCR-20 within 2 weeks of admission (n = 187). Episodes of harm to self, harm to others and episodes of seclusion whilst an in-patient were collated. Date of discharge was noted for each individual. Diagnosis at the time of pre-admission assessment (adjustment disorder v other diagnosis), predicted legal status (sentenced v mental health order) and items on the DUNDRUM-1 triage security scale and the DUNDRUM-2 triage urgency scale, also rated at the time of pre-admission assessment, predicted length of stay in the forensic hospital setting. Need for seclusion following admission also predicted length of stay. These findings may form the basis for a structured professional judgment instrument, rated prior to or at the time of admission, to assist in estimating length of stay for forensic patients. Such a tool would be useful to clinicians, service planners and commissioners given the high cost of secure psychiatric care.
Revised Kuppuswamy's Socioeconomic Status Scale: Explained and Updated.
Sharma, Rahul
2017-10-15
Some of the facets of Kuppuswamy's socioeconomic status scale sometimes create confusion and require explanation of how to classify, and the scale needs some minor updates to bring it up to date. This article provides a revised scale that allows for its real-time updating.
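The "real-time update" rests on rescaling the scale's income bands by a consumer price index ratio. A minimal Python sketch, assuming a CPI-ratio update rule; all band values and index numbers below are hypothetical placeholders, not the published figures:

```python
def update_income_bands(base_bands, base_cpi, current_cpi):
    """Rescale Kuppuswamy-style income bands by a consumer-price-index ratio.

    base_bands: list of (lower, upper) monthly-income limits at the base period.
    base_cpi / current_cpi: index values at the base period and today.
    All figures used here are hypothetical, not the published bands.
    """
    factor = current_cpi / base_cpi
    return [(round(lo * factor), round(hi * factor)) for lo, hi in base_bands]

# Hypothetical example: bands defined when the index stood at 100,
# updated for a current index of 125.
print(update_income_bands([(0, 1000), (1001, 3000), (3001, 10000)], 100.0, 125.0))
```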
Perspectives on integrated modeling of transport processes in semiconductor crystal growth
NASA Technical Reports Server (NTRS)
Brown, Robert A.
1992-01-01
The wide range of length and time scales involved in industrial scale solidification processes is demonstrated here by considering the Czochralski process for the growth of large diameter silicon crystals that become the substrate material for modern microelectronic devices. The scales range in time from microseconds to thousands of seconds and in space from microns to meters. The physics and chemistry needed to model processes on these different length scales are reviewed.
Lee, Yi-Hsuan; von Davier, Alina A
2013-07-01
Maintaining a stable score scale over time is critical for all standardized educational assessments. Traditional quality control tools and approaches for assessing scale drift either require special equating designs or may be too time-consuming to be considered on a regular basis with an operational test that has a short time window between an administration and its score reporting. Thus, the traditional methods are not sufficient to catch unusual testing outcomes in a timely manner. This paper presents a new approach for score monitoring and assessment of scale drift. It involves quality control charts, model-based approaches, and time series techniques to accommodate the following needs of monitoring scale scores: continuous monitoring, adjustment for customary variations, identification of abrupt shifts, and assessment of autocorrelation. Performance of the methodologies is evaluated using manipulated data based on real responses from 71 administrations of a large-scale high-stakes language assessment.
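As one concrete illustration of the control-chart ingredient, the sketch below flags administrations whose mean scale score leaves Shewhart-style limits and reports the lag-1 autocorrelation. It is a simplified stand-in for the paper's combination of charts, model-based approaches, and time-series techniques; all names and thresholds are my own:

```python
import numpy as np

def monitor_scale_scores(scores, k=3.0):
    """Flag administrations whose mean scale score leaves Shewhart-style
    control limits, and report the lag-1 autocorrelation of the series.

    scores: 1-D array of mean scale scores, one value per administration.
    k: width of the control band in standard deviations.
    """
    scores = np.asarray(scores, dtype=float)
    center, spread = scores.mean(), scores.std(ddof=1)
    out_of_control = np.where(np.abs(scores - center) > k * spread)[0]
    d = scores - center
    lag1 = np.dot(d[:-1], d[1:]) / np.dot(d, d)  # lag-1 autocorrelation
    return center, (center - k * spread, center + k * spread), out_of_control, lag1

# Synthetic series of 71 administrations with an abrupt upward shift at the end.
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(500, 5, 60), rng.normal(510, 5, 11)])
print(monitor_scale_scores(series))
```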
Scanlan, Justin Newton; Hancock, Nicola; Honey, Anne
2018-03-01
There is a need for robust outcome measures for use in psychiatric services. Particularly lacking are self-rated recovery measures with evidence of sensitivity to change. This study was established to examine the convergent validity and sensitivity to change over time (responsiveness) of the Recovery Assessment Scale - Domains and Stages (RAS-DS), in comparison to level of unmet need as measured by the Camberwell Assessment of Need - Short Appraisal Scale (CANSAS). Convergent validity was examined through cross-sectional correlations between 540 CANSAS and RAS-DS scores collected on the same day for the same individuals. Sensitivity to change was examined using correlations between change scores in CANSAS and RAS-DS where both were collected on the same day and the two time points were separated by 90 days or more (n = 498). Results demonstrated moderate, significant cross-sectional correlations between CANSAS scores and RAS-DS total and domain scores and between change scores of both instruments. Results suggest that the RAS-DS is sensitive enough to detect change over time. Only moderate correlation between the RAS-DS and CANSAS suggests that, in the context of recovery-oriented service provision, it is important to measure self-reported recovery in addition to level of unmet needs. Copyright © 2018 Elsevier B.V. All rights reserved.
Investigation of Calcium Sulfate’s Contribution to Chemical Off Flavor in Baked Items
2013-09-30
…studies if any calcium additive is needed. If shelf life and texture are not adversely affected, it may prove to be a cost savings to eliminate… …a 9-point Quality scale to assess the overall aroma and flavor quality. The 9-point Quality scale is based on the Hedonic scale developed by David Peryam and…
Scale-dependent intrinsic entropies of complex time series.
Yeh, Jia-Rong; Peng, Chung-Kang; Huang, Norden E
2016-04-13
Multi-scale entropy (MSE) was developed as a measure of complexity for complex time series, and it has been applied widely in recent years. The MSE algorithm is based on the assumption that biological systems possess the ability to adapt and function in an ever-changing environment, and these systems need to operate across multiple temporal and spatial scales, such that their complexity is also multi-scale and hierarchical. Here, we present a systematic approach to apply the empirical mode decomposition algorithm, which can detrend time series on various time scales, prior to analysing a signal's complexity by measuring the irregularity of its dynamics on multiple time scales. Simulated time series of fractional Gaussian noise and human heartbeat time series were used to study the performance of this new approach. We show that our method can successfully quantify the fractal properties of the simulated time series and can accurately distinguish modulations in human heartbeat time series in health and disease. © 2016 The Author(s).
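The two ingredients the MSE algorithm combines, coarse-graining and sample entropy, can be sketched as follows (an illustrative reimplementation with my own names and tolerances; the paper's EMD-based detrending step is not reproduced here):

```python
import numpy as np

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale` (the MSE coarse-graining step)."""
    n = len(x) // scale
    return np.asarray(x[:n * scale], dtype=float).reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.15):
    """Sample entropy: -log of the conditional probability that sequences
    matching for m points (within tolerance r * std) also match for m + 1."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return (np.sum(d <= tol) - len(templates)) / 2  # exclude self-matches
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# Entropy of white noise at a few coarse-graining scales.
signal = np.random.default_rng(1).standard_normal(1000)
print([round(sample_entropy(coarse_grain(signal, s)), 3) for s in (1, 2, 4)])
```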
A hybrid procedure for MSW generation forecasting at multiple time scales in Xiamen City, China.
Xu, Lilai; Gao, Peiqing; Cui, Shenghui; Liu, Chun
2013-06-01
Accurate forecasting of municipal solid waste (MSW) generation is crucial and fundamental for the planning, operation and optimization of any MSW management system. Comprehensive information on waste generation at month-scale, medium-term and long-term time scales is especially needed, given the MSW management upgrades facing many developing countries. Several existing models are available but of little use in forecasting MSW generation at multiple time scales. The goal of this study is to propose a hybrid model that combines the seasonal autoregressive integrated moving average (SARIMA) model and grey system theory to forecast MSW generation at multiple time scales without needing to consider other variables such as demographics and socioeconomic factors. To demonstrate its applicability, a case study of Xiamen City, China was performed. Results show that the model is robust enough to fit and forecast seasonal and annual dynamics of MSW generation at month-scale, medium- and long-term time scales with the desired accuracy. At the month scale, MSW generation in Xiamen City will peak at 132.2 thousand tonnes in July 2015, 1.5 times the volume in July 2010. In the medium term, annual MSW generation will increase to 1518.1 thousand tonnes by 2015 at an average growth rate of 10%. In the long term, a large volume of MSW will be output annually, increasing to 2486.3 thousand tonnes by 2020, 2.5 times the value for 2010. The hybrid model proposed in this paper can enable decision makers to develop integrated policies and measures for waste management over the long term. Copyright © 2013 Elsevier Ltd. All rights reserved.
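The grey-theory half of such a hybrid is typically a GM(1,1) model fitted to annual totals, with the seasonal (SARIMA) half handling month-scale dynamics. A minimal GM(1,1) sketch under the assumption that the hybrid uses the standard GM(1,1) formulation; the data are hypothetical:

```python
import numpy as np

def gm11_forecast(x, horizon):
    """GM(1,1) grey model: fit on the accumulated series, forecast `horizon` steps.

    x: 1-D array of positive annual totals. This sketches only the grey-theory
    half of the hybrid; the seasonal SARIMA half is omitted.
    """
    x = np.asarray(x, dtype=float)
    z = np.cumsum(x)                          # accumulated generating operation
    bkg = -0.5 * (z[1:] + z[:-1])             # background values -z^(1) mean
    B = np.column_stack([bkg, np.ones(len(bkg))])
    (a, b), *_ = np.linalg.lstsq(B, x[1:], rcond=None)
    n = len(x)
    k = np.arange(n + horizon)
    z_hat = (x[0] - b / a) * np.exp(-a * k) + b / a   # accumulated prediction
    x_hat = np.diff(z_hat, prepend=z_hat[0])          # inverse accumulation
    x_hat[0] = x[0]
    return x_hat[n:]

annual = [950, 1010, 1090, 1180, 1270]  # hypothetical totals, thousand tonnes
print(gm11_forecast(annual, 3))
```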
Decadal-Scale Forecasting of Climate Drivers for Marine Applications.
Salinger, J; Hobday, A J; Matear, R J; O'Kane, T J; Risbey, J S; Dunstan, P; Eveson, J P; Fulton, E A; Feng, M; Plagányi, É E; Poloczanska, E S; Marshall, A G; Thompson, P A
Climate influences marine ecosystems on a range of time scales, from weather-scale (days) through to climate-scale (hundreds of years). Understanding of interannual to decadal climate variability and its impacts on marine industries has received less attention. Predictability up to 10 years ahead may come from large-scale climate modes in the ocean that can persist over these time scales. In Australia the key drivers of climate variability affecting the marine environment are the Southern Annular Mode, the Indian Ocean Dipole, the El Niño/Southern Oscillation, and the Interdecadal Pacific Oscillation, each of which has phases that are associated with different ocean circulation patterns and regional environmental variables. The roles of these drivers are illustrated with three case studies of extreme events: a marine heatwave in Western Australia, coral bleaching of the Great Barrier Reef, and flooding in Queensland. Statistical and dynamical approaches are described to generate forecasts of climate drivers that can subsequently be translated into useful information for marine end users making decisions at these time scales. Considerable investment is still needed to support decadal forecasting, including improvement of ocean-atmosphere models, enhancement of observing systems on all scales to support initialization of forecasting models, collection of important biological data, and integration of forecasts into decision support tools. Collaboration between forecast developers and marine resource sectors (fisheries, aquaculture, tourism, biodiversity management, infrastructure) is needed to support forecast-based tactical and strategic decisions that reduce environmental risk over annual to decadal time scales. © 2016 Elsevier Ltd. All rights reserved.
Multiple Time Series Node Synchronization Utilizing Ambient Reference
2014-12-31
…processing targeted to performance assessment, is the need for fine scale synchronization among communicating nodes and across multiple domains. The severe requirements that Special… …research community and it is well documented and characterized. The datasets considered from this project (listed below) were used to derive the…
Interactive, graphics processing unit-based evaluation of evacuation scenarios at the state scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S; Aaby, Brandon G; Yoginath, Srikanth B
2011-01-01
In large-scale scenarios, transportation modeling and simulation is severely constrained by simulation time. For example, few real-time simulators scale to evacuation traffic scenarios at the level of an entire state, such as Louisiana (approximately 1 million links) or Florida (2.5 million links). New simulation approaches are needed to overcome the severe computational demands of conventional (microscopic or mesoscopic) modeling techniques. Here, a new modeling and execution methodology is explored that holds the potential to provide a tradeoff among the level of behavioral detail, the scale of the transportation network, and real-time execution capabilities. A novel, field-based modeling technique and its implementation on graphical processing units are presented. Although additional research with input from domain experts is needed for refining and validating the models, the techniques reported here afford interactive experience at very large scales of multi-million road segments. Illustrative experiments on a few state-scale networks are described based on an implementation of this approach in a software system called GARFIELD. Current modeling capabilities and implementation limitations are described, along with possible use cases and future research.
Gaps in knowledge and data driving uncertainty in models of photosynthesis.
Dietze, Michael C
2014-02-01
Regional and global models of the terrestrial biosphere depend critically on models of photosynthesis when predicting impacts of global change. This paper focuses on identifying the primary data needs of these models, what scales drive uncertainty, and how to improve measurements. Overall, there is a need for an open, cross-discipline database on leaf-level photosynthesis in general, and response curves in particular. The parameters in photosynthetic models are not constant through time, space, or canopy position, but there is a need for a better understanding of whether relationships with drivers, such as leaf nitrogen, are themselves scale dependent. Across time scales, as ecosystem models become more sophisticated in their representations of succession, they need to be able to approximate sunfleck responses to capture understory growth and survival. At both high and low latitudes, photosynthetic data are inadequate in general and there is a particular need to better understand thermal acclimation. Simple models of acclimation suggest that shifts in optimal temperature are important. However, there is little advantage to synoptic-scale responses, and circadian rhythms may be more beneficial than acclimation over shorter timescales. At high latitudes, there is a need for a better understanding of low-temperature photosynthetic limits, while at low latitudes the need is for a better understanding of phosphorus limitations on photosynthesis. In terms of sampling, measuring multivariate photosynthetic response surfaces is potentially more efficient and more accurate than measuring traditional univariate response curves. Finally, there is a need for greater community involvement in model validation and model-data synthesis.
In recent years the applications of regional air quality models are continuously being extended to address atmospheric pollution phenomenon from local to hemispheric spatial scales over time scales ranging from episodic to annual. The need to represent interactions between physic...
Ultrafast studies of shock induced chemistry: scaling down the size by turning up the heat
NASA Astrophysics Data System (ADS)
McGrane, Shawn
2015-06-01
We will discuss recent progress in measuring time dependent shock induced chemistry on picosecond time scales. Data on the shock induced chemistry of liquids observed through picosecond interferometric and spectroscopic measurements will be reconciled with shock induced chemistry observed on orders of magnitude larger time and length scales from plate impact experiments reported in the literature. While some materials exhibit chemistry consistent with simple thermal models, other materials, like nitromethane, seem to have more complex behavior. More detailed measurements of chemistry and temperature across a broad range of shock conditions, and therefore time and length scales, will be needed to achieve a real understanding of shock induced chemistry, and we will discuss efforts and opportunities in this direction.
NASA Technical Reports Server (NTRS)
Silva, P. M.; Silva, I. M.
1974-01-01
Various methods presently used for the dissemination of time at several levels of precision are described along with future projects in the field. Different aspects of time coordination are reviewed and a list of future laboratories participating in a National Time Scale will be presented. A Brazilian Atomic Time Scale will be obtained from as many of these laboratories as possible. The problem of intercomparison between the Brazilian National Time Scale and the International one will be presented and probable solutions will be discussed. Needs related to the TV Line-10 method will be explained and comments will be made on the legal aspects of time dissemination throughout the country.
A k-epsilon modeling of near wall turbulence
NASA Technical Reports Server (NTRS)
Yang, Z.; Shih, T. H.
1991-01-01
A k-epsilon model is proposed for turbulent bounded flows. In this model, the turbulent velocity scale and turbulent time scale are used to define the eddy viscosity. The time scale is shown to be bounded from below by the Kolmogorov time scale. The dissipation equation is reformulated using the time scale, removing the need to introduce the pseudo-dissipation. A damping function is chosen such that the shear stress satisfies the near-wall asymptotic behavior. The model constants used are the same as the model constants in the commonly used high-turbulent-Reynolds-number k-epsilon model. Fully developed turbulent channel flows and turbulent boundary layer flows over a flat plate at various Reynolds numbers are used to validate the model. The model predictions were found to be in good agreement with the direct numerical simulation data.
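The construction described above can be written compactly as follows (a sketch in my notation; the paper's exact constants and damping function are not reproduced):

```latex
% Eddy viscosity built from a velocity scale \sqrt{k} and a time scale T:
\nu_t \;=\; C_\mu \, f_\mu \, k \, T ,
\qquad
T \;=\; \frac{k}{\varepsilon} \;+\; C_T \sqrt{\frac{\nu}{\varepsilon}} ,
% so T is bounded from below by the Kolmogorov time scale \sqrt{\nu/\varepsilon},
% which stays finite at the wall and removes the need for a pseudo-dissipation
% in the reformulated dissipation-rate equation.
```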
Climate Information Responding to User Needs (CIRUN)
NASA Astrophysics Data System (ADS)
Busalacchi, A. J.
2009-05-01
For the past several decades many different US agencies have been involved in collecting Earth observations, e.g., NASA, NOAA, DoD, USGS, USDA. More recently, the US has led the international effort to design a Global Earth Observation System of Systems (GEOSS). Yet, there has been little substantive progress at the synthesis and integration of the various research and operational, space-based and in situ, observations. Similarly, access to such a range of observations across the atmosphere, ocean, and land surface remains fragmented. With respect to prediction of the Earth System, the US has not developed a comprehensive strategy. For climate, the US (e.g., NOAA, NASA, DoE) has taken a two-track strategy. At the more immediate time scale, coupled ocean-atmosphere models of the physical climate system have built upon the tradition of daily numerical weather prediction in order to extend the forecast window to seasonal to interannual time scales. At the century time scale, nascent Earth System models, combining components of the physical climate system with biogeochemical cycles, are being used to provide future climate change projections in response to anticipated greenhouse gas forcings. Between these two approaches to prediction lies a key deficiency of interest to decision makers, especially as it pertains to adaptation, i.e., deterministic prediction of the Earth System at time scales from days to decades with spatial scales from global to regional. One of many obstacles to be overcome is the design of present day observation and prediction products based on user needs. To date, most of such products have evolved from the technology and research "push" rather than the user or stakeholder "pull". In the future as planning proceeds for a national climate service, emphasis must be given to a more coordinated approach in which stakeholders' needs help design future Earth System observational and prediction products, and similarly, such products need to be tailored to provide decision support.
Drought and Heat Waves: The Role of SST and Land Surface Feedbacks
NASA Technical Reports Server (NTRS)
Schubert, Siegfried
2011-01-01
Drought occurs on a wide range of time scales, and within a variety of different types of regional climates. At the shortest time scales it is often associated with heat waves that last only several weeks to a few months but nevertheless can have profound detrimental impacts on society (e.g., heat-related impacts on human health, desiccation of croplands, increased fire hazard), while at the longest time scales it can extend over decades and can lead to long term structural changes in many aspects of society (e.g., agriculture, water resources, wetlands, tourism, population shifts). There is now considerable evidence that sea surface temperatures (SSTs) play a leading role in the development of drought world-wide, especially at seasonal and longer time scales, though land-atmosphere feedbacks can also play an important role. At shorter (subseasonal) time scales, SSTs are less important, but land feedbacks can play a critical role in maintaining and amplifying the atmospheric conditions associated with heat waves and short-term droughts. This talk reviews our current understanding of the physical mechanisms that drive precipitation and temperature variations on subseasonal to centennial time scales. This includes an assessment of predictability, prediction skill, and user needs at all time scales.
Large Eddy Simulation in the Computation of Jet Noise
NASA Technical Reports Server (NTRS)
Mankbadi, R. R.; Goldstein, M. E.; Povinelli, L. A.; Hayder, M. E.; Turkel, E.
1999-01-01
Noise can, in principle, be predicted by solving the full time-dependent compressible Navier-Stokes equations (FCNSE) with the computational domain extended to the far field. The fluctuating near field of the jet produces propagating pressure waves that generate far-field sound, so the fluctuating flow field as a function of time is needed in order to calculate sound from first principles. However, such a calculation is not feasible: at the high Reynolds numbers of technological interest, turbulence has a large range of scales, and direct numerical simulation (DNS) cannot capture the small scales of turbulence. The large scales are more efficient than the small scales in radiating sound; the emphasis is thus on calculating the sound radiated by the large scales.
A fast time-difference inverse solver for 3D EIT with application to lung imaging.
Javaherian, Ashkan; Soleimani, Manuchehr; Moeller, Knut
2016-08-01
A class of sparse optimization techniques that require solely matrix-vector products, rather than explicit access to the forward matrix and its transpose, has received much attention in the recent decade for dealing with large-scale inverse problems. This study tailors application of the so-called Gradient Projection for Sparse Reconstruction (GPSR) to large-scale time-difference three-dimensional electrical impedance tomography (3D EIT). 3D EIT typically suffers from the need for a large number of voxels to cover the whole domain, so its application to real-time imaging, for example monitoring of lung function, remains scarce, since the large number of degrees of freedom of the problem greatly increases storage space and reconstruction time. This study shows the great potential of the GPSR for large-size time-difference 3D EIT. Further studies are needed to improve its accuracy for imaging small-size anomalies.
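GPSR itself recasts the l1-regularized problem as a bound-constrained quadratic program; the sketch below uses the closely related proximal-gradient (ISTA) iteration, which shares the key property of touching the forward operator only through matrix-vector products. The operator here is a small dense stand-in for an EIT sensitivity matrix, and all parameters are illustrative:

```python
import numpy as np

def l1_recon(A_mv, At_mv, y, tau, step, n, iters=500):
    """Sparse reconstruction: min_x 0.5 * ||A x - y||^2 + tau * ||x||_1.

    A_mv / At_mv: callables computing A @ x and A.T @ r, so the forward
    matrix is never formed explicitly -- the property that makes this
    family of methods attractive for large-scale 3D EIT.
    step: gradient step size (should be below 1 / ||A||^2).
    """
    x = np.zeros(n)
    for _ in range(iters):
        grad = At_mv(A_mv(x) - y)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * tau, 0.0)  # soft threshold
    return x

# Tiny dense stand-in; in EIT, A_mv would apply the sensitivity (Jacobian) matrix.
rng = np.random.default_rng(2)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200); x_true[[10, 97]] = (1.0, -2.0)
y = A @ x_true
x_hat = l1_recon(lambda v: A @ v, lambda r: A.T @ r, y, tau=0.1, step=1e-3, n=200)
print(np.flatnonzero(np.abs(x_hat) > 0.1))  # should recover indices near 10 and 97
```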
Brédart, Anne; Kop, Jean-Luc; Fiszer, Chavie; Sigal-Zafrani, Brigitte; Dolbeault, Sylvie
2015-12-01
Information is a care priority for most breast cancer survivors (BCS). We assessed whether BCS information needs at 8 months after hospital cancer treatment could be related to their age, education level, perceived medical communication competence, satisfaction with care, attachment style, and self-esteem. Of 426 BCS approached during the last week of treatment (T1), 85% completed the Medical Communication Competence Scale, European Organisation for Research and Treatment of Cancer Satisfaction with Care Questionnaire, Rosenberg's Self-Esteem Scale and Experiences in Close Relationships Scale. The Hospital Anxiety and Depression Scale and the Supportive Care Needs Survey were completed at T1 and again 8 months later (T2), with a 66% (n = 283) response rate. Baseline respondents' median (range) age was 56 years (23-86 years). Information needs decreased over time, although some persisted. Multivariate regression analyses evidenced overall higher information needs at T2 in younger BCS and in those dissatisfied with the information provided at T1. Specifically, in younger BCS, higher information needs were related to lower satisfaction with doctors' availability, and in older BCS, they were related to higher self-perceived competence in information giving, lower self-perceived competence in information seeking, and lower satisfaction with doctors' information provision. Psychological distress was strongly related to information needs. Education, BCS attachment style, and self-esteem were not associated with information needs. In order to enhance supportive care for BCS, younger BCS should be provided with more time to address all their concerns, and older BCS should be encouraged to express their specific desires for information. Copyright © 2015 John Wiley & Sons, Ltd.
A Systematic Multi-Time Scale Solution for Regional Power Grid Operation
NASA Astrophysics Data System (ADS)
Zhu, W. J.; Liu, Z. G.; Cheng, T.; Hu, B. Q.; Liu, X. Z.; Zhou, Y. F.
2017-10-01
Many aspects need to be taken into consideration in a regional power grid when making scheduling plans. In this paper, a systematic multi-time scale solution for regional power grid operation, considering large-scale renewable energy integration and Ultra High Voltage (UHV) power transmission, is proposed. On the time scale axis, we discuss the problem from month, week, day-ahead and within-day to day-behind scheduling, and the system also covers multiple generator types including thermal units, hydro plants, wind turbines and pumped storage stations. The 9 subsystems of the scheduling system are described, and their functions and relationships are elaborated. The proposed system has been constructed in a provincial power grid in Central China, and the operation results further verified the effectiveness of the system.
Dahlberg, Jerry; Tkacik, Peter T; Mullany, Brigid; Fleischhauer, Eric; Shahinian, Hossein; Azimi, Farzad; Navare, Jayesh; Owen, Spencer; Bisel, Tucker; Martin, Tony; Sholar, Jodie; Keanini, Russell G
2017-12-04
An analog, macroscopic method for studying molecular-scale hydrodynamic processes in dense gases and liquids is described. The technique applies a standard fluid dynamic diagnostic, particle image velocimetry (PIV), to measure: i) velocities of individual particles (grains), extant on short, grain-collision time-scales; ii) velocities of systems of particles, on both short collision-time- and long, continuum-flow-time-scales; iii) collective hydrodynamic modes known to exist in dense molecular fluids; and iv) short- and long-time-scale velocity autocorrelation functions, central to understanding particle-scale dynamics in strongly interacting, dense fluid systems. The basic system is composed of an imaging system, light source, vibrational sensors, a vibrational system with a known medium, and PIV and analysis software. Required experimental measurements and an outline of the theoretical tools needed when using the analog technique to study molecular-scale hydrodynamic processes are highlighted. The proposed technique provides a relatively straightforward alternative to the photonic and neutron beam scattering methods traditionally used in molecular hydrodynamic studies.
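Item (iv), the velocity autocorrelation function, is straightforward to compute from the grain velocities PIV returns; a minimal sketch, with synthetic data standing in for PIV output:

```python
import numpy as np

def velocity_autocorrelation(v, max_lag):
    """Normalized velocity autocorrelation function <v(t) . v(t+lag)> / <v . v>.

    v: array of shape (n_frames, n_particles, 2) of grain velocities.
    Averages over particles and time origins.
    """
    v = np.asarray(v, dtype=float)
    norm = np.mean(np.sum(v * v, axis=-1))
    vacf = [np.mean(np.sum(v[:len(v) - lag] * v[lag:], axis=-1)) / norm
            for lag in range(max_lag)]
    return np.array(vacf)

# Synthetic stand-in: an AR(1) velocity process mimicking collisional decorrelation.
rng = np.random.default_rng(3)
v = np.zeros((500, 100, 2))
for t in range(1, 500):
    v[t] = 0.9 * v[t - 1] + rng.standard_normal((100, 2))
print(velocity_autocorrelation(v, 5).round(3))  # decays roughly as 0.9 ** lag
```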
Engineering web maps with gradual content zoom based on streaming vector data
NASA Astrophysics Data System (ADS)
Huang, Lina; Meijers, Martijn; Šuba, Radan; van Oosterom, Peter
2016-04-01
Vario-scale data structures have been designed to support gradual content zoom and the progressive transfer of vector data, for use with arbitrary map scales. The focus to date has been on the server side, especially on how to convert geographic data into the proposed vario-scale structures by means of automated generalisation. This paper contributes to the ongoing vario-scale research by focusing on the client side and communication, particularly on how this works in a web-services setting. It is claimed that these functionalities are urgently needed, as many web-based applications, both desktop and mobile, require gradual content zoom, progressive transfer and a high performance level. The web-client prototypes developed in this paper make it possible to assess the behaviour of vario-scale data and to determine how users will actually see the interactions. Several different options of web-services communication architectures are possible in a vario-scale setting. These options are analysed and tested with various web-client prototypes, with respect to functionality, ease of implementation and performance (amount of transmitted data and response times). We show that the vario-scale data structure can fit in with current web-based architectures and efforts to standardise map distribution on the internet. However, to maximise the benefits of vario-scale data, a client needs to be aware of this structure. When a client needs a map to be refined (by means of a gradual content zoom operation), only the 'missing' data will be requested. This data will be sent incrementally to the client from a server. In this way, the amount of data transferred at one time is reduced, shortening the transmission time. In addition to these conceptual architecture aspects, there are many implementation and tooling design decisions at play. These will also be elaborated on in this paper. Based on the experiments conducted, we conclude that the vario-scale approach indeed supports gradual content zoom and the progressive web transfer of vector data. This is a big step forward in making vector data at arbitrary map scales available to larger user groups.
NASA Astrophysics Data System (ADS)
Pelissetto, Andrea; Rossini, Davide; Vicari, Ettore
2018-03-01
We investigate the quantum dynamics of many-body systems subject to local (i.e., restricted to a limited space region) time-dependent perturbations. If the system crosses a quantum phase transition, an off-equilibrium behavior is observed, even for a very slow driving. We show that, close to the transition, time-dependent quantities obey scaling laws. In first-order transitions, the scaling behavior is universal, and some scaling functions can be computed exactly. For continuous transitions, the scaling laws are controlled by the standard critical exponents and by the renormalization-group dimension of the perturbation at the transition. Our protocol can be implemented in existing relatively small quantum simulators, paving the way for a quantitative probe of the universal off-equilibrium scaling behavior, without the need to manipulate systems close to the thermodynamic limit.
Development of a low energy electron spectrometer for SCOPE
NASA Astrophysics Data System (ADS)
Tominaga, Yuu; Saito, Yoshifumi; Yokota, Shoichiro
We are newly developing a low-energy charged particle analyzer for the future satellite mission SCOPE (cross Scale COupling in the Plasma universE). The main purpose of the mission is to understand the cross-scale coupling between macroscopic MHD-scale phenomena and microscopic ion- and electron-scale phenomena. In order to understand the dynamics of plasma on small scales, we need to observe the plasma with an analyzer that has high time resolution. For ion-scale phenomena, the time resolution must be as high as the ion cyclotron frequency (~10 sec) in Earth's magnetosphere. However, for electron-scale phenomena, the time resolution must be as high as the electron cyclotron frequency (~1 msec). The GEOTAIL satellite that observes Earth's magnetosphere has an analyzer whose time resolution is 12 sec, so the satellite can observe ion-scale phenomena. However, in the SCOPE mission we will go further to observe electron-scale phenomena. We therefore need analyzers that have at least several-msec time resolution. Besides, we need to make the analyzer as small as possible because of the volume and weight restrictions of the satellite: the diameter of the top-hat analyzer must be smaller than 20 cm. In this study, we are developing an electrostatic analyzer that meets such requirements using numerical simulations. The electrostatic analyzer is a spherical/toroidal top-hat electrostatic analyzer with three nested spherical/toroidal deflectors. Using these deflectors, the analyzer measures charged particles simultaneously in two different energy ranges; therefore the time resolution of the analyzer can be doubled. With the analyzer, we will measure energies from 10 eV to 22.5 keV. In order to obtain three-dimensional distribution functions of low-energy particles, the analyzer must have a 4-pi sr field of view. Conventional electrostatic analyzers use the spacecraft spin to obtain a 4-pi field of view, so the time resolution of the analyzer depends on the spin frequency of the spacecraft. However, we cannot secure the several-msec time resolution by using the spacecraft spin. In the SCOPE mission, we set 8 pairs of two nested electrostatic analyzers on each side of the spacecraft, which enables us to secure a 4-pi field of view altogether. Then the time resolution of the analyzer does not depend on the spacecraft spin. Given that the sampling time of the analyzer is 0.5 msec, the time resolution of the analyzer can be 8 msec. In order to secure a time resolution as high as 10 msec, the geometric factor of the analyzer has to be as high as 8x10^-3 (cm^2 sr eV/eV/22.5deg). A higher geometric factor requires a bigger instrument; however, we have to reduce the volume and weight of the instrument to set it on the satellite. Under these restrictions, we have realized an analyzer which has geometric factors of 7.5x10^-3 (cm^2 sr eV/eV/22.5deg) (inner sphere) and 10.0x10^-3 (cm^2 sr eV/eV/22.5deg) (outer sphere) with a diameter of 17.4 cm.
Where to put things? Spatial land management to sustain biodiversity and economic returns
Expanding human population and economic growth have led to large-scale conversion of natural habitat to human-dominated landscapes with consequent large-scale declines in biodiversity. Conserving biodiversity, while at the same time meeting expanding human needs, is an issue of u...
Delarosa, Elizabeth; Horner, Stephanie; Eisenberg, Casey; Ball, Laura; Renzoni, Anne Marie; Ryan, Stephen E
2012-09-01
Young people use augmentative and alternative communication (AAC) systems to meet their everyday communication needs. However, the successful integration of an AAC system into a child's life requires strong commitment and continuous support from parents and other family members. This article describes the development and evaluation of the Family Impact of Assistive Technology Scale for AAC Systems - a parent-report questionnaire intended to detect the impact of AAC systems on the lives of children with complex communication needs and their families. The study involved 179 parents and clinical experts to test the content and face validities of the questionnaire, demonstrate its internal reliability and stability over time, and estimate its convergent construct validity when compared to a standardized measure of family impact.
The Role of Moist Processes in the Intrinsic Predictability of Indian Ocean Cyclones
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taraphdar, Sourav; Mukhopadhyay, P.; Leung, Lai-Yung R.
The role of moist processes and the possibility of error cascade from cloud-scale processes affecting the intrinsic predictable time scale of a high-resolution convection-permitting model within the environment of tropical cyclones (TCs) over the Indian region are investigated. Consistent with past studies of extra-tropical cyclones, it is demonstrated that moist processes play a major role in forecast error growth, which may ultimately limit the intrinsic predictability of the TCs. Small errors in the initial conditions may grow rapidly and cascade from smaller scales to larger scales through strong diabatic heating and nonlinearities associated with moist convection. Results from a suite of twin perturbation experiments for four tropical cyclones suggest that the error growth is significantly higher in convection-permitting simulations at 3.3 km resolution compared to simulations at 3.3 km and 10 km resolution with parameterized convection. Convective parameterizations with prescribed convective time scales typically longer than the model time step allow the effects of microphysical tendencies to average out, so convection responds to a smoother dynamical forcing. Without convective parameterizations, the finer-scale instabilities resolved at 3.3 km resolution and the stronger vertical motion that results from the cloud microphysical parameterizations removing super-saturation at each model time step can ultimately feed the error growth in convection-permitting simulations. This implies that careful considerations and/or improvements in cloud parameterizations are needed if numerical predictions are to be improved through increased model resolution. Rapid upscale error growth from convective scales may ultimately limit the intrinsic mesoscale predictability of the TCs, which further supports the need for probabilistic forecasts of these events, even at the mesoscales.
The RATIO method for time-resolved Laue crystallography
Coppens, Philip; Pitak, Mateusz; Gembicky, Milan; Messerschmidt, Marc; Scheins, Stephan; Benedict, Jason; Adachi, Shin-ichi; Sato, Tokushi; Nozawa, Shunsuke; Ichiyanagi, Kohei; Chollet, Matthieu; Koshihara, Shin-ya
2009-01-01
A RATIO method for analysis of intensity changes in time-resolved pump–probe Laue diffraction experiments is described. The method eliminates the need for scaling the data with a wavelength curve representing the spectral distribution of the source and removes the effect of possible anisotropic absorption. It does not require relative scaling of series of frames and removes errors due to all but very short term fluctuations in the synchrotron beam. PMID:19240334
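The quantity at the heart of the method can be sketched as follows (notation mine, consistent with the abstract):

```latex
% For each reflection h, form the ratio of pump-ON to pump-OFF intensities:
R(\mathbf{h}) \;=\; \frac{I_{\mathrm{ON}}(\mathbf{h})}{I_{\mathrm{OFF}}(\mathbf{h})},
\qquad
\eta(\mathbf{h}) \;=\; R(\mathbf{h}) - 1 .
% The wavelength-dependent source spectrum and any anisotropic absorption
% affect I_ON and I_OFF identically and cancel in R, removing the need for
% a wavelength curve and for relative scaling of series of frames.
```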
NASA Astrophysics Data System (ADS)
Gros, Claudius
2017-11-01
Modern societies face the challenge that the time scale of opinion formation is continuously accelerating, in contrast to the time scale of political decision making. With the latter remaining of the order of the election cycle, we examine here the case that the political state of a society is determined by the continuously evolving values of the electorate. Given this assumption, we show that the time lags inherent in the election cycle will inevitably lead to political instabilities for advanced democracies characterized both by an accelerating pace of opinion dynamics and by high sensibilities (political correctness) to deviations from mainstream values. Our result is based on the observation that dynamical systems become generically unstable whenever time delays become comparable to the time it takes to adapt to the steady state. The time needed to recover from external shocks additionally grows dramatically close to the transition. Our estimates for the order of magnitude of the involved time scales indicate that socio-political instabilities may develop once the aggregate time scale for the evolution of the political values of the electorate falls below 7-15 months.
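The delay argument can be illustrated with the textbook linear delay equation (a generic example, not the paper's full opinion-dynamics model):

```latex
% Relaxation toward a steady state with adaptation time T and delay \tau:
\dot{x}(t) \;=\; -\frac{1}{T}\, x(t-\tau).
% The ansatz x \propto e^{\lambda t} gives \lambda = -e^{-\lambda\tau}/T;
% the fixed point loses stability through growing oscillations once
\tau \;>\; \frac{\pi}{2}\, T ,
% i.e. when the time delay becomes comparable to the adaptation time.
```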
NASA Technical Reports Server (NTRS)
Waight, Kenneth T., III; Zack, John W.; Karyampudi, V. Mohan
1989-01-01
Initial simulations of the June 28, 1986 Cooperative Huntsville Meteorological Experiment case illustrate the need for mesoscale moisture information in a summertime situation in which deep convection is organized by weak large-scale forcing. A methodology is presented for enhancing the initial moisture field from a combination of IR satellite imagery, surface-based cloud observations, and manually digitized radar data. The Mesoscale Atmospheric Simulation Model is utilized to simulate the events of June 28-29. This procedure ensures that areas known to have precipitation at the time of initialization will be nearly saturated on the grid scale, which should decrease the time needed by the model to produce the observed convection associated with Bonnie (a relatively weak hurricane that moved onshore two days before). This method will also result in an initial distribution of model cloudiness (transmissivity) that is very similar to that of the IR satellite image.
A space-time multifractal analysis on radar rainfall sequences from central Poland
NASA Astrophysics Data System (ADS)
Licznar, Paweł; Deidda, Roberto
2014-05-01
Rainfall downscaling is among the most important tasks of modern hydrology. Especially from the perspective of urban hydrology, there is a real need for the development of practical tools for generating possible rainfall scenarios. Rainfall scenarios at fine temporal scales, down to single minutes, are indispensable as inputs for hydrological models. Adoption of a probabilistic philosophy of drainage system design and functioning has led to widespread application of hydrodynamic models in engineering practice. However, models like these, covering large areas, cannot be supplied with only uncorrelated point-rainfall time series. They should rather be supplied with space-time rainfall scenarios displaying the statistical properties of local natural rainfall fields. Implementation of a Space-Time Rainfall (STRAIN) model for hydrometeorological applications in Polish conditions, such as rainfall downscaling from the large scales of meteorological models to the scale of interest for rainfall-runoff processes, is the long-term aim of our research. As an introductory part of our study, we verify the veracity of the following STRAIN model assumptions: rainfall fields are isotropic and statistically homogeneous in space; self-similarity holds (so that, after having rescaled the time by the advection velocity, rainfall is a fully homogeneous and isotropic process in the space-time domain); statistical properties of rainfall are characterized by an "a priori" known multifractal behavior. We conduct a space-time multifractal analysis on radar rainfall sequences selected from the Polish national radar system POLRAD. Radar rainfall sequences covering an area of 256 km x 256 km at the original 2 km x 2 km spatial resolution and 15-minute temporal resolution are used as study material. Attention is mainly focused on the most severe summer convective rainfalls. It is shown that space-time rainfall can be considered, to a good approximation, a self-similar multifractal process. The multifractal analysis is carried out assuming Taylor's hypothesis to hold, and the advection velocity needed to rescale the time dimension is assumed to be equal to about 16 km/h. This assumption is verified by the analysis of autocorrelation functions along the x and y directions of "rainfall cubes" and along the time axis rescaled with the assumed advection velocity. In general, for the analyzed rainfall sequences, scaling is observed for spatial scales ranging from 4 to 256 km and for time scales from 15 min to 16 hours. However, in most cases a scaling break is identified for spatial scale factors between 4 and 8, corresponding to spatial dimensions of 16 km to 32 km. It is assumed that the scaling break occurrence at these particular scales in central Poland conditions could be at least partly explained by the rainfall mesoscale gap (on the edge of the meso-gamma, storm scale and the meso-beta scale).
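The scaling test underlying this analysis can be written generically as follows (notation mine; the paper's exact estimator is not reproduced):

```latex
% Space-time partition function under Taylor's hypothesis, with time
% rescaled by the advection velocity U \approx 16 km/h (t \mapsto U t):
S_q(\lambda) \;=\; \big\langle \mu_\lambda^{\,q} \big\rangle \;\sim\; \lambda^{K(q)} ,
% where \mu_\lambda is the rainfall measure in a space-time cube of side \lambda.
% Self-similar multifractality means K(q) is nonlinear in q and the same
% scaling holds jointly in the two spatial directions and in rescaled time;
% a scaling break shows up as a change of slope of log S_q vs. log \lambda.
```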
Moessner, Anne; Malec, James F; Beveridge, Scott; Reddy, Cara Camiolo; Huffman, Tracy; Marton, Julia; Schmerzler, Audrey J
2016-01-01
To develop and provide initial validation of a measure for accurately determining the need for Constant Visual Observation (CVO) in patients with traumatic brain injury (TBI) admitted to inpatient rehabilitation. Rating scale development and evaluation through Rasch analysis and assessment of concurrent validity. One hundred and thirty-four individuals with moderate-severe TBI were studied in seven inpatient brain rehabilitation units associated with the National Institute for Disability, Independent Living and Rehabilitation Research (NIDILRR) TBI Model System. Participants were rated on the preliminary version of the CVO Needs Assessment scale (CVONA) and, by independent raters, on the Levels of Risk (LoR) and Supervision Rating Scale (SRS) at four time points during inpatient rehabilitation: admission, Days 2-3, Days 5-6 and Days 8-9. After pruning misfitting items, the CVONA showed satisfactory internal consistency (Person Reliability = 0.85-0.88) across time points. With reference to the LoR and SRS, low false negative rates (sensitivity > 90%) were associated with moderate-to-high false positive rates (29-56%). The CVONA may be a useful objective metric to complement clinical judgement regarding the need for CVO; however, further prospective study is desirable to further assess its utility in identifying at-risk patients, reducing adverse events and decreasing CVO costs.
[A new scale for measuring return-to-work motivation of mentally ill employees].
Poersch, M
2007-03-01
A new scale, "motivation for return to work", has been constructed to measure depressive patients' motivation to start working again in a stepwise process. In a first case management (CM) sample of 46 depressive employees, the scale showed a good correlation with the final social status of the CM. Only the motivated patients were successful in returning to work, and they could be separated clearly from the most demotivated ones. Second, the scale correlated with the duration of sick leave, and third, it showed an inverse correlation with the complete time of CM, suggesting that a successful stepwise return to work requires time. These first results need further examination.
Enabling fast charging - Infrastructure and economic considerations
NASA Astrophysics Data System (ADS)
Burnham, Andrew; Dufek, Eric J.; Stephens, Thomas; Francfort, James; Michelbacher, Christopher; Carlson, Richard B.; Zhang, Jiucai; Vijayagopal, Ram; Dias, Fernando; Mohanpurkar, Manish; Scoffield, Don; Hardy, Keith; Shirk, Matthew; Hovsapian, Rob; Ahmed, Shabbir; Bloom, Ira; Jansen, Andrew N.; Keyser, Matthew; Kreuzer, Cory; Markel, Anthony; Meintz, Andrew; Pesaran, Ahmad; Tanim, Tanvir R.
2017-11-01
The ability to charge battery electric vehicles (BEVs) on a time scale that is on par with the time to fuel an internal combustion engine vehicle (ICEV) would remove a significant barrier to the adoption of BEVs. However, for viability, fast charging at this time scale needs to also occur at a price that is acceptable to consumers. Therefore, the cost drivers for both BEV owners and charging station providers are analyzed. In addition, key infrastructure considerations are examined, including grid stability and delivery of power, the design of fast charging stations, and the design and use of electric vehicle service equipment. Each of these aspects has technical barriers that need to be addressed and is directly linked to the economic impacts of use and implementation. This discussion focuses on both the economic and infrastructure issues which exist and need to be addressed for the effective implementation of fast charging at 400 kW and above. In so doing, it has been found that there is a distinct need to effectively manage the intermittent, high power demand of fast charging, strategically plan infrastructure corridors, and further understand the cost of operation of charging infrastructure and BEVs.
Asymptotic scaling properties and estimation of the generalized Hurst exponents in financial data
NASA Astrophysics Data System (ADS)
Buonocore, R. J.; Aste, T.; Di Matteo, T.
2017-04-01
We propose a method to measure the Hurst exponents of financial time series. The scaling of the absolute moments against the aggregation horizon of real financial processes, and of both uniscaling and multiscaling synthetic processes, converges asymptotically towards linearity in log-log scale. In light of this, we found it appropriate to modify the usual scaling equation via the introduction of a filter function. We devised a measurement procedure which takes into account the presence of the filter function without the need to estimate it directly. We verified that the method is unbiased within the errors by applying it to synthetic time series with known scaling properties. Finally, we show an application to empirical financial time series where we fit the measured scaling exponents via a second- or a fourth-degree polynomial, which, because of theoretical constraints, have respectively only one and two degrees of freedom. We found that on our data set there is no clear preference between the second- and fourth-degree polynomials. Moreover, the study of the filter functions of each time series shows common patterns of convergence depending on the moment degree.
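For readers who want to experiment with the absolute-moment scaling described above, here is a minimal sketch in Python of the standard q-th order absolute-moment estimator: fit log E|X(t+τ) − X(t)|^q against log τ, so the slope equals qH(q). It is a baseline only, assuming uniform sampling; the paper's filter-function modification is not implemented, and all names are illustrative.

    import numpy as np

    def generalized_hurst(x, qs=(1, 2), taus=tuple(range(1, 20))):
        # Standard estimator: E[|x(t+tau) - x(t)|^q] ~ tau^(q * H(q)).
        # The filter-function correction proposed in the paper is NOT
        # implemented here; this is the plain log-log fit.
        x = np.asarray(x, dtype=float)
        log_taus = np.log(taus)
        hurst = {}
        for q in qs:
            log_m = [np.log(np.mean(np.abs(x[t:] - x[:-t]) ** q)) for t in taus]
            hurst[q] = np.polyfit(log_taus, log_m, 1)[0] / q
        return hurst

    # Sanity check: Brownian motion should give H(q) near 0.5 for all q.
    rng = np.random.default_rng(0)
    print(generalized_hurst(np.cumsum(rng.standard_normal(100_000))))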
Equation-of-State Scaling Factors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scannapieco, Anthony J.
2016-06-28
Equation-of-State scaling factors are needed when using a tabular EOS in which the user-defined material isotopic fractions differ from the actual isotopic fractions used by the table. Additionally, if a material is dynamically changing its isotopic structure, then an EOS scaling will again be needed, and will vary in time and location. The procedure that allows use of a table to obtain information about a similar material with average atomic mass Ms and average atomic number Zs is described below. The procedure is exact for a fully ionized ideal gas. However, if the atomic number is replaced by the effective ionization state, the procedure can be applied to partially ionized material as well, which extends the applicability of the scaling approximation continuously from low to high temperatures.
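As a concrete illustration of the fully ionized ideal-gas limit mentioned above: for such a gas P = ρ(1 + Z)k_BT/(A m_u), so at fixed density and temperature the pressure scales as (1 + Z)/A. The sketch below turns this into a scale factor between the tabulated material and a similar material with average atomic mass Ms and atomic number Zs. This is one plausible reading of the scaling, not the report's actual table-lookup procedure; the function name is illustrative.

    # Fully ionized ideal gas: P = rho * (1 + Z) * k_B * T / (A * m_u),
    # so at fixed rho and T the pressure scales as (1 + Z) / A.
    # Sketch under that assumption; the report's procedure may differ.
    def eos_pressure_scale(A_table, Z_table, M_s, Z_s):
        # For partially ionized matter, replace Z by the effective
        # ionization state, as the report notes.
        return ((1.0 + Z_s) / M_s) / ((1.0 + Z_table) / A_table)

    # Example: scale a deuterium table (A = 2, Z = 1) to a 50/50 D-T
    # mixture (M_s = 2.5, Z_s = 1); the factor is 2 / 2.5 = 0.8.
    print(eos_pressure_scale(2.0, 1.0, 2.5, 1.0))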
Impact of basin scale and time-weighted mercury metrics on intra-/inter-basin mercury comparisons
Paul Bradley; Mark E. Brigham
2016-01-01
Understanding anthropogenic and environmental controls on fluvial mercury (Hg) bioaccumulation over global and national gradients can be challenging due to the need to integrate discrete-sample results from numerous small-scale investigations. Two fundamental issues for such integrative Hg assessments are the wide range of basin scales for included studies and how well...
rTMS: A Treatment to Restore Function After Severe TBI
2016-10-01
...of rTMS-induced neurobehavioral effects measured with the Disability Rating Scale. Aim II will determine the presence, direction and sustainability... Aim IV addresses the need to confirm rTMS safety for severe TBI. The Disability Rating Scale (DRS) will be used at four time points to measure neurobehavioral recovery slopes. Net neural effects...
On the scaling of multicrystal data sets collected at high-intensity X-ray and electron sources
Coppens, Philip; Fournier, Bertrand
2015-11-11
Here, the need for data-scaling has become increasingly evident as time-resolved pump-probe photocrystallography is rapidly developing at high intensity X-ray sources. Several aspects of the scaling of data sets collected at synchrotrons, XFELs (X-ray Free Electron Lasers) and high-intensity pulsed electron sources are discussed. They include laser-ON/laser-OFF data scaling, inter- and intra-data set scaling.
Scale in Remote Sensing and GIS: An Advancement in Methods Towards a Science of Scale
NASA Technical Reports Server (NTRS)
Quattrochi, Dale A.
1998-01-01
The term "scale", both in space and time, is central to remote sensing and geographic information systems (GIS). The emergence and widespread use of GIS technologies, including remote sensing, has generated significant interest in addressing scale as a generic topic, and in the development and implementation of techniques for dealing explicitly with the vicissitudes of scale as a multidisciplinary issue. As science becomes more complex and utilizes databases that are capable of performing complex space-time data analyses, it becomes paramount that we develop the tools and techniques needed to operate at multiple scales, to work with data whose scales are not necessarily ideal, and to produce results that can be aggregated or disaggregated in ways that suit the decision-making process. Contemporary science is constantly coping with compromises, and the data available for a particular study rarely fit perfectly with the scales at which the processes being investigated operate, or the scales that policy-makers require to make sound, rational decisions. This presentation discusses some of the problems associated with scale as related to remote sensing and GIS, and describes some of the questions that need to be addressed in approaching the development of a multidisciplinary "science of scale". Techniques for dealing with multiple scaled data that have been developed or explored recently are described as a means for recognizing scale as a generic issue, along with associated theory and tools that can be of simultaneous value to a large number of disciplines. These can be used to seek answers to a host of interrelated questions in the interest of providing a formal structure for the management and manipulation of scale and its universality as a key concept from a multidisciplinary perspective.
Optimal satellite sampling to resolve global-scale dynamics in the I-T system
NASA Astrophysics Data System (ADS)
Rowland, D. E.; Zesta, E.; Connor, H. K.; Pfaff, R. F., Jr.
2016-12-01
The recent Decadal Survey highlighted the need for multipoint measurements of ion-neutral coupling processes to study the pathways by which solar wind energy drives dynamics in the I-T system. The emphasis in the Decadal Survey is on global-scale dynamics and processes, and in particular, mission concepts making use of multiple identical spacecraft in low Earth orbit were considered for the GDC and DYNAMIC missions. This presentation will provide quantitative assessments of the optimal spacecraft sampling needed to significantly advance our knowledge of I-T dynamics on the global scale. We will examine storm-time and quiet-time conditions as simulated by global circulation models, and determine how well various candidate satellite constellations and sampling schemes can quantify the plasma and neutral convection patterns and global-scale distributions of plasma density, neutral density, and composition, and their response to changes in the IMF. While the global circulation models are data-starved, and do not contain all the physics that we might expect to observe with a global-scale constellation mission, they are nonetheless an excellent "starting point" for discussions of the implementation of such a mission. The result will be of great utility for the design of future missions, such as GDC, to study the global-scale dynamics of the I-T system.
2016-05-24
experimental data. However, the time and length scales, and energy deposition rates in the canonical laboratory flames that have been studied over the... is to obtain high-fidelity experimental data critically needed to validate research codes at relevant conditions, and to develop systematic and...
Singular Perturbations and Time-Scale Methods in Control Theory: Survey 1976-1982.
1982-12-01
established in the 1960s, when they first became a means for simplified computation of optimal trajectories. It was soon recognized that singular... null-space of P(a0). The asymptotic values of the invariant zeros and associated invariant-zero directions as ε → 0 are the values computed from the... 7. WEAK COUPLING AND TIME SCALES: The need for model simplification with a reduction (or distribution) of computational effort is
The comparability of different survey designs needs to be established to facilitate integration of data across scales and interpretation of trends over time. Probability-based survey designs are now being investigated to allow condition to be assessed at the watershed scale, an...
Molecular dynamics at low time resolution.
Faccioli, P
2010-10-28
The internal dynamics of macromolecular systems is characterized by widely separated time scales, ranging from fractions of a picosecond to nanoseconds. In ordinary molecular dynamics simulations, the elementary time step Δt used to integrate the equation of motion needs to be chosen much smaller than the shortest time scale in order not to cut off physical effects. We show that in systems obeying the overdamped Langevin equation, it is possible to systematically correct for such discretization errors. This is done by analytically averaging out the fast molecular dynamics which occurs at time scales smaller than Δt, using a renormalization-group-based technique. Such a procedure gives rise to a time-dependent, calculable correction to the diffusion coefficient. The resulting effective Langevin equation describes by construction the same long-time dynamics, but has a lower time resolution power; hence it can be integrated using larger time steps Δt. We illustrate and validate this method by studying the diffusion of a point particle in a one-dimensional toy model and the denaturation of a protein.
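The construction can be pictured with a short Euler-Maruyama integrator for the overdamped Langevin equation in which the bare diffusion coefficient is replaced by an effective, Δt-dependent one. The placeholder d_eff below merely stands in for the calculable renormalization-group correction derived in the paper, which is not reproduced here; all names are illustrative.

    import numpy as np

    def d_eff(D0, dt):
        # Placeholder: the paper derives a calculable, time-step-dependent
        # correction to D0; here we simply return D0 unchanged.
        return D0

    def overdamped_langevin(x0, force, D0, dt, n_steps, kT=1.0, seed=0):
        # Euler-Maruyama for dx = (D / kT) * F(x) dt + sqrt(2 D dt) * xi.
        rng = np.random.default_rng(seed)
        D = d_eff(D0, dt)
        x = np.empty(n_steps + 1)
        x[0] = x0
        for i in range(n_steps):
            drift = (D / kT) * force(x[i]) * dt
            noise = np.sqrt(2.0 * D * dt) * rng.standard_normal()
            x[i + 1] = x[i] + drift + noise
        return x

    # Example: diffusion in a harmonic well U(x) = x^2 / 2, so F(x) = -x;
    # the stationary variance should approach kT = 1.
    traj = overdamped_langevin(1.0, lambda x: -x, D0=1.0, dt=0.01, n_steps=100_000)
    print(traj[10_000:].var())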
The detection of local irreversibility in time series based on segmentation
NASA Astrophysics Data System (ADS)
Teng, Yue; Shang, Pengjian
2018-06-01
We propose a strategy for the detection of local irreversibility in stationary time series based on multiple scales. The detection is useful for evaluating the displacement of irreversibility toward local skewness. By means of this method, we can effectively discuss the local irreversible fluctuations of time series as the scale changes. The method was applied to simulated nonlinear signals generated by the ARFIMA process and the logistic map to show how the irreversibility functions react to the increase of the multiple scale. The method was also applied to financial market series, i.e., American, Chinese and European markets. The local irreversibility for different markets demonstrates distinct characteristics. Simulations and real data support the need to explore local irreversibility.
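To give a feel for scanning irreversibility across scales, the sketch below uses a simple and widely used proxy, the skewness of coarse-grained increments: a time-reversible stationary series has symmetric increment distributions, so nonzero skewness signals irreversibility. This is an illustrative stand-in, not the segmentation-based statistic of the paper.

    import numpy as np
    from scipy.stats import skew

    def increment_asymmetry(x, scales=(1, 2, 4, 8, 16)):
        # Skewness of increments x(t+s) - x(t) at several scales; a common
        # irreversibility proxy, not the paper's segmentation method.
        x = np.asarray(x, dtype=float)
        return {s: skew(x[s:] - x[:-s]) for s in scales}

    # Example: the logistic map is strongly time-irreversible, so its
    # increment skewness is far from zero at small scales.
    x = [0.3]
    for _ in range(50_000):
        x.append(4.0 * x[-1] * (1.0 - x[-1]))
    print(increment_asymmetry(np.array(x)))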
DL-sQUAL: A Multiple-Item Scale for Measuring Service Quality of Online Distance Learning Programs
ERIC Educational Resources Information Center
Shaik, Naj; Lowe, Sue; Pinegar, Kem
2006-01-01
Education is a service with a multiplicity of student interactions over time and across multiple touch points. Quality teaching needs to be supplemented by consistent, high-quality supporting services for programs to succeed in the competitive distance learning landscape. ServQual and e-SQ scales have been proposed for measuring quality of traditional…
Partnering for Improvement: "Communities of Practice and Their Role in Scale Up." Conference Paper
ERIC Educational Resources Information Center
Cannata, Marisa; Cohen-Vogel, Lora; Sorum, Michael
2015-01-01
The past several decades have seen a substantial amount of time, resources, and expertise focused on producing sustainable improvement in schools at scale. Research on these efforts has highlighted how complex this challenge is, as it needs to attend to building teacher support and participation, aligning with the organizational context, and…
Atomistic and coarse-grained computer simulations of raft-like lipid mixtures.
Pandit, Sagar A; Scott, H Larry
2007-01-01
Computer modeling can provide insights into the existence, structure, size, and thermodynamic stability of localized raft-like regions in membranes. However, the challenges in the construction and simulation of accurate models of heterogeneous membranes are great. The primary obstacle in modeling the lateral organization within a membrane is the relatively slow lateral diffusion rate for lipid molecules. Microsecond or longer time-scales are needed to fully model the formation and stability of a raft in a membrane. Atomistic simulations currently are not able to reach this scale, but they do provide quantitative information on the intermolecular forces and correlations that are involved in lateral organization. In this chapter, the steps needed to carry out and analyze atomistic simulations of hydrated lipid bilayers having heterogeneous composition are outlined. It is then shown how the data from a molecular dynamics simulation can be used to construct a coarse-grained model for the heterogeneous bilayer that can predict the lateral organization and stability of rafts at up to millisecond time-scales.
Test-retest reliability and sensitivity to change of the dimensional anxiety scales for DSM-5.
Knappe, Susanne; Klotsche, Jens; Heyde, Franziska; Hiob, Sarah; Siegert, Jens; Hoyer, Jürgen; Strobel, Anja; LeBeau, Richard T; Craske, Michelle G; Wittchen, Hans-Ulrich; Beesdo-Baum, Katja
2014-06-01
This article reports on the test-retest reliability and sensitivity to change of a set of brief dimensional self-rating questionnaires for social anxiety disorder (SAD-D), specific phobia (SP-D), agoraphobia (AG-D), panic disorder (PD-D), and generalized anxiety disorder (GAD-D), as well as a general cross-cutting anxiety scale (Cross-D), which were developed to supplement categorical diagnoses in the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5). The German versions of the dimensional anxiety scales were administered to 218 students followed up approximately 2 weeks later (Study 1) and 55 outpatients (23 with anxiety diagnoses) followed-up 1 year later (Study 2). Probable diagnostic status in students was determined by the DIA-X/M-CIDI stem screening-questionnaire (SSQ). In the clinical sample, Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) diagnoses were assessed at Time 1 using the DIA-X/M-CIDI. At Time 2, the patient-version of the Clinical Global Impression-Improvement scale (CGI-I) was applied to assess change. Good psychometric properties, including high test-retest reliability, were found for the dimensional scales except for SP-D. In outpatients, improvement at Time 2 was associated with significant decrease in PD-D, GAD-D, and Cross-D scores. Major advantages of the scales include that they are brief, concise, and based on a consistent template to measure the cognitive, physiological, and behavioral symptoms of fear and anxiety. Further replication in larger samples is needed. Given its modest psychometric properties, SP-D needs refinement. Increasing evidence from diverse samples suggests clinical utility of the dimensional anxiety scales.
Howard, Matt C; Jayne, Bradley S
2015-03-01
Cyberpsychology is a recently emergent field that examines the impact of technology upon human cognition and behavior. Given its infancy, authors have rapidly created new measures to gauge their constructs of interest. Unfortunately, few of these authors have had the opportunity to test their scales' psychometric properties and validity. This is concerning, as many theoretical assumptions may be founded upon scales with inadequate attributes. If this were found to be true, then previous findings in cyberpsychology studies would need to be retested, and future research would need to shift its focus to creating psychometrically sound and valid measures. To draw inferences about this concern, the current study examines the article reporting, scale creation, and scale reliabilities of every article published in Cyberpsychology, Behavior, and Social Networking from its inception to July 2014. The final data set encompassed the coding of 1,478 individual articles, including 921 scales, spanning 17 years. The results demonstrate that the simple survey methodology has become more popular over time. Authors are gradually applying empirically tested scales. However, self-created measures are still the most popular, leading to concerns about the measures' validity. Also, the use of multi-item measures has increased over time, but many articles still fail to report adequate information to assess the reliability of the applied scales. Lastly, the average scale reliability is 0.81, which barely meets standard cutoffs. Overall, these results are not overly concerning, but suggestions are given on methods to improve the reporting of measures, the creation of scales, and the state of cyberpsychology.
Eisen, Lars; Eisen, Rebecca J
2007-12-01
Improved methods for collection and presentation of spatial epidemiologic data are needed for vectorborne diseases in the United States. Lack of reliable data for probable pathogen exposure site has emerged as a major obstacle to the development of predictive spatial risk models. Although plague case investigations can serve as a model for how to ideally generate needed information, this comprehensive approach is cost-prohibitive for more common and less severe diseases. New methods are urgently needed to determine probable pathogen exposure sites that will yield reliable results while taking into account economic and time constraints of the public health system and attending physicians. Recent data demonstrate the need for a change from use of the county spatial unit for presentation of incidence of vectorborne diseases to more precise ZIP code or census tract scales. Such fine-scale spatial risk patterns can be communicated to the public and medical community through Web-mapping approaches.
Near-Neighbor Algorithms for Processing Bearing Data
1989-05-10
neighbor algorithms need not be universally more cost-effective than brute force methods. While the data access time of near-neighbor techniques scales with... the number of objects N better than brute force, the cost of setting up the data structure could scale worse than... for the near neighbors NN21(i). Depending on the particular NN algorithm, the cost of accessing near neighbors for each ai ∈ S1 scales as either N
Scaling in Free-Swimming Fish and Implications for Measuring Size-at-Time in the Wild
Broell, Franziska; Taggart, Christopher T.
2015-01-01
This study was motivated by the need to measure size-at-age, and thus growth rate, in fish in the wild. We postulated that this could be achieved using accelerometer tags, based first on early isometric scaling models that hypothesize that similar animals should move at the same speed with a stroke frequency that scales with length^-1, and second on observations that the speed of primarily air-breathing free-swimming animals, presumably swimming 'efficiently', is independent of size, confirming that stroke frequency scales as length^-1. However, such scaling relations between size and swimming parameters for fish remain mostly theoretical. Based on free-swimming saithe and sturgeon tagged with accelerometers, we introduce a species-specific scaling relationship between dominant tail beat frequency (TBF) and fork length. Dominant TBF was proportional to length^-1 (r^2 = 0.73, n = 40), and estimated swimming speed within species was independent of length. Similar scaling relations accrued in relation to body mass^-0.29. We demonstrate that the dominant TBF can be used to estimate size-at-time and that accelerometer tags with onboard processing may be able to provide size-at-time estimates among free-swimming fish and thus the estimation of growth rate (change in size-at-time) in the wild. PMID:26673777
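A hedged sketch of how the reported TBF-length relation could be inverted in practice: take the dominant tail beat frequency from the spectrum of a detrended acceleration record and map it to fork length via TBF = c/L. The constant c is species-specific and the value below is purely illustrative, not taken from the paper.

    import numpy as np

    def dominant_frequency(accel, fs):
        # Dominant frequency (Hz) of an acceleration record sampled at fs.
        a = np.asarray(accel, dtype=float)
        a = a - a.mean()                        # remove gravity/DC offset
        power = np.abs(np.fft.rfft(a)) ** 2
        freqs = np.fft.rfftfreq(a.size, d=1.0 / fs)
        return freqs[np.argmax(power[1:]) + 1]  # skip the zero-frequency bin

    def estimated_fork_length(accel, fs, c=1.5):
        # Invert TBF = c / L; c (m * Hz) is species-specific and the
        # default here is illustrative only.
        return c / dominant_frequency(accel, fs)

    # Example: a synthetic 2 Hz tail beat sampled at 50 Hz for 60 s.
    t = np.arange(0, 60, 1.0 / 50)
    rng = np.random.default_rng(1)
    accel = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.standard_normal(t.size)
    print(estimated_fork_length(accel, fs=50))  # ~0.75 m with c = 1.5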
Takeura, Hisashi
2011-02-01
Medical institutions need to prepare for earthquakes or other severe disasters that may happen at some future date, and need to take countermeasures for those situations; our laboratory must do the same. The new medical center will open in March 2011. At the same time, the hospital will be registered as one of the disaster-response centers, in accordance with the infrastructure outline for disaster-care hospitals of Osaka Prefecture.
The 2-MEV Model: Constancy of Adolescent Environmental Values within an 8-Year Time Frame
ERIC Educational Resources Information Center
Bogner, F. X.; Johnson, B.; Buxner, S.; Felix, L.
2015-01-01
The 2-MEV model is a widely used tool to monitor children's environmental perception by scoring individual values. Although the scale's validity has been confirmed repeatedly and independently, and the scale is in use in more than two dozen language versions all over the world, its longitudinal properties still need clarification. The purpose…
Matthew J. Gregory; Zhiqiang Yang; David M. Bell; Warren B. Cohen; Sean Healey; Janet L. Ohmann; Heather M. Roberts
2015-01-01
Mapping vegetation and landscape change at fine spatial scales is needed to inform natural resource and conservation planning, but such maps are expensive and time-consuming to produce. For Landsat-based methodologies, mapping efforts are hampered by the daunting task of manipulating multivariate data for millions to billions of pixels. The advent of cloud-based...
Simulating recurrent event data with hazard functions defined on a total time scale.
Jahn-Eimermacher, Antje; Ingel, Katharina; Ozga, Ann-Kathrin; Preussler, Stella; Binder, Harald
2015-03-08
In medical studies with recurrent event data a total time scale perspective is often needed to adequately reflect disease mechanisms. This means that the hazard process is defined on the time since some starting point, e.g. the beginning of some disease, in contrast to a gap time scale where the hazard process restarts after each event. While techniques such as the Andersen-Gill model have been developed for analyzing data from a total time perspective, techniques for the simulation of such data, e.g. for sample size planning, have not been investigated so far. We have derived a simulation algorithm covering the Andersen-Gill model that can be used for sample size planning in clinical trials as well as the investigation of modeling techniques. Specifically, we allow for fixed and/or random covariates and an arbitrary hazard function defined on a total time scale. Furthermore we take into account that individuals may be temporarily insusceptible to a recurrent incidence of the event. The methods are based on conditional distributions of the inter-event times conditional on the total time of the preceding event or study start. Closed-form solutions are provided for common distributions. The derived methods have been implemented in a readily accessible R script. The proposed techniques are illustrated by planning the sample size for a clinical trial with complex recurrent event data. The required sample size is shown to be affected not only by censoring and intra-patient correlation, but also by the presence of risk-free intervals. This demonstrates the need for a simulation algorithm that particularly allows for complex study designs where no analytical sample size formulas might exist. The derived simulation algorithm is seen to be useful for the simulation of recurrent event data that follow an Andersen-Gill model. Next to the use of a total time scale, it allows for intra-patient correlation and risk-free intervals as are often observed in clinical trial data. Its application therefore allows the simulation of data that closely resemble real settings and thus can improve the use of simulation studies for designing and analysing studies.
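To make the conditional-distribution idea concrete, here is a minimal sketch (in Python, rather than the authors' R script) that simulates one subject's event times under a Weibull cumulative hazard defined on the total time scale, by inversion of S(t | t_prev) = exp(−(H(t) − H(t_prev))). Covariates, frailties, censoring schemes, and the risk-free intervals of the full algorithm are omitted; parameter values are illustrative.

    import numpy as np

    def simulate_total_time_events(t_max, shape=1.5, scale=5.0, seed=0):
        # Weibull cumulative hazard on the TOTAL time scale:
        # H(t) = (t / scale) ** shape.
        # The next event time solves H(t) = H(t_prev) - log(U) with
        # U ~ Uniform(0, 1), i.e. inversion of the conditional survival
        # S(t | t_prev) = exp(-(H(t) - H(t_prev))).
        rng = np.random.default_rng(seed)
        events, t = [], 0.0
        while True:
            h_target = (t / scale) ** shape - np.log(rng.uniform())
            t = scale * h_target ** (1.0 / shape)
            if t > t_max:
                return events
            events.append(t)

    # One subject followed for 20 time units.
    print(simulate_total_time_events(t_max=20.0))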
Challenges to Progress in Studies of Climate-Tectonic-Erosion Interactions
NASA Astrophysics Data System (ADS)
Burbank, D. W.
2016-12-01
Attempts to unravel the relative importance of climate and tectonics in modulating topography and erosion should compare relevant data sets at comparable temporal and spatial scales. Given that such data are uncommonly available, how can we compare diverse data sets in a robust fashion? Many erosion-rate studies rely on detrital cosmogenic nuclides. What time scales can such data address, and what landscape conditions do they require to provide accurate representations of long-term erosion rates? To what extent do large-scale, but infrequent erosional events impact long-term rates? Commonly, long-term erosion rates are deduced from thermochronologic data. What types of data are needed to test for consistency of rates across a given interval or change in rates through time? Similarly, spatial and temporal variability in precipitation or tectonics requires averaging across appropriate scales. How are such data obtained in deforming mountain belts, and how do we assess their reliability? This study describes the character and temporal duration of key variables that are needed to examine climate-tectonic-erosion interactions, explores the strengths and weaknesses of several study areas, and suggests the types of data requirements that will underpin enlightening "tests" of hypotheses related to the mutual impacts of climate, tectonics, and erosion.
Static Schedulers for Embedded Real-Time Systems
1989-12-01
Because of the need for having efficient scheduling algorithms in large scale real time systems , software engineers put a lot of effort on developing...provide static schedulers for he Embedded Real Time Systems with single processor using Ada programming language. The independent nonpreemptable...support the Computer Aided Rapid Prototyping for Embedded Real Time Systems so that we determine whether the system, as designed, meets the required
The Vision Is Set, Now Help Chronicle the Change
ERIC Educational Resources Information Center
Woodin, Terry; Feser, Jason; Herrera, Jose
2012-01-01
The Vision and Change effort to explore and implement needed changes in undergraduate biology education has been ongoing since 2006. It is now time to take stock of changes that have occurred at the faculty and single-course levels, and to consider how to accomplish the larger-scale changes needed at departmental and institutional levels. This…
Wanyenze, Rhoda; Alamo, Stella; Kwarisiima, Dalsone; Sunday, Pamela; Sebikaari, Gloria; Kamya, Moses; Wabwire-Mangen, Fred; Wagner, Glenn
2010-01-01
Abstract Global scale-up of antiretroviral therapy (ART) has focused on clinical outcomes with little attention on its impact on existing health systems. In June–August 2008, we conducted a formative evaluation on ART scale-up and clinic operations at three clinics in Uganda to generate lessons for informing policy and larger public health care systems. Site visits and semistructured interviews with 10 ART clients and 6 providers at each clinic were used to examine efficiency of clinic operations (patient flow, staff allocation to appropriate duties, scheduling of clinic visits, record management) and quality of care (attending to both client and provider needs, and providing support for treatment adherence and retention). Clients reported long waiting times but otherwise general satisfaction with the quality of care. Providers reported good patient adherence and retention, and support mechanisms for clients. Like clients, providers mentioned long waiting times and high workload as major challenges to clinic expansion. Providers called for more human resources and stress-release mechanisms to prevent staff burnout. Both providers and clients perceive these clinics to be delivering good quality care, despite the recognition of congested clinics and long waiting times. These findings highlight the need to address clinic efficiency as well as support for providers in the context of rapid scale-up. PMID:21034243
de Jonge, Jan; van der Linden, Sjaak; Schaufeli, Wilmar; Peter, Richard; Siegrist, Johannes
2008-01-01
Key measures of Siegrist's (1996) Effort-Reward Imbalance (ERI) Model (i.e., efforts, rewards, and overcommitment) were psychometrically tested. To study change in organizational interventions, knowledge about the type of change underlying the instruments used is needed. Next to assessing baseline factorial validity and reliability, the factorial stability over time - known as alpha-beta-gamma change - of the ERI scales was examined. Psychometrics were tested among 383 and 267 healthcare workers from two Dutch panel surveys with different time lags. Baseline results favored a five-factor model (i.e., efforts, esteem rewards, financial/career-related aspects, job security, and overcommitment) over and above a three-factor solution (i.e., efforts, composite rewards, and overcommitment). Considering changes as a whole, particularly the factor loadings of the three ERI scales were not equal over time. Findings suggest in general that moderate changes in the ERI factor structure did not affect the interpretation of mean changes over time. Occupational health researchers utilizing the ERI scales can feel confident that self-reported changes are more likely to be due to factors other than structural change of the ERI scales over time, which has important implications for evaluating job stress and health interventions.
Limiting Magnitude, τ, t_eff, and Image Quality in DES Year 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
H. Neilsen, Jr.; Bernstein, Gary; Gruendl, Robert
The Dark Energy Survey (DES) is an astronomical imaging survey being completed with the DECam imager on the Blanco telescope at CTIO. After each night of observing, the DES data management (DM) group performs an initial processing of that night's data, and uses the results to determine which exposures are of acceptable quality, and which need to be repeated. The primary measure by which we declare an image of acceptable quality is τ, a scaling of the exposure time. This is the scale factor that needs to be applied to the open shutter time to reach the same photometric signal-to-noise ratio for faint point sources under a set of canonical good conditions. These conditions are defined to be seeing resulting in a PSF full width at half maximum (FWHM) of 0.9" and a pre-defined sky brightness which approximates the zenith sky brightness under fully dark conditions. Point-source limiting magnitude and signal to noise should therefore vary with τ in the same way they vary with exposure time. Measurements of point sources and τ in the first year of DES data confirm that they do. In the context of DES, the symbol t_eff and the expression "effective exposure time" usually refer to the scaling factor τ, rather than the actual effective exposure time; the "effective exposure time" in this case refers to the effective duration of one second, rather than the effective duration of an exposure.
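A sketch of the scaling just described. The form used below, τ = η²(0.9″/FWHM)²(b_dark/b) with η the atmospheric transmission, is my reading of the DES-style definition from this abstract and should be treated as an assumption, not pipeline code. Since point-source signal-to-noise in the sky-limited regime grows as the square root of exposure time, the limiting magnitude shifts by 1.25 log10 τ.

    import math

    def tau(fwhm_arcsec, sky, transmission=1.0, fwhm_ref=0.9, sky_ref=1.0):
        # Assumed form: tau = eta^2 * (fwhm_ref / fwhm)^2 * (sky_ref / sky),
        # with sky brightness in linear flux units relative to the canonical
        # dark-sky value. Illustrative, not the DES DM implementation.
        return (transmission ** 2 * (fwhm_ref / fwhm_arcsec) ** 2
                * (sky_ref / sky))

    def delta_limiting_mag(tau_value):
        # SNR ~ sqrt(t * tau) for faint point sources, so the limiting
        # magnitude changes by 2.5 * log10(sqrt(tau)) = 1.25 * log10(tau).
        return 1.25 * math.log10(tau_value)

    # Example: 1.2" seeing with sky twice the canonical dark value.
    t = tau(fwhm_arcsec=1.2, sky=2.0)
    print(t, delta_limiting_mag(t))  # ~0.28, i.e. ~0.69 mag shallower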
Unravelling connections between river flow and large-scale climate: experiences from Europe
NASA Astrophysics Data System (ADS)
Hannah, D. M.; Kingston, D. G.; Lavers, D.; Stagge, J. H.; Tallaksen, L. M.
2016-12-01
The United Nations has identified better knowledge of large-scale water cycle processes as essential for socio-economic development and global water-food-energy security. In this context, and given the ever-growing concerns about climate change/variability and human impacts on hydrology, there is an urgent research need: (a) to quantify space-time variability in regional river flow, and (b) to improve hydroclimatological understanding of climate-flow connections as a basis for identifying current and future water-related issues. In this paper, we draw together studies undertaken at the pan-European scale: (1) to evaluate current methods for assessing space-time dynamics for different streamflow metrics (annual regimes, low flows and high flows) and for linking flow variability to atmospheric drivers (circulation indices, air-masses, gridded climate fields and vapour flux); and (2) to propose a plan for future research connecting streamflow and the atmospheric conditions in Europe and elsewhere. We believe this research makes a useful, unique contribution to the literature through a systematic inter-comparison of different streamflow metrics and atmospheric descriptors. In our findings, we highlight the need to consider appropriate atmospheric descriptors (dependent on the target flow metric and region of interest) and to develop analytical techniques that best characterise connections in the ocean-atmosphere-land surface process chain. We also call for consideration not only of atmospheric interactions, but also of the role of river basin-scale terrestrial hydrological processes in modifying the climate signal response of river flows.
A. Townsend Peterson; Daniel A. Kluza
2005-01-01
Large-scale assessments of the distribution and diversity of birds have been challenged by the need for a robust methodology for summarizing or predicting species' geographic distributions (e.g. Beard et al. 1999, Manel et al. 1999, Saveraid et al. 2001). Methodologies used in such studies have at times been inappropriate, or even more frequently limited in their...
ERIC Educational Resources Information Center
Frey, Andreas; Hartig, Johannes; Rupp, Andre A.
2009-01-01
In most large-scale assessments of student achievement, several broad content domains are tested. Because more items are needed to cover the content domains than can be presented in the limited testing time to each individual student, multiple test forms or booklets are utilized to distribute the items to the students. The construction of an…
Exact axially symmetric galactic dynamos
NASA Astrophysics Data System (ADS)
Henriksen, R. N.; Woodfinden, A.; Irwin, J. A.
2018-05-01
We give a selection of exact dynamos in axial symmetry on a galactic scale. These include some steady examples, at least one of which is wholly analytic in terms of simple functions and has been discussed elsewhere. Most solutions are found in terms of special functions, such as associated Legendre or hypergeometric functions. They may be considered exact in the sense that they are known to any desired accuracy in principle. The new aspect developed here is to present scale-invariant solutions with zero resistivity that are self-similar in time. The time dependence is either a power law or an exponential factor, but since the geometry of the solution is self-similar in time we do not need to fix a time to study it. Several examples are discussed. Our results demonstrate (without the need to invoke any other mechanisms) X-shaped magnetic fields and (axially symmetric) magnetic spiral arms (both of which are well observed and documented) and predict reversing rotation measures in galaxy haloes (now observed in the CHANG-ES sample), as well as the fact that planar magnetic spirals are lifted into the galactic halo.
NASA Astrophysics Data System (ADS)
Doulamis, A.; Doulamis, N.; Ioannidis, C.; Chrysouli, C.; Grammalidis, N.; Dimitropoulos, K.; Potsiou, C.; Stathopoulou, E.-K.; Ioannides, M.
2015-08-01
Outdoor large-scale cultural sites are mostly sensitive to environmental, natural and human-made factors, implying an imminent need for a spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). On the other hand, in Cultural Heritage research quite different actors are involved (archaeologists, curators, conservators, simple users), each with diverse needs. All these statements advocate that 5D modelling (3D geometry plus time plus levels of detail) is ideally required for preservation and assessment of outdoor large-scale cultural sites, which is currently implemented as a simple aggregation of 3D digital models at different times and levels of detail. The main bottleneck of such an approach is its complexity, making 5D modelling impossible to be validated in real-life conditions. In this paper, a cost-effective and affordable framework for 5D modelling is proposed, based on a spatial-temporal dependent aggregation of 3D digital models, by incorporating a predictive assessment procedure to indicate which regions (surfaces) of an object should be reconstructed at higher levels of detail at the next time instances and which at lower ones. In this way, dynamic change history maps are created, indicating spatial probabilities of regions needing further 3D modelling at forthcoming instances. Using these maps, predictive assessment can be made, that is, to localize surfaces within the objects where a high-accuracy reconstruction process needs to be activated at the forthcoming time instances. The proposed 5D Digital Cultural Heritage Model (5D-DCHM) is implemented using open interoperable standards based on the CityGML framework, which also allows the description of additional semantic metadata information. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 5D-DCHM geometry and the respective semantic information. The open source 3DCityDB incorporating a PostgreSQL geo-database is used to manage and manipulate 3D data and their semantics.
Dodd, Marylin J.; Cho, Maria H.; Miaskowski, Christine; Painter, Patricia L.; Paul, Steven M.; Cooper, Bruce A.; Duda, John; Krasnoff, Joanne; Bank, Kayee A.
2010-01-01
Background Few studies have evaluated an individualized home-based exercise prescription during and after cancer treatment. Objective The purpose was to evaluate the effectiveness of a home-based exercise training intervention, the PRO-SELF FATIGUE CONTROL PROGRAM, on the management of cancer-related fatigue. Interventions/Methods Participants (N=119) were randomized into one of three groups: Group 1 (EE) received the exercise prescription throughout the study; Group 2 (CE) received their exercise prescription after completing cancer treatment; Group 3 (CC) received usual care. Patients completed the Piper Fatigue Scale, General Sleep Disturbance Scale, Center for Epidemiological Studies-Depression scale, and Worst Pain Intensity Scale. Results All groups reported mild fatigue levels, sleep disturbance and mild pain, but not depression. Using multilevel regression analysis, significant linear and quadratic trends were found for change in fatigue and pain (i.e., scores increased, then decreased over time). No group differences were found in the changing scores over time. A significant quadratic effect for the trajectory of sleep disturbance was found, but no group differences were detected over time. No significant time or group effects were found for depression. Conclusions Our home-based exercise intervention had no effect on fatigue or related symptoms associated with cancer treatment. The optimal timing of exercise remains to be determined. Implications for practice Clinicians need to be aware that some physical activity is better than none, and there is no harm in exercise as tolerated during cancer treatment. Further analysis is needed to examine adherence to exercise. More frequent assessments of fatigue, sleep disturbance, depression, and pain may capture the effect of exercise. PMID:20467301
Late-time cosmological phase transitions
NASA Technical Reports Server (NTRS)
Schramm, David N.
1991-01-01
It is shown that the potential galaxy formation and large scale structure problems of objects existing at high redshifts (z ≳ 5), structures existing on scales of 100 Mpc as well as velocity flows on such scales, and minimal microwave anisotropies (ΔT/T ≲ 10^-5) can be solved if the seeds needed to generate structure form in a vacuum phase transition after decoupling. It is argued that the basic physics of such a phase transition is no more exotic than that utilized in the more traditional GUT scale phase transitions, and that, just as in the GUT case, significant random Gaussian fluctuations and/or topological defects can form. Scale lengths of approximately 100 Mpc for large scale structure as well as approximately 1 Mpc for galaxy formation occur naturally. Possible support for new physics that might be associated with such a late-time transition comes from the preliminary results of the SAGE solar neutrino experiment, implying neutrino flavor mixing with values similar to those required for a late-time transition. It is also noted that a see-saw model for the neutrino masses might also imply a tau neutrino mass that is an ideal hot dark matter candidate. However, in general either hot or cold dark matter can be consistent with a late-time transition.
Questionnaires Measuring Patients’ Spiritual Needs: A Narrative Literature Review
Seddigh, Ruohollah; Keshavarz-Akhlaghi, Amir-Abbas; Azarnik, Somayeh
2016-01-01
Context The objective of the present review was to collect published spiritual needs questionnaires and to present a clear image of the research condition of this domain. Evidence Acquisition First, an electronic search was conducted with no limits on time span (until June 2015) or language in the following databases: PubMed, Scopus, Ovid, ProQuest and Google Scholar. All derivations of the keywords religion and spiritual alongside need and its synonyms were included in the search. Studies that introduced new tools were then selected and included. Due to the limited quantity of questionnaires in this domain, and with no consideration given to the existence or lack of exact standardization information, all of the questionnaires were included in the final report. Results Eight questionnaires were found: patients' spiritual needs assessment scale (PSNAS), spiritual needs inventory (SNI), spiritual interests related to illness tool (SpIRIT), spiritual needs questionnaire (SpNQ), spiritual needs assessment for patients (SNAP), spiritual needs scale (SNS), spiritual care needs inventory (SCNI), and spiritual needs questionnaire for palliative care. Conclusions These questionnaires have been designed from a limited medical perspective and often involve cultural concepts which complicate their cross-cultural applicability. PMID:27284281
Lagrangian Statistics and Intermittency in Gulf of Mexico.
Lin, Liru; Zhuang, Wei; Huang, Yongxiang
2017-12-12
Due to the nonlinear interaction between different flow patterns, for instance ocean currents, meso-scale eddies, waves, etc., the movement of the ocean is extremely complex, and a multiscale statistical description is then relevant. In this work, a high-time-resolution velocity record with a time step of 15 minutes, obtained by a Lagrangian drifter deployed in the Gulf of Mexico (GoM) from July 2012 to October 2012, is considered. The measured Lagrangian velocity correlation function shows a strong daily cycle due to the diurnal tide. The estimated Fourier power spectrum E(f) implies a dual-power-law behavior which is separated by the daily cycle. The corresponding scaling exponents are close to -1.75 and -2.75 for time scales larger (resp. 0.1 ≤ f ≤ 0.4 day^-1) and smaller (resp. 2 ≤ f ≤ 8 day^-1) than 1 day. A Hilbert-based approach is then applied to this data set to identify the possible multifractal property of the cascade process. The results show an intermittent dynamics for time scales larger than 1 day, and a less intermittent dynamics for time scales smaller than 1 day. It is speculated that the energy is partially injected via the diurnal tidal movement and then transferred to larger and smaller scales through a complex cascade process, which needs more study in the near future.
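A minimal sketch of the dual power-law diagnosis described above: estimate the spectrum with Welch's method and fit separate log-log slopes in the two frequency bands quoted in the abstract (frequencies in cycles per day, 96 samples per day for 15-minute data). The synthetic input is a stand-in for the drifter record; with real data, slopes near -1.75 and -2.75 would be expected.

    import numpy as np
    from scipy.signal import welch

    def band_slope(f, pxx, fmin, fmax):
        # Log-log spectral slope between fmin and fmax (cycles per day).
        m = (f >= fmin) & (f <= fmax)
        return np.polyfit(np.log(f[m]), np.log(pxx[m]), 1)[0]

    fs = 96.0                                    # samples per day (15 min)
    rng = np.random.default_rng(0)
    u = np.cumsum(rng.standard_normal(30 * 96))  # toy ~f^-2 'velocity' record
    f, pxx = welch(u, fs=fs, nperseg=1024)
    print(band_slope(f, pxx, 0.1, 0.4), band_slope(f, pxx, 2.0, 8.0))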
System-level view of geospace dynamics: Challenges for high-latitude ground-based observations
NASA Astrophysics Data System (ADS)
Donovan, E.
2014-12-01
Increasingly, research programs including GEM, CEDAR, GEMSIS, GO Canada, and others are focusing on how geospace works as a system. Coupling sits at the heart of system-level dynamics. In all cases, coupling is accomplished via fundamental processes such as reconnection and plasma waves, and can be between regions, energy ranges, species, scales, and energy reservoirs. Three views of geospace are required to attack system-level questions. First, we must observe the fundamental processes that accomplish the coupling. This "observatory view" requires in situ measurements by satellite-borne instruments or remote sensing from powerful, well-instrumented ground-based observatories organized around, for example, Incoherent Scatter Radars. Second, we need to see how this coupling is controlled and what it accomplishes. This demands quantitative observations of the system elements that are being coupled. This "multi-scale view" is accomplished by networks of ground-based instruments, and by global imaging from space. Third, if we take geospace as a whole, the system is too complicated, so at the top level we need time series of simple quantities such as indices that capture important aspects of the system-level dynamics. This requires a "key parameter view" that is typically provided through indices such as AE and Dst. With the launch of MMS, and ongoing missions such as THEMIS, Cluster, Swarm, RBSP, and ePOP, we are entering a once-in-a-lifetime epoch with a remarkable fleet of satellites probing processes at key regions throughout geospace, so the observatory view is secure. With a few exceptions, our key parameter view provides what we need. The multi-scale view, however, is compromised by space/time scales that are important but under-sampled, combined extent of coverage and resolution that falls short of what we need, and inadequate conjugate observations. In this talk, I present an overview of what we need to take system-level research to its next level, and how high-latitude ground-based observations can address these challenges.
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. G. Little
1999-03-01
The Idaho National Engineering and Environmental Laboratory (INEEL), through the US Department of Energy (DOE), has proposed that a large-scale wind test facility (LSWTF) be constructed to study, in full scale, the behavior of low-rise structures under simulated extreme wind conditions. To determine the need for, and potential benefits of, such a facility, the Idaho Operations Office of the DOE requested that the National Research Council (NRC) perform an independent assessment of the role and potential value of an LSWTF in the overall context of wind engineering research. The NRC established the Committee to Review the Need for a Large-scale Test Facility for Research on the Effects of Extreme Winds on Structures, under the auspices of the Board on Infrastructure and the Constructed Environment, to perform this assessment. This report conveys the results of the committee's deliberations as well as its findings and recommendations. Data developed at large scale would enhance the understanding of how structures, particularly light-frame structures, are affected by extreme winds (e.g., hurricanes, tornadoes, severe thunderstorms, and other events). With a large-scale wind test facility, full-sized structures, such as site-built or manufactured housing and small commercial or industrial buildings, could be tested under a range of wind conditions in a controlled, repeatable environment. At this time, the US has no facility specifically constructed for this purpose. During the course of this study, the committee was confronted by three difficult questions: (1) does the lack of a facility equate to a need for the facility? (2) is need alone sufficient justification for the construction of a facility? and (3) would the benefits derived from information produced in an LSWTF justify the costs of producing that information? The committee's evaluation of the need and justification for an LSWTF was shaped by these realities.
NASA Astrophysics Data System (ADS)
Luce, C. H.; Buffington, J. M.; Rieman, B. E.; Dunham, J. B.; McKean, J. A.; Thurow, R. F.; Gutierrez-Teira, B.; Rosenberger, A. E.
2005-05-01
Conservation and restoration of freshwater stream and river habitats are important goals for land management and natural resources research. Several examples of research have emerged showing that many species are adapted to temporary habitat disruptions, but that these adaptations are sensitive to the spatial grain and extent of disturbance as well as to its duration. When viewed from this perspective, questions of timing, spatial pattern, and relevant scales emerge as critical issues. In contrast, much regulation, management, and research remains tied to pollutant loading paradigms that are insensitive to either time or space scales. It is becoming clear that research is needed to examine questions and hypotheses about how physical processes affect ecological processes. Two overarching questions concisely frame the scientific issues: 1) How do we quantify physical watershed processes in a way that is meaningful to biological and ecological processes, and 2) how does the answer to that question vary with changing spatial and temporal scales? A joint understanding of scaling characteristics of physical process and the plasticity of aquatic species will be needed to accomplish this research; hence a strong need exists for integrative and collaborative development. Considering conservation biology problems in this fashion can lead to creative and non-obvious solutions because the integrated system has important non-linearities and feedbacks related to a biological system that has responded to substantial natural variability in the past. We propose that research beginning with ecological theories and principles followed with a structured examination of each physical process as related to the specific ecological theories is a strong approach to developing the necessary science, and such an approach may form a basis for development of scaling theories of hydrologic and geomorphic process. We illustrate the approach with several examples.
On the Large-Scaling Issues of Cloud-based Applications for Earth Science Data
NASA Astrophysics Data System (ADS)
Hua, H.
2016-12-01
Next generation science data systems are needed to address the incoming flood of data from new missions such as NASA's SWOT and NISAR, whose SAR data volumes and data throughput rates are orders of magnitude larger than those of present-day missions. Existing missions, such as OCO-2, may also require rapid turn-around times for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Experience has shown that embracing efficient cloud computing approaches for large-scale science data systems requires more than just moving existing code to cloud environments. At large cloud scales, we need to deal with scaling and cost issues. We present our experiences deploying multiple instances of our hybrid-cloud computing science data system (HySDS) to support large-scale processing of Earth Science data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer 75–90% cost savings but with an unpredictable computing environment driven by market forces.
Meert, Kathleen L; Templin, Thomas N; Michelson, Kelly N; Morrison, Wynne E; Hackbarth, Richard; Custer, Joseph R; Schim, Stephanie M; Briller, Sherylyn H; Thurston, Celia S
2012-11-01
To evaluate the reliability and validity of the Bereaved Parent Needs Assessment, a new instrument to measure parents' needs and need fulfillment around the time of their child's death in the pediatric intensive care unit. We hypothesized that need fulfillment would be negatively related to complicated grief and positively related to quality of life during bereavement. Cross-sectional survey. Five U.S. children's hospital pediatric intensive care units. Parents (n = 121) bereaved in a pediatric intensive care unit 6 months earlier. Surveys included the 68-item Bereaved Parent Needs Assessment, the Inventory of Complicated Grief, and the abbreviated version of the World Health Organization Quality of Life questionnaire. Each Bereaved Parent Needs Assessment item described a potential need and was rated on two scales: 1) a 5-point rating of importance (1 = not at all important, 5 = very important) and 2) a 5-point rating of fulfillment (1 = not at all met, 5 = completely met). Three composite scales were computed: 1) total importance (percentage of all needs rated ≥4 for importance), 2) total fulfillment (percentage of all needs rated ≥4 for fulfillment), and 3) percent fulfillment (percentage of important needs that were fulfilled). Internal consistency reliability was assessed by Cronbach's α and Spearman-Brown-corrected split-half reliability. Generalized estimating equations were used to test predictions between composite scales and the Inventory of Complicated Grief and World Health Organization Quality of Life questionnaire. Two items had mean importance ratings <3, and 55 had mean ratings >4. Reliability of composite scores ranged from 0.92 to 0.94. Total fulfillment was negatively correlated with Inventory of Complicated Grief (r = -.29; p < .01) and positively correlated with World Health Organization Quality of Life questionnaire (r = .21; p < .05). Percent fulfillment was also significantly correlated with both outcomes. Adjusting for parent's age, education, and loss of an only child, percent fulfillment remained significantly correlated with Inventory of Complicated Grief but not with World Health Organization Quality of Life questionnaire. The Bereaved Parent Needs Assessment demonstrated reliability and validity to assess the needs of parents bereaved in the pediatric intensive care unit. Meeting parents' needs around the time of their child's death may promote adjustment to loss.
Nonlinear power spectrum from resummed perturbation theory: a leap beyond the BAO scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anselmi, Stefano; Pietroni, Massimo, E-mail: anselmi@ieec.uab.es, E-mail: massimo.pietroni@pd.infn.it
2012-12-01
A new computational scheme for the nonlinear cosmological matter power spectrum (PS) is presented. Our method is based on evolution equations in time, which can be cast in a form extremely convenient for fast numerical evaluations. A nonlinear PS is obtained in a time comparable to that needed for a simple 1-loop computation, and the numerical implementation is very simple. Our results agree with N-body simulations at the percent level in the BAO range of scales, and at the few-percent level up to k ≅ 1 h/Mpc at z ≳ 0.5, thereby opening the possibility of applying this tool to scales interesting for weak lensing. We clarify the approximations inherent to this approach as well as its relations to previous ones, such as the Time Renormalization Group and the multi-point propagator expansion. We discuss possible lines of improvement of the method and its intrinsic limitations due to multi-streaming at small scales and low redshifts.
Analytic model to estimate thermonuclear neutron yield in z-pinches using the magnetic Noh problem
NASA Astrophysics Data System (ADS)
Allen, Robert C.
The objective was to build a model that could be used to estimate neutron yield in pulsed z-pinch experiments, benchmark future z-pinch simulation tools, and assist scaling toward breakeven systems. To accomplish this, a recent solution to the magnetic Noh problem was utilized, which incorporates a self-similar solution with cylindrical symmetry and azimuthal magnetic field (Velikovich, 2012). The self-similar solution provides the conditions needed to calculate the time-dependent implosion dynamics, from which batch burn is assumed and used to calculate neutron yield. The solution to the model is presented. The ion densities and time scales fix the initial mass and implosion velocity, providing estimates of the experimental results given specific initial conditions. Agreement is shown with experimental data (Coverdale, 2007). A parameter sweep was done to find the neutron yield, implosion velocity, and gain for a range of densities and time scales for DD reactions, and a curve fit was done to predict the scaling as a function of preshock conditions.
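For orientation, a batch-burn yield estimate amounts to integrating the volumetric DD reaction rate over the stagnation volume and burn time; a rough sketch of that arithmetic (our simplification, not the thesis model; all plasma parameters below are hypothetical placeholders, not the Coverdale 2007 values):

```python
# Batch-burn DD neutron yield: reactions/volume/time = n_D^2 <sigma v> / 2
# for identical reactants; roughly half of DD reactions take the neutron branch.
def dd_neutron_yield(n_d_cm3, sigmav_cm3_s, volume_cm3, burn_time_s):
    rate_density = 0.5 * n_d_cm3**2 * sigmav_cm3_s          # DD reactions / cm^3 / s
    return 0.5 * rate_density * volume_cm3 * burn_time_s    # neutron-branch yield

# Hypothetical stagnation parameters: density, reactivity, volume, burn time.
print(f"{dd_neutron_yield(1e20, 1e-18, 0.01, 5e-9):.2e}")  # -> 1.25e+11
```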
NASA Astrophysics Data System (ADS)
Verma, Aman; Mahesh, Krishnan
2012-08-01
The dynamic Lagrangian averaging approach for the dynamic Smagorinsky model for large eddy simulation is extended to an unstructured grid framework and applied to complex flows. The time scale used in the standard Lagrangian model contains an adjustable parameter θ. Here, the Lagrangian time scale is dynamically computed from the solution, based on a "surrogate-correlation" of the Germano-identity error (GIE), and does not need any adjustable parameter. Also, a simple material derivative relation is used to approximate the GIE at different events along a pathline instead of Lagrangian tracking or multi-linear interpolation. Previously, the time scale for homogeneous flows was computed by averaging along directions of homogeneity; the present work proposes modifications for inhomogeneous flows. This development allows the Lagrangian averaged dynamic model to be applied to inhomogeneous flows without any adjustable parameter. The proposed model is applied to LES of turbulent channel flow on unstructured zonal grids at various Reynolds numbers. Improvement is observed when compared to other averaging procedures for the dynamic Smagorinsky model, especially at coarse resolutions. The model is also applied to flow over a cylinder at two Reynolds numbers, and good agreement with previous computations and experiments is obtained. Noticeable improvement over the standard Lagrangian model is obtained, attributed to a physically consistent Lagrangian time scale. The model also shows good performance when applied to flow past a marine propeller in an off-design condition; it regularizes the eddy viscosity and adjusts locally to the dominant flow features.
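For reference, Lagrangian-averaged dynamic models (Meneveau, Lund & Cabot) advance pathline averages of the Germano-identity tensor contractions by exponential relaxation; a minimal sketch of that update, with the time scale T left as an input (in the standard model T is fixed via θ; the paper's contribution is computing T dynamically from the GIE):

```python
def lagrangian_dynamic_update(I_LM, I_MM, LM, MM, dt, T):
    """One relaxation step of the Lagrangian-averaged dynamic Smagorinsky model.

    I_LM, I_MM : running pathline averages of L_ij M_ij and M_ij M_ij
    LM, MM     : instantaneous contractions at the (interpolated) pathline point
    T          : Lagrangian time scale (fixed via theta in the standard model,
                 computed dynamically from the Germano-identity error here)
    """
    eps = (dt / T) / (1.0 + dt / T)          # relaxation weight
    I_LM = eps * LM + (1.0 - eps) * I_LM
    I_MM = eps * MM + (1.0 - eps) * I_MM
    cs2 = max(I_LM, 0.0) / max(I_MM, 1e-30)  # clip to keep eddy viscosity >= 0
    return I_LM, I_MM, cs2
```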
Solar Environmental Disturbances
2007-11-02
…like stars were examined, extending the previous 7–12 year time series to 13–20 years by combining Strömgren b, y photometry from Lowell Observatory… explanations for how these physical processes affect the production of solar activity, both on short and long time scales. Solar cycle variation…
Slow speed—fast motion: time-lapse recordings in physics education
NASA Astrophysics Data System (ADS)
Vollmer, Michael; Möllmann, Klaus-Peter
2018-05-01
Video analysis with a 30 Hz frame rate is the standard tool in physics education. The development of affordable high-speed cameras has extended the capabilities of the tool to much smaller time scales, down to the 1 ms range, using frame rates of typically up to 1000 frames s⁻¹, allowing us to study transient physics phenomena happening too fast for the naked eye. Here we want to extend the range of phenomena which may be studied by video analysis in the opposite direction, by focusing on much longer time scales ranging from minutes or hours to many days or even months. We discuss this time-lapse method and the needed equipment, and give a few hints on how to produce such recordings for two specific experiments.
Competition between Primary Nucleation and Autocatalysis in Amyloid Fibril Self-Assembly
Eden, Kym; Morris, Ryan; Gillam, Jay; MacPhee, Cait E.; Allen, Rosalind J.
2015-01-01
Kinetic measurements of the self-assembly of proteins into amyloid fibrils are often used to make inferences about molecular mechanisms. In particular, the lag time—the quiescent period before aggregates are detected—is often found to scale with the protein concentration as a power law, whose exponent has been used to infer the presence or absence of autocatalytic growth processes such as fibril fragmentation. Here we show that experimental data for lag time versus protein concentration can show signs of kinks: clear changes in scaling exponent, indicating changes in the dominant molecular mechanism determining the lag time. Classical models for the kinetics of fibril assembly suggest that at least two mechanisms are at play during the lag time: primary nucleation and autocatalytic growth. Using computer simulations and theoretical calculations, we investigate whether the competition between these two processes can account for the kinks which we observe in our and others’ experimental data. We derive theoretical conditions for the crossover between nucleation-dominated and growth-dominated regimes, and analyze their dependence on system volume and autocatalysis mechanism. Comparing these predictions to the data, we find that the experimentally observed kinks cannot be explained by a simple crossover between nucleation-dominated and autocatalytic growth regimes. Our results show that existing kinetic models fail to explain detailed features of lag time versus concentration curves, suggesting that new mechanistic understanding is needed. More broadly, our work demonstrates that care is needed in interpreting lag-time scaling exponents from protein assembly data. PMID:25650930
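For orientation, the classical limiting scalings at issue here (standard results from the nucleated-polymerization literature, e.g. Oosawa-type primary nucleation versus fragmentation-assisted growth, quoted in one common convention from the general literature rather than from this paper's derivation) are:

```latex
\tau_{\mathrm{lag}} \propto m_0^{-n_c/2} \quad \text{(primary-nucleation dominated)},
\qquad
\tau_{\mathrm{lag}} \propto \kappa^{-1} = \left(2\,k_+ k_- m_0\right)^{-1/2} \quad \text{(fragmentation dominated)},
```

with $m_0$ the initial monomer concentration, $n_c$ the nucleus size, and $k_+$, $k_-$ the elongation and fragmentation rate constants; a kink in $\log \tau_{\mathrm{lag}}$ versus $\log m_0$ is expected near the concentration where the two expressions cross.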
Need for Improved Methods to Collect and Present Spatial Epidemiologic Data for Vectorborne Diseases
Eisen, Rebecca J.
2007-01-01
Improved methods for collection and presentation of spatial epidemiologic data are needed for vectorborne diseases in the United States. Lack of reliable data for probable pathogen exposure site has emerged as a major obstacle to the development of predictive spatial risk models. Although plague case investigations can serve as a model for how to ideally generate needed information, this comprehensive approach is cost-prohibitive for more common and less severe diseases. New methods are urgently needed to determine probable pathogen exposure sites that will yield reliable results while taking into account economic and time constraints of the public health system and attending physicians. Recent data demonstrate the need for a change from use of the county spatial unit for presentation of incidence of vectorborne diseases to more precise ZIP code or census tract scales. Such fine-scale spatial risk patterns can be communicated to the public and medical community through Web-mapping approaches. PMID:18258029
Price, S A; Schmitz, L
2016-04-05
Studies into the complex interaction between an organism and changes to its biotic and abiotic environment are fundamental to understanding what regulates biodiversity. These investigations occur at many phylogenetic, temporal and spatial scales and within a variety of biological and geological disciplines but often in relative isolation. This issue focuses on what can be achieved when ecological mechanisms are integrated into analyses of deep-time biodiversity patterns through the union of fossil and extant data and methods. We expand upon this perspective to argue that, given its direct relevance to the current biodiversity crisis, greater integration is needed across biodiversity research. We focus on the need to understand scaling effects, how lower-level ecological and evolutionary processes scale up and vice versa, and the importance of incorporating functional biology. Placing function at the core of biodiversity research is fundamental, as it establishes how an organism interacts with its abiotic and biotic environment and it is functional diversity that ultimately determines important ecosystem processes. To achieve full integration, concerted and ongoing efforts are needed to build a united and interactive community of biodiversity researchers, with education and interdisciplinary training at its heart. © 2016 The Author(s).
Messier: A Detailed NVM-Based DIMM Model for the SST Simulation Framework.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Awad, Amro; Voskuilen, Gwendolyn Renae; Rodrigues, Arun F.
2017-02-01
DRAM technology is the main building block of main memory; however, DRAM scaling is becoming very challenging. The main issues for DRAM scaling are the increasing error rates with each new generation, the geometric and physical constraints of scaling the capacitor part of the DRAM cells, and the high power consumption caused by the continuous need for refreshing cell values. At the same time, emerging Non-Volatile Memory (NVM) technologies, such as Phase-Change Memory (PCM), are promising replacements for DRAM. NVMs, when compared to current technologies, e.g., NAND-based flash, have latencies comparable to DRAM. Additionally, NVMs are non-volatile, which eliminates the need for refresh power and enables persistent memory applications. Finally, NVMs have promising densities and the potential for multi-level cell (MLC) storage.
Adjusting to Social Change - A Multi-Level Analysis in Three Cultures
2013-08-01
…presence is often associated with the large-scale movement of civilian populations, and who need to better understand the… valuing openness to change (self-direction, stimulation and sometimes hedonism values) with valuing conservation (conformity, tradition and security…
The Organic Brain Syndrome (OBS) scale: a systematic review.
Björkelund, Karin Björkman; Larsson, Sylvia; Gustafson, Lars; Andersson, Edith
2006-03-01
The Organic Brain Syndrome (OBS) Scale was developed to assess elderly patients' disturbances of awareness and orientation as to time, place and own identity, and various emotional and behavioural symptoms appearing in delirium, dementia and other organic mental diseases. The aim of the study was to examine the OBS Scale, using the eight criteria and guidelines formulated by the Scientific Advisory Committee of the Medical Outcomes Trust (SAC), and to investigate its relevance and suitability for use in various clinical settings. A systematic search and analysis of 30 papers on the OBS Scale was carried out using the criteria suggested by the SAC. The OBS Scale in many aspects satisfies the requirements suggested by the SAC: conceptual and measurement model, reliability, validity, responsiveness, interpretability, respondent and administrative burden, alternative forms of administration, and cultural and language adaptations; but there is a need for additional evaluation, especially with regard to different forms of reliability, and translation and adaptation to other languages. The OBS Scale is a sensitive scale which is clinically useful for the description and long-term follow-up of patients showing symptoms of acute confusional state and dementia. Although the OBS Scale has been used in several clinical studies, there is a need for further evaluation.
A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.
NASA Astrophysics Data System (ADS)
Wehner, M. F.; Oliker, L.; Shalf, J.
2008-12-01
Exascale computers would allow routine ensemble modeling of the global climate system at the cloud system resolving scale. Power and cost requirements of traditional architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra high resolution climate modeling. These power efficient processors, used in consumer electronic devices such as mobile phones, portable music players, cameras, etc., can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer scale climate model a thousand times faster than real time could be designed and built in a five year time scale for US$75M with a power consumption of 3MW. This is cheaper, more power efficient and sooner than any other existing technology.
Evidence for Large Decadal Variability in the Tropical Mean Radiative Energy Budget
NASA Technical Reports Server (NTRS)
Wielicki, Bruce A.; Wong, Takmeng; Allan, Richard; Slingo, Anthony; Kiehl, Jeffrey T.; Soden, Brian J.; Gordon, C. T.; Miller, Alvin J.; Yang, Shi-Keng; Randall, David R.;
2001-01-01
It is widely assumed that variations in the radiative energy budget at large time and space scales are very small. We present new evidence from a compilation of over two decades of accurate satellite data that the top-of-atmosphere (TOA) tropical radiative energy budget is much more dynamic and variable than previously thought. We demonstrate that the radiation budget changes are caused by changes in tropical mean cloudiness. The results of several current climate model simulations fail to predict this large observed variation in the tropical energy budget. The missing variability in the models highlights the critical need to improve cloud modeling in the tropics to support improved prediction of tropical climate on interannual and decadal time scales. We believe that these data are the first rigorous demonstration of decadal time scale changes in the Earth's tropical cloudiness, and that they represent a new and necessary test of climate models.
Goswami, Prashant; Nishad, Shiv Narayan
2015-03-20
Assessment and policy design for sustainability in primary resources like arable land and water need to adopt a long-term perspective; even small but persistent effects like net export of water may influence sustainability through irreversible losses. With growing consumption, this virtual water trade has become an important element in the water sustainability of a nation. We estimate and contrast the virtual (embedded) water trades of two populous nations, India and China, to present certain quantitative measures and time scales. Estimates show that export of embedded water alone can lead to loss of water sustainability. At the current rate of net export of water embedded in end products, India is poised to lose its entire available water in less than 1000 years; much shorter time scales are implied in terms of water for production. The two cases contrast and exemplify sustainable and non-sustainable virtual water trade in a long-term perspective.
The clinical evaluation of platelet-rich plasma on free gingival graft's donor site wound healing.
Samani, Mahmoud Khosravi; Saberi, Bardia Vadiati; Ali Tabatabaei, S M; Moghadam, Mahdjoube Goldani
2017-01-01
It has been shown that platelet-rich plasma (PRP) can promote wound healing. PRP can therefore be advantageous in periodontal plastic surgeries, free gingival graft (FGG) being one such surgery. In this randomized split-mouth controlled trial, 10 patients who needed bilateral FGG were selected, and the two donor sites were randomly assigned to experience either natural healing or healing assisted with PRP. The outcome was assessed based on comparison of the extent of wound closure, the Manchester scale, the Landry healing scale, a visual analog scale, and tissue thickness between the study groups at different time intervals. Repeated-measures analysis of variance and paired t-tests were used. Statistical significance was set at P ≤ 0.05. Significant differences between the study groups, and also across different time intervals, were seen in all parameters except for the changes in tissue thickness. PRP accelerates the healing process of wounds and reduces the healing time.
Family needs in the chronic phase after severe brain injury in Denmark.
Doser, Karoline; Norup, Anne
2014-01-01
This preliminary study aimed at investigating (1) changes in the status of family members between the time of injury and follow-up in the chronic phase and (2) the most important needs within the family in the chronic phase and whether those needs were perceived as met. The sample comprised 42 relatives (76% female, mean age = 53 years) of patients with severe brain injury who had received intensive sub-acute rehabilitation. The relatives were contacted in the chronic phase after brain injury. A set of questions about demographics and time spent caregiving for the patient was completed. The relatives completed the revised version of the Family Needs Questionnaire, a questionnaire consisting of 37 items related to different needs following brain injury. Significant changes in status were found in employment (z = -3.464, p = 0.001) and co-habitation (z = -3.317, p = 0.001). The sub-scale 'Health Information' (Mean = 3.50, SD = 0.73) had the highest mean importance rating, whereas the sub-scale 'Emotional support' (Mean = 3.07, SD = 0.79) had the lowest. When combining importance and met ratings, it was found that the five most important needs were only met in 41-50% of the total sample. Occupational and co-habitation status of the relatives was significantly affected by brain injury. A high number of relatives reported family needs that were not satisfied in the chronic phase. This calls for an interventional approach that helps families get these needs fulfilled individually, even after rehabilitation.
Hawkins, S J; Evans, A J; Mieszkowska, N; Adams, L C; Bray, S; Burrows, M T; Firth, L B; Genner, M J; Leung, K M Y; Moore, P J; Pack, K; Schuster, H; Sims, D W; Whittington, M; Southward, E C
2017-11-30
Marine ecosystems are subject to anthropogenic change at global, regional and local scales. Global drivers interact with regional- and local-scale impacts of both a chronic and acute nature. Natural fluctuations and those driven by climate change need to be understood to diagnose local- and regional-scale impacts, and to inform assessments of recovery. Three case studies are used to illustrate the need for long-term studies: (i) separation of the influence of fishing pressure from climate change on bottom fish in the English Channel; (ii) recovery of rocky shore assemblages from the Torrey Canyon oil spill in the southwest of England; (iii) interaction of climate change and chronic Tributyltin pollution affecting recovery of rocky shore populations following the Torrey Canyon oil spill. We emphasize that "baselines" or "reference states" are better viewed as envelopes that are dependent on the time window of observation. Recommendations are made for adaptive management in a rapidly changing world. Copyright © 2017. Published by Elsevier Ltd.
Perspectives from the NSF-sponsored workshop on Grand Challenges in Nanomaterials
NASA Astrophysics Data System (ADS)
Hull, Robert
2004-03-01
At an NSF-sponsored workshop in June 2003, about seventy research leaders in the field of nanomaterials met to discuss, explore and identify future new directions and critical needs ("Grand Challenges") for the next decade and beyond. The key pervasive theme identified was the need to develop techniques for assembly of nanoscaled materials over multiple length scales, at the levels of efficiency, economy, and precision necessary to realize broad new classes of applications in such diverse technologies as electronics, computation, telecommunications, data storage, energy storage / transmission / generation, health care, transportation, civil infrastructure, military applications, national security, and the environment. Elements of this strategy include development of new self-assembly and lithographic techniques; biologically-mediated synthesis; three-dimensional atomic-scale measurement of structure, properties and chemistry; harnessing of the sub-atomic properties of materials such as electron spin and quantum interactions; new computational methods that span all relevant length and time scales; a fundamental understanding of acceptable / achievable "fault tolerance" at the nanoscale; and methods for real-time and distributed sensing of nanoscale assembly. A parallel theme was the need to provide education concerning the potential, applications, and benefits of nanomaterials to all components of society and all levels of the educational spectrum. This talk will summarize the conclusions and recommendations from this workshop, and illustrate the future potential of this field through presentation of selected break-through results provided by workshop participants.
Symstad, A.J.; Chapin, F. S.; Wall, D.H.; Gross, K.L.; Huenneke, L.F.; Mittelbach, G.G.; Peters, Debra P.C.; Tilman, D.
2003-01-01
A growing body of literature from a variety of ecosystems provides strong evidence that various components of biodiversity have significant impacts on ecosystem functioning. However, much of this evidence comes from short-term, small-scale experiments in which communities are synthesized from relatively small species pools and conditions are highly controlled. Extrapolation of the results of such experiments to longer time scales and larger spatial scales—those of whole ecosystems—is difficult because the experiments do not incorporate natural processes such as recruitment limitation and colonization of new species. We show how long-term study of planned and accidental changes in species richness and composition suggests that the effects of biodiversity on ecosystem functioning will vary over time and space. More important, we also highlight areas of uncertainty that need to be addressed through coordinated cross-scale and cross-site research.
Tidal dissipation in a viscoelastic planet
NASA Technical Reports Server (NTRS)
Ross, M.; Schubert, G.
1986-01-01
Tidal dissipation is examined using Maxwell, standard linear solid (SLS), and Kelvin-Voigt models, and viscosity parameters are derived from the models that yield the amount of dissipation previously calculated for a moon model with Q = 100 in a hypothetical orbit closer to the earth. The relevance of these models is then assessed for simulating planetary tidal responses. Viscosities of 10^14 and 10^18 Pa s for the Kelvin-Voigt and Maxwell rheologies, respectively, are needed to match the dissipation rate calculated using the Q approach with a quality factor of 100. The SLS model requires a short-time viscosity of 3 x 10^17 Pa s to match the Q = 100 dissipation rate, independent of the model's relaxation strength. Since Q = 100 is considered a representative value for the interiors of terrestrial planets, it is proposed that the derived viscosities should characterize planetary materials. However, it is shown that neither the Kelvin-Voigt nor the SLS models simulate the behavior of real planetary materials on long time scales. The Maxwell model, by contrast, behaves realistically on both long and short time scales. The inferred Maxwell viscosity, corresponding to the time scale of days, is several times smaller than the longer time scale (greater than or equal to 10^14 years) viscosity of the earth's mantle.
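For reference, the textbook Maxwell-body relations that connect a viscosity to a tidal quality factor (standard results, not reproduced from the paper) are:

```latex
\sigma + \frac{\eta}{\mu}\,\dot{\sigma} = \eta\,\dot{\varepsilon},
\qquad
\tau_M = \frac{\eta}{\mu},
\qquad
Q \approx \omega\,\tau_M = \frac{\omega\,\eta}{\mu} \quad (\omega\,\tau_M \gg 1),
```

so matching a prescribed Q at tidal forcing frequency ω fixes η ≈ Qμ/ω once the rigidity μ is chosen. At fixed Q the inferred viscosity scales with the forcing period, which is why a days-scale tidal viscosity can sit well below the mantle viscosity inferred on much longer time scales.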
Exact-Differential Large-Scale Traffic Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios
2015-01-01
Analyzing large-scale traffic by simulation requires executing the simulation many times with various patterns of scenarios or parameters. Such repeated execution introduces substantial redundancy, because the change from a prior scenario to a later scenario is very minor in most cases, for example, blocking only one road or changing the speed limit of several roads. In this paper, we propose a new redundancy-reduction technique, called exact-differential simulation, which simulates only the changed portions of scenarios in later executions while keeping exactly the same results as a whole simulation. The paper consists of two main efforts: (i) a key idea and algorithm of the exact-differential simulation, and (ii) a method to build large-scale traffic simulation on top of the exact-differential simulation. In experiments with a Tokyo traffic simulation, the exact-differential simulation shows a 7.26-fold improvement in elapsed time on average, and a 2.26-fold improvement even in the worst case, compared with whole simulation.
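The core idea is reuse of prior results across scenario variants; the toy sketch below (our conceptual illustration, far simpler than the paper's algorithm, which must also replay cross-region event interactions to guarantee exactly identical results) caches per-region results and recomputes only regions whose parameters changed:

```python
def simulate_region(region, params):
    # Stand-in for an expensive traffic micro-simulation of one region.
    print(f"simulating {region} with {params}")
    return hash((region, params)) % 1000  # hypothetical aggregate metric

def run_scenario(scenario, cache):
    results = {}
    for region, params in scenario.items():
        key = (region, params)
        if key not in cache:                  # only changed regions are re-run
            cache[key] = simulate_region(region, params)
        results[region] = cache[key]
    return results

cache = {}
base = {"shinjuku": ("limit50",), "ginza": ("limit40",)}
run_scenario(base, cache)                             # simulates both regions
run_scenario({**base, "ginza": ("limit30",)}, cache)  # re-simulates only ginza
```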
Comparison of Fault Detection Algorithms for Real-time Diagnosis in Large-Scale System. Appendix E
NASA Technical Reports Server (NTRS)
Kirubarajan, Thiagalingam; Malepati, Venkat; Deb, Somnath; Ying, Jie
2001-01-01
In this paper, we present a review of different real-time capable algorithms to detect and isolate component failures in large-scale systems in the presence of inaccurate test results. A sequence of imperfect test results (as a row vector of 1's and 0's) is available to the algorithms. In this case, the problem is to recover the uncorrupted test result vector and match it to one of the rows in the test dictionary, which in turn will isolate the faults. In order to recover the uncorrupted test result vector, one needs the accuracy of each test; that is, its detection and false alarm probabilities are required. In this problem, their true values are not known and, therefore, have to be estimated online. Other major aspects of this problem are its large-scale nature and the real-time capability requirement. Test dictionaries of sizes up to 1000 x 1000 are to be handled; that is, results from 1000 tests measuring the state of 1000 components are available. However, at any time, only 10-20% of the test results are available. The objective then becomes real-time fault diagnosis using incomplete and inaccurate test results with online estimation of test accuracies. It should also be noted that the test accuracies can vary with time, so one needs a mechanism to update them after processing each test result vector. Using Qualtech's TEAMS-RT (system simulation and real-time diagnosis tool), we test the performances of 1) TEAMS-RT's built-in diagnosis algorithm, 2) Hamming distance based diagnosis, 3) Maximum Likelihood based diagnosis, and 4) Hidden Markov Model based diagnosis.
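As an illustration of the maximum-likelihood variant (our sketch, not the paper's implementation): given per-test detection and false-alarm probabilities, score each dictionary row by the likelihood of the observed, possibly partial, test results and pick the best-scoring fault hypothesis.

```python
import numpy as np

def ml_diagnosis(dictionary, observed, p_d, p_fa):
    """dictionary: (faults x tests) 0/1 matrix of expected test outcomes.
    observed: per-test results, 1 (fail), 0 (pass), or -1 (missing).
    p_d, p_fa: per-test detection and false-alarm probabilities."""
    log_like = np.zeros(dictionary.shape[0])
    for j, y in enumerate(observed):
        if y < 0:
            continue  # only 10-20% of test results arrive at any time
        s = dictionary[:, j]  # expected outcome under each fault hypothesis
        p_fail = np.where(s == 1, p_d[j], p_fa[j])
        p = p_fail if y == 1 else 1.0 - p_fail
        log_like += np.log(np.clip(p, 1e-12, None))
    return int(np.argmax(log_like))

# Toy 3-fault x 4-test dictionary; the result of test 2 is missing:
D = np.array([[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]])
y = np.array([1, 1, -1, 0])
print(ml_diagnosis(D, y, p_d=np.full(4, 0.9), p_fa=np.full(4, 0.05)))  # -> 0
```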
NASA Technical Reports Server (NTRS)
Schubert, Siegfried
2011-01-01
Drought is fundamentally the result of an extended period of reduced precipitation lasting anywhere from a few weeks to decades and even longer. As such, addressing drought predictability and prediction in a changing climate requires foremost that we make progress on the ability to predict precipitation anomalies on subseasonal and longer time scales. From the perspective of the users of drought forecasts and information, however, drought is most directly viewed through its impacts (e.g., on soil moisture, streamflow, crop yields). As such, the question of the predictability of drought must extend to those quantities as well. In order to make progress on these issues, the WCRP drought information group (DIG), with the support of WCRP, the Catalan Institute of Climate Sciences, the La Caixa Foundation, the National Aeronautics and Space Administration, the National Oceanic and Atmospheric Administration, and the National Science Foundation, has organized a workshop to focus on: (1) user requirements for drought prediction information on sub-seasonal to centennial time scales; (2) current understanding of the mechanisms and predictability of drought on these time scales; (3) current drought prediction/projection capabilities on these time scales; and (4) advancing regional drought prediction capabilities for the variables and scales most relevant to user needs. This introductory talk provides an overview of these goals, and outlines the occurrence and mechanisms of drought worldwide.
Predicting survival time in noncurative patients with advanced cancer: a prospective study in China.
Cui, Jing; Zhou, Lingjun; Wee, B; Shen, Fengping; Ma, Xiuqiang; Zhao, Jijun
2014-05-01
Accurate prediction of prognosis for cancer patients is important for good clinical decision making in therapeutic and care strategies. The application of prognostic tools and indicators could improve prediction accuracy. This study aimed to develop a new prognostic scale to predict survival time of advanced cancer patients in China. We prospectively collected items that we anticipated might influence survival time of advanced cancer patients. Participants were recruited from 12 hospitals in Shanghai, China. We collected data including demographic information, clinical symptoms and signs, and biochemical test results. Log-rank tests, Cox regression, and linear regression were performed to develop a prognostic scale. Three hundred twenty patients with advanced cancer were recruited. Fourteen prognostic factors were included in the prognostic scale: Karnofsky Performance Scale (KPS) score, pain, ascites, hydrothorax, edema, delirium, cachexia, white blood cell (WBC) count, hemoglobin, sodium, total bilirubin, direct bilirubin, aspartate aminotransferase (AST), and alkaline phosphatase (ALP) values. The score was calculated by summing the partial scores, ranging from 0 to 30. When using the cutoff points of 7-day, 30-day, 90-day, and 180-day survival time, the scores were calculated as 12, 10, 8, and 6, respectively. We propose a new prognostic scale including KPS, pain, ascites, hydrothorax, edema, delirium, cachexia, WBC count, hemoglobin, sodium, total bilirubin, direct bilirubin, AST, and ALP values, which may help guide physicians in predicting the likely survival time of cancer patients more accurately. More studies are needed to validate this scale in the future.
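A minimal sketch of how such a summed 0-30 scale would be applied (our illustration only; the per-item partial scores are in the paper and not reproduced here, and we read the cutoffs as "a score at or above X predicts survival shorter than the corresponding window", which is our interpretation of the abstract):

```python
# Hypothetical application of the reported cutoffs (12, 10, 8, 6) for the
# 7-, 30-, 90-, and 180-day survival windows.
def predicted_survival(total_score):
    if total_score >= 12:
        return "< 7 days"
    if total_score >= 10:
        return "< 30 days"
    if total_score >= 8:
        return "< 90 days"
    if total_score >= 6:
        return "< 180 days"
    return ">= 180 days"

print(predicted_survival(11))  # -> "< 30 days"
```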
At the Limits of Criticality-Based Quantum Metrology: Apparent Super-Heisenberg Scaling Revisited
NASA Astrophysics Data System (ADS)
Rams, Marek M.; Sierant, Piotr; Dutta, Omyoti; Horodecki, Paweł; Zakrzewski, Jakub
2018-04-01
We address the question of whether the super-Heisenberg scaling for quantum estimation is indeed realizable. We unify the results of two approaches. In the first one, the original system is compared with its copy rotated by the parameter-dependent dynamics. If the parameter is coupled to the one-body part of the Hamiltonian, the precision of its estimation is known to scale at most as N^{-1} (Heisenberg scaling) in terms of the number of elementary subsystems used, N. The second approach compares the overlap between the ground states of the parameter-dependent Hamiltonian in critical systems, often leading to an apparent super-Heisenberg scaling. However, we point out that if one takes into account the scaling of time needed to perform the necessary operations, i.e., ensuring adiabaticity of the evolution, the Heisenberg limit given by the rotation scenario is recovered. We illustrate the general theory on a ferromagnetic Heisenberg spin chain example and show that it exhibits such super-Heisenberg scaling of ground-state fidelity around the critical value of the parameter (magnetic field) governing the one-body part of the Hamiltonian. Even an elementary estimator represented by a single-site magnetization already outperforms the Heisenberg behavior, providing the N^{-1.5} scaling. In this case, Fisher information sets the ultimate scaling as N^{-1.75}, which can be saturated by measuring magnetization on all sites simultaneously. We discuss universal scaling predictions of the estimation precision offered by such observables, both at zero and finite temperatures, and support them with numerical simulations in the model. We provide an experimental proposal of realization of the considered model via mapping the system to ultracold bosons in a periodically shaken optical lattice. We explicitly derive that the Heisenberg limit is recovered when the time needed for preparation of the quantum states involved is taken into account.
Flexible server-side processing of climate archives
NASA Astrophysics Data System (ADS)
Juckes, Martin; Stephens, Ag; Damasio da Costa, Eduardo
2014-05-01
The flexibility and interoperability of OGC Web Processing Services are combined with an extensive range of data processing operations supported by the Climate Data Operators (CDO) library to facilitate processing of the CMIP5 climate data archive. The challenges posed by this peta-scale archive allow us to test and develop systems which will help us to deal with approaching exa-scale challenges. The CEDA WPS package allows users to manipulate data in the archive and export the results without first downloading the data; in some cases this can drastically reduce the data volumes which need to be transferred and greatly reduce the time needed for the scientists to get their results. Reductions in data transfer are achieved at the expense of an additional computational load imposed on the archive (or near-archive) infrastructure. This is managed with a load-balancing system: short jobs may be run in near real-time, while longer jobs will be queued. When jobs are queued, the user is provided with a web dashboard displaying job status. A clean split between the data manipulation software and the request management software is achieved by exploiting the extensive CDO library. This library has a long history of development to support the needs of the climate science community. Use of the library ensures that operations run on data by the system can be reproduced by users using the same operators installed on their own computers. Examples using the system deployed for the CMIP5 archive will be shown, and issues which need to be addressed as archive volumes expand into the exa-scale will be discussed.
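For a sense of the operations involved, CDO operators can be chained so that only a small reduced product ever leaves the archive; a minimal example using the python-cdo bindings (our illustration; the file names are hypothetical, and the deployed CEDA WPS wraps such calls behind its own request interface):

```python
from cdo import Cdo  # python-cdo bindings around the CDO command-line tool

cdo = Cdo()

# Server-side reduction: select one decade, then take the area mean, so only
# a small time series (rather than the full field) is transferred to the user.
result = cdo.fldmean(
    input="-selyear,2000/2010 tas_Amon_MODEL_rcp45_r1i1p1.nc",  # hypothetical CMIP5 file
    output="tas_fldmean_2000-2010.nc",
)
print(result)  # path of the reduced output file
```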
Scope of Nursing Care in Polish Intensive Care Units
Wysokiński, Mariusz; Ksykiewicz-Dorota, Anna; Fidecki, Wiesław
2013-01-01
Introduction. The TISS-28 scale, which may be used for nursing staff scheduling in the ICU, does not reflect the complete scope of nursing resulting from the varied cultural and organizational conditions of individual health care systems. Aim. The objective of the study was an attempt to answer the question: what scope of nursing care provided by Polish nurses in the ICU does the TISS-28 scale reflect? Material and Methods. Working time measurement methods were used in the study. For the needs of the study, 252 hours of continuous observation (day-long observation) and 3,697 time-schedule measurements were carried out. Results. The total nursing time was 4125.79 min (68.76 hours), that is, 60.15% of the total working time of Polish nurses during the period analyzed. Based on the median test, a difference at the level of χ² = 16945.8, P < 0.001 was observed between the nurses' workload resulting from performance of activities qualified into the TISS-28 scale and the load resulting from performance of interventions within the scopes of care not considered in this scale in Polish ICUs. Conclusions. The original version of the TISS-28 scale does not fully reflect the workload among Polish nurses employed in ICUs. PMID:24490162
A Sub-ps Stability Time Transfer Method Based on Optical Modems.
Frank, Florian; Stefani, Fabio; Tuckey, Philip; Pottie, Paul-Eric
2018-06-01
Coherent optical fiber links have recently demonstrated their ability to compare the most advanced optical clocks over a continental scale. The outstanding performance of optical clocks is stimulating the community to build much more stable time scales and to develop the means to compare them. Optical fiber links are one solution that needs to be explored. Here, we investigate a new method to transfer time based on optical demodulation of a phase step imprinted onto the optical carrier. We show the implementation of a proof-of-principle experiment over 86 km of urban fiber, and report a time-interval transfer stability for a 1 pulse-per-second signal with sub-ps resolution from 10 s to one day of measurement time. Prospects for future development and implementation in active telecommunication networks, regarding not only performance but also compatibility, conclude this paper.
Babenko, Oksana; Mosewich, Amber; Abraham, Joseph; Lai, Hollis
2018-01-01
To investigate the contributions of psychological needs (autonomy, competence, and relatedness) and coping strategies (self-compassion, leisure-time exercise, and achievement goals) to engagement and exhaustion in Canadian medical students. This was an observational study. Two hundred undergraduate medical students participated in the study: 60.4% were female, 95.4% were 20-29 years old, and 23.0% were in year 1, 30.0% in year 2, 21.0% in year 3, and 26.0% in year 4. Students completed an online survey with measures of engagement and exhaustion from the Oldenburg Burnout Inventory-student version; autonomy, competence, and relatedness from the Basic Psychological Needs Scale; self-compassion from the Self-Compassion Scale-short form; leisure-time exercise from the Godin Leisure-Time Exercise Questionnaire; and mastery approach, mastery avoidance, performance approach, and performance avoidance goals from the Achievement Goals Instrument. Descriptive and inferential analyses were performed. The need for competence was the strongest predictor of student engagement (β = 0.35, P = 0.000) and exhaustion (β = -0.33, P = 0.000). Students who endorsed mastery approach goals (β = 0.21, P = 0.005) and who were more self-compassionate (β = 0.13, P = 0.050) reported greater engagement with their medical studies. Students who were less self-compassionate (β = -0.32, P = 0.000), who exercised less (β = -0.12, P = 0.044), and who endorsed mastery avoidance goals (β = 0.22, P = 0.003) reported greater exhaustion from their studies. Students' gender (β = 0.18, P = 0.005) and year in medical school (β = -0.18, P = 0.004) were related to engagement, but not to exhaustion. Supporting students' need for competence and raising students' awareness of self-compassion, leisure-time exercise, and mastery approach goals may help protect students from burnout-related exhaustion and enhance their engagement with their medical school studies.
The Cost of Commonality: Assessing Value in Joint Programs
2015-12-01
…economies of scale in order to provide cheaper goods. When those economies of scale are not realized, as was the case with the U.S. auto market "Big Three"…
Lopes, A G; Keshavarz-Moore, E
2013-01-01
During centrifugation operation, the major challenge in the recovery of extracellular proteins is removing the maximum amount of liquid entrapped within the spaces between the settled solids (the dewatering level). The ability of the scroll decanter centrifuge (SDC) to continuously process large amounts of feed material with a high concentration of solids, without the need for resuspension of feeds, while achieving relatively high dewatering, could be of great benefit for future use in the biopharmaceutical industry. However, reliable prediction of dewatering in such a centrifuge requires tests using the same kind of equipment at pilot scale, which are time consuming and costly. To alleviate the need for pilot-scale trials, a novel USD device, requiring reduced amounts of feed (2 mL) and usable in the laboratory, was developed to predict the dewatering levels of an SDC. To verify the USD device, the dewatering levels achieved were plotted against equivalent compression (Gtcomp) and decanting (Gtdec) times, obtained from the scroll rates and feed flow rates operated at pilot scale, respectively. The USD device successfully matched the dewatering trends of the pilot scale as a function of both Gtcomp and Gtdec, particularly for high-cell-density feeds, hence accounting for all key variables that influence dewatering in an SDC. In addition, it accurately mimicked the maximum dewatering performance of the pilot-scale equipment. Therefore, the USD device has the potential to be a useful tool at early stages of process development to gather performance data in the laboratory, thus minimizing lengthy and costly runs with a pilot-scale SDC. © 2013 American Institute of Chemical Engineers.
Developing a Renewable Energy Awareness Scale for Pre-Service Chemistry Teachers
ERIC Educational Resources Information Center
Morgil, Inci; Secken, Nilgun; Yucel, A. Seda; Ozyalcin Oskay, Ozge; Yavuz, Soner; Ural, Evrim
2006-01-01
In times when human beings lived in a natural environment, their needs were provided for by natural resources. With the increase in population over time, human beings started to look for new resources, wanting to get "the most" and "the fastest". Just like the invention of steam, first, they increased the density of the resources and…
ERIC Educational Resources Information Center
Schoel, Jim; Butler, Steve; Murray, Mark; Gass, Mike; Carrick, Moe
2001-01-01
Presents five group problem-solving initiatives for use in adventure and experiential settings, focusing on conflict resolution, corporate workplace issues, or adjustment to change. Includes target group, group size, time and space needs, activity level, overview, goals, props, instructions, and suggestions for framing and debriefing the…
Bypassing the Kohn-Sham equations with machine learning.
Brockherde, Felix; Vogt, Leslie; Li, Li; Tuckerman, Mark E; Burke, Kieron; Müller, Klaus-Robert
2017-10-11
Last year, at least 30,000 scientific papers used the Kohn-Sham scheme of density functional theory to solve electronic structure problems in a wide variety of scientific fields. Machine learning holds the promise of learning the energy functional via examples, bypassing the need to solve the Kohn-Sham equations. This should yield substantial savings in computer time, allowing larger systems and/or longer time scales to be tackled, but attempts to machine-learn this functional have been limited by the need to find its derivative. The present work overcomes this difficulty by directly learning the density-potential and energy-density maps for test systems and various molecules. We perform the first molecular dynamics simulation with a machine-learned density functional on malonaldehyde and are able to capture the intramolecular proton transfer process. Learning density models now allows the construction of accurate density functionals for realistic molecular systems. Machine learning allows electronic structure calculations to access larger system sizes and, in dynamical simulations, longer time scales. Here, the authors perform such a simulation using a machine-learned density functional that avoids direct solution of the Kohn-Sham equations.
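Density-potential maps in schemes of this kind are typically learned with kernel ridge regression on grids of sampled potentials; a toy sketch of that idea (our illustration with a synthetic 1-D system and a stand-in density rule, not the paper's functional or data):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Toy 1-D setup: potentials v(x) parameterized by a width; "densities" n(x)
# generated by a stand-in analytic rule (a real workflow would solve for
# ground-state densities instead).
x = np.linspace(-1, 1, 64)
widths = rng.uniform(0.2, 0.8, size=200)
V = np.array([0.5 * (x / w) ** 2 for w in widths])                            # inputs: v(x) on a grid
N = np.array([np.exp(-((x / w) ** 2)) / (w * np.sqrt(np.pi)) for w in widths])  # targets: n(x)

# Learn the map v(x) -> n(x) with Gaussian-kernel ridge regression.
model = KernelRidge(kernel="rbf", gamma=1e-3, alpha=1e-8)
model.fit(V, N)

n_pred = model.predict(V[:1])
print(float(np.trapz(n_pred[0], x)))  # integrated density close to 1 (up to grid truncation)
```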
Large-scale machine learning and evaluation platform for real-time traffic surveillance
NASA Astrophysics Data System (ADS)
Eichel, Justin A.; Mishra, Akshaya; Miller, Nicholas; Jankovic, Nicholas; Thomas, Mohan A.; Abbott, Tyler; Swanson, Douglas; Keller, Joel
2016-09-01
In traffic engineering, vehicle detectors are trained on limited datasets, resulting in poor accuracy when deployed in real-world surveillance applications. Annotating large-scale high-quality datasets is challenging, and such datasets typically have limited diversity; they do not reflect the real-world operating environment. There is a need for a large-scale, cloud-based positive and negative mining process and a large-scale learning and evaluation system for the application of automatic traffic measurements and classification. The proposed positive and negative mining process addresses the quality of crowd-sourced ground truth data through machine learning review and human feedback mechanisms. The proposed learning and evaluation system uses a distributed cloud computing framework to handle the data-scaling issues associated with large numbers of samples and a high-dimensional feature space. The system is trained using AdaBoost on 1,000,000 Haar-like features extracted from 70,000 annotated video frames. The trained real-time vehicle detector achieves an accuracy of at least 95% half of the time, and about 78% 19/20 of the time, when tested on ~7,500,000 video frames.
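Detectors of this family (boosted cascades over Haar-like features) can be exercised with stock tooling; a minimal usage sketch with OpenCV (our illustration only; the cascade and image file names are hypothetical, and this is not the authors' trained detector):

```python
import cv2

# Load a boosted Haar-feature cascade (hypothetical vehicle model file).
detector = cv2.CascadeClassifier("vehicle_cascade.xml")

frame = cv2.imread("intersection_frame.jpg")   # one surveillance frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Multi-scale sliding-window detection, as in Viola-Jones style detectors.
boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3,
                                  minSize=(24, 24))
for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", frame)
```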
Energy Management and Optimization Methods for Grid Energy Storage Systems
Byrne, Raymond H.; Nguyen, Tu A.; Copp, David A.; ...
2017-08-24
Today, the stability of the electric power grid is maintained through real-time balancing of generation and demand. Grid-scale energy storage systems are increasingly being deployed to provide grid operators the flexibility needed to maintain this balance. Energy storage also imparts resiliency and robustness to the grid infrastructure. Over the last few years, there has been a significant increase in the deployment of large-scale energy storage systems. This growth has been driven by improvements in the cost and performance of energy storage technologies and the need to accommodate distributed generation, as well as incentives and government mandates. Energy management systems (EMSs) and optimization methods are required to effectively and safely utilize energy storage as a flexible grid asset that can provide multiple grid services. The EMS needs to be able to accommodate a variety of use cases and regulatory environments. In this paper, we provide a brief history of grid-scale energy storage, an overview of EMS architectures, and a summary of the leading applications for storage. These serve as a foundation for a discussion of EMS optimization methods and design.
A Global Estimate of Seafood Consumption by Coastal Indigenous Peoples
Cisneros-Montemayor, Andrés M; Pauly, Daniel; Weatherdon, Lauren V; Ota, Yoshitaka
2016-01-01
Coastal Indigenous peoples rely on ocean resources and are highly vulnerable to ecosystem and economic change. Their challenges have been observed and recognized at local and regional scales, yet there are no global-scale analyses to inform international policies. We compile available data for over 1,900 coastal Indigenous communities around the world representing 27 million people across 87 countries. Based on available data at local and regional levels, we estimate a total global yearly seafood consumption of 2.1 million (1.5 million–2.8 million) metric tonnes by coastal Indigenous peoples, equal to around 2% of global yearly commercial fisheries catch. Results reflect the crucial role of seafood for these communities; on average, consumption per capita is 15 times higher than non-Indigenous country populations. These findings contribute to an urgently needed sense of scale to coastal Indigenous issues, and will hopefully prompt increased recognition and directed research regarding the marine knowledge and resource needs of Indigenous peoples. Marine resources are crucial to the continued existence of coastal Indigenous peoples, and their needs must be explicitly incorporated into management policies. PMID:27918581
Karain, Wael I
2017-11-28
Proteins undergo conformational transitions over different time scales. These transitions are closely intertwined with the protein's function. Numerous standard techniques, such as principal component analysis, are used to detect these transitions in molecular dynamics simulations. In this work, we add a new method that detects transitions in dynamics based on the recurrences of the dynamical system. It combines bootstrapping and recurrence quantification analysis. We start from the assumption that a protein has a "baseline" recurrence structure over a given period of time. Any statistically significant deviation from this recurrence structure, as inferred from complexity measures provided by recurrence quantification analysis, is considered a transition in the dynamics of the protein. We apply this technique to a 132 ns long molecular dynamics simulation of the β-Lactamase Inhibitory Protein BLIP. We are able to detect conformational transitions in the nanosecond range in the recurrence dynamics of the BLIP protein during the simulation. The results compare favorably to those extracted using the principal component analysis technique. The recurrence quantification analysis based bootstrap technique is able to detect transitions between different dynamic states of a protein over different time scales. It is not limited to linear dynamics regimes, and can be generalized to any time scale. It also has the potential to be used to cluster frames in molecular dynamics trajectories according to the nature of their recurrence dynamics. One shortcoming of this method is the need for large enough time windows to ensure good statistical quality of the recurrence complexity measures needed to detect the transitions.
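A bare-bones version of the windowed recurrence analysis (our sketch: plain recurrence rate over sliding windows with a bootstrap baseline; the paper's procedure uses fuller RQA complexity measures and an MD trajectory rather than this toy signal):

```python
import numpy as np

def recurrence_rate(window, eps):
    """Fraction of recurrent point pairs within one trajectory window."""
    d = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=-1)
    return np.mean(d < eps)

def detect_transitions(traj, win=200, eps=1.0, n_boot=500, alpha=0.01):
    """Flag windows whose recurrence rate deviates from the bootstrap baseline."""
    rng = np.random.default_rng(1)
    starts = range(0, len(traj) - win, win)
    rr = np.array([recurrence_rate(traj[s:s + win], eps) for s in starts])

    # Baseline: recurrence rates of windows resampled from the whole trajectory.
    boot = np.array([
        recurrence_rate(traj[rng.integers(0, len(traj) - win):][:win], eps)
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return [s for s, r in zip(starts, rr) if r < lo or r > hi]

# Toy trajectory with a dynamics change halfway through:
t = np.linspace(0, 100, 2000)
traj = np.c_[np.sin(t), np.cos(t)]
traj[1000:] *= 3.0   # abrupt change in state-space scale
print(detect_transitions(traj))
```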
Understanding extreme sea levels for broad-scale coastal impact and adaptation analysis
NASA Astrophysics Data System (ADS)
Wahl, T.; Haigh, I. D.; Nicholls, R. J.; Arns, A.; Dangendorf, S.; Hinkel, J.; Slangen, A. B. A.
2017-07-01
One of the main consequences of mean sea level rise (SLR) on human settlements is an increase in flood risk due to an increase in the intensity and frequency of extreme sea levels (ESL). While substantial research efforts are directed towards quantifying projections and uncertainties of future global and regional SLR, corresponding uncertainties in contemporary ESL have not been assessed and projections are limited. Here we quantify, for the first time at global scale, the uncertainties in present-day ESL estimates, which have by default been ignored in broad-scale sea-level rise impact assessments to date. ESL uncertainties exceed those from global SLR projections and, assuming that we meet the Paris agreement goals, the projected SLR itself by the end of the century in many regions. Both uncertainties in SLR projections and ESL estimates need to be understood and combined to fully assess potential impacts and adaptation needs.
The Grand Challenge of Scale in Scientific Hydrology: Some Personal Reflections
NASA Astrophysics Data System (ADS)
Gupta, V. K.
2009-12-01
Scale issues in hydrology have shaped my entire scientific career. I first recognized the challenge of scale during the 1970s, in linking multi-scale hydrologic processes through collaborative work on solute transport in saturated porous media. Linking geometry, dynamics, and statistics, and the role of diagnostics in testing theoretical predictions against experimental observations, played a foundational role. This foundation has guided the rest of my multi-scale research on the larger space-time scales of river basins, regions, and the globe. After the blue book was published in 1991, NSF needed a broad, futuristic implementation plan for it, but did not communicate this need to Pete. I came to know of it only in 1998, after six years of pursuing an 'open-ended agenda' in which Doug played a key role, when the upper management of the Geosciences Directorate first mentioned to me that the blue book needed such a plan. This led to the Water, Earth, and Biota (WEB) report in 2000, following an NSF-funded workshop in 1999; the multi-scale nature of hydrology served as the central organizing theme for the WEB report. The history from 1984 to 2001 is summarized on the CUAHSI web page under "history", so I will share only a few personal reflections from this period. Where do we go from here? My perspective is that an urgent need exists to modernize the hydrology curriculum to include the progress that has been made in addressing multi-scale challenges. I will share some personal reflections, both intellectual and administrative, from my experience implementing a graduate hydrology science program at the University of Colorado after joining it in 1989.
NASA Astrophysics Data System (ADS)
McQuinn, Kristen B. W.; Skillman, Evan D.; Heilman, Taryn N.; Mitchell, Noah P.; Kelley, Tyler
2018-07-01
Winds are predicted to be ubiquitous in low-mass, actively star-forming galaxies. Observationally, winds have been detected in relatively few local dwarf galaxies, with even fewer constraints placed on their time-scales. Here, we compare galactic outflows traced by diffuse, soft X-ray emission from Chandra X-ray Observatory archival observations to the star formation histories derived from Hubble Space Telescope imaging of the resolved stellar populations in six starburst dwarfs. We constrain the longevity of a wind to have an upper limit of 25 Myr based on galaxies whose starburst activity has already declined, although a larger sample is needed to confirm this result. We find an average 16 per cent efficiency for converting the mechanical energy of stellar feedback to thermal, soft X-ray emission on the 25 Myr time-scale, somewhat higher than simulations predict. The outflows have likely been sustained for time-scales comparable to the duration of the starbursts (i.e., hundreds of Myr), after taking into account the time for the development and cessation of the wind. The wind time-scales imply that material is driven to larger distances in the circumgalactic medium than estimated by assuming short, 5-10 Myr starburst durations, and that less material is recycled back to the host galaxy on short time-scales. In the detected outflows, the expelled hot gas shows various morphologies that are not consistent with a simple biconical outflow structure. The sample and analysis are part of a larger program, the STARBurst IRregular Dwarf Survey (STARBIRDS), aimed at understanding the life cycle and impact of starburst activity in low-mass systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dustin Popp; Zander Mausolff; Sedat Goluoglu
We are proposing to use the code TDKENO to model TREAT. TDKENO solves the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons. Instead of directly integrating this equation, the neutron flux is factored into two components via the Improved Quasi-Static method: a purely time-dependent, rapidly varying amplitude function, and a slowly varying shape function that depends only weakly on time. Each is solved separately on a different time scale. The shape equation is solved using the 3D Monte Carlo transport code KENO, from Oak Ridge National Laboratory's SCALE code package. Using the Monte Carlo method to solve the shape equation is still computationally intensive, but the operation is performed only when needed (significantly larger time steps). The amplitude equation is solved deterministically and very frequently (small time steps), so the method yields an accurate time-dependent solution without having to repeatedly run the expensive Monte Carlo transport calculation. We have modified TDKENO to incorporate KENO-VI so that we may accurately represent the geometries within TREAT. This paper explains the motivation behind using generalized geometry and provides the results of our modifications.
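The split described above can be illustrated with the point-kinetics form of the amplitude equation, advanced with small steps while an expensive shape solve is invoked only occasionally. A hedged sketch: the six-group kinetics constants are generic textbook-order values, not TREAT data, and the KENO shape solve is stubbed out as a reactivity lookup.

```python
import numpy as np

# Illustrative quasi-static split: the amplitude n(t) obeys the point-kinetics
# equations advanced with small steps, while the expensive Monte Carlo shape
# solve (stubbed out below) is invoked only occasionally.
beta_i = np.array([2.1e-4, 1.4e-3, 1.3e-3, 2.6e-3, 7.5e-4, 2.7e-4])  # delayed fractions
lam_i = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])         # decay constants (1/s)
beta, Lam = beta_i.sum(), 5e-5                                       # total beta, generation time (s)

def shape_solve(t):
    """Stub for the KENO shape calculation; in TDKENO this is a full 3-D
    Monte Carlo transport solve that also refreshes kinetics parameters."""
    return 0.5 * beta if t < 1.0 else 0.0    # reactivity: step insertion, then zero

dt, shape_every = 1e-4, 1000                 # small amplitude step; shape update every 0.1 s
n = 1.0
c = beta_i / (lam_i * Lam)                   # equilibrium precursor concentrations for n = 1
for k in range(int(2.0 / dt)):
    t = k * dt
    if k % shape_every == 0:                 # infrequent, expensive shape update
        rho = shape_solve(t)
    n_new = n + dt * ((rho - beta) / Lam * n + lam_i @ c)
    c += dt * (beta_i / Lam * n - lam_i * c)
    n = n_new
print(f"amplitude after 2 s: {n:.3f}")
```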
Porta, Alberto; Bari, Vlasta; Marchi, Andrea; De Maria, Beatrice; Cysarz, Dirk; Van Leeuwen, Peter; Takahashi, Anielle C. M.; Catai, Aparecida M.; Gnecchi-Ruscone, Tomaso
2015-01-01
Two diverse complexity metrics quantifying time irreversibility and local prediction, in connection with a surrogate data approach, were utilized to detect nonlinear dynamics in short heart period (HP) variability series recorded in fetuses, as a function of the gestational period, and in healthy humans, as a function of the magnitude of the orthostatic challenge. The metrics indicated the presence of two distinct types of nonlinear HP dynamics characterized by diverse ranges of time scales. These findings stress the need to render more specific the analysis of nonlinear components of HP dynamics by accounting for different temporal scales. PMID:25806002
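The surrogate-data logic can be sketched compactly: compute a time-asymmetry statistic on the series and compare it against phase-randomized surrogates, which preserve the linear correlation structure but destroy nonlinear, time-irreversible structure. The increment-skewness statistic below is a generic stand-in for the specific irreversibility indices used in the HP literature:

```python
import numpy as np

def asymmetry(x):
    """Simple time-irreversibility statistic: skewness of the increments."""
    dx = np.diff(x)
    return np.mean(dx**3) / np.mean(dx**2)**1.5

def ft_surrogate(x, rng):
    """Phase-randomized surrogate: same linear (spectral) properties,
    nonlinear/time-asymmetric structure destroyed."""
    spec = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, spec.size)
    phases[0] = 0.0                      # keep the mean real
    if x.size % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist bin real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=x.size)

rng = np.random.default_rng(0)
# Toy "heart period" series with an asymmetric (slow rise, fast drop) waveform.
t = np.arange(2000)
x = (t % 50) / 50.0 + 0.05 * rng.standard_normal(t.size)

obs = asymmetry(x)
null = np.array([asymmetry(ft_surrogate(x, rng)) for _ in range(200)])
p = (np.abs(null) >= np.abs(obs)).mean()
print(f"irreversibility stat = {obs:.3f}, surrogate p ≈ {p:.3f}")
```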
A Novel Electronic Data Collection System for Large-Scale Surveys of Neglected Tropical Diseases
King, Jonathan D.; Buolamwini, Joy; Cromwell, Elizabeth A.; Panfel, Andrew; Teferi, Tesfaye; Zerihun, Mulat; Melak, Berhanu; Watson, Jessica; Tadesse, Zerihun; Vienneau, Danielle; Ngondi, Jeremiah; Utzinger, Jürg; Odermatt, Peter; Emerson, Paul M.
2013-01-01
Background: Large cross-sectional household surveys are common for measuring indicators of neglected tropical disease control programs. As an alternative to standard paper-based data collection, we utilized novel paperless technology to collect data electronically from over 12,000 households in Ethiopia. Methodology: We conducted a needs assessment to design an Android-based electronic data collection and management system. We then evaluated the system by reporting results of a pilot trial and from comparisons of two large-scale surveys, one with traditional paper questionnaires and the other with tablet computers, including accuracy, person-time days, and costs incurred. Principal Findings: The electronic data collection system met core functions in household surveys and overcame constraints identified in the needs assessment. Pilot data recorders took 264 sec (standard deviation (SD) 152 sec) and 260 sec (SD 122 sec) per person registered to complete household surveys using paper and tablets, respectively (P = 0.77). Data recorders felt a lack of connection with the interviewee during the first days using electronic devices, but preferred to collect data electronically in future surveys. Electronic data collection saved time by giving results immediately, obviating the need for double data entry and cross-correcting. The proportion of identified data entry errors in disease classification did not differ between the two data collection methods. Geographic coordinates collected using the tablets were more accurate than coordinates transcribed on a paper form. The cost of the equipment required for electronic data collection was approximately the same as the cost incurred for data entry of the paper questionnaires, and repeated use of the electronic equipment may increase cost savings. Conclusions/Significance: Conducting a needs assessment and pilot testing allowed the design to specifically match the functionality required for surveys. Electronic data collection using an Android-based technology was suitable for a large-scale health survey, saved time, provided more accurate geo-coordinates, and was preferred by recorders over standard paper-based questionnaires. PMID:24066147
EPIDEMIOLOGY IN RISK ASSESSMENT FOR REGULATORY POLICY
Epidemiology and risk assessment have several of the features needed to make the difficult decisions required in setting standards for levels of toxic agents in the workplace and environment. They differ in their aims, orientation, and time scale. While the distribution of disease...
Role of the BIPM in UTC Dissemination to the Real Time User
NASA Technical Reports Server (NTRS)
Quinn, T. J.; Thomas, C.
1996-01-01
The generation and dissemination of International Atomic Time (TAI) and Coordinated Universal Time (UTC) are explicitly mentioned in the list of principal tasks of the Bureau International des Poids et Mesures (BIPM) that appears in the Comptes Rendus of the 18th Conférence Générale des Poids et Mesures (1987). These time scales are used as the ultimate reference in the most demanding scientific applications and must, therefore, be of the best metrological quality in terms of reliability, long-term stability, and conformity of the scale interval with the second, the unit of time of the International System of Units. To meet these requirements, the readings of the atomic clocks spread all over the world that serve as basic timing data for TAI and UTC generation must be combined in the most efficient way possible. In particular, taking full advantage of the quality of each contributing clock calls for observation of its performance over a sufficiently long time. At present, the computation treats data in blocks covering two months. TAI and UTC are thus deferred-time scales that cannot be immediately available to real-time users. The BIPM can, nevertheless, be of help to real-time users. The predictability of UTC is a fundamental attribute of the scale for institutions responsible for the dissemination of real-time time scales. It allows them to improve their local representations of UTC and thus implement a more thorough steering of the time scales diffused in real time. With a view to improving the predictability of UTC, the BIPM examines in detail timing techniques and basic theories in order to propose alternative solutions for timing algorithms. This, coupled with a recent improvement of timing data, makes UTC more stable and thus more predictable. At a more practical level, effort is being devoted to putting in place automatic procedures for reducing the time needed for data collection and treatment: monthly results are already available ten days earlier than before.
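The heart of such a time scale is a weighted combination of many free-running clocks. A toy inverse-variance-weighted ensemble, schematic only and not the BIPM's actual ALGOS algorithm (which also predicts each clock's frequency and caps individual weights); all noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, sigmas = 1000, np.array([1e-13, 3e-13, 5e-14, 2e-13, 1e-13])
freq_offsets = rng.normal(0.0, 2e-14, sigmas.size)      # per-clock frequency offsets
noise = rng.normal(0.0, sigmas, (n_steps, sigmas.size))
x = np.cumsum(freq_offsets + noise, axis=0)             # clock phase deviations (s)

inst = np.var(np.diff(x, axis=0), axis=0)               # crude per-clock instability
w = (1.0 / inst) / (1.0 / inst).sum()                   # normalized inverse-variance weights
ensemble = x @ w                                        # ensemble time scale (phase)
print("clock weights:", np.round(w, 3))
```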
Topological defects after a quench in a Bénard-Marangoni convection system.
Casado, S; González-Viñas, W; Mancini, H; Boccaletti, S
2001-05-01
We report experimental evidence of the fact that, in a Bénard-Marangoni conduction-convection transition, the density of defects in the emerging structure scales as a power law in the quench time needed for the control parameter to ramp through the threshold. The obtained scaling exponents differ from the ones predicted and observed in the case in which the defects correspond to zeros in the amplitude of the global two-dimensional field.
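The reported scaling, defect density n ∝ τ_Q^(−σ), is typically estimated by a log-log fit; a minimal sketch on synthetic data (exponent and scatter invented for illustration):

```python
import numpy as np

# Illustrative fit of the defect-density scaling n ~ tau_q**(-sigma); synthetic
# data stand in for measured defect counts at several quench times.
rng = np.random.default_rng(0)
tau_q = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])          # quench times (a.u.)
n_def = 100.0 * tau_q**-0.25 * np.exp(rng.normal(0, 0.05, tau_q.size))

slope, intercept = np.polyfit(np.log(tau_q), np.log(n_def), 1)
print(f"estimated scaling exponent sigma = {-slope:.3f}")
```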
Sashika, Hironobu; Takada, Kaoruko; Kikuchi, Naohisa
2017-01-01
The purpose of this study was to clarify psychosocial factors/problems, social participation, quality of life (QOL), and rehabilitation needs in chronic-phase traumatic brain injury (TBI) patients with cognitive disorder discharged from the level-1 trauma center (L1-TC), and to inspect the effects of rehabilitation intervention on these subjects. A mixed-method research (cross-sectional and qualitative study) was conducted at an outpatient rehabilitation department. Inclusion criteria of subjects were transfer to the L1-TC due to TBI; acute-stage rehabilitation treatment received in the L1-TC from November 2006 to October 2011; age of ≥18 and <70 years at the time of injury; a score of 0–3 on the Modified Rankin Scale at discharge and that of 4–5 due to physical or severe aggressive behavioral comorbid disorders. Study details were sent, via mail, to 84 suitable candidates, of whom 36 replied. Thirty-one subjects (median age: 33.4 years; male: 17; and average time since injury: 48.1 months), who had consented to study participation, participated. Cognitive function, social participation, QOL, psychosocial factors/problems, rehabilitation needs, and chronic-phase rehabilitation outcomes were evaluated using the Wechsler Adult Intelligence Scale, Third Edition, the Wechsler Memory Scale-Revised, the Zung Self-Rating Depression Scale, the Sydney Psychosocial Reintegration Scale, Version 2, and the Short Form 36, Version 2, qualitative analysis of semistructured interviews, etc. Participants were classified into achieved-social-participation (n = 11; employed: 8), difficult-social-participation (n = 12; unemployed: 8), and no-cognitive-dysfunction groups (n = 8; no social participation restriction). Relative to the achieved-social-participation group, the difficult-social-participation group showed greater injury and cognitive dysfunction and lower Sydney Psychosocial Reintegration Scale and Short Form 36 role/social component summary scores (64.9/49.1 vs 44.3/30.4, respectively, P < 0.05). Linear regression analysis showed that the social participation status was greatly affected by the later cognitive disorders and psychosocial factors/problems, not by the severity of TBI. No changes were observed in these scores following chronic-phase rehabilitation intervention. Chronic-phase TBI with cognitive disorder led to rehabilitation needs, and improvement of subjects' psychosocial problems and QOL was difficult. PMID:28121947
NASA Astrophysics Data System (ADS)
Day, Danny
2006-04-01
Although 'negative emissions' of carbon dioxide need not, in principle, involve use of biological processes to draw carbon out of the atmosphere, such 'agricultural sequestration' is the only known way to remove carbon from the atmosphere on time scales comparable to the time scale for anthropogenic increases in carbon emissions. In order to maintain the 'negative emissions', the biomass must be used in such a way that the resulting carbon dioxide is separated and permanently sequestered. Two options for sequestration are in the topsoil and via geologic carbon sequestration. The former has multiple benefits, but the latter also is needed. Thus, although geologic carbon sequestration is viewed skeptically by some environmentalists as simply a way to keep using fossil fuels, it may be a key part of reversing accelerating climate forcing if rapid climate change is beginning to occur. I will first review the general approach of agricultural sequestration combined with use of the resulting biofuels in a way that permits carbon separation, and then geologic sequestration as a negative emissions technology. Then I discuss the process that is the focus of my company, the EPRIDA cycle. If deployed at a sufficiently large scale, it could reverse the increase in CO2 concentrations. I also estimate the benefits, carbon and other, of large-scale deployment of negative emissions technologies. For example, using the EPRIDA cycle to plant and soil-sequester carbon in an area about three times the size of Texas would remove the amount of carbon that is being accumulated worldwide each year. In addition to the atmospheric carbon removal, the EPRIDA approach also counters the depletion of carbon in the soil, increasing topsoil and its fertility; reduces the excess nitrogen in the water by eliminating the need for ammonium nitrate fertilizer; and reduces fossil fuel reliance by providing biofuel and avoiding natural gas based fertilizer production.
Estimating the need for dental sedation. 2. Using IOSN as a health needs assessment tool.
Pretty, I A; Goodwin, M; Coulthard, P; Bridgman, C M; Gough, L; Jenner, T; Sharif, M O
2011-09-09
This service evaluation assessed the need for sedation in a population of dental attenders (n = 607) in the North West of England. Using the novel IOSN tool, three clinical domains of sedation need were assessed: treatment complexity, medical and behavioural indicators, and patient-reported anxiety using the Modified Dental Anxiety Scale. The findings suggest that 5% of the population are likely to require a course of treatment under sedation at some time. All three clinical domains contributed to the IOSN score and indication of treatment need. Females were 3.8 times more likely than males to be placed within the high need for sedation group. Factors such as age, deprivation, and practice location were not associated with the need for sedation. Primary care trusts (PCTs) need health needs assessment data in order to commission effectively and in line with World Class Commissioning guidelines. This study provides both an indicative figure of need as well as a tool by which individual PCTs can undertake local health needs assessment work. Caution should be taken in interpreting this figure as the total need within a population, as the study included only those patients who attended dental practices.
Tang, G.; Yuan, F.; Bisht, G.; ...
2015-12-17
We explore coupling to a configurable subsurface reactive transport code as a flexible and extensible approach to biogeochemistry in land surface models; our goal is to facilitate testing of alternative models and incorporation of new understanding. A reaction network with the CLM-CN decomposition, nitrification, denitrification, and plant uptake is used as an example. We implement the reactions in the open-source PFLOTRAN code, coupled with the Community Land Model (CLM), and test at Arctic, temperate, and tropical sites. To make the reaction network designed for use in explicit time stepping in CLM compatible with the implicit time stepping used in PFLOTRAN, the Monod substrate rate-limiting function with a residual concentration is used to represent the limitation of nitrogen availability on plant uptake and immobilization. To achieve accurate, efficient, and robust numerical solutions, care needs to be taken to use scaling, clipping, or log transformation to avoid negative concentrations during the Newton iterations. With a tight relative update tolerance to avoid false convergence, an accurate solution can be achieved with about 50% more computing time than CLM in point-mode site simulations using either the scaling or clipping methods. The log transformation method takes 60-100% more computing time than CLM. The computing time increases slightly for clipping and scaling; it increases substantially for log transformation when the half saturation decreases from 10⁻³ to 10⁻⁹ mol m⁻³, which normally results in decreasing nitrogen concentrations. The frequent occurrence of very low concentrations (e.g., below nanomolar) can increase the computing time for clipping or scaling by about 20%; computing time can be doubled for log transformation. Caution needs to be taken in choosing the appropriate scaling factor because a small value caused by a negative update to a small concentration may diminish the update and result in false convergence even with a very tight relative update tolerance. As some biogeochemical processes (e.g., methane and nitrous oxide production and consumption) involve very low half saturation and threshold concentrations, this work provides insights for addressing nonphysical negativity issues and facilitates the representation of a mechanistic biogeochemical description in earth system models to reduce climate prediction uncertainty.
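The positivity issue discussed here is easy to reproduce in miniature. The sketch below takes one implicit-Euler step of a Monod uptake term and runs the Newton iteration in u = log(c), so the concentration stays positive by construction; parameter values are illustrative, and this is not PFLOTRAN's actual implementation:

```python
import numpy as np

# One implicit-Euler step of dc/dt = -vmax*c/(K + c), solved by Newton iteration
# in u = log(c). With the raw variable, a large Newton update from a small
# concentration can overshoot to c < 0 and would need clipping or scaling.
vmax, K, dt = 1.0, 1e-6, 0.5
c_old = 1e-8                                    # very low starting concentration

def newton_log(c_old, tol=1e-12, itmax=50):
    u = np.log(c_old)                           # unknown: u = log(c_new)
    for _ in range(itmax):
        c = np.exp(u)
        r = c - c_old + dt * vmax * c / (K + c)             # residual F
        drdu = c + dt * vmax * c * K / (K + c)**2           # dF/du = dF/dc * c
        du = -r / drdu
        u += du
        if abs(du) < tol * max(1.0, abs(u)):    # relative-update convergence test
            break
    return np.exp(u)

print(f"c_new = {newton_log(c_old):.3e}  (positive by construction)")
```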
A new framework for climate sensitivity and prediction: a modelling perspective
NASA Astrophysics Data System (ADS)
Ragone, Francesco; Lucarini, Valerio; Lunkeit, Frank
2016-03-01
The sensitivity of climate models to increasing CO2 concentration and the climate response at decadal time-scales are still major factors of uncertainty for the assessment of the long and short term effects of anthropogenic climate change. While the relatively slow progress on these issues is partly due to the inherent inaccuracies of numerical climate models, this also hints at the need for stronger theoretical foundations to the problem of studying climate sensitivity and performing climate change predictions with numerical models. Here we demonstrate that it is possible to use Ruelle's response theory to predict the impact of an arbitrary CO2 forcing scenario on the global surface temperature of a general circulation model. Response theory puts the concept of climate sensitivity on firm theoretical grounds, and addresses rigorously the problem of predictability at different time-scales. Conceptually, these results show that performing climate change experiments with general circulation models is a well-defined problem from a physical and mathematical point of view. Practically, these results show that considering one single CO2 forcing scenario is enough to construct operators able to predict the response of climatic observables to any other CO2 forcing scenario, without the need to perform additional numerical simulations. We also introduce a general relationship between climate sensitivity and climate response at different time scales, thus providing an explicit definition of the inertia of the system at different time scales. This technique allows also for studying systematically, for a large variety of forcing scenarios, the time horizon at which the climate change signal (in an ensemble sense) becomes statistically significant. While what we report here refers to the linear response, the general theory allows for treating nonlinear effects as well. These results pave the way for redesigning and interpreting climate change experiments from a radically new perspective.
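The operator idea can be sketched in a few lines of linear algebra: estimate a response (Green's) function from one forcing scenario by deconvolution, then predict any other scenario by convolution. A toy sketch with a synthetic exponential response function standing in for the GCM:

```python
import numpy as np

# Estimate G from the ensemble-mean response R1 to one known forcing f1, then
# predict the response to another scenario f2 as R2 = G * f2, without rerunning
# the model. All signals are synthetic; in practice R1 comes from GCM ensembles.
n = 200
t = np.arange(n)
G_true = 0.02 * np.exp(-t / 30.0)         # "true" response function (a.u.)

f1 = np.ones(n)                           # scenario 1: unit step forcing at t = 0
R1 = np.convolve(G_true, f1)[:n]          # modelled ensemble-mean response
F1 = np.tril(np.ones((n, n)))             # convolution matrix of the step forcing
G_est = np.linalg.solve(F1, R1)           # deconvolution recovers G

f2 = 0.01 * t                             # scenario 2: linear ramp forcing
R2_pred = np.convolve(G_est, f2)[:n]      # predicted without new simulations
R2_true = np.convolve(G_true, f2)[:n]
print(f"max prediction error: {np.abs(R2_pred - R2_true).max():.2e}")
```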
Hirano, Yoshitake; Nitta, Osamu; Hayashi, Takeshi; Takahashi, Hidetoshi; Miyazaki, Yasuhiro; Kigawa, Hiroshi
2017-07-01
For patients with severe hemiplegia in a rehabilitation hospital, early prediction of the functional prognosis and outcomes is challenging. The purpose of this study was to create and verify a prognostic scale for severely hemiplegic stroke patients, allowing prediction of (1) the ability to walk at the time of hospital discharge, (2) the ability to carry out activities of daily living (ADL), and (3) the feasibility of home discharge. The study was conducted on 80 severely hemiplegic stroke patients. The prognostic scale was built from the following items: the mini-mental state examination (MMSE) at the time of admission, the modified NIH stroke scale (m-NIHSS), the trunk control test (TCT), and the ratio of the knee extensor strength on the non-paralyzed side to the body weight (KES/BW-US). We verified the reliability and validity of this scale. A score of 56.8 or higher on the prognostic scale suggested that the patient would be able to walk and that assistance with ADL would be unnecessary at the time of hospital discharge. In addition, a score of 41.3 or higher indicated that the patient's return home was feasible. The scale showed good reliability, and its predictions agreed well with observed outcomes: the ability or inability to walk was predicted correctly in 85%, the need of assistance with ADL in 82.5%, and the feasibility of home return in 76.3% of cases. At the time of admission, four evaluation items thus permitted the prediction of three outcomes at the time of discharge, each with an accuracy of more than 76%. Copyright © 2017 Elsevier B.V. All rights reserved.
Schepp, Rutger M; Berbers, Guy A M; Ferreira, José A; Reimerink, Johan H; van der Klis, Fiona R
2017-03-01
Large-scale serosurveillance or vaccine studies for poliovirus using the "gold standard" WHO neutralisation test (NT) are very laborious and time consuming. With the polio eradication at hand and with the removal of live attenuated Sabin strains from the oral poliovirus vaccine (OPV), starting with type 2 (as of April 2016), laboratories will need to conform to much more stringent laboratory biosafety regulations when handling live poliovirus strains. In this study, a poliovirus binding inhibition multiplex immunoassay (polio MIA) using inactivated poliovirus vaccine (IPV-Salk) was developed for simultaneous quantification of serum antibodies directed to all three poliovirus types. Our assay shows a good correlation with the NT and an excellent correlation with the ELISA-based binding inhibition assay (POBI). The assay is highly type-specific and reproducible. Additionally, serum sample throughput increases about fivefold relative to NT and POBI and the amount of serum needed is reduced by more than 90%. In conclusion, the polio MIA can be used as a safe and high throughput application, especially for large-scale surveillance and vaccine studies, reducing laboratory time and serum amounts needed. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Tillery, Anne; Eggleston, Jack R.
2012-01-01
The six Middle Rio Grande Pueblos have prior and paramount rights to deliveries of water from the Rio Grande for their use. When the pueblos or the Bureau of Indian Affairs Designated Engineer identifies a need for additional flow on the Rio Grande, the Designated Engineer is tasked with deciding the timing and amount of releases of prior and paramount water from storage at El Vado Reservoir to meet the needs of the pueblos. Over the last three decades, numerous models have been developed by Federal, State, and local agencies in New Mexico to simulate, understand, and (or) manage flows in the Middle Rio Grande upstream from Elephant Butte Reservoir. In 2008, the Coalition of Six Middle Rio Grande Basin Pueblos entered into a cooperative agreement with the U.S. Geological Survey to conduct a comprehensive survey of these hydrologic models and their capacity to quantify and track various components of flow. The survey of hydrologic models provided in this report will help water-resource managers at the pueblos, as well as the Designated Engineer, make informed water-resource-management decisions that affect the prior and paramount water use. Analysis of 4 publicly available surface-water models and 13 publicly available groundwater models shows that, although elements from many models can be helpful in tracking flow in the Rio Grande, numerous data gaps and modeling needs indicate that accurate, consistent, and timely tracking of flow on the Rio Grande could be improved. Deficient or poorly constrained hydrologic variables are sources of uncertainty in hydrologic models that can be reduced with the acquisition of more refined data. Data gaps need to be filled to allow hydrologic models to be run on a real-time basis and thus ensure predictable water deliveries to meet needs for irrigation, domestic, stock, and other water uses. Timeliness of flow-data reporting is necessary to facilitate real-time model simulation, but even daily data are sometimes difficult to obtain because the data come from multiple sources. Each surface-water model produces results that could be helpful in quantifying the flow of the Rio Grande, specifically by helping to track water as it moves down the channel of the Rio Grande and by improving the understanding of river hydraulics for the specified reaches. The ability of each surface-water model to track flow on the Rio Grande varies according to the purpose for which each model was designed. The purpose of Upper Rio Grande Water Operations Model (URGWOM) - to simulate water storage and delivery operations in the Rio Grande - is more applicable to tracking flow on the Rio Grande than are any of the other surface-water models surveyed. Specifically, the strengths of URGWOM in relation to modeling flow are the details and attention given to the accounting of Rio Grande flow and San Juan-Chama flow at a daily time step. The most significant difficulty in using any of the surveyed surface-water models for the purpose of predicting the need for requested water releases is that none of the surface-water models surveyed consider water accounting on a real-time basis. Groundwater models that provide detailed simulations of shallow groundwater flow in the vicinity of the Rio Grande can provide large-scale estimates of flow between the Rio Grande and shallow aquifers, which can be an important component of the Rio Grande water budget as a whole. 
The groundwater models surveyed for this report cannot, however, be expected to provide simulations of flow at time scales of less than the simulated time step (1 month to 1 year in most cases). Of the currently used groundwater models, the purpose of model 13 - to simulate the shallow riparian groundwater environment - is the most appropriate for examining local-scale surface-water/groundwater interactions. The basin-scale models, however, are also important in understanding the large-scale water balances between the aquifers and the surface water. In the case of the Upper and Middle Rio Grande Valley, models 6, 10, and 12 are the most accurate and current groundwater models available.
Ram K. Deo; Matthew B. Russell; Grant M. Domke; Christopher W. Woodall; Michael J. Falkowski; Warren B. Cohen
2017-01-01
The publicly accessible archive of Landsat imagery and increasing regional-scale LiDAR acquisitions offer an opportunity to periodically estimate aboveground forest biomass (AGB) from 1990 to the present to align with the reporting needs of National Greenhouse Gas Inventories (NGHGIs). This study integrated Landsat time-series data, a state-wide LiDAR dataset, and a recent...
Time-localized wavelet multiple regression and correlation
NASA Astrophysics Data System (ADS)
Fernández-Macho, Javier
2018-02-01
This paper extends wavelet methodology to handle comovement dynamics of multivariate time series via moving weighted regression on wavelet coefficients. The concept of wavelet local multiple correlation is used to produce one single set of multiscale correlations along time, in contrast with the large number of wavelet correlation maps that need to be compared when using standard pairwise wavelet correlations with rolling windows. Also, the spectral properties of weight functions are investigated, and it is argued that some common time windows, such as the usual rectangular rolling window, are not satisfactory on these grounds. The method is illustrated with a multiscale analysis of the comovements of Eurozone stock markets during this century. It is shown how the evolution of the correlation structure in these markets has been far from homogeneous, both along time and across timescales, featuring an acute divide at about the quarterly scale. At longer scales, evidence from the long-term correlation structure can be interpreted as stable perfect integration among Euro stock markets. On the other hand, at intramonth and intraweek scales, the short-term correlation structure has been clearly evolving along time, experiencing a sharp increase during financial crises which may be interpreted as evidence of financial 'contagion'.
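The estimator can be sketched as follows (a schematic, not the paper's exact procedure): transform each series with a stationary wavelet transform, then, at each scale, slide a weighted window and compute the largest R² of regressing one series on the others from the inverse of the local correlation matrix:

```python
import numpy as np
import pywt

def wavelet_local_multiple_correlation(X, wavelet="db4", level=4, win=64):
    """Sketch of wavelet local multiple correlation. X has shape
    (n_obs, n_series); n_obs must be divisible by 2**level. pywt's stationary
    wavelet transform stands in for the MODWT; its output list is ordered from
    the deepest level down to level 1."""
    n, k = X.shape
    half = win // 2
    g = np.exp(-0.5 * ((np.arange(win) - half) / (win / 4.0)) ** 2)  # Gaussian weights
    coeffs = [pywt.swt(X[:, i], wavelet, level=level) for i in range(k)]
    out = {}
    for j in range(1, level + 1):
        D = np.column_stack([coeffs[i][level - j][1] for i in range(k)])
        wmc = np.full(n, np.nan)
        for t in range(half, n - half):
            P = np.corrcoef(D[t - half:t + half] * g[:, None], rowvar=False)
            # R^2 of the best-explained series: 1 - 1/max diag of inv(P).
            wmc[t] = np.sqrt(1.0 - 1.0 / np.linalg.inv(P).diagonal().max())
        out[j] = wmc
    return out

# Toy example: three series whose comovement switches on halfway through.
rng = np.random.default_rng(0)
n = 512
common = rng.standard_normal(n)
X = rng.standard_normal((n, 3))
X[n // 2:] += 2.0 * common[n // 2:, None]
res = wavelet_local_multiple_correlation(X)
print("scale-1 WMC means (first half, second half):",
      np.nanmean(res[1][: n // 2]), np.nanmean(res[1][n // 2:]))
```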
Rapp, Thomas; Lacey, Loretto; Ousset, Pierre-Jean; Cowppli-Bony, Pascale; Vellas, Bruno; Orgogozo, Jean-Marc
2015-07-01
It is crucial to define health policies that target patients with the highest needs. In France, public financial support is provided to dependent patients: it can be used to finance informal care time and nonmedical care use. Eligibility for public subsidies and reimbursement of costs is associated with a specific tool: the autonomie gérontologie groupes iso-ressources (AGGIR) scale score. Our objective was to explore whether patients with Alzheimer's disease who are eligible for public financial support have greater needs than do noneligible patients. Using data from the Dépendance des patients atteints de la maladie d'Alzheimer en France study, we calculated nonmedical care expenditures (in €) using microcosting methods and informal care time demand (hours/month) using the Resource Use in Dementia questionnaire. We measured the burden associated with informal care provision with the Zarit Burden Interview. We used a modified two-part model to explore the correlation between public financial support eligibility and these three variables. We find evidence of higher informal care use, higher informal caregivers' burden, and higher care expenditures when patients have an AGGIR scale score corresponding to public financial support eligibility. The AGGIR scale is useful to target patients with the highest costs and needs. Given our results, public subsidies could be used to further sustain informal caregiver networks by financing programs dedicated to lowering informal caregivers' burden. Copyright © 2015 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Agroforestry, climate change, and food security
USDA-ARS?s Scientific Manuscript database
Successfully addressing global climate change effects on agriculture will require a holistic, sustained approach incorporating a suite of strategies at multiple spatial scales and time horizons. In the USA of the 1930’s, bold and innovative leadership at high levels of government was needed to enact...
Kirchner, James W.; Neal, Colin
2013-01-01
The chemical dynamics of lakes and streams affect their suitability as aquatic habitats and as water supplies for human needs. Because water quality is typically monitored only weekly or monthly, however, the higher-frequency dynamics of stream chemistry have remained largely invisible. To illuminate a wider spectrum of water quality dynamics, rainfall and streamflow were sampled in two headwater catchments at Plynlimon, Wales, at 7-h intervals for 1–2 y and weekly for over two decades, and were analyzed for 45 solutes spanning the periodic table from H+ to U. Here we show that in streamflow, all 45 of these solutes, including nutrients, trace elements, and toxic metals, exhibit fractal 1/fα scaling on time scales from hours to decades (α = 1.05 ± 0.15, mean ± SD). We show that this fractal scaling can arise through dispersion of random chemical inputs distributed across a catchment. These 1/f time series are non–self-averaging: monthly, yearly, or decadal averages are approximately as variable, one from the next, as individual measurements taken hours or days apart, defying naive statistical expectations. (By contrast, stream discharge itself is nonfractal, and self-averaging on time scales of months and longer.) In the solute time series, statistically significant trends arise much more frequently, on all time scales, than one would expect from conventional t statistics. However, these same trends are poor predictors of future trends—much poorer than one would expect from their calculated uncertainties. Our results illustrate how 1/f time series pose fundamental challenges to trend analysis and change detection in environmental systems. PMID:23842090
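The 1/f^α diagnostic reported here is, in essence, a log-log slope of the power spectrum. A minimal sketch, assuming evenly sampled data and using synthetic 1/f noise in place of the Plynlimon solute series:

```python
import numpy as np
from scipy.signal import welch

# Build a synthetic series with a 1/f^alpha spectrum by shaping white noise in
# Fourier space, then recover alpha from a log-log fit to the Welch PSD.
rng = np.random.default_rng(0)
n, alpha_true = 16384, 1.0
freqs = np.fft.rfftfreq(n, d=1.0)
spec = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
spec[1:] *= freqs[1:] ** (-alpha_true / 2)      # amplitude ~ f^(-alpha/2)
spec[0] = 0.0
x = np.fft.irfft(spec, n=n)

f, pxx = welch(x, nperseg=2048)
mask = f > 0
slope, _ = np.polyfit(np.log(f[mask]), np.log(pxx[mask]), 1)
print(f"estimated alpha = {-slope:.2f} (true {alpha_true})")
```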
NASA Astrophysics Data System (ADS)
Philipp, Andy; Kerl, Florian; Büttner, Uwe; Metzkes, Christine; Singer, Thomas; Wagner, Michael; Schütze, Niels
2016-05-01
In recent years, the Free State of Saxony (Eastern Germany) was repeatedly hit both by extensive riverine flooding and by flash flood events emerging foremost from convective heavy rainfall. Especially after a couple of small-scale yet disastrous events in 2010, the preconditions, drivers, and methods for deriving flash flood related early warning products are investigated. This is to clarify the feasibility and the limits of envisaged early warning procedures for small catchments hit by flashy heavy rain events. Early warning about potentially flash flood prone situations (i.e., with a lead time suited to the reaction-time needs of the stakeholders involved in flood risk management) needs to take into account not only hydrological, but also meteorological and communication issues. Therefore, we propose a threefold methodology to identify potential benefits and limitations in a real-world warning/reaction context. First, the user demands (with respect to desired/required warning products, preparation times, etc.) are investigated. Second, focusing on small catchments of some hundred square kilometers, two quantitative precipitation forecast (QPF) products are verified. Third, considering the user needs, as well as the input parameter uncertainty (i.e., foremost emerging from an uncertain QPF), a feasible, yet robust hydrological modeling approach is proposed on the basis of pilot studies, employing deterministic, data-driven, and simple scoring methods.
NASA Astrophysics Data System (ADS)
Malakhova, Valentina V.; Eliseev, Alexey V.
2017-10-01
Climate warming may lead to degradation of the subsea permafrost developed during Pleistocene glaciations and release methane from the hydrates, which are stored in this permafrost. It is important to quantify time scales at which this release is plausible. While, in principle, such time scale might be inferred from paleoarchives, this is hampered by considerable uncertainty associated with paleodata. In the present paper, to reduce such uncertainty, one-dimensional simulations with a model for thermal state of subsea sediments forced by the data obtained from the ice core reconstructions are performed. It is shown that heat propagates in the sediments with a time scale of ∼ 10-20 kyr. This time scale is longer than the present interglacial and is determined by the time needed for heat penetration in the unfrozen part of thick sediments. We highlight also that timings of shelf exposure during oceanic regressions and flooding during transgressions are important for simulating thermal state of the sediments and methane hydrates stability zone (HSZ). These timings should be resolved with respect to the contemporary shelf depth (SD). During glacial cycles, the temperature at the top of the sediments is a major driver for moving the HSZ vertical boundaries irrespective of SD. In turn, pressure due to oceanic water is additionally important for SD ≥ 50 m. Thus, oceanic transgressions and regressions do not instantly determine onsets of HSZ and/or its disappearance. Finally, impact of initial conditions in the subsea sediments is lost after ∼ 100 kyr. Our results are moderately sensitive to intensity of geothermal heat flux.
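The quoted 10-20 kyr is consistent with the standard diffusive estimate τ ≈ L²/κ; a back-of-envelope sketch with assumed order-of-magnitude values (not parameters from the study):

```python
# Diffusive time across a sediment layer: tau ~ L**2 / kappa. The thickness and
# thermal diffusivity below are illustrative assumptions only.
kappa = 1.0e-6        # thermal diffusivity of sediments (m^2/s), typical order
L = 600.0             # unfrozen sediment thickness (m), assumed

tau_s = L**2 / kappa
tau_kyr = tau_s / (3600 * 24 * 365.25 * 1000)
print(f"diffusive time scale ≈ {tau_kyr:.0f} kyr")
```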
NASA Astrophysics Data System (ADS)
Patel, Ravi A.; Perko, Janez; Jacques, Diederik
2017-04-01
Often, especially in disciplines related to natural porous media, such as vadose zone or aquifer hydrology or contaminant transport, the spatial and temporal scales on which we need to provide information are larger than the scales where the processes actually occur. Usual techniques for dealing with these problems assume the existence of a representative elementary volume (REV). However, in order to understand the behavior on larger scales it is important to downscale the problem onto the relevant scale of the processes. Due to limitations of resources (time, memory), the downscaling can only be carried to a certain lower scale. At this lower scale, several scales may still co-exist: the scale which can be explicitly described, and a scale which needs to be conceptualized by effective properties. Hence, models which are supposed to provide effective properties on relevant scales should therefore be flexible enough to represent complex pore structure by explicit geometry on one side, and differently defined processes (e.g., by effective properties) which emerge at the lower scale on the other. In this work we present a state-of-the-art lattice Boltzmann method based simulation tool applicable to the advection-diffusion equation coupled to geochemical processes. The lattice Boltzmann transport solver can be coupled with an external geochemical solver, which allows a wide range of geochemical reaction networks to be accounted for through thermodynamic databases. Extension to multiphase systems is ongoing. We provide several examples related to the calculation of effective diffusion properties, permeability, and effective reaction rates on a continuum scale based on the pore-scale geometry.
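As a concrete (if drastically simplified) illustration of the transport component, a few dozen lines suffice for a D1Q3 lattice Boltzmann diffusion solver; geometry, advection, and the geochemical coupling described above are all omitted:

```python
import numpy as np

# Minimal D1Q3 lattice Boltzmann solver for pure diffusion (lattice units).
# A delta pulse of concentration should spread with variance 2*D*t.
nx, nt = 200, 2000
tau = 0.8                                  # BGK relaxation time
D = (tau - 0.5) / 3.0                      # diffusivity for c_s^2 = 1/3
w = np.array([4.0, 1.0, 1.0]) / 6.0        # weights for velocities {0, +1, -1}

f = np.zeros((3, nx))
f[:, nx // 2] = w                          # delta pulse at the centre
for _ in range(nt):
    C = f.sum(axis=0)                      # concentration = zeroth moment
    feq = w[:, None] * C                   # equilibrium distributions
    f += (feq - f) / tau                   # BGK collision
    f[1] = np.roll(f[1], 1)                # stream +1 (periodic boundaries)
    f[2] = np.roll(f[2], -1)               # stream -1

C = f.sum(axis=0)
x = np.arange(nx) - nx // 2
var = (C * x**2).sum() / C.sum()
print(f"measured variance = {var:.1f}, expected 2*D*t = {2 * D * nt:.1f}")
```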
Goswami, Prashant; Nishad, Shiv Narayan
2015-01-01
Assessment and policy design for sustainability in primary resources like arable land and water need to adopt a long-term perspective; even small but persistent effects like net export of water may influence sustainability through irreversible losses. With growing consumption, this virtual water trade has become an important element in the water sustainability of a nation. We estimate and contrast the virtual (embedded) water trades of two populous nations, India and China, to present certain quantitative measures and time scales. Estimates show that export of embedded water alone can lead to loss of water sustainability. At the current rate of net export of water embedded in end products, India is poised to lose its entire available water in less than 1000 years; much shorter time scales are implied in terms of water for production. The two cases contrast and exemplify sustainable and non-sustainable virtual water trade in a long-term perspective. PMID:25790964
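The depletion-horizon arithmetic behind such statements is simple stock-over-flux division; a sketch with hypothetical placeholder numbers, not the paper's estimates:

```python
# Illustrative depletion-time arithmetic for net virtual water export; both
# figures below are hypothetical placeholders.
available_water_km3 = 1900.0        # assumed utilizable water stock (km^3)
net_export_km3_per_yr = 2.0         # assumed net virtual water export (km^3/yr)

years_to_depletion = available_water_km3 / net_export_km3_per_yr
print(f"depletion horizon ≈ {years_to_depletion:.0f} years")
```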
Acoustic streaming: an arbitrary Lagrangian-Eulerian perspective.
Nama, Nitesh; Huang, Tony Jun; Costanzo, Francesco
2017-08-25
We analyse acoustic streaming flows using an arbitrary Lagrangian Eulerian (ALE) perspective. The formulation stems from an explicit separation of time scales resulting in two subproblems: a first-order problem, formulated in terms of the fluid displacement at the fast scale, and a second-order problem, formulated in terms of the Lagrangian flow velocity at the slow time scale. Following a rigorous time-averaging procedure, the second-order problem is shown to be intrinsically steady, and with exact boundary conditions at the oscillating walls. Also, as the second-order problem is solved directly for the Lagrangian velocity, the formulation does not need to employ the notion of Stokes drift, or any associated post-processing, thus facilitating a direct comparison with experiments. Because the first-order problem is formulated in terms of the displacement field, our formulation is directly applicable to more complex fluid-structure interaction problems in microacoustofluidic devices. After the formulation's exposition, we present numerical results that illustrate the advantages of the formulation with respect to current approaches.
Amati, Rebecca; Hannawa, Annegret F
2015-01-01
Communication is undoubtedly a critical element of competent end-of-life care. However, physicians commonly lack communication skills in this particular care context. Theoretically grounded, evidence-based guidelines are needed to enhance physicians' communication with patients and their families in this important time of their lives. To address this need, this study tests and validates a Contradictions in End-of-Life Communication (CEOLC) scale, which disentangles the relational contradictions physicians commonly experience when communicating with end-of-life patients. Exploratory factor analysis confirmed the presence of eight physician-perceived dialectical tensions, reflecting three latent factors of (1) integration, (2) expression, and (3) dominance. Furthermore, a number of significant intercultural differences were found in cross-cultural comparisons of the scale in U.S., Swiss, and Italian physician samples. Thus, this investigation introduces a heuristic assessment tool that aids a better understanding of the dialectical contradictions physicians experience in their interactions with end-of-life patients. The CEOLC scale can be used to gather empirical evidence that may eventually support the development of evidence-based guidelines and skills training toward improved end-of-life care.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Weizhao; Ren, Huaqing; Wang, Zequn
2016-10-19
An integrated computational materials engineering method is proposed in this paper for analyzing the design and preforming process of woven carbon fiber composites. The goal is to reduce the cost and time needed for the mass production of structural composites. It integrates simulation methods from the micro-scale to the macro-scale to capture the behavior of the composite material in the preforming process. In this way, the time-consuming and high-cost physical experiments and prototypes in the development of the manufacturing process can be circumvented. This method contains three parts: the micro-scale representative volume element (RVE) simulation to characterize the material; the metamodeling algorithm to generate the constitutive equations; and the macro-scale preforming simulation to predict the behavior of the composite material during forming. The results show the potential of this approach as a guide to the design of composite materials and their manufacturing processes.
Investigation of the near subsurface using acoustic to seismic coupling
USDA-ARS?s Scientific Manuscript database
Agricultural, hydrological and civil engineering applications have realized a need for information of the near subsurface over large areas. In order to obtain this spatially distributed data over such scales, the measurement technique must be highly mobile with a short acquisition time. Therefore, s...
Isolation predicts compositional change after discrete disturbances in a global meta-study
USDA-ARS?s Scientific Manuscript database
Globally, anthropogenic disturbances are occurring at unprecedented rates and over extensive spatial and temporal scales. Human activities also affect natural disturbances, prompting shifts in their timing and intensities. Thus, there is an urgent need to understand and predict the response of ecosy...
Using the cloud to speed-up calibration of watershed-scale hydrologic models (Invited)
NASA Astrophysics Data System (ADS)
Goodall, J. L.; Ercan, M. B.; Castronova, A. M.; Humphrey, M.; Beekwilder, N.; Steele, J.; Kim, I.
2013-12-01
This research focuses on using the cloud to address computational challenges associated with hydrologic modeling. One example is calibration of a watershed-scale hydrologic model, which can take days of execution time on typical computers. While parallel algorithms for model calibration exist and some researchers have used multi-core computers or clusters to run these algorithms, these solutions do not fully address the challenge because (i) calibration can still be too time consuming even on multicore personal computers and (ii) few in the community have the time and expertise needed to manage a compute cluster. Given this, another option for addressing this challenge that we are exploring through this work is the use of the cloud for speeding up calibration of watershed-scale hydrologic models. The cloud used in this capacity provides a means for renting a specific number and type of machines for only the time needed to perform a calibration run. The cloud allows one to precisely balance the duration of the calibration with the financial costs so that, if the budget allows, the calibration can be performed more quickly by renting more machines. Focusing specifically on the SWAT hydrologic model and a parallel version of the DDS calibration algorithm, we show significant speed-ups across a range of watershed sizes, using up to 256 cores to perform a model calibration. The tool provides a simple web-based user interface and the ability to monitor job submission and progress during calibration. Finally, this talk concludes with initial work to leverage the cloud for other tasks associated with hydrologic modeling, including tasks related to preparing inputs for constructing place-based hydrologic models.
NASA Astrophysics Data System (ADS)
Del Rio Amador, Lenin; Lovejoy, Shaun
2017-04-01
Over the past ten years, a key advance in our understanding of atmospheric variability is the discovery that between the weather and climate regimes lies an intermediate "macroweather" regime, spanning the range of scales from ≈10 days to ≈30 years. Macroweather statistics are characterized by two fundamental symmetries: scaling and the factorization of the joint space-time statistics. In the time domain, the scaling has low intermittency, with the additional property that successive fluctuations tend to cancel. In space, on the contrary, the scaling has high (multifractal) intermittency corresponding to the existence of different climate zones. These properties have fundamental implications for macroweather forecasting: a) the temporal scaling implies that the system has a long-range memory that can be exploited for forecasting; b) the low temporal intermittency implies that mathematically well-established (Gaussian) forecasting techniques can be used; and c) the statistical factorization property implies that although spatial correlations (including teleconnections) may be large, if long enough time series are available they are not necessarily useful in improving forecasts. Theoretically, these conditions imply the existence of stochastic predictability limits; in our talk, we show that these limits apply to GCMs. Based on these statistical implications, we developed the Stochastic Seasonal and Interannual Prediction System (StocSIPS) for the prediction of temperature from regional to global scales and from one-month to multiyear horizons. One of the main components of StocSIPS is the separation and prediction of both the internal and the externally forced variabilities. In order to test the theoretical assumptions and their consequences for predictability and predictions, we used 41 different CMIP5 model outputs from preindustrial control runs that have fixed external forcings, whose variability is therefore purely internally generated. We first show that these statistical assumptions hold with relatively good accuracy, and we then performed hindcasts at global and regional scales from monthly to annual time resolutions using StocSIPS. We obtained excellent agreement between the hindcast Mean Square Skill Score (MSSS) and the theoretical stochastic limits. We also show the application of StocSIPS to the prediction of average global temperature and compare our results with those obtained using multi-model ensemble approaches. StocSIPS has numerous advantages, including a) higher MSSS for large time horizons, b) convergence to the real - not model - climate, c) much higher computational speed, d) no need for data assimilation, e) no ad hoc post-processing, and f) no need for downscaling.
Dimensionless Numbers For Morphological, Thermal And Biogeochemical Controls Of Hyporheic Processes
NASA Astrophysics Data System (ADS)
Bellin, Alberto; Marzadri, Alessandra; Tonina, Daniele
2013-04-01
Transport of solutes and heat within the hyporheic zone are interface processes that have gained growing attention in the last decade, during which several modelling strategies have been proposed, mainly at the local or reach scale. We propose to upscale local hyporheic biogeochemical processes to reach and network scales by means of a Lagrangian modelling framework, which allows one to consider the impact of the flow structure on the modelled processes. This analysis shows that geochemical processes can be parametrized through two new Damköhler numbers, DaO and DaT. DaO = τup,50/τlim is defined as the ratio between the median hyporheic residence time, τup,50, and the time needed to consume dissolved oxygen down to a prescribed threshold concentration, τlim, below which reductive reactions are activated. It quantifies the biogeochemical status of the hyporheic zone and could be a metric for upscaling local hyporheic biogeochemical processes to reach and river-network scale processes. Here, τup,50 is the time scale of hyporheic advection, while τlim is the representative time scale of biogeochemical reactions and indicates the distance along the streamline, measured as the time needed to travel that distance, that a particle of water travels before the dissolved oxygen concentration declines to [DO]lim, the value at which denitrification is activated. We show that DaO is representative of the redox status and indicates whether the hyporheic zone is a source or a sink of nitrate. Values of DaO larger than 1 indicate prevailing anaerobic conditions, whereas values smaller than 1 indicate prevailing aerobic conditions. Similarly, DaT quantifies the importance of the daily temperature oscillations of the stream water on the hyporheic environment. It is defined as the ratio between τup,50 and the travel time at which the ratio between the amplitude of the temperature oscillation within the hyporheic zone (evaluated along the streamline) and in the stream water becomes smaller than e^-1. We show that values of DaT > 1 indicate a thermally stable hyporheic zone, where organism metabolism is not influenced by surface water thermal oscillations and biogeochemical reaction rates depend on the mean daily stream water temperature. Values smaller than 1 suggest that organisms need to adapt to the daily thermal variations and biogeochemical reaction rates will depend on the daily fluctuations induced by stream water.
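A minimal numeric illustration of the two dimensionless numbers follows; the residence, reaction, and thermal-damping time scales are hypothetical sample values chosen only to show the source/sink and thermal-stability classifications.

```python
# Hyporheic Damköhler numbers; sample time scales are hypothetical.
def damkohler_oxygen(tau_up50_h, tau_lim_h):
    """DaO = tau_up,50 / tau_lim: >1 suggests anaerobic (nitrate sink),
    <1 suggests aerobic (potential nitrate source) conditions."""
    return tau_up50_h / tau_lim_h

def damkohler_thermal(tau_up50_h, tau_damp_h):
    """DaT = tau_up,50 / tau_damp, where tau_damp is the travel time at which
    the in-zone daily temperature amplitude falls below 1/e of the stream's.
    >1 suggests a thermally stable (buffered) hyporheic zone."""
    return tau_up50_h / tau_damp_h

da_o = damkohler_oxygen(tau_up50_h=36.0, tau_lim_h=12.0)    # hypothetical hours
da_t = damkohler_thermal(tau_up50_h=36.0, tau_damp_h=48.0)
print("DaO =", da_o, "->", "anaerobic/sink" if da_o > 1 else "aerobic/source")
print("DaT =", da_t, "->", "thermally stable" if da_t > 1 else "tracks daily cycle")
```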
The scaling of geographic ranges: implications for species distribution models
Yackulic, Charles B.; Ginsberg, Joshua R.
2016-01-01
There is a need for timely science to inform policy and management decisions; however, we must also strive to provide predictions that best reflect our understanding of ecological systems. Species distributions evolve through time and reflect responses to environmental conditions that are mediated through individual and population processes. Species distribution models that reflect this understanding, and explicitly model dynamics, are likely to give more accurate predictions.
NASA Astrophysics Data System (ADS)
Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann
2009-02-01
Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
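The scaling step itself reduces to a simple weighted Beer-Lambert correction, sketched below under stated assumptions: the zero-absorption curve R0(t) and the per-layer path fractions are hypothetical placeholders for quantities that, in the paper, come from the MC simulation and the closed-form average photon path.

```python
# Weighted Beer-Lambert scaling of a zero-absorption reflectance curve.
# R0(t) and the layer fractions below are hypothetical stand-ins.
import numpy as np

C_MM_PER_PS = 0.3 / 1.4   # speed of light in tissue (n ~ 1.4), mm/ps

def scale_reflectance(t_ps, r0, frac_layer, mua_per_mm):
    """t_ps: times (ps); r0: zero-absorption reflectance at those times;
    frac_layer: (n_layers, n_times) fraction of the path in each layer;
    mua_per_mm: absorption coefficient of each layer (1/mm)."""
    path_mm = C_MM_PER_PS * t_ps                       # total pathlength at time t
    atten = np.exp(-np.einsum("l,lt->t", mua_per_mm, frac_layer) * path_mm)
    return r0 * atten

t = np.linspace(10, 2000, 200)                          # ps
r0 = t ** -1.5 * np.exp(-t / 800.0)                     # hypothetical R0(t) shape
frac = np.vstack([np.full_like(t, 0.7), np.full_like(t, 0.3)])  # two layers
print(scale_reflectance(t, r0, frac, mua_per_mm=np.array([0.01, 0.02]))[:5])
```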
Chopping Time of the FPU α-Model
NASA Astrophysics Data System (ADS)
Carati, A.; Ponno, A.
2018-03-01
We study, both numerically and analytically, the time needed to observe the breaking of an FPU α-chain in two or more pieces, starting from an unbroken configuration at a given temperature. It is found that such a "chopping" time is given by a formula that, at low temperatures, is of the Arrhenius-Kramers form, so that the chain does not break up on an observable time-scale. The result explains why the study of the FPU problem is meaningful also in the ill-posed case of the α-model.
NASA Astrophysics Data System (ADS)
Rowlands, G.; Kiyani, K. H.; Chapman, S. C.; Watkins, N. W.
2009-12-01
Quantitative analysis of solar wind fluctuations is often performed in the context of intermittent turbulence and centers on methods to quantify statistical scaling, such as power spectra and structure functions, which assume a stationary process. The solar wind exhibits large-scale secular changes, so the question arises as to whether the time series of the fluctuations is non-stationary. One approach is to seek local stationarity by parsing the time interval over which statistical analysis is performed; natural systems such as the solar wind thus unavoidably provide observations over restricted intervals. Consequently, due to a reduction of sample size leading to poorer estimates, a stationary stochastic process (time series) can yield anomalous time variation in the scaling exponents, suggestive of non-stationarity. The variance in the estimates of scaling exponents computed from an interval of N observations is known, for finite-variance processes, to vary as ~1/N as N becomes large for certain statistical estimators; however, the convergence to this behavior depends on the details of the process and may be slow. We study the variation in the scaling of second-order moments of the time-series increments with N for a variety of synthetic and “real world” time series, and we find that, in particular for heavy-tailed processes, for realizable N one is far from this ~1/N limiting behavior. We propose a semi-empirical estimate for the minimum N needed to make a meaningful estimate of the scaling exponents for model stochastic processes and compare these with some “real world” time series from the solar wind. With fewer data points the stationary time series becomes indistinguishable from a non-stationary process, and we illustrate this with non-stationary synthetic datasets. Reference article: K. H. Kiyani, S. C. Chapman and N. W. Watkins, Phys. Rev. E 79, 036109 (2009).
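The convergence issue can be made concrete with a small numerical experiment, sketched below: the second-order structure-function exponent is estimated from many independent intervals of length N of a heavy-tailed toy process, and the spread of the estimates is tracked as N grows. The process and estimator settings are illustrative assumptions, not the paper's datasets.

```python
# Spread of scaling-exponent estimates vs. interval length N (toy experiment).
import numpy as np

def zeta2_estimate(x, lags):
    # S2(tau) = <|x(t+tau)-x(t)|^2> ~ tau^zeta2; fit the log-log slope.
    s2 = [np.mean((x[l:] - x[:-l]) ** 2) for l in lags]
    return np.polyfit(np.log(lags), np.log(s2), 1)[0]

rng = np.random.default_rng(1)
lags = np.unique(np.logspace(0, 2, 12).astype(int))
for N in (10**3, 10**4, 10**5):
    ests = []
    for _ in range(50):  # 50 independent intervals of length N
        walk = np.cumsum(rng.standard_t(df=3, size=N))  # heavy-tailed increments
        ests.append(zeta2_estimate(walk, lags))
    print(f"N={N:>6}: mean zeta2={np.mean(ests):.3f}, var={np.var(ests):.2e}")
```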
NASA Astrophysics Data System (ADS)
Gouveia, C. M.; Trigo, R. M.; Beguería, S.; Vicente-Serrano, S. M.
2017-04-01
The present work analyzes drought impacts on vegetation over the entire Mediterranean basin, with the purpose of determining the vegetation communities, regions and seasons in which vegetation is driven by drought. Our approach is based on the use of remote sensing data and a multi-scalar drought index. Correlation maps between fields of monthly Normalized Difference Vegetation Index (NDVI) and the Standardized Precipitation-Evapotranspiration Index (SPEI) at different time scales (1-24 months) were computed for representative months of winter (Feb), spring (May), summer (Aug) and fall (Nov). Results for the period from 1982 to 2006 show large areas highly controlled by drought, although with high spatial and seasonal differences, with a maximum influence in August and a minimum in February. The highest correlation values are observed in February at the 3-month time scale and in May at the 6- and 12-month scales. The stronger control of drought on vegetation in February and May is found mainly over the drier vegetation communities (Mediterranean Dry and Desertic) at shorter time scales (3 to 9 months). Additionally, in February the impact of drought on vegetation is lower for Temperate Oceanic and Continental vegetation types and takes place at longer time scales (18-24 months). The dependence of the drought time-scale response on water balance, as obtained through a simple difference between precipitation and reference evapotranspiration, varies with vegetation communities. During February and November, low water balance values correspond to shorter time scales over dry vegetation communities, whereas high water balance values imply longer time scales over Temperate Oceanic and Continental areas. The strong control of drought on vegetation observed for Mediterranean Dry and Desertic vegetation types located over areas with highly negative water balance emphasizes the need for an early warning drought system covering the entire Mediterranean basin. We are confident that these results will provide a useful tool for drought management plans and play a relevant role in mitigating the impact of drought episodes.
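A sketch of the per-pixel correlation analysis follows, with random arrays standing in for the NDVI and SPEI fields; it computes the NDVI-SPEI correlation at several time scales for each grid cell and records the scale of maximum correlation, as in the analysis described above.

```python
# Per-pixel NDVI-SPEI correlation across drought time scales (toy fields).
import numpy as np

rng = np.random.default_rng(0)
n_years, ny, nx = 25, 40, 60
scales = (1, 3, 6, 12, 24)
ndvi = rng.random((n_years, ny, nx))                        # e.g., all Augusts
spei = {s: rng.standard_normal((n_years, ny, nx)) for s in scales}

def corr_along_time(a, b):
    a = a - a.mean(axis=0); b = b - b.mean(axis=0)
    return (a * b).sum(axis=0) / np.sqrt((a**2).sum(axis=0) * (b**2).sum(axis=0))

corr = np.stack([corr_along_time(ndvi, spei[s]) for s in scales])  # (scale, y, x)
best = np.array(scales)[np.argmax(corr, axis=0)]   # time scale of max correlation
print("share of cells best at each scale:",
      {s: float((best == s).mean()) for s in scales})
```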
Coupled hydrological and geochemical process evolution at the Landscape Evolution Observatory
NASA Astrophysics Data System (ADS)
Troch, P. A. A.
2015-12-01
Predictions of hydrologic and biogeochemical responses to natural and anthropogenic forcing at the landscape scale are highly uncertain due to the effects of heterogeneity on the scaling of reaction, flow and transport phenomena. The physical, chemical and biological structures and processes controlling reaction, flow and transport in natural landscapes interact at multiple space and time scales and are difficult to quantify. The current paradigm of hydrological and geochemical theory is that process descriptions derived from observations at small scales in controlled systems can be applied to predict system response at much larger scales, as long as some 'equivalent' or 'effective' values of the scale-dependent parameters can be identified. Furthermore, natural systems evolve in time in a way that is hard to observe in short-run laboratory experiments or in natural landscapes with unknown initial conditions and time-variant forcing. The spatial structure of flow pathways along hillslopes determines the rate, extent and distribution of geochemical reactions (and biological colonization) that drive weathering, the transport and precipitation of solutes and sediments, and the further evolution of soil structure. The resulting evolution of structures and processes, in turn, produces spatiotemporal variability of hydrological states and flow pathways. There is thus a need for experimental research to improve our understanding of hydrology-biogeochemistry interactions and feedbacks at spatial scales larger than laboratory soil column experiments. Such research is complicated in real-world settings because of poorly constrained impacts of initial conditions, climate variability, ecosystem dynamics, and geomorphic evolution. The Landscape Evolution Observatory (LEO) at Biosphere 2 offers a unique research facility that allows real-time observations of incipient hydrologic and biogeochemical response under well-constrained initial conditions and climate forcing. LEO makes it possible to close the water, carbon and energy budgets at hillslope scales, thereby enabling elucidation of the tight coupling between the time water spends along subsurface flow paths and geochemical weathering reactions, including the feedbacks between flow and pedogenesis.
Solar energy for electricity and fuels.
Inganäs, Olle; Sundström, Villy
2016-01-01
Solar energy conversion into electricity by photovoltaic modules is now a mature technology. We discuss the need for materials and device developments using conventional silicon and other materials, pointing to the need to use scalable materials and to reduce the energy payback time. Storage of solar energy can be achieved using the energy of light to produce a fuel. We discuss how this can be achieved in a direct process mimicking the photosynthetic processes, using synthetic organic, inorganic, or hybrid materials for light collection and catalysis. We also briefly discuss challenges and needs for large-scale implementation of direct solar fuel technologies.
Lattice Boltzmann for Airframe Noise Predictions
NASA Technical Reports Server (NTRS)
Barad, Michael; Kocheemoolayil, Joseph; Kiris, Cetin
2017-01-01
Increase predictive use of High-Fidelity Computational Aero-Acoustics (CAA) capabilities for NASA's next generation aviation concepts. CFD has been utilized substantially in analysis and design for steady-state problems (RANS), but computational resources are extremely challenged for high-fidelity unsteady problems (e.g., unsteady loads, buffet boundary, jet and installation noise, fan noise, active flow control, airframe noise). Three needs follow: novel techniques for reducing the computational resources consumed by current high-fidelity CAA; routine acoustic analysis of aircraft components at full-scale Reynolds number from first principles; and an order-of-magnitude reduction in wall time to solution.
Lottig, Noah R.; Tan, Pang-Ning; Wagner, Tyler; Cheruvelil, Kendra Spence; Soranno, Patricia A.; Stanley, Emily H.; Scott, Caren E.; Stow, Craig A.; Yuan, Shuai
2017-01-01
Ecology has a rich history of studying ecosystem dynamics across time and space that has been motivated by both practical management needs and the need to develop basic ideas about pattern and process in nature. In situations in which both spatial and temporal observations are available, similarities in temporal behavior among sites (i.e., synchrony) provide a means of understanding underlying processes that create patterns over space and time. We used pattern analysis algorithms and data spanning 22–25 yr from 601 lakes to ask three questions: What are the temporal patterns of lake water clarity at sub‐continental scales? What are the spatial patterns (i.e., geography) of synchrony for lake water clarity? And, what are the drivers of spatial and temporal patterns in lake water clarity? We found that the synchrony of water clarity among lakes is not spatially structured at sub‐continental scales. Our results also provide strong evidence that the drivers related to spatial patterns in water clarity are not related to the temporal patterns of water clarity. This analysis of long‐term patterns of water clarity and possible drivers contributes to understanding of broad‐scale spatial patterns in the geography of synchrony and complex relationships between spatial and temporal patterns across ecosystems.
Perspectives: A Challenging Patriotism
ERIC Educational Resources Information Center
Boyte, Harry C.
2012-01-01
In a time of alarm about the poisoning of electoral politics, public passions inflamed by sophisticated techniques of mass polarization, and fears that the country is losing control of its collective future, higher education is called upon to take leadership in "reinventing citizenship." It needs to respond to that call on a scale unprecedented in…
The Need for a Harmonized Repository for Next-Generation Human Activity Data
Multi-tiered human time-activity-location data can inform many efforts to describe human exposures to air pollutants and other chemicals on a range of temporal and spatial scales. In the last decade, EPA's Consolidated Human Activity Database (CHAD) has served as a harmonized rep...
Irrigation analysis based on long-term weather data
USDA-ARS?s Scientific Manuscript database
Irrigation-management is based upon delivery of water to a crop in the correct amount and time, and the crop’s water need is determined by calculating evapotranspiration (ET) using weather data. In 1994 an ET-network was established in the Texas High Plains to manage irrigation on a regional scale. ...
Microscale and Compact Scale Chemistry in South Africa
ERIC Educational Resources Information Center
Taylor, Warwick
2011-01-01
Reduced costs and greater time efficiency are often quoted among the main benefits of microscale chemistry. Do these benefits outweigh some of the limitations and difficulties faced in terms of students needing to develop new manipulation skills, and teachers requiring training in terms of implementation and management? This article describes a…
Bioactivity profiling using high-throughput in vitro assays can reduce the cost and time required for toxicological screening of environmental chemicals and can also reduce the need for animal testing. Several public efforts are aimed at discovering patterns or classifiers in hig...
Anomalous diffusion for bed load transport with a physically-based model
NASA Astrophysics Data System (ADS)
Fan, N.; Singh, A.; Foufoula-Georgiou, E.; Wu, B.
2013-12-01
Diffusion of bed load particles shows both normal and anomalous behavior at different spatial-temporal scales. Understanding and quantifying these different types of diffusion is important not only for the development of theoretical models of particle transport but also for practical purposes, e.g., river management. Here we extend a recently proposed physically-based model of particle transport by Fan et al. [2013] to develop an Episodic Langevin equation (ELE) for individual particle motion, which reproduces the episodic movement (start and stop) of sediment particles. Using the proposed ELE we simulate particle movements for a large number of uniform-size particles, incorporating different probability distribution functions (PDFs) of particle waiting time. For exponential PDFs of waiting times, particles reveal ballistic motion at short time scales and turn to normal diffusion at long time scales. The PDF of simulated particle travel distances also shows a change in its shape from exponential to Gamma to Gaussian with a change in timescale, implying different diffusion scaling regimes. For power-law PDFs (with exponent −μ) of waiting times, the asymptotic behavior of particles at long time scales reveals both super-diffusion and sub-diffusion; however, only very heavy-tailed waiting times (i.e., 1.0 < μ < 1.5) result in sub-diffusion. We suggest that the contrast between our results and previous studies (e.g., those based on fractional advection-diffusion models with thin/heavy-tailed particle hops and waiting times) could be due to the assumption in those studies that hops are achieved instantaneously; in reality, particles complete their hops in finite times (as we simulate here), even if the hop times are much shorter than the waiting times. In summary, this study stresses the need to rethink alternatives to previous models, such as fractional advection-diffusion equations, for studying the anomalous diffusion of bed load particles. The implications of these results for modeling sediment transport are discussed.
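The start-stop kinematics can be sketched directly, as below: particles alternate exponentially distributed waiting periods with finite-duration hops at constant velocity, and the late-time growth of the ensemble variance indicates the diffusion regime. The parameters and the constant hop velocity are hypothetical simplifications of the Episodic Langevin equation, not the authors' model.

```python
# Episodic (start-stop) particle transport toy model with finite hop durations.
import numpy as np

rng = np.random.default_rng(2)
n_part, t_max = 2000, 500.0
mean_wait, mean_hop_t, v_hop = 10.0, 1.0, 2.0   # hypothetical parameters

snap_times = np.linspace(0.0, t_max, 101)
positions = np.zeros((n_part, snap_times.size))

for i in range(n_part):
    # Build alternating (end_time, velocity) segments: wait (v=0), hop (v=v_hop).
    t, segs = 0.0, []
    while t < t_max:
        w = rng.exponential(mean_wait); segs.append((t + w, 0.0)); t += w
        h = rng.exponential(mean_hop_t); segs.append((t + h, v_hop)); t += h
    x, t_prev, k = 0.0, 0.0, 0
    for j, ts in enumerate(snap_times):
        while segs[k][0] < ts:   # advance through segments fully before ts
            x += segs[k][1] * (segs[k][0] - t_prev); t_prev = segs[k][0]; k += 1
        positions[i, j] = x + segs[k][1] * (ts - t_prev)  # partial segment

var = positions.var(axis=0)
late = slice(40, None)
slope = np.polyfit(np.log(snap_times[late]), np.log(var[late]), 1)[0]
print(f"late-time variance grows ~ t^{slope:.2f} (1 = normal diffusion)")
```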
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curry, Judith
This project addressed the challenge of providing weather and climate information to support the operation, management and planning of wind-energy systems. The need for forecast information is extending to longer projection windows with increasing penetration of wind power into the grid and with diminishing reserve margins to meet peak loads during significant weather events. Maintenance planning and natural gas trading are being influenced increasingly by anticipation of wind generation on timescales of weeks to months. Future scenarios on decadal time scales are needed to support assessment of wind farm siting, government planning, long-term wind purchase agreements and the regulatory environment. The challenge of making wind forecasts on these longer time scales is associated with a wide range of uncertainties in general circulation and regional climate models that make them unsuitable for direct use in the design and planning of wind-energy systems. To address this challenge, CFAN has developed a hybrid statistical/dynamical forecasting scheme for delivering probabilistic forecasts on time scales from one day to seven months using what is arguably the best forecasting system in the world (European Centre for Medium-Range Weather Forecasts, ECMWF). The project also provided a framework to assess future wind power through developing scenarios of interannual to decadal climate variability and change. The Phase II research successfully developed an operational wind power forecasting system for the U.S., which is being extended to Europe and possibly Asia.
Population dynamics in an intermittent refuge
NASA Astrophysics Data System (ADS)
Colombo, E. H.; Anteneodo, C.
2016-10-01
Population dynamics is constrained by the environment, which needs to obey certain conditions to support population growth. We consider a standard model for the evolution of a single-species population density, which includes reproduction, competition for resources, and spatial spreading, while subject to an external harmful effect. The habitat is spatially heterogeneous: there exists a refuge where the population can be protected. Temporal variability is introduced by the intermittent character of the refuge. This scenario can apply to a wide range of situations, from a laboratory setting where bacteria can be protected by a blinking mask from ultraviolet radiation, to large-scale ecosystems, like a marine reserve where there can be seasonal fishing prohibitions. Using analytical and numerical tools, we investigate the asymptotic behavior of the total population as a function of the size and characteristic time scales of the refuge. We obtain expressions for the minimal size required for population survival, in the slow and fast time scale limits.
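A minimal numerical sketch of this scenario is given below: a 1D density with logistic growth and diffusion is harmed everywhere except inside a refuge that blinks on and off with a fixed duty cycle. All parameter values are illustrative assumptions rather than the paper's model settings.

```python
# 1D logistic growth-diffusion with a blinking refuge (illustrative parameters).
import numpy as np

L, nx = 10.0, 200
dx = L / nx
dt, n_steps = 1e-3, 50000
D, r, K, gamma = 0.1, 1.0, 1.0, 2.0     # diffusion, growth, capacity, harm rate
x = np.linspace(0, L, nx, endpoint=False)
refuge = np.abs(x - L / 2) < 1.0        # refuge of half-width 1 at the center
T_on, T_off = 0.5, 0.5                  # refuge duty cycle (time units)

u = np.full(nx, 0.5)
for n in range(n_steps):
    t = n * dt
    protected = (t % (T_on + T_off)) < T_on
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2   # periodic boundary
    harm = np.where(refuge & protected, 0.0, gamma)
    u = np.clip(u + dt * (D * lap + r * u * (1 - u / K) - harm * u), 0.0, None)

print("asymptotic total population:", u.sum() * dx)
```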
High-resolution regional climate model evaluation using variable-resolution CESM over California
NASA Astrophysics Data System (ADS)
Huang, X.; Rhoades, A.; Ullrich, P. A.; Zarzycki, C. M.
2015-12-01
Understanding the effect of climate change at regional scales remains a topic of intensive research. Though computational constraints remain a problem, high horizontal resolution is needed to represent topographic forcing, which is a significant driver of local climate variability. Although regional climate models (RCMs) have traditionally been used at these scales, variable-resolution global climate models (VRGCMs) have recently arisen as an alternative for studying regional weather and climate, allowing two-way interaction between these domains without the need for nudging. In this study, the recently developed variable-resolution option within the Community Earth System Model (CESM) is assessed for long-term regional climate modeling over California. Our variable-resolution simulations focus on relatively high resolutions for climate assessment, namely 28 km and 14 km regional resolution, which are much more typical for dynamically downscaled studies. For comparison with the more widely used RCM method, the Weather Research and Forecasting (WRF) model is used for simulations at 27 km and 9 km. All simulations use the AMIP (Atmospheric Model Intercomparison Project) protocols. The time period is from 1979-01-01 to 2005-12-31 (UTC), and year 1979 was discarded as spin-up time. The mean climatology across California's diverse climate zones, including temperature and precipitation, is analyzed and contrasted with WRF (as a traditional RCM), regional reanalysis, gridded observational datasets and uniform high-resolution CESM at 0.25 degree with the finite volume (FV) dynamical core. The results show that variable-resolution CESM is competitive in representing regional climatology on both annual and seasonal time scales. This assessment adds value to the use of VRGCMs for projecting climate change over the coming century and improves our understanding of both past and future regional climate related to fine-scale processes. This assessment is also relevant for addressing the scale limitation of current RCMs or VRGCMs when next-generation model resolution increases to ~10 km and beyond.
Effects of surgical side and site on psychological symptoms following epilepsy surgery in adults.
Prayson, Brigid E; Floden, Darlene P; Ferguson, Lisa; Kim, Kevin H; Jehi, Lara; Busch, Robyn M
2017-03-01
This retrospective study examined the potential role of side and site of surgery in psychological symptom change after epilepsy surgery and determined the base rate of psychological change at the individual level. Two hundred twenty-eight adults completed the Personality Assessment Inventory (PAI) before and after temporal (TLR; n=190) or frontal lobe resection (FLR; n=38). Repeated measures ANOVAs with bootstrapping examined differences in psychological outcome as a function of surgical site separately in patients who underwent left- versus right-sided resections. Individuals' PAI score changes were then used to determine the prevalence of clinically meaningful postoperative symptom change. Following left-sided resections, there were significant group-by-time interactions on Somatic Complaints, Anxiety, and Anxiety Related Disorders. There was also a trend in this direction on the Depression scale. TLR patients endorsed greater preoperative symptoms than FLR patients on all of these scales, except the Somatic Complaints scale. After surgery, TLR patients reported symptom improvement on all four scales, while scores of FLR patients remained relatively stable over time. Endorsement of Mania-related symptoms increased in both TLR and FLR groups from pre- to post-surgical testing. Following right-sided resections, both groups endorsed symptom improvements on Somatic Complaints, Anxiety, and Depression scales following surgery. In addition, the TLR group endorsed more Mania-related symptoms than the FLR group regardless of time. Patterns of meaningful change in individual patients were generally consistent with group findings, with the most frequent improvements observed following TLR. However, there was a small subset of patients who reported symptom exacerbation after surgery. Our results suggest that surgical lateralization and localization are important factors in postoperative psychological outcome and highlight the importance of considering psychological change at the individual patient level. Further research is needed to identify potential risk factors for symptom exacerbation to aid in preoperative counseling and identify those patients most in need of postoperative psychological surveillance.
Phillips, Lorraine J.; Reid-Arndt, Stephanie A.; Pak, Youngju
2010-01-01
Background Effective nonpharmacological interventions are needed to treat neuropsychiatric symptoms and improve quality of life for the 5.3 million Americans affected by dementia. Objective To test the effect of a storytelling program, TimeSlips, on communication, neuropsychiatric symptoms, and quality of life in long-term care residents with dementia. Method A quasi-experimental, two-group, repeated measures design was used to compare persons with dementia who were assigned to the twice-weekly, 6-week TimeSlips intervention (n = 28) or usual care (n = 28) group at baseline and postintervention at Weeks 7 and 10. Outcome measures included the Cornell Scale for Depression in Dementia, Neuropsychiatric Inventory-Nursing Home Version, Functional Assessment of Communication Skills, Quality of Life–AD, and Observed Emotion Rating Scale (this last measure was collected also at Weeks 3 and 6 during TimeSlips for the treatment group and during mealtime for the control group). Results Compared to the control group, the treatment group exhibited significantly higher pleasure at Week 3 (p < .001), Week 6 (p < .001), and Week 7 (p < .05). Small to moderate treatment effects were found for Week 7 Social Communication (d = .49) and Basic Needs Communication (d = .43). A larger effect was found for pleasure at Week 7 (d = .58). Discussion As expected given the engaging nature of the TimeSlips creative story-telling intervention, analyses revealed increased positive affect during and at 1-week post-intervention. In addition, perhaps associated with the intervention’s reliance on positive social interactions and verbal communication, participants evidenced improved communication skills. However, more frequent dosing and booster sessions of TimeSlips may be needed to show significant differences between treatment and control groups on long-term effects and other outcomes. PMID:21048483
Phillips, Lorraine J; Reid-Arndt, Stephanie A; Pak, Youngju
2010-01-01
Effective nonpharmacological interventions are needed to treat neuropsychiatric symptoms and to improve quality of life for the 5.3 million Americans affected by dementia. The purpose of this study was to test the effect of a storytelling program, TimeSlips, on communication, neuropsychiatric symptoms, and quality of life in long-term care residents with dementia. A quasi-experimental, two-group, repeated measures design was used to compare persons with dementia who were assigned to the twice-weekly, 6-week TimeSlips intervention group (n = 28) or usual care group (n = 28) at baseline and postintervention at Weeks 7 and 10. Outcome measures included the Cornell Scale for Depression in Dementia, the Neuropsychiatric Inventory-Nursing Home Version, the Functional Assessment of Communication Skills, the Quality of Life-Alzheimer's Disease, and the Observed Emotion Rating Scale (this last measure was collected also at Weeks 3 and 6 during TimeSlips for the treatment group and during mealtime for the control group). Compared with the control group, the treatment group exhibited significantly higher pleasure at Week 3 (p < .001), Week 6 (p < .001), and Week 7 (p < .05). Small to moderate treatment effects were found for Week 7 social communication (d = .49) and basic needs communication (d = .43). A larger effect was found for pleasure at Week 7 (d = .58). As expected, given the engaging nature of the TimeSlips creative storytelling intervention, analyses revealed increased positive affect during and at 1 week postintervention. In addition, perhaps associated with the intervention's reliance on positive social interactions and verbal communication, participants evidenced improved communication skills. However, more frequent dosing and booster sessions of TimeSlips may be needed to show significant differences between treatment and control groups on long-term effects and other outcomes.
Stochastic simulation and decadal prediction of hydroclimate in the Western Himalayas
NASA Astrophysics Data System (ADS)
Robertson, A. W.; Chekroun, M. D.; Cook, E.; D'Arrigo, R.; Ghil, M.; Greene, A. M.; Holsclaw, T.; Kondrashov, D. A.; Lall, U.; Lu, M.; Smyth, P.
2012-12-01
Improved estimates of climate over the next 10 to 50 years are needed for long-term planning in water resource and flood management. However, the task of effectively incorporating the results of climate change research into decision-making faces a “double conflict of scales”: the temporal scales of climate model projections are too long, while their usable spatial scales (global to planetary) are much larger than those needed for actual decision making (at the regional to local level). This work is designed to help tackle this “double conflict” in the context of water management over monsoonal Asia, based on dendroclimatic multi-century reconstructions of drought indices and river flows. We identify low-frequency modes of variability with time scales from interannual to interdecadal based on these series, and then generate future scenarios based on (a) empirical model decadal predictions, and (b) stochastic simulations generated with autoregressive models that reproduce the power spectrum of the data. Finally, we consider how such scenarios could be used to develop reservoir optimization models. Results will be presented based on multi-century Upper Indus river discharge reconstructions that exhibit a strong periodicity near 27 years, which is shown to yield some retrospective forecasting skill over the 1700-2000 period at a 15-yr lead time. Stochastic simulations of annual PDSI drought index values over the Upper Indus basin are constructed using Empirical Model Reduction; their power spectra are shown to be quite realistic, with spectral peaks near 5-8 years.
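The spectrum-preserving stochastic simulation step can be sketched as below: an autoregressive model is fit to an annual series and used to generate an ensemble of synthetic scenarios. The toy "reconstruction" with a 27-yr periodicity is a hypothetical stand-in for the dendroclimatic Upper Indus series, and the plain AR fit is a simplified surrogate for Empirical Model Reduction.

```python
# AR-based stochastic scenario generation (toy stand-in for the reconstruction).
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1700, 2001)
series = np.sin(2 * np.pi * years / 27.0) + 0.8 * rng.standard_normal(years.size)

def fit_ar(x, p):
    # Least-squares AR(p) fit; returns coefficients and residual std.
    X = np.column_stack([x[p - j - 1:-j - 1 or None] for j in range(p)])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coef, (x[p:] - X @ coef).std()

def simulate(coef, sigma, x0, n):
    p, out = len(coef), list(x0[-len(coef):])
    for _ in range(n):
        out.append(float(np.dot(coef, out[::-1][:p])) + sigma * rng.standard_normal())
    return np.array(out[p:])

coef, sigma = fit_ar(series, p=6)
scenarios = np.stack([simulate(coef, sigma, series, 50) for _ in range(100)])
print("scenario ensemble shape:", scenarios.shape)  # 100 runs x 50 years
```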
Mesoscale to Synoptic Scale Cloud Variability
NASA Technical Reports Server (NTRS)
Rossow, William B.
1998-01-01
The atmospheric circulation and its interaction with the oceanic circulation involve non-linear and non-local exchanges of energy and water over a very large range of space and time scales. These exchanges are revealed, in part, by the related variations of clouds, which occur on a similar range of scales as the atmospheric motions that produce them. Collection of comprehensive measurements of the properties of the atmosphere, clouds and surface allows for diagnosis of some of these exchanges. The multi-satellite-network approach of the International Satellite Cloud Climatology Project (ISCCP) comes closest to providing complete coverage of the relevant range of space and time scales over which the clouds, atmosphere and ocean vary. A nearly 15-yr dataset is now available that covers the range from 3 hr and 30 km to decadal and planetary scales. This paper considers three topics: (1) cloud variations at the smallest scales and how they may influence radiation-cloud interactions, (2) cloud variations at "moderate" scales and how they may cause natural climate variability, and (3) cloud variations at the largest scales and how they affect the climate. The emphasis in this discussion is on the more mature subject of cloud-radiation interactions. There is now a need to begin similar detailed diagnostic studies of water exchange processes.
Parallel Index and Query for Large Scale Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chou, Jerry; Wu, Kesheng; Ruebel, Oliver
2011-07-18
Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to processing of a massive 50TB dataset generated by a large-scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
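The core indexing idea behind FastBit-style querying can be sketched as below: values are binned, one bitmap is kept per bin, and a range query is answered by OR-ing whole bitmaps plus a candidate check on the single boundary bin. This illustrates the concept only and is not the FastQuery API; production systems additionally compress the bitmaps (e.g., word-aligned encodings) and parallelize across cores and files.

```python
# Conceptual bitmap index for range queries (not the FastQuery API).
import numpy as np

class BitmapIndex:
    def __init__(self, values, n_bins=64):
        self.edges = np.linspace(values.min(), values.max(), n_bins + 1)
        bins = np.clip(np.digitize(values, self.edges) - 1, 0, n_bins - 1)
        self.bitmaps = [(bins == b) for b in range(n_bins)]  # one bitmap per bin
        self.values = values

    def query_greater(self, threshold):
        # Bins wholly above the threshold come straight from the bitmaps; the
        # single boundary bin is re-checked against raw values (candidate check).
        b = int(np.clip(np.digitize(threshold, self.edges) - 1,
                        0, len(self.bitmaps) - 1))
        hits = np.zeros(self.values.size, dtype=bool)
        for bm in self.bitmaps[b + 1:]:
            hits |= bm
        return hits | (self.bitmaps[b] & (self.values > threshold))

data = np.random.default_rng(4).standard_normal(1_000_000)
idx = BitmapIndex(data)
assert np.array_equal(idx.query_greater(1.5), data > 1.5)
print("selected:", int(idx.query_greater(1.5).sum()), "records")
```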
Dickson, Kim E; Kinney, Mary V; Moxon, Sarah G; Ashton, Joanne; Zaka, Nabila; Simen-Kapeu, Aline; Sharma, Gaurav; Kerber, Kate J; Daelmans, Bernadette; Gülmezoglu, A; Mathai, Matthews; Nyange, Christabel; Baye, Martina; Lawn, Joy E
2015-01-01
The Every Newborn Action Plan (ENAP) and Ending Preventable Maternal Mortality targets cannot be achieved without high quality, equitable coverage of interventions at and around the time of birth. This paper provides an overview of the methodology and findings of a nine paper series of in-depth analyses which focus on the specific challenges to scaling up high-impact interventions and improving quality of care for mothers and newborns around the time of birth, including babies born small and sick. The bottleneck analysis tool was applied in 12 countries in Africa and Asia as part of the ENAP process. Country workshops engaged technical experts to complete a tool designed to synthesise "bottlenecks" hindering the scale up of maternal-newborn intervention packages across seven health system building blocks. We used quantitative and qualitative methods and literature review to analyse the data and present priority actions relevant to different health system building blocks for skilled birth attendance, emergency obstetric care, antenatal corticosteroids (ACS), basic newborn care, kangaroo mother care (KMC), treatment of neonatal infections and inpatient care of small and sick newborns. The 12 countries included in our analysis account for the majority of global maternal (48%) and newborn (58%) deaths and stillbirths (57%). Our findings confirm previously published results that the interventions with the most perceived bottlenecks are facility-based where rapid emergency care is needed, notably inpatient care of small and sick newborns, ACS, treatment of neonatal infections and KMC. Health systems building blocks with the highest rated bottlenecks varied for different interventions. Attention needs to be paid to the context specific bottlenecks for each intervention to scale up quality care. Crosscutting findings on health information gaps inform two final papers on a roadmap for improvement of coverage data for newborns and indicate the need for leadership for effective audit systems. Achieving the Sustainable Development Goal targets for ending preventable mortality and provision of universal health coverage will require large-scale approaches to improving quality of care. These analyses inform the development of systematic, targeted approaches to strengthening of health systems, with a focus on overcoming specific bottlenecks for the highest impact interventions.
2015-01-01
Background The Every Newborn Action Plan (ENAP) and Ending Preventable Maternal Mortality targets cannot be achieved without high quality, equitable coverage of interventions at and around the time of birth. This paper provides an overview of the methodology and findings of a nine paper series of in-depth analyses which focus on the specific challenges to scaling up high-impact interventions and improving quality of care for mothers and newborns around the time of birth, including babies born small and sick. Methods The bottleneck analysis tool was applied in 12 countries in Africa and Asia as part of the ENAP process. Country workshops engaged technical experts to complete a tool designed to synthesise "bottlenecks" hindering the scale up of maternal-newborn intervention packages across seven health system building blocks. We used quantitative and qualitative methods and literature review to analyse the data and present priority actions relevant to different health system building blocks for skilled birth attendance, emergency obstetric care, antenatal corticosteroids (ACS), basic newborn care, kangaroo mother care (KMC), treatment of neonatal infections and inpatient care of small and sick newborns. Results The 12 countries included in our analysis account for the majority of global maternal (48%) and newborn (58%) deaths and stillbirths (57%). Our findings confirm previously published results that the interventions with the most perceived bottlenecks are facility-based where rapid emergency care is needed, notably inpatient care of small and sick newborns, ACS, treatment of neonatal infections and KMC. Health systems building blocks with the highest rated bottlenecks varied for different interventions. Attention needs to be paid to the context specific bottlenecks for each intervention to scale up quality care. Crosscutting findings on health information gaps inform two final papers on a roadmap for improvement of coverage data for newborns and indicate the need for leadership for effective audit systems. Conclusions Achieving the Sustainable Development Goal targets for ending preventable mortality and provision of universal health coverage will require large-scale approaches to improving quality of care. These analyses inform the development of systematic, targeted approaches to strengthening of health systems, with a focus on overcoming specific bottlenecks for the highest impact interventions. PMID:26390820
Catchment dynamics and social response during flash floods
NASA Astrophysics Data System (ADS)
Creutin, J. D.; Lutoff, C.; Ruin, I.; Scolobig, A.; Créton-Cazanave, L.
2009-04-01
The objective of this study is to examine how the current techniques for flash-flood monitoring and forecasting can meet the requirements of the population at risk to evaluate the severity of the flood and anticipate its danger. To this end, we identify the social response time for different social actions in the course of two well-studied flash flood events which occurred in France and Italy. We introduce a broad characterization of the event management activities into three types according to their main objective (information, organisation and protection). The activities are also classified into three other types according to the scale and nature of the human group involved (individuals, communities and institutions). The conclusions reached relate to i) the characterisation of the social responses according to watershed scale and to the information available, and ii) the appropriateness of the existing surveillance and forecasting tools to support the social responses. Our results suggest that representing the dynamics of the social response with just one number representing the average time for warning a population is an oversimplification. It appears that the social response time exhibits a parallel with the hydrological response time, diminishing with decreasing size of the relevant watershed. A second result is that the human groups have different capabilities of anticipation, apparently based on the nature of the information they use. Comparing watershed response times and social response times shows clearly that at scales of less than 100 km2, a number of actions were taken with response times comparable to the catchment response time. The implications for adapting the warning processes to social scales (individual or organisational scales) are considerable. At small scales and for the implied anticipation times, the reliable and high-resolution description of the actual rainfall field becomes the major source of information for decision-making processes such as deciding between evacuations or advising to stay home. This points to the need to improve the accuracy and quality control of real-time radar rainfall data, especially for extreme flash-flood-generating storms.
Synthesis of User Needs for Arctic Sea Ice Predictions
NASA Astrophysics Data System (ADS)
Wiggins, H. V.; Turner-Bogren, E. J.; Sheffield Guy, L.
2017-12-01
Forecasting Arctic sea ice on sub-seasonal to seasonal scales in a changing Arctic is of interest to a diverse range of stakeholders. However, sea ice forecasting is still challenging due to high variability in weather and ocean conditions and limits to prediction capabilities; the science needs for observations and modeling are extensive. At a time of challenged science funding, one way to prioritize sea ice prediction efforts is to examine the information needs of various stakeholder groups. This poster will present a summary and synthesis of existing surveys, reports, and other literature that examines user needs for sea ice predictions. The synthesis will include lessons learned from the Sea Ice Prediction Network (a collaborative, multi-agency-funded project focused on seasonal Arctic sea ice predictions), the Sea Ice for Walrus Outlook (a resource for Alaska Native subsistence hunters and coastal communities, that provides reports on weather and sea ice conditions), and other efforts. The poster will specifically compare the scales and variables of sea ice forecasts currently available, as compared to what information is requested by various user groups.
NASA Astrophysics Data System (ADS)
Judt, Falko; Chen, Shuyi S.; Curcic, Milan
2016-06-01
The 2010 Deepwater Horizon oil spill in the Gulf of Mexico (GoM) was an environmental disaster, which highlighted the urgent need to predict the transport and dispersion of hydrocarbon. Although the variability of the atmospheric forcing plays a major role in the upper ocean circulation and transport of the pollutants, the air-sea interaction on various time scales is not well understood. This study provides a comprehensive overview of the atmospheric forcing and upper ocean response in the GoM from seasonal to diurnal time scales, using climatologies derived from long-term observations, in situ observations from two field campaigns, and a coupled model. The atmospheric forcing in the GoM is characterized by striking seasonality. In the summer, the time-average large-scale forcing is weak, despite occasional extreme winds associated with hurricanes. In the winter, the atmospheric forcing is much stronger, and dominated by synoptic variability on time scales of 3-7 days associated with winter storms and cold air outbreaks. The diurnal cycle is more pronounced during the summer, when sea breeze circulations affect the coastal regions and nighttime wind maxima occur over the offshore waters. Real-time predictions from a high-resolution atmosphere-wave-ocean coupled model were evaluated for both summer and winter conditions during the Grand LAgrangian Deployment (GLAD) in July-August 2012 and the Surfzone Coastal Oil Pathways Experiment (SCOPE) in November-December 2013. The model generally captured the variability of atmospheric forcing on all scales, but suffered from some systematic errors.
NASA Astrophysics Data System (ADS)
Ito, Akinori; Penner, Joyce E.
2005-06-01
Historical changes of black carbon (BC) and particulate organic matter (POM) emissions from biomass burning (BB) and fossil fuel (FF) burning are estimated from 1870 to 2000. A bottom-up inventory for open vegetation (OV) burning is scaled by a top-down estimate for the year 2000. Monthly and interannual variations are derived over the time period from 1979 to 2000 based on the TOMS satellite aerosol index (AI) and this global map. Prior to 1979, emissions are scaled to a CH4 emissions inventory based on land-use change. Biofuel (BF) emissions from a recent inventory for developing countries are scaled forward and backward in time using population statistics and crop production statistics. In developed countries, wood consumption data together with emission factors for cooking and heating practices are used for biofuel estimates. For fossil fuel use, we use fuel consumption data and specific emission factors for different fuel use categories to develop an inventory over 1950-2000, and emissions are scaled to a CO2 inventory prior to that time. Technology changes for emissions from the diesel transport sector are included. During the last decade of this time period, the BC and POM emissions from biomass burning (i.e., OV + BF) contribute a significant amount to the primary sources of BC and POM and are larger than those from FF. Thus, 59% of the NH BC emissions and 90% of the NH POM emissions are from BB in 2000. Better information on fossil fuel consumption technologies prior to 1990 is needed in order to improve estimates of fossil fuel emissions during the twentieth century. These results suggest that the aerosol emissions from biomass burning need to be represented realistically in climate change assessments. The estimated emissions are available on a 1° × 1° grid for global climate modeling studies of climate changes.
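The proxy-scaling arithmetic used to extend the inventories in time is simple, as the sketch below shows; the emission value, proxy series, and years are hypothetical illustrations.

```python
# Proxy scaling of a reference-year emission estimate (hypothetical numbers).
def scale_by_proxy(emis_ref, proxy, ref_year):
    """Return {year: emissions} scaled from the reference year by the ratio of
    a proxy statistic (population, crop production, CO2, or CH4 emissions)."""
    return {yr: emis_ref * v / proxy[ref_year] for yr, v in proxy.items()}

population = {1870: 60.0, 1900: 95.0, 1950: 180.0, 2000: 310.0}  # millions, hypothetical
biofuel_bc = scale_by_proxy(emis_ref=1.2, proxy=population, ref_year=2000)  # Tg/yr
print(biofuel_bc)
```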
Classification of Animal Movement Behavior through Residence in Space and Time.
Torres, Leigh G; Orben, Rachael A; Tolkova, Irina; Thompson, David R
2017-01-01
Identification and classification of behavior states in animal movement data can be complex, temporally biased, time-intensive, scale-dependent, and unstandardized across studies and taxa. Large movement datasets are increasingly common and there is a need for efficient methods of data exploration that adjust to the individual variability of each track. We present the Residence in Space and Time (RST) method to classify behavior patterns in movement data based on the concept that behavior states can be partitioned by the amount of space and time occupied in an area of constant scale. Using normalized values of Residence Time and Residence Distance within a constant search radius, RST is able to differentiate behavior patterns that are time-intensive (e.g., rest), time- and distance-intensive (e.g., area-restricted search), and transit (short time and distance). We use grey-headed albatross (Thalassarche chrysostoma) GPS tracks to demonstrate RST's ability to classify behavior patterns and adjust to the inherent scale and individuality of each track. Next, we evaluate RST's ability to discriminate between behavior states relative to other classical movement metrics. We then temporally sub-sample albatross track data to illustrate RST's response to less resolved data. Finally, we evaluate RST's performance using datasets from four taxa with diverse ecology, functional scales, ecosystems, and data-types. We conclude that RST is a robust, rapid, and flexible method for detailed exploratory analysis and meta-analyses of behavioral states in animal movement data based on its ability to integrate distance and time measurements into one descriptive metric of behavior groupings. Given the increasing amount of animal movement data collected, it is timely and useful to implement a consistent metric of behavior classification to enable efficient and comparative analyses. Overall, the application of RST to objectively explore and compare behavior patterns in movement data can enhance our fine- and broad-scale understanding of animal movement ecology.
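A simplified sketch of the RST idea follows: residence time and residence distance are accumulated while the track stays within a constant search radius of each point, normalized, and used to separate rest, area-restricted search, and transit. The thresholding rule and toy track are hypothetical simplifications of the published method.

```python
# Simplified RST-style classifier (hypothetical thresholds and toy track).
import numpy as np

def rst_classify(xy, t, radius, thresh=0.5):
    n = len(xy)
    res_t, res_d = np.zeros(n), np.zeros(n)
    for i in range(n):
        # Accumulate time and distance while the track stays within `radius`
        # of point i, walking forward and backward along the track.
        for step in (1, -1):
            j = i
            while 0 <= j + step < n and np.linalg.norm(xy[j + step] - xy[i]) <= radius:
                res_t[i] += abs(t[j + step] - t[j])
                res_d[i] += np.linalg.norm(xy[j + step] - xy[j])
                j += step
    zt = (res_t - res_t.mean()) / res_t.std()   # normalized residence time
    zd = (res_d - res_d.mean()) / res_d.std()   # normalized residence distance
    return np.where((zt < 0) & (zd < 0), "transit",        # short time & distance
           np.where(zt - zd > thresh, "rest", "search"))   # time- vs time+distance-intensive

xy = np.cumsum(np.random.default_rng(5).standard_normal((200, 2)), axis=0)
print(rst_classify(xy, t=np.arange(200.0), radius=3.0)[:20])
```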
Scaling properties of Polish rain series
NASA Astrophysics Data System (ADS)
Licznar, P.
2009-04-01
Scaling properties as well as the multifractal nature of precipitation time series had not been studied for local Polish conditions until recently, due to the lack of long series of high-resolution data. The first Polish study of precipitation time series scaling phenomena was made on the basis of pluviograph data from the Wroclaw University of Environmental and Life Sciences meteorological station, located in the south-western part of the country. The 38 annual rainfall records from years 1962-2004 were converted into digital format and transformed into a standard format of 5-minute time series. The scaling properties and multifractal character of this material were studied by means of several different techniques: power spectral density analysis, functional box-counting, probability distribution/multiple scaling and trace moment methods. The results proved the general scaling character of the time series at time scales ranging from 5 minutes up to at least 24 hours. At the same time, some characteristic breaks in scaling behavior were recognized. It is believed that the breaks were artificial, arising from the precision limitations of the pluviograph rain gauge. Especially strong limitations in the recording of low-intensity precipitation by the pluviograph rain gauge were found to be the main reason for the artificial break in the energy spectra, as has been reported by other authors before. The analysis of codimension and moment scaling functions showed signs of a first-order multifractal phase transition. Such behavior is typical for dressed multifractal processes that are observed by spatial or temporal averaging on scales larger than the inner scale of those processes. The fractal dimension of the rainfall process support derived from the geometry of the codimension and moment scaling functions was found to be 0.45. The same fractal dimension estimated by means of the functional box-counting method was equal to 0.58. In the final part of the study, implementation of the double trace moment method allowed for estimation of local universal multifractal rainfall parameters (α=0.69; C1=0.34; H=-0.01). The research proved the fractal character of the rainfall process support and the multifractal character of the variability of rainfall intensity values among the analyzed time series. It is believed that the scaling of local Wroclaw rainfalls at time scales from 5 minutes up to 24 hours opens the door for future research concerning, for example, the implementation of random cascades for disaggregating daily precipitation totals into smaller time intervals. The outputs of such random cascades, in the form of 5-minute artificial rainfall scenarios, could be of great practical use for urban hydrology and for the design and hydrodynamic modeling of storm water and combined sewage conveyance systems.
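The trace-moment technique named above can be sketched as below: a series is averaged over dyadic scales, the q-th moments are computed at each scale, and the moment scaling function K(q) is read off as the log-log slope. The multiplicative-cascade toy series is a hypothetical stand-in for the 5-minute rainfall records.

```python
# Trace-moment analysis sketch on a toy multiplicative cascade.
import numpy as np

def cascade(levels=14, rng=np.random.default_rng(6)):
    # Random multiplicative cascade: a standard toy multifractal series.
    x = np.ones(1)
    for _ in range(levels):
        x = np.repeat(x, 2) * rng.lognormal(mean=-0.045, sigma=0.3, size=2 * x.size)
    return x

def K_of_q(x, qs, n_scales=10):
    x = x / x.mean()
    lam, moments = [], []
    for k in range(n_scales):
        coarse = x.reshape(-1, 2 ** k).mean(axis=1)   # average over 2^k bins
        lam.append(coarse.size)                        # scale ratio lambda
        moments.append([np.mean(coarse ** q) for q in qs])
    logl = np.log(lam)
    # <eps_lambda^q> ~ lambda^K(q): fit the slope for each moment order q.
    return [np.polyfit(logl, np.log([m[i] for m in moments]), 1)[0]
            for i in range(len(qs))]

qs = [0.5, 1.0, 1.5, 2.0, 2.5]
print(dict(zip(qs, np.round(K_of_q(cascade(), qs), 3))))  # K(1) should be ~0
```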
An Eulerian time filtering technique to study large-scale transient flow phenomena
NASA Astrophysics Data System (ADS)
Vanierschot, Maarten; Persoons, Tim; van den Bulck, Eric
2009-10-01
Unsteady fluctuating velocity fields can contain large-scale periodic motions with frequencies well separated from those of turbulence. Examples are the wake behind a cylinder or the precessing vortex core in a swirling jet. These turbulent flow fields contain large-scale, low-frequency oscillations, which are obscured by turbulence, making it impossible to identify them. In this paper, we present an Eulerian time filtering (ETF) technique to extract the large-scale motions from unsteady statistically non-stationary velocity fields or flow fields with multiple phenomena that have sufficiently separated spectral content. The ETF method is based on non-causal time filtering of the velocity records in each point of the flow field. It is shown that the ETF technique gives good results, similar to the ones obtained by the phase-averaging method. In this paper, not only the influence of the temporal filter is checked, but also parameters such as the cut-off frequency and sampling frequency of the data are investigated. The technique is validated on a selected set of time-resolved stereoscopic particle image velocimetry measurements such as the initial region of an annular jet and the transition between flow patterns in an annular jet. The major advantage of the ETF method in the extraction of large scales is that it is computationally less expensive and it requires less measurement time compared to other extraction methods. Therefore, the technique is suitable in the startup phase of an experiment or in a measurement campaign where several experiments are needed such as parametric studies.
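A minimal sketch of the ETF operation follows, under stated assumptions: each grid point's velocity record is low-pass filtered with a zero-phase (non-causal) filter so that the large-scale, low-frequency motion survives while broadband turbulence is suppressed. The Butterworth filter and cut-off choice are assumptions, and the synthetic field stands in for time-resolved PIV data.

```python
# Eulerian time filtering sketch: zero-phase low-pass per grid point.
import numpy as np
from scipy.signal import butter, filtfilt

fs, f_cut = 1000.0, 20.0          # sampling and cut-off frequencies (Hz), assumed
n_t, ny, nx = 4096, 16, 16
rng = np.random.default_rng(7)
t = np.arange(n_t) / fs
# Synthetic field: 5 Hz coherent oscillation buried in broadband "turbulence".
u = (np.sin(2 * np.pi * 5.0 * t)[:, None, None]
     + 0.8 * rng.standard_normal((n_t, ny, nx)))

b, a = butter(4, f_cut / (fs / 2), btype="low")
u_large_scale = filtfilt(b, a, u, axis=0)   # non-causal: zero phase distortion

print("raw rms:", u.std(), "filtered rms:", u_large_scale.std())
```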
Pauwels, Evelyn E J; Charlier, Caroline; De Bourdeaudhuij, Ilse; Lechner, Lilian; Van Hoof, Elke
2013-01-01
This study examines the care needs of rehabilitating breast cancer survivors and determines what sociodemographic and medical characteristics are associated with these care needs. A large-scale cross-sectional study (n = 465, response rate = 65%) was conducted among survivors who had ended primary treatment less than 6 months previously. Questionnaires were completed regarding participants' care needs, how these needs were met and the time and manner preferred for receiving information and support. Care needs regarding seven specific rehabilitation topics were assessed separately: (1) physical functioning, (2) psychological functioning, (3) self and body image, (4) sexuality, (5) relationship with partner, (6) relationship with others, and (7) work, return to work and social security. High unmet needs were reported across all topics. The time preferred for receiving information and support across most topics was the period of breast cancer treatment. The most popular sources of information and support were informative brochures, consultation with a psychologist, information sessions and an informative website. Younger age and lower income were associated with care needs after treatment. A valuable contribution is made to the literature on post-treatment care needs by comprehensively mapping unmet needs and the preferred time and source for meeting those needs. This study leads to greater awareness of the struggle facing breast cancer survivors and should guide those developing post-treatment interventions. As optimal tailoring to the needs of the target group is a prerequisite for success, preparatory needs assessment should be essential to the development of supportive interventions.
Extended-Range Forecasts at Climate Prediction Center: Current Status and Future Plans
NASA Astrophysics Data System (ADS)
Kumar, A.
2016-12-01
Motivated by a user need for forecast information on extended-range time-scales (i.e., weeks 2-4), the Climate Prediction Center (CPC) has in recent years made considerable efforts towards developing and testing the feasibility of the required forecasts. Forecasts targeting this time-scale face a unique challenge: the skill contributed by atmospheric initial conditions is small (because the memory associated with the atmospheric initial conditions decays rapidly), while the short time averages for which forecasts are made do not benefit from the skill associated with anomalous boundary conditions either. Despite these challenges, CPC has embarked on providing an experimental outlook for the weeks 3-4 average. The talk will summarize the status of CPC's suite of extended-range forecast products and discuss some future plans.
Lunar occultation of Saturn. IV - Astrometric results from observations of the satellites
NASA Technical Reports Server (NTRS)
Dunham, D. W.; Elliot, J. L.
1978-01-01
The method of determining local lunar limb slopes, and the consequent time scale needed for diameter studies, from accurate occultation timings at two nearby telescopes is described. Results for photoelectric observations made at Mauna Kea Observatory during the occultation of Saturn's satellites on March 30, 1974, are discussed. Analysis of all observations of occultations of Saturn's satellites during 1974 indicates possible errors in the ephemerides of Saturn and its satellites.
Magnan, Morris A; Maklebust, Joann
2008-01-01
To evaluate the effect of Web-based Braden Scale training on the reliability and precision of pressure ulcer risk assessments made by registered nurses (RNs) working in acute care settings. Pretest-posttest, 2-group, quasi-experimental design. Five hundred Braden Scale risk assessments were made on 102 acute care patients deemed to be at various levels of risk for pressure ulceration. Assessments were made by RNs working in acute care hospitals at 3 different medical centers where the Braden Scale was in regular daily use (2 medical centers) or new to the setting (1 medical center). The Braden Scale for Predicting Pressure Sore Risk was used to guide pressure ulcer risk assessments. A Web-based version of the Detroit Medical Center Braden Scale Computerized Training Module was used to teach nurses correct use of the Braden Scale and selection of risk-based pressure ulcer prevention interventions. In the aggregate, RNs generated reliable Braden Scale pressure ulcer risk assessments 65% of the time after training. The effect of Web-based Braden Scale training on the reliability and precision of assessments varied according to familiarity with the scale. With training, new users of the scale made reliable assessments 84% of the time and significantly improved the precision of their assessments. The reliability and precision of Braden Scale risk assessments made by its regular users were unaffected by training. Technology-assisted Braden Scale training improved both the reliability and precision of risk assessments made by new users of the scale, but had virtually no effect on the reliability or precision of risk assessments made by regular users of the instrument. Further research is needed to determine the best approaches for improving the reliability and precision of Braden Scale assessments made by its regular users.
Gartenmann, S J; Hofer, D; Wiedemeier, D; Sahrmann, P; Attin, T; Schmidlin, P R
2018-04-25
The Bologna reform resulted in a drastic restructuring of pre-clinical training courses at the University of Zurich. The aim of this study was to assess students' pre-clinical scaling/root planing skills after just 8.5 hours of manual training. Three consecutive classes of dental students (n = 41; n = 34; n = 48) were tasked with removing lacquer concrement from the maxillary left canine on a typodont using Gracey and universal (Deppeler M23A) curettes. At baseline (prior to instruction), a timed 5-minute session of scaling/root planing was undertaken. The second scaling/root planing session was held immediately following training. Eight experienced dental hygienists and eight laypeople served as positive and negative controls, respectively, using the same instruments and time limit. Instrumented teeth were collected, scanned and planimetrically analysed for the percentage of tooth surface cleaned. Statistical analyses were performed to assess the dental students' improvement after the training (Wilcoxon signed-rank test) and to compare it to that of laypeople and dental hygienists (Kruskal-Wallis rank sum test followed by Conover's post hoc test). At baseline, the dental students' mean scaling scores for the cleaned surfaces were not significantly different from those of laypeople (29.8%, 31.0%, 42% vs 27.9%). However, after 8.5 hours of manual training, the students' ability to clean the maxillary tooth improved significantly, and they achieved mean removal values of 61.7%, 79.5% and 76%, compared with the 67.4% of the experienced dental hygienists (P < .001). There were no statistically significant differences between the scores achieved by students after training and those achieved by experienced dental hygienists. A shortened pre-clinical training time was sufficient for students to acquire the basic scaling/root planing skills needed in preparation for clinical training. Further research is needed to identify ways to help students consistently reach the highest skill levels. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Large Scale Analyses and Visualization of Adaptive Amino Acid Changes Projects.
Vázquez, Noé; Vieira, Cristina P; Amorim, Bárbara S R; Torres, André; López-Fernández, Hugo; Fdez-Riverola, Florentino; Sousa, José L R; Reboiro-Jato, Miguel; Vieira, Jorge
2018-03-01
When changes at a few amino acid sites are the target of selection, adaptive amino acid changes in protein sequences can be identified using maximum-likelihood methods based on models of codon substitution (such as codeml). Although such methods have been employed numerous times for a variety of organisms, the time needed to collect the data and prepare the input files means that usually only tens or hundreds of coding regions are analyzed. Nevertheless, the recent availability of flexible and easy-to-use computer applications that collect relevant data (such as BDBM) and infer positively selected amino acid sites (such as ADOPS) means that the entire process is easier and quicker than before. However, as reported here, the lack of a batch option in ADOPS still precludes the analysis of hundreds or thousands of sequence files. Given the interest in and possibility of running such large-scale projects, we have also developed a database where ADOPS projects can be stored. This study therefore also presents the B+ database, which is both a data repository and a convenient interface for inspecting the information contained in ADOPS projects without the need to download and unzip the corresponding ADOPS project file. The ADOPS projects available at B+ can also be downloaded, unzipped, and opened using the ADOPS graphical interface. The availability of such a database ensures the repeatability of results, promotes data reuse with significant savings in the time needed for preparing datasets, and effortlessly allows further exploration of the data contained in ADOPS projects.
High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing
NASA Astrophysics Data System (ADS)
Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.
2015-12-01
Next-generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are orders of magnitude larger than in present-day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Existing missions, such as OCO-2, may also require rapid turn-around times for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. We present our experiences deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of their Level-2 full-physics data products. We explore optimization approaches for getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment driven by market forces. We will present how we enabled highly fault-tolerant computing in order to achieve large-scale processing as well as operational cost savings.
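The abstract does not detail HySDS internals; as one hedged illustration of the fault tolerance that spot-market processing requires, a worker can poll EC2's documented spot interruption metadata endpoint and checkpoint before termination. The checkpoint hook below is a hypothetical placeholder, and IMDSv2 token handling is omitted for brevity.

```python
import time
import requests

# AWS spot interruption notice: this instance-metadata path returns 404 until
# AWS schedules the instance for termination, then returns a JSON document
# describing the action (roughly two minutes of warning).
SPOT_ACTION_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

def checkpoint_and_requeue():
    """Hypothetical hook: persist partial results and requeue the job."""
    pass

def watch_for_interruption(poll_seconds=5):
    while True:
        try:
            resp = requests.get(SPOT_ACTION_URL, timeout=2)
            if resp.status_code == 200:   # interruption scheduled
                checkpoint_and_requeue()
                return
        except requests.RequestException:
            pass  # metadata service unreachable; keep polling
        time.sleep(poll_seconds)
```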
[Job stress in agents at the socio-educational service centers in the state of Rio Grande do Sul].
Greco, Patrícia Bitencourt Toscani; Magnago, Tânia Solange Bosi de Souza; Beck, Carmem Lúcia Colomé; Urbanetto, Janete de Souza; Prochnow, Andrea
2013-03-01
The aim of this study was to examine the association of work stress with the sociodemographic and labour characteristics, habits and working conditions of socio-educational agents in the state of Rio Grande do Sul, Brazil. A cross-sectional study was conducted with 881 agents of the Socio-educational Service Centers in the state of Rio Grande do Sul. The Brazilian version of the Job Stress Scale was applied to assess work stress. Of the agents, 19.2% were classified as being in a high-strain situation. The following factors were related to job stress: the need for counselling, lack of leisure time, day-shift work, dissatisfaction with the workplace, the need for absence from work due to health problems, and an insufficient work schedule. Further research into working conditions, together with action by the Occupational Health Service, is needed in order to minimize the effects of the psychological demands of the socio-educational agent's work.
The effects of heterogeneities on memory-dependent diffusion
NASA Astrophysics Data System (ADS)
Adib, Farhad; Neogi, P.
1993-07-01
Case II diffusion is often seen in glassy polymers, where the mass uptake in sorption is proportional to time t instead of √t. A memory-dependent diffusion is needed to explain such effects, where the relaxation function used to describe the memory effect has a characteristic time. The ratio of this time to the overall diffusion time is the diffusional Deborah number. Simple models show that case II results when the Deborah number is around one, that is, when the two time scales are comparable. Under investigation are the possible effects of the fact that glassy polymers are heterogeneous on molecular scales. The averaging form given by DiMarzio and Sanchez has been used to obtain the averaged response. The calculated dynamics of sorption show that, whereas case II is still observed, the long-term tails change dramatically from oscillatory to torpid to chaotic, all of which are observed in experiments. The Deborah number defined here in a self-consistent manner collapses in those cases, but causes no other ill effects.
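For readers unfamiliar with the scaling argument, the standard textbook definition of the diffusional Deborah number (not quoted from the paper) is:

```latex
% Diffusional Deborah number: ratio of the relaxation time \lambda of the
% memory (relaxation) function to the characteristic diffusion time \tau_D
% over a sample of thickness L with diffusivity D.
\[
  \mathrm{De} \;=\; \frac{\lambda}{\tau_D},
  \qquad
  \tau_D = \frac{L^{2}}{D} .
\]
% Case II transport is expected for \mathrm{De} \approx 1, i.e. when the
% relaxation and diffusion time scales are comparable.
```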
Scalable room-temperature conversion of copper(II) hydroxide into HKUST-1 (Cu3 (btc)2).
Majano, Gerardo; Pérez-Ramírez, Javier
2013-02-20
Copper(II) hydroxide is converted directly to HKUST-1 (Cu3(btc)2) after only 5 min at room temperature in aqueous ethanolic solution, without the need for additional solvents. Scale-up to the kilogram scale does not influence the porous properties, yielding a pure-phase product with a remarkable total surface area exceeding 1700 m² g⁻¹, featuring aggregates of nanometer-sized crystals (<600 nm) and extremely high space-time yields. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Technical Reports Server (NTRS)
1976-01-01
A million-gallon-a-day sewage treatment plant in Huntington Beach, CA converts solid sewage to activated carbon, which then treats incoming waste water. The plant is scaled up 100 times from a mobile unit NASA installed a year ago; another 100-fold scale-up will be required if the technique is employed for widespread urban sewage treatment. This unique sewage plant employs a serendipitous outgrowth of a need to manufacture activated carbon for rocket engine insulation. The process already exceeds new Environmental Protection Agency standards, and capital costs are 25% lower than those of conventional secondary treatment plants.
NASA Astrophysics Data System (ADS)
Pfister, Olivier
2017-05-01
When it comes to practical quantum computing, the two main challenges are circumventing decoherence (devastating quantum errors due to interactions with the environmental bath) and achieving scalability (as many qubits as needed for a real-life, game-changing computation). We show that using, in lieu of qubits, the "qumodes" represented by the resonant fields of the quantum optical frequency comb of an optical parametric oscillator allows one to create bona fide, large-scale quantum computing processors, pre-entangled in a cluster state. We detail our recent demonstration of 60-qumode entanglement (out of an estimated 3000) and present an extension combining this frequency-tagged entanglement with time-tagged entanglement, in order to generate an arbitrarily large, universal quantum computing processor.
Scaled Runge-Kutta algorithms for handling dense output
NASA Technical Reports Server (NTRS)
Horn, M. K.
1981-01-01
Low order Runge-Kutta algorithms are developed which determine the solution of a system of ordinary differential equations at any point within a given integration step, as well as at the end of each step. The scaled Runge-Kutta methods are designed to be used with existing Runge-Kutta formulas, using the derivative evaluations of these defining algorithms as the core of the system. For a slight increase in computing time, the solution may be generated within the integration step, improving the efficiency of the Runge-Kutta algorithms, since the step length need no longer be severely reduced to coincide with the desired output point. Scaled Runge-Kutta algorithms are presented for orders 3 through 5, along with accuracy comparisons between the defining algorithms and their scaled versions for a test problem.
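The report's scaled order-3 to order-5 formulas are not reproduced in the abstract; the sketch below illustrates the general idea of dense output (reusing a step's derivative evaluations to evaluate the solution anywhere inside the step) with a cubic Hermite interpolant over a classical RK4 step, a simpler stand-in for the scaled formulas.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical RK4 step; returns y(t+h) and the end-point derivatives."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    y_new = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y_new, k1, f(t + h, y_new)

def dense_output(y0, y1, f0, f1, h, theta):
    """Cubic Hermite interpolant: solution at t + theta*h, 0 <= theta <= 1.
    Reuses derivatives already computed at the step endpoints, so no extra
    function evaluations are needed for output inside the step."""
    return ((1 + 2 * theta) * (1 - theta) ** 2 * y0
            + theta * (1 - theta) ** 2 * h * f0
            + theta ** 2 * (3 - 2 * theta) * y1
            + theta ** 2 * (theta - 1) * h * f1)

# Example: y' = -y with y(0) = 1, exact solution exp(-t)
f = lambda t, y: -y
y1, f0, f1 = rk4_step(f, 0.0, np.array([1.0]), 0.1)
print(dense_output(np.array([1.0]), y1, f0, f1, 0.1, 0.5), np.exp(-0.05))
```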
It’s just a matter of time before we see global climate models increasing their spatial resolution to that now typical of regional models. This encroachment brings in an urgent need for making regional NWP and climate models applicable at certain finer resolutions. One of the hin...
USDA-ARS?s Scientific Manuscript database
Comprehensive control of odors, hydrogen sulfide (H2S), ammonia (NH3) and odorous volatile organic compound (VOC) emissions associated with animal production is a critical need. Current methods utilizing wind tunnels and flux chambers for measurements of gaseous emissions from area sources such as f...
Economic and Workforce Development Program Annual Report, 2016
ERIC Educational Resources Information Center
California Community Colleges, Chancellor's Office, 2016
2016-01-01
The California Community Colleges, through the Economic and Workforce Development Program (EWD), continue to propel the California economy forward by providing students with skills to earn well-paying jobs. At the same time, EWD helps provide California companies with the talent they need to compete on a global scale. This annual report for…
ERIC Educational Resources Information Center
McCormack, Tim; Schnee, Emily; VanOra, Jason
2014-01-01
Background: The field of higher education abounds with qualitative research aimed at highlighting the needs, struggles, strengths, and motivations of academically struggling students. However, because of the small-scale nature of these studies, they rarely enter the public debate or impact institutional policy concerning access, remediation,…
Testing of a large-scale mechanical cottonseed delinter: results and improvements
USDA-ARS?s Scientific Manuscript database
Traditionally, mechanically delinted seed retains 1-2% residual linters whereas acid removes all linters and is primarily used for production of planting seed. The need for a process that cleans the lint off cottonseed has been of interest to inventers and the cotton industry for some time. Most of ...
ERIC Educational Resources Information Center
Henderson, Daphne Carr; Rupley, William H.; Nichols, Janet Alys; Nichols, William Dee; Rasinski, Timothy V.
2018-01-01
Current professional development efforts in writing at the secondary level have not resulted in student improvement on large-scale writing assessments. To maximize funding resources and instructional time, school leaders need a way to determine professional development content for writing teachers that aligns with specific student outcomes. The…
Comparing an annual and daily time-step model for predicting field-scale P loss
USDA-ARS?s Scientific Manuscript database
Several models with varying degrees of complexity are available for describing P movement through the landscape. The complexity of these models is dependent on the amount of data required by the model, the number of model parameters needed to be estimated, the theoretical rigor of the governing equa...
USDA-ARS?s Scientific Manuscript database
The abundance and composition of arthropod communities in agricultural landscapes vary across space and time, responding to environmental features, resources and behavioral cues. As “second-generation” bioenergy feedstocks continue to develop, knowledge is needed about the broader scale ecological i...
Implementing Intensive Intervention: How Do We Get There from Here?
ERIC Educational Resources Information Center
Zumeta, Rebecca O.
2015-01-01
Despite years of school reform intended to help students reach high academic standards, students with disabilities continue to struggle, suggesting a need for more intensive intervention as a part of special education and multi-tiered systems of support. At the same time, greater inclusion of students with disabilities in large-scale assessment,…
USDA-ARS?s Scientific Manuscript database
Novel carbon sequestration strategies such as large-scale land application of biochar may provide sustainable pathways to increase the terrestrial storage of carbon. Biochar has a long residence time in the soil and hence comprehensive studies are urgently needed to quantify the environmental impact...
A successful trap design for capturing large terrestrial snakes
Shirley J. Burgdorf; D. Craig Rudolph; Richard N. Conner; Daniel Saenz; Richard R. Schaefer
2005-01-01
Large scale trapping protocols for snakes can be expensive and require large investments of personnel and time. Typical methods, such as pitfall and small funnel traps, are not useful or suitable for capturing large snakes. A method was needed to survey multiple blocks of habitat for the Louisiana Pine Snake (Pituophis ruthveni), throughout its...
Scaling and modeling of turbulent suspension flows
NASA Technical Reports Server (NTRS)
Chen, C. P.
1989-01-01
Scaling factors determining various aspects of particle-fluid interactions and the development of physical models to predict gas-solid turbulent suspension flow fields are discussed based on a two-fluid, continuum formulation. The modes of particle-fluid interaction are discussed based on the length and time scale ratios, which depend on the properties of the particles and the characteristics of the flow turbulence. For particle sizes smaller than or comparable with the Kolmogorov length scale, and concentrations low enough that direct particle-particle interactions can be neglected, scaling rules can be established in various parameter ranges. The various particle-fluid interactions give rise to additional mechanisms which affect the fluid mechanics of the conveying gas phase. These extra mechanisms are incorporated into a turbulence modeling method based on the scaling rules. A multiple-scale two-phase turbulence model is developed, which gives reasonable predictions for dilute suspension flows. Much work still needs to be done to account for polydisperse effects and the extension to dense suspension flows.
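As a minimal sketch of the length- and time-scale-ratio argument, the snippet below computes the Kolmogorov scales and the particle response time from textbook Stokes-drag estimates; all numerical values are illustrative assumptions, not the authors' cases.

```python
import numpy as np

# Illustrative values (not from the paper)
rho_p = 2500.0    # particle density [kg/m^3]
d_p = 50e-6       # particle diameter [m]
mu = 1.8e-5       # gas dynamic viscosity [Pa s]
rho_f = 1.2       # gas density [kg/m^3]
eps = 10.0        # turbulent dissipation rate [m^2/s^3]

nu = mu / rho_f                          # kinematic viscosity
tau_p = rho_p * d_p**2 / (18.0 * mu)     # Stokes-drag particle response time
tau_k = np.sqrt(nu / eps)                # Kolmogorov time scale
eta = (nu**3 / eps) ** 0.25              # Kolmogorov length scale

St = tau_p / tau_k                       # Stokes number: time-scale ratio
print(f"eta = {eta:.2e} m, tau_k = {tau_k:.2e} s, St = {St:.2f}")
# St << 1: particles follow the gas; St >> 1: particles decouple from it.
```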
NASA Technical Reports Server (NTRS)
Li,Hui; Faruque, Fazlay; Williams, Worth; Al-Hamdan, Mohammad; Luvall, Jeffrey; Crosson, William; Rickman, Douglas; Limaye, Ashutosh
2008-01-01
Aerosol optical depth (AOD), derived from satellite measurements using the Moderate Resolution Imaging Spectroradiometer (MODIS), offers indirect estimates of particulate matter. Research shows a significant positive correlation between satellite-based measurements of AOD and ground-based measurements of particulate matter with aerodynamic diameter less than or equal to 2.5 micrometers (PM2.5). In addition, satellite observations have shown great promise in improving estimates of the PM2.5 air quality surface. Research shows that correlations between AOD and ground PM2.5 are affected by a combination of many factors, such as the inherent characteristics of satellite observations, terrain, cloud cover, height of the mixing layer, and weather conditions, and thus might vary widely between regions, between seasons, and even between days at the same location. Day-by-day analysis of the correlation between AOD and ground-measured PM2.5 suggests that the temporal scale (the number of most recent days used for a given run day) needs to be considered to improve air quality surface estimates, especially when satellite observations are used in a real-time pollution system. Moreover, correlation coefficients between AOD and ground PM2.5 cannot be predetermined and need to be calculated for each day's run in a real-time system, because the coefficients can vary over space and time. Few studies have explored the optimal way to apply AOD data to improve the accuracy of PM2.5 surface estimation in a real-time air quality system. This paper discusses the best temporal scale at which to calculate the correlation of AOD and ground particulate matter data to improve the results of pollution models in a real-time system.
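A hedged sketch of the windowing idea follows: for each run day, correlate AOD with ground PM2.5 over several candidate windows of most recent days and keep the strongest. The column names and window lengths are hypothetical, not the paper's configuration.

```python
import pandas as pd

def best_window_corr(df, windows=(7, 14, 21, 30)):
    """For each run day, correlate AOD with ground PM2.5 over the n most
    recent days and report which window length gives the strongest
    correlation. Columns 'aod' and 'pm25' are hypothetical names."""
    results = {}
    for n in windows:
        # Rolling Pearson correlation over the latest n days, per day
        results[n] = df["aod"].rolling(n).corr(df["pm25"])
    corr = pd.DataFrame(results)
    return corr, corr.abs().idxmax(axis=1)  # per-day best window length

# df = pd.read_csv("daily_aod_pm25.csv", parse_dates=["date"], index_col="date")
# corr_by_window, best_n = best_window_corr(df)
```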
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piette, Mary Ann; Kiliccote, Sila; Ghatikar, Girish
2014-08-01
The need for and concepts behind demand response are evolving. As the electric system changes to include more intermittent renewable electric supply, there is a need to allow buildings to provide more flexible demand. This paper presents results from field studies and pilots, as well as engineering estimates of the potential capabilities of fast load responsiveness in commercial buildings. We present a sector-wide analysis of flexible loads in commercial buildings, which was conducted to improve resource planning and determine which loads to evaluate in future demonstrations. These systems provide important capabilities for future transactional systems. The field analysis is based on results from California, plus projects in the northwest and on the east coast. End-uses considered include heating, ventilation, air conditioning and lighting. The timescales of control include day-ahead, as well as day-of, 10-minute-ahead and even faster response. This technology can provide DR signals on different time scales to interact with responsive building loads. We describe the latency of the control systems in the building and the round-trip communications with the wholesale grid operators.
NASA Astrophysics Data System (ADS)
Berg, Matthew D.; Marcantonio, Franco; Allison, Mead A.; McAlister, Jason; Wilcox, Bradford P.; Fox, William E.
2016-06-01
Rangelands cover a large portion of the earth's land surface and are undergoing dramatic landscape changes. At the same time, these ecosystems face increasing expectations to meet growing water supply needs. To address major gaps in our understanding of rangeland hydrologic function, we investigated historical watershed-scale runoff and sediment yield in a dynamic landscape in central Texas, USA. We quantified the relationship between precipitation and runoff and analyzed reservoir sediment cores dated using cesium-137 and lead-210 radioisotopes. Local rainfall and streamflow showed no directional trend over a period of 85 years, resulting in a rainfall-runoff ratio that has been resilient to watershed changes. Reservoir sedimentation rates generally were higher before 1963, but have been much lower and very stable since that time. Our findings suggest that (1) rangeland water yields may be stable over long periods despite dramatic landscape changes while (2) these same landscape changes influence sediment yields that impact downstream reservoir storage. Relying on rangelands to meet water needs demands an understanding of how these dynamic landscapes function and a quantification of the physical processes at work.
Reviewing and piloting methods for decreasing discount rates; someone, somewhere in time.
Parouty, Mehraj B Y; Krooshof, Daan G M; Westra, Tjalke A; Pechlivanoglou, Petros; Postma, Maarten J
2013-08-01
There has been substantial debate on the need for decreasing discount rates for monetary and health gains in economic evaluations. Next to the discussion on differential discounting, a way to identify the need for such discounting strategies is to elicit the time preferences for monetary and health outcomes. In this article, the authors investigate the perceived time preference for money and health gains through a pilot survey of Dutch university students, using previously suggested methods based on functional forms. The formal objectives of the study were to review such existing methods and to pilot them on a convenience sample using a questionnaire designed for this specific purpose. Indeed, a negative relation between the time of delay and the variance of the discounting rate was observed for all models. This study was intended as a pilot for a large-scale population-based investigation, using the findings from this pilot on the wording of the questionnaire, interpretation, scope and analytic framework.
Real-Time Monitoring System for a Utility-Scale Photovoltaic Power Plant.
Moreno-Garcia, Isabel M; Palacios-Garcia, Emilio J; Pallares-Lopez, Victor; Santiago, Isabel; Gonzalez-Redondo, Miguel J; Varo-Martinez, Marta; Real-Calvo, Rafael J
2016-05-26
There is, at present, considerable interest in the storage and dispatchability of photovoltaic (PV) energy, together with the need to manage power flows in real time. This paper presents a new system, PV-on time, which has been developed to supervise the operating mode of a grid-connected utility-scale PV power plant in order to ensure the reliability and continuity of its supply. The system presents an architecture of acquisition devices, including wireless sensors distributed around the plant, which measure the required information. It is also equipped with a high-precision protocol for synchronizing all data acquisition equipment, something that is necessary for correctly establishing relationships among events in the plant. Moreover, a system for monitoring and supervising all of the distributed devices, as well as for the real-time treatment of all the registered information, is presented. Performance was analyzed in a 400 kW transformation center belonging to a 6.1 MW utility-scale PV power plant. In addition to monitoring the performance of all of the PV plant's components and detecting any failures or deviations in production, this system enables users to control the power quality of the injected signal and the influence of the installation on the distribution grid.
Delineating environmental control of phytoplankton biomass and phenology in the Southern Ocean
NASA Astrophysics Data System (ADS)
Ardyna, Mathieu; Claustre, Hervé; Sallée, Jean-Baptiste; D'Ovidio, Francesco; Gentili, Bernard; van Dijken, Gert; D'Ortenzio, Fabrizio; Arrigo, Kevin R.
2017-05-01
The Southern Ocean (SO), an area highly sensitive to climate change, is currently experiencing rapid warming and freshening. Such drastic physical changes might significantly alter the SO's biological pump. For more accurate predictions of the possible evolution of this pump, a better understanding of the environmental factors controlling SO phytoplankton dynamics is needed. Here we present a satellite-based study deciphering the complex environmental control of phytoplankton biomass (PB) and phenology (PH; timing and magnitude of phytoplankton blooms) in the SO. We reveal that PH and PB are mostly organized in the SO at two scales: a large latitudinal scale and a regional scale. Latitudinally, a clear gradient in the timing of bloom occurrence appears tightly linked to the seasonal cycle in irradiance, with some exceptions in specific light-limited regimes (i.e., well-mixed areas). Superimposed on this latitudinal scale, zonal asymmetries, up to 3 orders of magnitude, in regional-scale PB are mainly driven by local advective and iron supply processes. These findings provide a global understanding of PB and PH in the SO, which is of fundamental interest for identifying and explaining ongoing changes as well as predicting future changes in the SO biological pump.
Dust Destruction in the ISM: A Re-Evaluation of Dust Lifetimes
NASA Technical Reports Server (NTRS)
Jones, A. P.; Nuth, J. A., III
2011-01-01
There is a long-standing conundrum in interstellar dust studies relating to the discrepancy between the time-scales for dust formation from evolved stars and the apparently more rapid destruction in supernova-generated shock waves. Aims. We re-examine some of the key issues relating to dust evolution and processing in the interstellar medium. Methods. We use recent and new constraints from observations, experiments, modelling and theory to re-evaluate dust formation in the interstellar medium (ISM). Results. We find that the discrepancy between the dust formation and destruction time-scales may not be as significant as has previously been assumed because of the very large uncertainties involved. Conclusions. The derived silicate dust lifetime could be compatible with its injection time-scale, given the inherent uncertainties in the dust lifetime calculation. The apparent need to re-form significant quantities of silicate dust in the tenuous interstellar medium may therefore not be a strong requirement. Carbonaceous matter, on the other hand, appears to be rapidly recycled in the ISM and, in contrast to silicates, there are viable mechanisms for its re-formation in the ISM.
Short and long-term energy intake patterns and their implications for human body weight regulation.
Chow, Carson C; Hall, Kevin D
2014-07-01
Adults consume millions of kilocalories over the course of a few years, but the typical weight gain amounts to only a few thousand kilocalories of stored energy. Furthermore, food intake is highly variable from day to day and yet body weight is remarkably stable. These facts have been used as evidence to support the hypothesis that human body weight is regulated by active control of food intake operating on both short and long time scales. Here, we demonstrate that active control of human food intake on short time scales is not required for body weight stability and that the current evidence for long term control of food intake is equivocal. To provide more data on this issue, we emphasize the urgent need for developing new methods for accurately measuring energy intake changes over long time scales. We propose that repeated body weight measurements can be used along with mathematical modeling to calculate long-term changes in energy intake and thereby quantify adherence to a diet intervention and provide dynamic feedback to individuals that seek to control their body weight. Published by Elsevier Inc.
Towards a Millennial Time-scale Vertical Deformation Field in Taiwan
NASA Astrophysics Data System (ADS)
Bordovaos, P. A.; Johnson, K. M.
2015-12-01
To better understand the feedbacks between erosion and deformation in Taiwan, we need constraints on the millennial time-scale vertical deformation field. Dense GPS and leveling data sets in Taiwan provide measurements of the present-day vertical deformation field over the entire island. However, it is unclear how much of this vertical field is transient (varying over the earthquake cycle) or steady (over millennial time scales). A deformation model is required to decouple transient from steady deformation. This study examines how the 82 mm/yr of convergence between the Eurasian plate and the Philippine Sea plate is distributed across the faults of Taiwan. We build a plate flexure model that consists of all known active faults and subduction zones cutting through an elastic plate supported by buoyancy. We use horizontal and vertical GPS data, leveling data, and geologic surface uplift rates with a Monte Carlo probabilistic inversion method to infer fault slip rates and locking depths on all faults. Using our model, we examine how different fault geometries influence the estimated distribution of slip along faults and the deformation patterns.
The Gaussian copula model for the joint deficit index for droughts
NASA Astrophysics Data System (ADS)
Van de Vyver, H.; Van den Bergh, J.
2018-06-01
The characterization of droughts and their impacts is highly dependent on the time scale involved. In order to obtain an overall drought assessment, the cumulative effects of water deficits over different time scales need to be examined together. For example, the recently developed joint deficit index (JDI) is based on multivariate probabilities of precipitation over various time scales from 1 to 12 months, and was constructed from empirical copulas. In this paper, we examine the Gaussian copula model for the JDI. We model the covariance across the temporal scales with a two-parameter function that is commonly used in the specific context of spatial statistics or geostatistics. The validity of the covariance models is demonstrated with long-term precipitation series. Bootstrap experiments indicate that the Gaussian copula model has advantages over the empirical copula method in the context of drought severity assessment: (i) it is able to quantify droughts outside the range of the empirical copula, (ii) it provides adequate drought quantification, and (iii) it provides a better understanding of the uncertainty in the estimation.
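A minimal numerical sketch of the Gaussian-copula construction follows: marginal non-exceedance probabilities of precipitation at the 1-12 month scales are mapped to normal scores and combined through a multivariate normal CDF, then standardized, following the usual JDI definition. The exponential correlation function below is a stand-in assumption for the paper's two-parameter geostatistical model.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Correlation across the 12 aggregation scales; an exponential decay in
# |i - j| stands in for the paper's two-parameter covariance function.
scales = np.arange(1, 13)
range_param = 4.0
R = np.exp(-np.abs(scales[:, None] - scales[None, :]) / range_param)

def joint_deficit_index(u):
    """u: marginal (empirical) non-exceedance probabilities of precipitation
    over the 1..12 month scales for one month. The Gaussian copula gives the
    joint probability, which is mapped back to a standard-normal index."""
    z = norm.ppf(np.clip(u, 1e-6, 1 - 1e-6))          # normal scores
    p_joint = multivariate_normal(mean=np.zeros(12), cov=R).cdf(z)
    return norm.ppf(p_joint)                           # JDI-style index

print(joint_deficit_index(np.full(12, 0.2)))           # uniformly dry month
```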
How life changes itself: the Read-Write (RW) genome.
Shapiro, James A
2013-09-01
The genome has traditionally been treated as a Read-Only Memory (ROM) subject to change by copying errors and accidents. In this review, I propose that we need to change that perspective and understand the genome as an intricately formatted Read-Write (RW) data storage system constantly subject to cellular modifications and inscriptions. Cells operate under changing conditions and are continually modifying themselves by genome inscriptions. These inscriptions occur over three distinct time-scales (cell reproduction, multicellular development and evolutionary change) and involve a variety of different processes at each time scale (forming nucleoprotein complexes, epigenetic formatting and changes in DNA sequence structure). Research dating back to the 1930s has shown that genetic change is the result of cell-mediated processes, not simply accidents or damage to the DNA. This cell-active view of genome change applies to all scales of DNA sequence variation, from point mutations to large-scale genome rearrangements and whole genome duplications (WGDs). This conceptual change to active cell inscriptions controlling RW genome functions has profound implications for all areas of the life sciences. © 2013 Elsevier B.V. All rights reserved.
Ortega, Javier; García-Rayo, Ramón; Resines, Carlos
2009-01-01
This article presents a simple technique for fascia lata lengthening that is less aggressive, can be performed under local anaesthetic with little morbidity and disability, and has excellent results. Eleven patients (13 hips) were enrolled in this study. Mean age was 54.6 years; there were one man and ten women. Outcomes were assessed using a visual analogue pain scale, the Harris hip score and a Likert scale (satisfaction). The mean follow-up time was 43 months (range 15–84). All patients were scored on the Harris hip scale, with a mean improvement from 61 (range 48–77) to 91 (range 76–95) after surgery. The mean visual analogue scale (VAS) score improved from 83 (range 60–99) to 13 (range 0–70). Twelve of 13 patients reported a good result. Mean surgical time was 15 min, and only one seroma was reported as a complication. No inpatient management was needed. In conclusion, distal "Z" lengthening of the fascia lata appears to be a good alternative for the treatment of this condition. PMID:19214507
ColorMoves: Optimizing Color's Potential for Exploration and Communication of Data
NASA Astrophysics Data System (ADS)
Samsel, F.
2017-12-01
Color is the most powerful perceptual channel available for exposing and communicating data. Most visualizations are rendered in one of a handful of common colormaps: the rainbow, cool-warm, heat map and viridis. These maps meet the basic criteria for encoding data: perceptual uniformity and reasonable discriminatory power. However, as the size and complexity of data grow, our need to optimize the potential of color grows with them. The ability to expose greater detail and differentiate between multiple variables becomes ever more important. To meet this need we have created ColorMoves, an interactive colormap construction tool that enables scientists to quickly and easily concentrate contrast in the data ranges of interest. Perceptual research tells us that luminance is the strongest contrast and thus provides the highest degree of perceptual discrimination. However, the most commonly used colormaps contain a limited range of luminance contrast. ColorMoves enables the interactive construction of colormaps, allowing one to distribute luminance where it is most needed. The interactive interface enables optimal placement of the color scales, and the ability to watch the changes on one's data in real time makes precision adjustment quick and easy. By enabling more precise placement and multiple ranges of luminance, one can construct colormaps with greater discriminatory power, and by selecting from the wide range of color scale hues, scientists can create colormaps intuitive to their subject. ColorMoves comprises four main components: a set of 40 color scales; a histogram of the data distribution; a viewing area showing the colormap on the data; and the controls section. The 40 color scales span the spectrum of hues, saturation levels and value distributions. The histogram of the data distribution enables placement of the color scales at precise locations. The viewing area shows the impact of changes on the data in real time. The controls section enables export of the constructed colormaps for use in tools such as ParaView and Matplotlib. For a clearer understanding of ColorMoves' capabilities we recommend trying it out at SciVisColor.org.
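ColorMoves itself is interactive, but the underlying idea (spending most of the luminance contrast on the data range of interest) can be sketched with Matplotlib's LinearSegmentedColormap by crowding color stops into a narrow normalized interval; the stops and colors below are illustrative choices, not ColorMoves output.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

# Concentrate contrast in the 0.4-0.6 range of the normalized data: most of
# the luminance change is spent where the features of interest live.
stops = [(0.00, "#08306b"),   # dark blue for the low, uninteresting range
         (0.40, "#2171b5"),
         (0.50, "#f7f7f7"),   # rapid dark-light-dark transition around 0.5
         (0.60, "#cb181d"),
         (1.00, "#67000d")]   # dark red for the high range
cmap = LinearSegmentedColormap.from_list("focused", stops)

data = np.random.rand(64, 64)
plt.imshow(data, cmap=cmap)
plt.colorbar()
plt.show()
```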
Towards large-scale plasma-assisted synthesis of nanowires
NASA Astrophysics Data System (ADS)
Cvelbar, U.
2011-05-01
Large quantities of nanomaterials, e.g. nanowires (NWs), are needed to overcome the high market price of nanomaterials and make nanotechnology widely available for general public use and applications to numerous devices. Therefore, there is an enormous need for new methods or routes for synthesis of those nanostructures. Here plasma technologies for synthesis of NWs, nanotubes, nanoparticles or other nanostructures might play a key role in the near future. This paper presents a three-dimensional problem of large-scale synthesis connected with the time, quantity and quality of nanostructures. Herein, four different plasma methods for NW synthesis are presented in contrast to other methods, e.g. thermal processes, chemical vapour deposition or wet chemical processes. The pros and cons are discussed in detail for the case of two metal oxides: iron oxide and zinc oxide NWs, which are important for many applications.
Gülgöz, S
2001-01-01
The cross-cultural validity of the Need for Cognition Scale and its relationship with cognitive performance were investigated in two studies. In the first study, the relationships between the scale and university entrance scores, course grades, study skills, and social desirability were examined. Using the short form of the Turkish version of the Need for Cognition Scale (S. Gülgöz & C. J. Sadowski, 1995), no correlation with academic performance was found, but there were significant correlations with a study skills scale and a social desirability scale created for this study. When regression analysis was used to predict grade point average, the Need for Cognition Scale was a significant predictor. In the second study, participants low or high in need for cognition solved multiple-solution anagrams. The instructions preceding the task set the participants' expectations regarding task difficulty. An interaction between expectation and need for cognition indicated that participants with low need for cognition performed worse when they expected difficult problems. The results of the two studies showed that need for cognition has cross-cultural validity and that its effect on cognitive performance is mediated by other variables.
A perspective on sustained marine observations for climate modelling and prediction
Dunstone, Nick J.
2014-01-01
Here, I examine some of the many varied ways in which sustained global ocean observations are used in numerical modelling activities. In particular, I focus on the use of ocean observations to initialize predictions in ocean and climate models. Examples are also shown of how models can be used to assess the impact of both current ocean observations and to simulate that of potential new ocean observing platforms. The ocean has never been better observed than it is today and similarly ocean models have never been as capable at representing the real ocean as they are now. However, there remain important unanswered questions that can likely only be addressed via future improvements in ocean observations. In particular, ocean observing systems need to respond to the needs of the burgeoning field of near-term climate predictions. Although new ocean observing platforms promise exciting new discoveries, there is a delicate balance to be made between their funding and that of the current ocean observing system. Here, I identify the need to secure long-term funding for ocean observing platforms as they mature, from a mainly research exercise to an operational system for sustained observation over climate change time scales. At the same time, considerable progress continues to be made via ship-based observing campaigns and I highlight some that are dedicated to addressing uncertainties in key ocean model parametrizations. The use of ocean observations to understand the prominent long time scale changes observed in the North Atlantic is another focus of this paper. The exciting first decade of monitoring of the Atlantic meridional overturning circulation by the RAPID-MOCHA array is highlighted. The use of ocean and climate models as tools to further probe the drivers of variability seen in such time series is another exciting development. I also discuss the need for a concerted combined effort from climate models and ocean observations in order to understand the current slow-down in surface global warming. PMID:25157195
“Superluminal” FITS File Processing on Multiprocessors: Zero Time Endian Conversion Technique
NASA Astrophysics Data System (ADS)
Eguchi, Satoshi
2013-05-01
FITS is the standard file format in astronomy, and it has been extended to meet the astronomical needs of the day. However, astronomical datasets have been inflating year by year. In the case of the ALMA telescope, a ~TB-scale four-dimensional data cube may be produced for one target. Considering that typical Internet bandwidth is tens of MB/s at most, the original data cubes in FITS format are hosted on a VO server, and the region in which a user is interested should be cut out and transferred to the user (Eguchi et al. 2012). The system will be equipped with a very high-speed disk array to process a TB-scale data cube in 10 s, so disk I/O, endian conversion, and data processing speeds will be comparable. Hence, reducing the endian conversion time is one of the issues to solve in our system. In this article, I introduce a technique named "just-in-time endian conversion", which delays the endian conversion for each pixel until just before it is really needed, to sweep out the endian conversion time; by applying this method, the FITS processing speed increases by 20% for single threading and 40% for multi-threading compared to CFITSIO. The speedup is tightly related to modern CPU architecture: breaking the "causality" of the programmed instruction code sequence improves the efficiency of the instruction pipelines.
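The article's implementation is not shown in the abstract; a rough NumPy analogue of the just-in-time idea is to keep the big-endian FITS buffer in its on-disk byte order and convert only the pixels actually accessed, rather than byteswapping the whole cube up front.

```python
import numpy as np

# FITS stores pixels big-endian; simulate a raw on-disk buffer.
buf = np.arange(1_000_000, dtype=">f4").tobytes()

# Eager conversion: byteswap the entire buffer before any pixel is touched.
eager = np.frombuffer(buf, dtype=">f4").astype("<f4")   # pays full cost now

# Lazy ("just-in-time") view: keep the big-endian dtype; each pixel is
# converted only when it is actually read, so a small cutout of a TB-scale
# cube never pays for converting the rest.
lazy = np.frombuffer(buf, dtype=">f4")
cutout = lazy[1000:2000].astype("<f4")   # convert only the region of interest
```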
As a Matter of Force—Systematic Biases in Idealized Turbulence Simulations
NASA Astrophysics Data System (ADS)
Grete, Philipp; O’Shea, Brian W.; Beckwith, Kris
2018-05-01
Many astrophysical systems encompass very large dynamical ranges in space and time, which are not accessible by direct numerical simulations. Thus, idealized subvolumes are often used to study small-scale effects, including the dynamics of turbulence. These turbulent boxes require artificial driving in order to mimic energy injection from large-scale processes. In this Letter, we show and quantify how the autocorrelation time of the driving and its normalization systematically change the properties of an isothermal compressible magnetohydrodynamic flow in the sub- and supersonic regimes and affect astrophysical observations such as Faraday rotation. For example, we find that δ-in-time forcing with a constant energy injection leads to a steeper slope in the kinetic energy spectrum and less-efficient small-scale dynamo action. In general, we show that shorter autocorrelation times require more power in the acceleration field, which results in more power in compressive modes that weaken the anticorrelation between density and magnetic field strength. Thus, derived observables, such as the line-of-sight (LOS) magnetic field from rotation measures, are systematically biased by the driving mechanism. We argue that δ-in-time forcing is unrealistic and numerically unresolved, and conclude that special care needs to be taken in interpreting observational results based on the use of idealized simulations.
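The Letter's driving scheme is not reproduced here; a common way to give the acceleration field a finite autocorrelation time is an Ornstein-Uhlenbeck update, sketched below, which reduces to δ-in-time forcing in the limit of vanishing correlation time.

```python
import numpy as np

def ou_update(a, dt, t_corr, sigma, rng):
    """One Ornstein-Uhlenbeck step for the driving acceleration field `a`.
    t_corr is the autocorrelation time; sigma sets the stationary rms.
    As t_corr -> 0 the field decorrelates every step (delta-in-time limit)."""
    damp = np.exp(-dt / t_corr)
    noise = sigma * np.sqrt(1.0 - damp**2) * rng.standard_normal(a.shape)
    return damp * a + noise

rng = np.random.default_rng(0)
a = np.zeros(64**3)          # flattened acceleration field (illustrative size)
for _ in range(100):
    a = ou_update(a, dt=1e-3, t_corr=0.5, sigma=1.0, rng=rng)
```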
A High-Resolution Record of Holocene Climate Variability from a Western Canadian Coastal Inlet
NASA Astrophysics Data System (ADS)
Dallimore, A.; Thomson, R. E.; Enkin, R. J.; Kulikov, E. A.; Bertram, M. A.; Wright, C. A.; Southon, J. R.; Barrie, J. V.; Baker, J.; Pienitz, R.; Calvert, S. E.; Chang, A. S.; Pedersen, T. F.
2004-12-01
Conditions within the Pacific Ocean have a major effect on the climate of northwestern North America. High-resolution records of present and past northeast Pacific climate are revealed in our multi-disciplinary study of annually laminated marine sediments from anoxic coastal inlets of British Columbia. Past climate conditions for the entire Holocene are recorded in a 40-meter, annually laminated marine sediment core taken in Effingham Inlet, on the west coast of Vancouver Island, British Columbia, from the French ship Marion Dufresne, as part of the international IMAGES program. By combining our eight-year continuous instrument record of modern coastal ocean dynamics and climate with high-resolution analysis of depositional processes, we have been able to develop proxy measurements of past climatic and oceanographic changes on annual to millennial time scales. Results indicate that regional climate has oscillated on a variety of time scales throughout the Holocene. At times, climatic change has been dramatically rapid. We are also developing digital methods for statistical time-series analyses of physical sediment properties through the Holocene in order to obtain a more objective quantitative approach for detecting cyclicity in our data. Results of the time-series analysis of lamination thickness reveal statistically significant spectral peaks of climate-scale variability at established decadal to century time scales. These in turn may be related to solar cycles and quasi-cyclical ocean processes such as the Pacific Decadal Oscillation. However, the annually laminated time series are periodically interrupted by massive mud intervals, which are related to bottom currents and at times paleo-seismic events, illustrating the need for a full understanding of modern oceanographic and sedimentation processes so that an accurate proxy record of past climate can be established.
Nishiguchi, Yuki; Takano, Keisuke; Tanno, Yoshihiko
2016-07-30
Previous studies have shown a negative correlation between effortful control (EC) and depressive symptoms. EC is defined as the efficiency of executive attention, which may be reduced by the attentional impairment associated with depression. However, the mechanism underlying this correlation is still unclear. We investigated the relationship between EC and depressive symptoms with the hypothesis that cognitive motivation, or need for cognition (NfC), is a possible mediator of this relationship. Participants were 178 Japanese university students. Each completed the Zung Self-Rating Depression Scale, Effortful Control Scale, and Need for Cognition Scale at baseline and follow-up assessments. Supporting our hypothesis, mediation analyses revealed a significant indirect effect of depressive symptoms on EC that was mediated by NfC. In addition, our data demonstrated a direct effect of depressive symptoms on EC. Longitudinal analysis indicated that an increase in depression and a decrease in NfC occurred synchronously, while NfC predicted an increase in EC over time. Depressive symptoms may decrease executive functioning and effortful control both directly and indirectly, the latter effect being mediated by motivation. These findings imply that a motivational deficit may partially explain the decreased EC found in people suffering from depression. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
A CRITICAL ASSESSMENT OF BIODOSIMETRY METHODS FOR LARGE-SCALE INCIDENTS
Swartz, Harold M.; Flood, Ann Barry; Gougelet, Robert M.; Rea, Michael E.; Nicolalde, Roberto J.; Williams, Benjamin B.
2014-01-01
Recognition is growing regarding the possibility that terrorism or large-scale accidents could result in potential radiation exposure of hundreds of thousands of people and that the present guidelines for evaluation after such an event are seriously deficient. Therefore, there is a great and urgent need for after-the-fact biodosimetric methods to estimate radiation dose. To accomplish this goal, the dose estimates must be at the individual level, timely, accurate, and plausibly obtained in large-scale disasters. This paper evaluates current biodosimetry methods, focusing on their strengths and weaknesses in estimating human radiation exposure in large-scale disasters at three stages. First, the authors evaluate biodosimetry's ability to determine which individuals did not receive a significant exposure so they can be removed from the acute response system. Second, biodosimetry's capacity to classify those initially assessed as needing further evaluation into treatment-level categories is assessed. Third, biodosimetry's ability to guide treatment, both short- and long-term, is reviewed. The authors compare biodosimetric methods that are based on physical vs. biological parameters and evaluate the features of current dosimeters (capacity, speed and ease of getting information, and accuracy) to determine which are most useful in meeting patients' needs at each of the different stages. Results indicate that the biodosimetry methods differ in their applicability to the three different stages, and that combining physical and biological techniques may sometimes be most effective. In conclusion, biodosimetry techniques have different properties, and knowledge of their properties for meeting the different needs at different stages will result in their most effective use in a nuclear disaster mass-casualty event. PMID:20065671
Bethge, Matthias; Radoschewski, Friedrich Michael; Gutenbrunner, Christoph
2012-11-01
To evaluate the predictive value of the Work Ability Index (WAI) for different indicators of the need for rehabilitation at 1-year follow-up. Cohort study. Data were obtained from the Second German Sociomedical Panel of Employees, a large-scale cohort study with postal surveys in 2009 and 2010. A total of 457 women and 579 men were included. Confirmatory factor analysis confirmed the one-dimensionality of the WAI. Regression analyses showed that poor and moderate baseline WAI scores were associated with lower health-related quality of life and more frequent use of primary healthcare 1 year later. Subjects with poor baseline work ability had 4.6 times higher odds of unemployment and 12.2 times higher odds of prolonged sick leave than the reference group with good or excellent baseline work ability. Moreover, the odds of subjectively perceived need for rehabilitation, intention to request rehabilitation and actual use of rehabilitation services were 9.7, 5.7 and 3 times higher in the poor baseline WAI group and 5.5, 4 and 1.8 times higher in the moderate baseline WAI group, respectively. A WAI score ≤ 37 was identified as the optimal cut-off to predict the need for rehabilitation. The WAI is a valid screening tool for identifying the need for rehabilitation.
Palliative sedation: reliability and validity of sedation scales.
Arevalo, Jimmy J; Brinkkemper, Tijn; van der Heide, Agnes; Rietjens, Judith A; Ribbe, Miel; Deliens, Luc; Loer, Stephan A; Zuurmond, Wouter W A; Perez, Roberto S G M
2012-11-01
Observer-based sedation scales have been used to provide a measurable estimate of the comfort of nonalert patients in palliative sedation. However, their usefulness and appropriateness in this setting has not been demonstrated. To study the reliability and validity of observer-based sedation scales in palliative sedation. A prospective evaluation of 54 patients under intermittent or continuous sedation with four sedation scales was performed by 52 nurses. Included scales were the Minnesota Sedation Assessment Tool (MSAT), Richmond Agitation-Sedation Scale (RASS), Vancouver Interaction and Calmness Scale (VICS), and a sedation score proposed in the Guideline for Palliative Sedation of the Royal Dutch Medical Association (KNMG). Inter-rater reliability was tested with the intraclass correlation coefficient (ICC) and Cohen's kappa coefficient. Correlations between the scales using Spearman's rho tested concurrent validity. We also examined construct, discriminative, and evaluative validity. In addition, nurses completed a user-friendliness survey. Overall moderate to high inter-rater reliability was found for the VICS interaction subscale (ICC = 0.85), RASS (ICC = 0.73), and KNMG (ICC = 0.71). The largest correlation between scales was found for the RASS and KNMG (rho = 0.836). All scales showed discriminative and evaluative validity, except for the MSAT motor subscale and VICS calmness subscale. Finally, the RASS was less time consuming, clearer, and easier to use than the MSAT and VICS. The RASS and KNMG scales stand as the most reliable and valid among the evaluated scales. In addition, the RASS was less time consuming, clearer, and easier to use than the MSAT and VICS. Further research is needed to evaluate the impact of the scales on better symptom control and patient comfort. Copyright © 2012 U.S. Cancer Pain Relief Committee. Published by Elsevier Inc. All rights reserved.
Fish scale artefact on an intraoral imaging receptor.
Buchanan, Allison; Morales, Carla; Looney, Stephen; Kalathingal, Sajitha
2017-12-01
To describe an artefact, termed the fish scale artefact, present on an intraoral imaging receptor. Thirty brand new DIGORA Optime photostimulable phosphor (PSP) plates (Soredex/Orion Corp., Helsinki, Finland) were imaged using the dental digital quality assurance radiographic phantom (Dental Imaging Consultants LLC, San Antonio, TX). All PSP plates were scanned at the same spatial resolution (dpi) using the high resolution mode. Two evaluators assessed all 30 plates. Each evaluator assessed the 30 PSP plates separately for purposes of establishing interrater reliability, and then together in order to obtain the gold standard result. The fish scale artefact was detected on 46.7% of the PSP plates. The kappa coefficient for interrater reliability was 0.86 [95% CI (0.69-1.00)], indicating excellent interrater reliability. For Evaluator 1, sensitivity was 0.85 [95% CI (0.55-0.98)]; specificity was 0.94 [CI (0.71-1.00)] and overall accuracy was 0.90 [95% CI (0.73-0.98)]. For Evaluator 2, sensitivity was 1.00 [95% CI (0.75-1.00)]; specificity was 0.94 [CI (0.71-1.00)] and overall accuracy was 0.97 [95% CI (0.83-1.00)]. These results indicate excellent agreement with the gold standard for both evaluators. Utilizing a comprehensive quality assurance protocol, we identified a fish scale artefact inherent to the image receptor. Additional research is needed to determine if the artefact remains static over time or if it increases over time. Likewise, research to determine the potential sources contributing to an increase in the artefact is needed.
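Method note: the reported evaluator statistics all derive from a 2x2 table against the gold standard. The sketch below shows the standard formulas, with counts chosen to roughly reproduce Evaluator 1's reported values; the counts themselves are an inference, not published data.

```python
# Hypothetical 2x2 counts vs. the gold standard (artefact present/absent),
# chosen to approximate Evaluator 1: sens ~0.85, spec ~0.94, accuracy ~0.90.
tp, fp, fn, tn = 12, 1, 2, 15
n = tp + fp + fn + tn

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / n

# Cohen's kappa: chance-corrected agreement between rater and gold standard.
po = accuracy
pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
kappa = (po - pe) / (1 - pe)

print(round(sensitivity, 3), round(specificity, 3), round(accuracy, 3), round(kappa, 2))
```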
Partner relationship, social support and perinatal distress among pregnant Icelandic women.
Jonsdottir, Sigridur Sia; Thome, Marga; Steingrimsdottir, Thora; Lydsdottir, Linda Bara; Sigurdsson, Jon Fridrik; Olafsdottir, Halldora; Swahnberg, Katarina
2017-02-01
It is inferred that perinatal distress has adverse effects on the prospective mother and the health of the foetus/infant. More knowledge is needed to identify which symptoms of perinatal distress should be assessed during pregnancy and to shed light on the impact of women's satisfaction with their partner relationship on perinatal distress. The current study aimed to generate knowledge about the association of the partner relationship and social support with perinatal distress, expressed by symptoms of depression, anxiety and stress. A structured interview was conducted with 562 Icelandic women who were screened three times during pregnancy with the Edinburgh Depression Scale and the Depression, Anxiety, Stress Scale. Of these, 360 had symptoms of distress and 202 belonged to a non-distress group. The women answered the Multidimensional Scale of Perceived Social Support and the Dyadic Adjustment Scale. The study had a multicentre prospective design allowing for exploration of associations with perinatal distress. Women who were dissatisfied in their partner relationship were four times more likely to experience perinatal distress. Women with perinatal distress scored highest on the DASS Stress Subscale, and the second highest scores were found on the Anxiety Subscale. Satisfaction in the partner relationship is related to perinatal distress and needs to be assessed when health care professionals take care of distressed pregnant women, their partners and their families. Assessment of stress and anxiety should be included in the evaluation of perinatal distress, along with symptoms of depression. Copyright © 2016 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Laleian, A.; Valocchi, A. J.; Werth, C. J.
2017-12-01
Multiscale models of reactive transport in porous media are capable of capturing complex pore-scale processes while leveraging the efficiency of continuum-scale models. In particular, porosity changes caused by biofilm development yield complex feedbacks between transport and reaction that are difficult to quantify at the continuum scale. Pore-scale models, needed to accurately resolve these dynamics, are often impractical for applications due to their computational cost. To address this challenge, we are developing a multiscale model of biofilm growth in which non-overlapping regions at pore and continuum spatial scales are coupled with a mortar method providing continuity at interfaces. We explore two decompositions of coupled pore-scale and continuum-scale regions to study biofilm growth in a transverse mixing zone. In the first decomposition, all reaction is confined to a pore-scale region extending the transverse mixing zone length. Only solute transport occurs in the surrounding continuum-scale regions. Relative to a fully pore-scale result, we find the multiscale model with this decomposition has a reduced run time and consistent result in terms of biofilm growth and solute utilization. In the second decomposition, reaction occurs in both an up-gradient pore-scale region and a down-gradient continuum-scale region. To quantify clogging, the continuum-scale model implements empirical relations between porosity and continuum-scale parameters, such as permeability and the transverse dispersion coefficient. Solutes are sufficiently mixed at the end of the pore-scale region, such that the initial reaction rate is accurately computed using averaged concentrations in the continuum-scale region. Relative to a fully pore-scale result, we find accuracy of biomass growth in the multiscale model with this decomposition improves as the interface between pore-scale and continuum-scale regions moves downgradient where transverse mixing is more fully developed. Also, this decomposition poses additional challenges with respect to mortar coupling. We explore these challenges and potential solutions. While recent work has demonstrated growing interest in multiscale models, further development is needed for their application to field-scale subsurface contaminant transport and remediation.
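Method note: the abstract says the continuum-scale model uses empirical relations between porosity and parameters such as permeability, but does not name them. A commonly assumed choice for clogging problems is a normalized Kozeny-Carman form, sketched here; the relation and the numbers are illustrative assumptions, not the authors' stated formulation.

```python
def kozeny_carman(phi, phi0, k0):
    """Permeability scaled from a reference porosity phi0 and permeability k0
    using the normalized Kozeny-Carman relation (an assumed, common choice)."""
    return k0 * (phi**3 / (1 - phi)**2) / (phi0**3 / (1 - phi0)**2)

# Example: biofilm growth reduces porosity from 0.40 to 0.25.
print(kozeny_carman(0.25, 0.40, k0=1e-11))  # m^2, illustrative value
```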
Calibration of decadal ensemble predictions
NASA Astrophysics Data System (ADS)
Pasternack, Alexander; Rust, Henning W.; Bhend, Jonas; Liniger, Mark; Grieger, Jens; Müller, Wolfgang; Ulbrich, Uwe
2017-04-01
Decadal climate predictions are of great socio-economic interest due to the corresponding planning horizons of several political and economic decisions. Because of the uncertainties of weather and climate forecasts (e.g., initial-condition uncertainty), they are issued in a probabilistic way. One issue frequently observed for probabilistic forecasts is that they tend not to be reliable, i.e. the forecast probabilities are not consistent with the relative frequencies of the associated observed events. Thus, these kinds of forecasts need to be re-calibrated. While re-calibration methods for seasonal time scales are available and frequently applied, these methods still have to be adapted for decadal time scales and their characteristic problems, such as climate trend and lead-time dependent bias. We therefore propose a method to re-calibrate decadal ensemble predictions that takes the above-mentioned characteristics into account. Finally, this method is applied to and validated on decadal forecasts from the MiKlip system (Germany's initiative for decadal prediction).
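Method note: the simplest component of such a re-calibration is removing a lead-time dependent ensemble-mean bias from hindcasts. A minimal sketch under that assumption; the MiKlip recalibration additionally handles trend and ensemble spread, which this toy omits, and the array shapes are placeholders.

```python
import numpy as np

def debias_by_lead(hindcast, obs):
    """hindcast: (n_starts, n_leads, n_members); obs: (n_starts, n_leads).
    Removes the ensemble-mean bias separately for each lead time."""
    ens_mean = hindcast.mean(axis=2)               # (n_starts, n_leads)
    bias = (ens_mean - obs).mean(axis=0)           # (n_leads,)
    return hindcast - bias[None, :, None]

# Hypothetical toy data: 20 start years, 10 lead years, 15 members,
# with a bias that grows linearly with lead time (a simple "drift").
rng = np.random.default_rng(1)
obs = rng.normal(size=(20, 10))
hindcast = obs[:, :, None] + 0.5 * np.arange(10)[None, :, None] + rng.normal(size=(20, 10, 15))

corrected = debias_by_lead(hindcast, obs)
print(np.abs((corrected.mean(axis=2) - obs).mean(axis=0)).max())  # ~0 after correction
```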
Quantifying the Behavior of Stock Correlations Under Market Stress
Preis, Tobias; Kenett, Dror Y.; Stanley, H. Eugene; Helbing, Dirk; Ben-Jacob, Eshel
2012-01-01
Understanding correlations in complex systems is crucial in the face of turbulence, such as the ongoing financial crisis. However, in complex systems, such as financial systems, correlations are not constant but instead vary in time. Here we address the question of quantifying state-dependent correlations in stock markets. Reliable estimates of correlations are absolutely necessary to protect a portfolio. We analyze 72 years of daily closing prices of the 30 stocks forming the Dow Jones Industrial Average (DJIA). We find the striking result that the average correlation among these stocks scales linearly with market stress reflected by normalized DJIA index returns on various time scales. Consequently, the diversification effect which should protect a portfolio melts away in times of market losses, just when it would most urgently be needed. Our empirical analysis is consistent with the interesting possibility that one could anticipate diversification breakdowns, guiding the design of protected portfolios. PMID:23082242
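Method note: the central quantity is the average pairwise correlation within a rolling window, compared against the window's normalized index return. A sketch of that computation on synthetic data; random returns will show no systematic relation, whereas the paper's empirical finding is the linear scaling with market stress.

```python
import numpy as np

def mean_pairwise_corr(window):
    """Average off-diagonal correlation among stocks in a (days, stocks) window."""
    c = np.corrcoef(window, rowvar=False)
    n = c.shape[0]
    return (c.sum() - n) / (n * (n - 1))

# Hypothetical daily returns for 30 DJIA-like stocks.
rng = np.random.default_rng(2)
returns = rng.normal(size=(2000, 30))

window = 60  # days; the paper repeats this for various time scales
for start in range(0, 2000 - window, 300):
    win = returns[start:start + window]
    index_return = win.mean(axis=1).sum()                   # equal-weight index return
    normalized = index_return / (win.mean(axis=1).std(ddof=1) * np.sqrt(window))
    print(f"{normalized:+.2f}  mean corr = {mean_pairwise_corr(win):.3f}")
```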
Dynamical tuning for MPC using population games: A water supply network application.
Barreiro-Gomez, Julian; Ocampo-Martinez, Carlos; Quijano, Nicanor
2017-07-01
Model predictive control (MPC) is a suitable strategy for the control of large-scale systems that have multiple design requirements, e.g., multiple physical and operational constraints. Moreover, an MPC controller is able to deal with multiple control objectives by considering them within the cost function, which requires determining a proper prioritization for each objective. Furthermore, when the system has time-varying parameters and/or disturbances, the appropriate prioritization might vary over time as well. This situation leads to the need for a dynamical tuning methodology. This paper addresses the dynamical tuning issue by using evolutionary game theory. The advantages of the proposed method are highlighted and tested over a large-scale water supply network with periodic time-varying disturbances. Finally, results are analyzed with respect to a multi-objective MPC controller that uses static tuning. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
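Method note: the abstract does not detail the population-games update; one canonical choice from evolutionary game theory is replicator dynamics over the objective weights, which keeps the weights on the simplex while shifting priority toward objectives with above-average "fitness". A toy sketch under that assumption; the fitness signals and step size are invented.

```python
import numpy as np

def replicator_step(weights, fitness, dt=0.1):
    """One Euler step of replicator dynamics; the weight vector stays on the simplex."""
    avg = weights @ fitness
    return weights + dt * weights * (fitness - avg)

# Three MPC objectives, e.g., economic cost, control smoothness, safety-volume tracking.
w = np.array([1 / 3, 1 / 3, 1 / 3])
for k in range(50):
    # Hypothetical time-varying "urgency" of each objective (e.g., from disturbances).
    f = np.array([0.2, 0.1, 0.5 + 0.3 * np.sin(0.3 * k)])
    w = replicator_step(w, f)
print(w, w.sum())  # prioritization has shifted; weights still sum to 1
```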
Vegetation colonization of permafrost-related landslides, Ellesmere Island, Canadian High Arctic
NASA Astrophysics Data System (ADS)
Cannone, Nicoletta; Lewkowicz, Antoni G.; Guglielmin, Mauro
2010-12-01
Relationships between vegetation colonization and landslide disturbance are analyzed for 12 active-layer detachments of differing ages located in three areas of the Fosheim Peninsula, Ellesmere Island (80°N). We discuss vegetation as an age index for landslides and a way to assess the time needed for complete recolonization of the surfaces since landslide detachment. Vegetation on undisturbed terrain is similar in the three areas but is more highly developed and complex inland due to a warmer summer climate. On a regional scale, the location of the area is as important as the effect of landslide age on vegetation colonization because of the influence of mesoclimatic conditions on vegetation development. On a landscape scale, there is a positive relationship between landslide age and vegetation development, as represented by total vegetation cover, floristic composition, and successional stage. Consequently, vegetation can be used at this scale as an indicator of landslide age. Fifty years are required to restore vegetation patches to a floristic composition similar to communities occurring in undisturbed conditions, but with lower floristic richness and a discontinuous cover and without well-developed layering. The shorter time needed for landslide recovery in the area with the warmest summer climate confirms the sensitivity of arctic vegetation to small differences in air temperature. This could trigger a set of interlinked feedbacks that would amplify future rates of climate warming.
Massie, Danielle L.; Smith, Geoffrey; Bonvechio, Timothy F.; Bunch, Aaron J.; Lucchesi, David O.; Wagner, Tyler
2018-01-01
Quantifying spatial variability in fish growth and identifying large‐scale drivers of growth are fundamental to many conservation and management decisions. Although fish growth studies often focus on a single population, it is becoming increasingly clear that large‐scale studies are likely needed for addressing transboundary management needs. This is particularly true for species with high recreational value and for those with negative ecological consequences when introduced outside of their native range, such as the Flathead Catfish Pylodictis olivaris. This study quantified growth variability of the Flathead Catfish across a large portion of its contemporary range to determine whether growth differences existed between habitat types (i.e., reservoirs and rivers) and between native and introduced populations. Additionally, we investigated whether growth parameters varied as a function of latitude and time since introduction (for introduced populations). Length‐at‐age data from 26 populations across 11 states in the USA were modeled using a Bayesian hierarchical von Bertalanffy growth model. Population‐specific growth trajectories revealed large variation in Flathead Catfish growth and relatively high uncertainty in growth parameters for some populations. Relatively high uncertainty was also evident when comparing populations and when quantifying large‐scale patterns. Growth parameters (Brody growth coefficient [K] and theoretical maximum average length [L∞]) were not different (based on overlapping 90% credible intervals) between habitat types or between native and introduced populations. For populations within the introduced range of Flathead Catfish, latitude was negatively correlated with K. For native populations, we estimated an 85% probability that L∞ estimates were negatively correlated with latitude. Contrary to predictions, time since introduction was not correlated with growth parameters in introduced populations of Flathead Catfish. Results of this study suggest that Flathead Catfish growth patterns are likely shaped more strongly by finer‐scale processes (e.g., exploitation or prey abundances) as opposed to macro‐scale drivers.
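Method note: the growth model underlying the hierarchical analysis is the von Bertalanffy curve, L(t) = L∞(1 − exp(−K(t − t0))). A direct transcription with illustrative parameter values, not the study's estimates:

```python
import numpy as np

def von_bertalanffy(age, l_inf, k, t0):
    """Mean length at age under the von Bertalanffy growth model."""
    return l_inf * (1.0 - np.exp(-k * (age - t0)))

ages = np.arange(1, 21)
print(von_bertalanffy(ages, l_inf=1100.0, k=0.12, t0=-0.5))  # mm, illustrative values
```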
Multiple time-scales and the developmental dynamics of social systems
Flack, Jessica C.
2012-01-01
To build a theory of social complexity, we need to understand how aggregate social properties arise from individual interaction rules. Here, I review a body of work on the developmental dynamics of pigtailed macaque social organization and conflict management that provides insight into the mechanistic causes of multi-scale social systems. In this model system coarse-grained, statistical representations of collective dynamics are more predictive of the future state of the system than the constantly in-flux behavioural patterns at the individual level. The data suggest that individuals can perceive and use these representations for strategical decision-making. As an interaction history accumulates the coarse-grained representations consolidate. This constrains individual behaviour and provides the foundations for new levels of organization. The time-scales on which these representations change impact whether the consolidating higher-levels can be modified by individuals and collectively. The time-scales appear to be a function of the ‘coarseness’ of the representations and the character of the collective dynamics over which they are averages. The data suggest that an advantage of multiple timescales is that they allow social systems to balance tradeoffs between predictability and adaptability. I briefly discuss the implications of these findings for cognition, social niche construction and the evolution of new levels of organization in biological systems. PMID:22641819
Stier, Carly D; Chieu, Ivan B; Howell, Lori; Ryan, Stephen E
2017-07-01
This study examined parent-reported change in the functional performance of four school-aged children with wheeled mobility needs who had used a new adaptive seating system for 6 weeks. The collective case study involved four mothers whose children, ages 6-9 years, received a new adaptive seating system for a manual wheelchair or stroller. Mothers completed the Family Impact of Assistive Technology Scale for Adaptive Seating (FIATS-AS) at the time their child received a new seating system, and then after 6 weeks of daily use. Other questionnaires, health records, and semi-structured interviews provided additional data about the seating interventions and their functional effects on individual children and their families. The FIATS-AS detected overall functional gain in one family, and both gains and losses in 2-7 dimensions for all families. Functional status and change scores showed consistency with measures of seating intervention satisfaction, global functional change, and home participation. Interview themes also suggested consistency with change scores, but provided a deeper understanding of important factors that influenced adaptive seating outcomes. This study supports the need to explore further the complexity, temporality and meaningfulness of adaptive seating outcomes in individual children and their families. Implications for Rehabilitation: Assistive technology practitioners need to adopt practical measurement strategies that consider the complexity, temporality, and meaningfulness of outcomes to make evidence-informed decisions about how to improve adaptive seating services and interventions. Health measurement scales that measure adaptive seating outcomes for service applications must have adequate levels of reliability and validity, as well as demonstrate responsiveness to important change over time for individual children and their families. Needs-specific measurement scales provide a promising avenue for understanding functional outcomes for individual children and youth who use adaptive seating systems.
Pridham, K F; Chang, A S; Hansen, M F
1987-08-01
The relationship of mothers' appraisal of the importance of, and need for action around, infant-related issues to maternal experience (parity and time since birth), use of help, and perceived problem-solving competence was examined. Sixty-two mothers (38 primiparae and 24 multiparae) kept a daily log for 90 days post-birth of issues, rated for importance and for need for action, and of help used. Mothers also reported perceived problem-solving competence on an 11-item scale. Findings indicated tentativeness in ratings of importance and action. Ratings of importance were associated with action ratings, except for temperament issues. Action ratings for baby care and illness issues decreased significantly with time. Otherwise, maternal experience had no effect on ratings. More of the variance in perceived competence than in use of help was explained by action and importance ratings.
Building for the future: essential infrastructure for rodent ageing studies.
Wells, Sara E; Bellantuono, Ilaria
2016-08-01
When planning ageing research using rodent models, the logistics of supply, long-term housing and infrastructure provision are important factors to take into consideration. These issues need to be prioritised to ensure they meet the requirements of experiments which potentially will not be completed for several years. Although these issues are not unique to this discipline, the longevity of the experiments, and indeed of the animals, requires a high level of consistency and sustainability to be maintained throughout lengthy periods of time. Moreover, the need to access aged stock or material for more immediate experiments poses many issues for the completion of pilot studies and/or short-term intervention studies on older models. In this article, we highlight the increasing demand for ageing research, the resources and infrastructure involved, and the need for large-scale collaborative programmes to advance studies in both a timely and a cost-effective way.
Wilson, Robyn S.; Hardisty, David J.; Epanchin-Niell, Rebecca S.; Runge, Michael C.; Cottingham, Kathryn L.; Urban, Dean L.; Maguire, Lynn A.; Hastings, Alan; Mumby, Peter J.; Peters, Debra P.C.
2016-01-01
Ecological systems often operate on time scales significantly longer or shorter than the time scales typical of human decision making, which causes substantial difficulty for conservation and management in socioecological systems. For example, invasive species may move faster than humans can diagnose problems and initiate solutions, and climate systems may exhibit long-term inertia and short-term fluctuations that obscure learning about the efficacy of management efforts in many ecological systems. We adopted a management-decision framework that distinguishes decision makers within public institutions from individual actors within the social system, calls attention to the ways socioecological systems respond to decision makers’ actions, and notes institutional learning that accrues from observing these responses. We used this framework, along with insights from bedeviling conservation problems, to create a typology that identifies problematic time-scale mismatches occurring between individual decision makers in public institutions and between individual actors in the social or ecological system. We also considered solutions that involve modifying human perception and behavior at the individual level as a means of resolving these problematic mismatches. The potential solutions are derived from the behavioral economics and psychology literature on temporal challenges in decision making, such as the human tendency to discount future outcomes at irrationally high rates. These solutions range from framing environmental decisions to enhance the salience of long-term consequences, to using structured decision processes that make time scales of actions and consequences more explicit, to structural solutions aimed at altering the consequences of short-sighted behavior to make it less appealing. Additional application of these tools and long-term evaluation measures that assess not just behavioral changes but also associated changes in ecological systems are needed.
van Walsem, Marleen R; Howe, Emilie I; Iversen, Kristin; Frich, Jan C; Andelic, Nada
2015-09-28
In order to plan and improve provision of comprehensive care in Huntington's disease (HD), it is critical to understand the gaps in healthcare and social support services provided to HD patients. Research has described utilization of healthcare services in HD in Europe, however, studies systematically examining needs for healthcare services and social support are lacking. This study aims to identify the level and type of met and unmet needs for health and social care services among patients with HD, and explore associated clinical and socio-demographic factors. Eighty-six patients with a clinical diagnosis of HD living in the South-Eastern region of Norway were recruited. Socio-demographic and clinical characteristics were collected. The Needs and Provision Complexity Scale (NPCS) was used to assess the patients' needs for healthcare and social services. Functional ability and disease stage was assessed using the UHDRS Functional assessment scales. In order to investigate factors determining the level of total unmet needs and the level of unmet needs for Health and personal care and Social care and support services, multivariate logistic regression models were used. A high level of unmet needs for health and personal care and social support services were found across all five disease stages, but most marked in disease stage III. The middle phase (disease stage III) and advanced phase (disease stages IV and V) of HD increased odds of having a high level of total unmet needs by 3.5 times and 1.4 times respectively, compared with the early phase (disease stages I and II). Similar results were found for level of unmet needs in the domain Health and personal care. Higher education tended to decrease odds of high level of unmet needs in this domain (OR = 0.48) and increase odds of higher level of unmet needs in the domain of Social care and support (OR = 1.3). Patients reporting needs on their own tended to decrease odds of having unmet needs in Health and personal care (OR = 0.57). Needs for healthcare and social services in patients with HD should be assessed in a systematic manner, in order to provide adequate comprehensive care during the course of disease.
Supportive care needs in Hong Kong Chinese women confronting advanced breast cancer.
Au, Angel; Lam, Wendy; Tsang, Janice; Yau, Tsz-kok; Soong, Inda; Yeo, Winnie; Suen, Joyce; Ho, Wing M; Wong, Ka-yan; Kwong, Ava; Suen, Dacita; Sze, Wing-Kin; Ng, Alice; Girgis, Afaf; Fielding, Richard
2013-05-01
Women with advanced breast cancer (ABC) are living longer, so understanding their needs becomes important. This cross-sectional study investigated the type and extent of unmet supportive care needs in Hong Kong Chinese women with advanced breast cancer. Face-to-face interviews were conducted among women with stage III or stage IV disease, mostly awaiting chemotherapy (76%), to identify unmet needs using the Supportive Care Needs Survey Short Form, psychological morbidity using the Hospital Anxiety and Depression Scale, symptom distress using the Memorial Symptom Assessment Scale, and satisfaction with care using the Patient Satisfaction Questionnaire (PSQ-9). Of 220 women approached, 198 (90%; mean age = 53.4 ± 9.74 [standard deviation] years) participated; 27-72% of them identified needs from the health system, information, and patient support (HSIPS) domain as the top 15 most prevalent unmet needs. 'Having one member of hospital staff with whom you can talk about all aspects of your condition, treatment, and follow-up' was most cited, by 72% of the patients, with the remaining unmet needs addressing mostly a desire for information. Unmet need strength did not differ between women with stage III and stage IV disease, whereas women with a first-time diagnosis reported greater health system and information unmet needs compared with women with recurrent disease. Stepwise multiple regression analyses revealed that symptom distress was consistently positively associated with all but sexuality need domains, whereas low satisfaction with care was associated with the HSIPS (β = 3.270, p < 0.001) and physical and daily living (β = 2.810, p < 0.01) domains. Chinese women with ABC expressed a need for continuity of care and improved information provision. High symptom distress was associated with lower levels of satisfaction with care. These unmet needs appear to reflect shortcomings of current care services. Copyright © 2012 John Wiley & Sons, Ltd.
UTC Dissemination to the Real-Time User
NASA Technical Reports Server (NTRS)
Levine, Judah
1996-01-01
The current definition of Coordinated Universal Time (UTC) dates from 1972. The duration of a UTC second is defined in terms of the frequency of a hyperfine transition in the ground state of cesium. This standard frequency is realized in a number of different laboratories using ensembles of commercial cesium clocks and a few primary frequency standards. The data from all of these devices are transmitted periodically to the Bureau International des Poids et Mesures (BIPM) in Sevres, France, where they are combined in a statistical procedure to produce International Atomic Time (TAI). The time of this scale is adjusted as needed ('coordinated') by adding or dropping integer seconds so as to keep it within plus or minus 0.9 s of UT1, a time scale based on the observation of the transit times of stars and corrected for the predicted seasonal variations in these observations. When the leap seconds are included into TAI, the result is called UTC. The difference between TAI and UTC is therefore an exact integer number of seconds. This difference is currently 29 s and will become 30 s at 0 UTC on 1 January 1996.
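Worked example: because TAI − UTC is an exact integer, conversion between the two scales needs only the leap-second count in force at a given date. The sketch uses a two-entry table holding just the offsets mentioned in the text; the 1 July 1994 effective date for the 29 s value comes from the published leap-second history, not from this abstract, and a real implementation needs the full table.

```python
from datetime import datetime

# Partial TAI-UTC table, newest first (seconds); only the values cited in the text.
LEAP_TABLE = [
    (datetime(1996, 1, 1), 30),  # became 30 s at 0 UTC on 1 January 1996
    (datetime(1994, 7, 1), 29),  # 29 s at the time of writing (assumed effective date)
]

def tai_minus_utc(when):
    """Return the integer TAI-UTC offset in force at 'when'."""
    for effective, offset in LEAP_TABLE:
        if when >= effective:
            return offset
    raise ValueError("date earlier than the partial table covers")

print(tai_minus_utc(datetime(1995, 6, 1)))  # 29
print(tai_minus_utc(datetime(1996, 6, 1)))  # 30
```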
General constraints on sampling wildlife on FIA plots
Bailey, L.L.; Sauer, J.R.; Nichols, J.D.; Geissler, P.H.; McRoberts, Ronald E.; Reams, Gregory A.; Van Deusen, Paul C.; McWilliams, William H.; Cieszewski, Chris J.
2005-01-01
This paper reviews the constraints to sampling wildlife populations at FIA points. Wildlife sampling programs must have well-defined goals and provide information adequate to meet those goals. Investigators should choose a state variable based on information needs and the spatial sampling scale. We discuss estimation-based methods for three state variables: species richness, abundance, and patch occupancy. All methods incorporate two essential sources of variation: detectability estimation and spatial variation. FIA sampling imposes specific space and time criteria that may need to be adjusted to meet local wildlife objectives.
Horridge, Karen A; Mcgarry, Kenneth; Williams, Jane; Whitlingum, Gabriel
2016-06-01
To pilot prospective data collection by paediatricians at the point of care across England using a defined terminology set; demonstrate feasibility of data collection and utility of data outputs; and confirm that counting the number of needs per child is valid for quantifying complexity. Paediatricians in 16 hospital and community settings collected and anonymized data. Participants completed a survey regarding the process. Data were analysed using R version 3.1.2. Overall, 8117 needs were recorded from 1224 consultations. Sixteen clinicians responded positively about the process and utility of data collection. The sum of needs varied significantly (p<0.01) by level of gross motor function ascertained using the Gross Motor Function Classification System for children with cerebral palsy; by epilepsy severity as defined by the level of expertise required to manage it; and by severity of intellectual disability. Prospective data collection at the point of clinical care proved possible without disrupting clinics, even for those with the most complex needs, and took the least time when done electronically. Counting the number of needs was easy to do, and quantified complexity in a way that informed clinical care for individuals and related directly to validated scales of functioning. Data outputs could inform more appropriate design and commissioning of quality services. © 2016 Mac Keith Press.
Dendroecological applications in air pollution and environmental chemistry: research needs
Samuel B. McLaughlin; Walter C. Shortle; Kevin T. Smith
2002-01-01
During the past two decades, dendrochronology has evolved in new dimensions that have helped address both the extent and causes of impacts of regional scale environmental pollution on the productivity and function of forest ecosystems. Initial focus on the magnitude and timing of alterations of baseline growth levels of individual forest trees has now broadened to...
Seeing the trees for the forest: mapping vegetation biodiversity in coastal Oregon forests.
Sally. Duncan
2003-01-01
In order to address policy issues relating to biodiversity, productivity, and sustainability, we need detailed understanding of forest vegetation at broad geographic and time scales. Most existing maps developed from satellite imagery describe only general characteristics of the upper canopy. Detailed vegetation data are available from regional grids of field plots,...
ERIC Educational Resources Information Center
Prieto, Daniel; Aparicio, Gonzalo; Sotelo-Silveira, Jose R.
2017-01-01
Cell and developmental processes are complex, and profoundly dependent on spatial relationships that change over time. Innovative educational or teaching strategies are always needed to foster deep comprehension of these processes and their dynamic features. However, laboratory exercises in cell and developmental biology at the undergraduate level…
Caution Ahead: Overdue Investments for New York's Aging Infrastructure
ERIC Educational Resources Information Center
Forman, Adam
2014-01-01
Following the devastation of Superstorm Sandy in October 2012, New York City's essential infrastructure needs were made a top policy priority for the first time in decades. The scale and severity of the storm prompted numerous studies to assess the damage and led policymakers to take steps to shore up the city's coastal infrastructure weaknesses.…
Time, space, and redwood trees
Leslie M. Reid
1996-01-01
Our past concern with details gave us the type of information we needed to manage blocks of redwoods to produce the values we decided were important. But the values that have more recently been recognized as important--species viability, genetic diversity, and so on--cannot be managed on the scale of forest patches, and we must come to understand how...
Program Helps Generate Boundary-Element Mathematical Models
NASA Technical Reports Server (NTRS)
Goldberg, R. K.
1995-01-01
Composite Model Generation-Boundary Element Method (COM-GEN-BEM) computer program significantly reduces time and effort needed to construct boundary-element mathematical models of continuous-fiber composite materials at micro-mechanical (constituent) scale. Generates boundary-element models compatible with BEST-CMS boundary-element code for analysis of micromechanics of composite material. Written in PATRAN Command Language (PCL).
2017-01-01
Population demography is central to fundamental ecology and for predicting range shifts, decline of threatened species, and spread of invasive organisms. There is a mismatch between most demographic work, carried out on few populations and at local scales, and the need to predict dynamics at landscape and regional scales. Inspired by concepts from landscape ecology and Markowitz’s portfolio theory, we develop a landscape portfolio platform to quantify and predict the behavior of multiple populations, scaling up the expectation and variance of the dynamics of an ensemble of populations. We illustrate this framework using a 35-y time series on gypsy moth populations. We demonstrate the demography accumulation curve in which the collective growth of the ensemble depends on the number of local populations included, highlighting a minimum but adequate number of populations for both regional-scale persistence and cross-scale inference. The attainable set of landscape portfolios further suggests tools for regional population management for both threatened and invasive species. PMID:29109261
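Method note: the portfolio analogy maps onto code directly: the ensemble's expected growth is the weighted mean of local growth rates and its variance is wᵀΣw, so adding populations traces out the "demography accumulation curve". A sketch assuming equal weights and synthetic series; the gypsy moth analysis itself is more elaborate.

```python
import numpy as np

def ensemble_mean_variance(growth_rates):
    """growth_rates: (years, populations) array of log growth rates.
    Returns expectation and variance of the equally weighted ensemble."""
    n = growth_rates.shape[1]
    w = np.full(n, 1.0 / n)
    mu = growth_rates.mean(axis=0)
    sigma = np.cov(growth_rates, rowvar=False)
    return w @ mu, w @ sigma @ w

# Hypothetical 35-year series for an increasing number of local populations.
rng = np.random.default_rng(3)
series = rng.normal(0.02, 0.3, size=(35, 40))
for n in (2, 5, 10, 20, 40):
    m, v = ensemble_mean_variance(series[:, :n])
    print(n, round(m, 3), round(v, 4))  # variance shrinks as populations accumulate
```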
Analysis of Thermal and Reaction Times for Hydrogen Reduction of Lunar Regolith
NASA Technical Reports Server (NTRS)
Hegde, U.; Balasubramaniam, R.; Gokoglu, S.
2008-01-01
System analysis of oxygen production by hydrogen reduction of lunar regolith has shown the importance of the relative time scales for regolith heating and chemical reaction to overall performance. These values determine the sizing and power requirements of the system and also impact the number and operational phasing of reaction chambers. In this paper, a Nusselt number correlation analysis is performed to determine the heat transfer rates and regolith heat-up times in a fluidized bed reactor heated by a central heating element (e.g., a resistively heated rod, or a solar concentrator heat pipe). A coupled chemical and transport model has also been developed for the chemical reduction of regolith by a continuous flow of hydrogen. The regolith conversion occurs on the surfaces of and within the regolith particles. Several important quantities are identified as a result of the above analyses. Reactor-scale parameters include the void fraction (i.e., the fraction of the reactor volume not occupied by the regolith particles) and the residence time of hydrogen in the reactor. Particle-scale quantities include the particle Reynolds number, the Archimedes number, and the time needed for hydrogen to diffuse into the pores of the regolith particles. The analysis is used to determine the heat-up and reaction times, and its application to NASA's oxygen production system modeling tool is noted.
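Worked example: two of the particle-scale quantities above reduce to one-line estimates, a convective heat-up time from an assumed Nusselt-number correlation and a pore-diffusion time of order r²/D. All property values below are illustrative assumptions, not numbers from the paper.

```python
# Convective heat-up time scale for a regolith particle: t_h ~ rho*c_p*(V/A) / h,
# with h = Nu * k_gas / d from an assumed Nusselt-number correlation.
d = 100e-6                  # particle diameter, m (illustrative)
rho, c_p = 3000.0, 800.0    # particle density (kg/m^3) and heat capacity (J/kg K)
k_gas = 0.2                 # hydrogen thermal conductivity, W/m K (rough value)
Nu = 2.0                    # conduction-limit Nusselt number for a sphere

h = Nu * k_gas / d                     # heat-transfer coefficient, W/m^2 K
t_heat = rho * c_p * (d / 6.0) / h     # V/A = d/6 for a sphere

D_pore = 1e-6               # effective H2 diffusivity in particle pores, m^2/s (assumed)
t_diff = (d / 2) ** 2 / D_pore         # hydrogen pore-diffusion time, s

print(f"heat-up ~ {t_heat:.2e} s, pore diffusion ~ {t_diff:.2e} s")
```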
NASA Astrophysics Data System (ADS)
Mukherjee, Biswaroop; Peter, Christine; Kremer, Kurt
2017-09-01
Understanding the connections between the characteristic dynamical time scales associated with a coarse-grained (CG) and a detailed representation is central to the applicability of coarse-graining methods to understand molecular processes. The process of coarse graining leads to accelerated dynamics, owing to the smoothing of the underlying free-energy landscapes. Often a single time-mapping factor is used to relate the time scales associated with the two representations. We critically examine this idea using a model system ideally suited for this purpose. Single-molecule transport properties are studied via molecular dynamics simulations of the CG and atomistic representations of a liquid crystalline, azobenzene containing mesogen, simulated in the smectic and the isotropic phases. The out-of-plane dynamics in the smectic phase occurs via molecular hops from one smectic layer to the next. Hopping can occur via two mechanisms, with and without significant reorientation. The out-of-plane transport can be understood as a superposition of two (one associated with each mode of transport) independent continuous time random walks, for which a single time-mapping factor would be rather inadequate. A comparison of the free-energy surfaces relevant to the out-of-plane transport qualitatively supports the above observations. Thus, this work underlines the need for building CG models that exhibit both structural and dynamical consistency with the underlying atomistic model.
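Method note: a single time-mapping factor is typically extracted by matching a transport coefficient, for instance the ratio of diffusion constants estimated from mean-squared displacements in the two representations; the paper's point is that one factor cannot serve both hopping mechanisms at once. A sketch of that standard estimate on toy trajectories (the trajectories and window choices are placeholders):

```python
import numpy as np

def diffusion_constant(positions, dt):
    """Einstein estimate D = MSD(t) / (2*dim*t) from a (frames, dim) trajectory,
    using only the first time origin, for brevity."""
    disp = positions - positions[0]
    msd = (disp**2).sum(axis=1)
    t = dt * np.arange(len(positions))
    return msd[-1] / (2 * positions.shape[1] * t[-1])

rng = np.random.default_rng(4)
atomistic = np.cumsum(rng.normal(0, 0.05, size=(5000, 3)), axis=0)
coarse = np.cumsum(rng.normal(0, 0.15, size=(5000, 3)), axis=0)  # faster CG dynamics

s = diffusion_constant(coarse, dt=0.01) / diffusion_constant(atomistic, dt=0.01)
print(f"naive time-mapping factor ~ {s:.1f}")
```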
[From Binet and Wundt to neuropsychological measurements and behavior scales].
Lehmann, H E
1983-01-01
Experimental psychology was the first form of scientific psychology and saw its beginnings in Wundt's laboratory toward the end of the 19th century. Psychometric measures of cognitive functions were introduced, at about the same time, by Binet, while Galton was pioneering in studies of personality profiles. Most of the systematic work, both in the theory and practice of psychology, was focused on standardization of norms, or types and measures of normal mental functions. With the establishment of psychopharmacology as a new discipline with an important role in clinical psychiatry, there emerged an urgent need for scales that indicate the presence and measure the extent and severity of psychopathology. The AMDP scales, created in the context of the European tradition in psychopathology, are some of the most prominent and promising scales of this type.
[ETAP: A smoking scale for Primary Health Care].
González Romero, Pilar María; Cuevas Fernández, Francisco Javier; Marcelino Rodríguez, Itahisa; Rodríguez Pérez, María Del Cristo; Cabrera de León, Antonio; Aguirre-Jaime, Armando
2016-05-01
To obtain a scale of tobacco exposure to address smoking cessation. Follow-up of a cohort. Scale validation. Primary Care Research Unit. Tenerife. A total of 6729 participants from the "CDC de Canarias" cohort. A scale was constructed under the assumption that the time of exposure to tobacco is the key factor to express accumulated risk. Discriminant validity was tested on prevalent cases of acute myocardial infarction (AMI; n=171), and its best cut-off for preventive screening was obtained. Its predictive validity was tested with incident cases of AMI (n=46), comparing the predictive power with markers (age, sex) and classic risk factors of AMI (hypertension, diabetes, dyslipidaemia), including the pack-years index (PYI). The scale obtained was the sum of three times the years that they had smoked plus years exposed to smoking at home and at work. The frequency of AMI increased with the values of the scale, with the value 20 years of exposure being the most appropriate cut-off for preventive action, as it provided adequate predictive values for incident AMI. The scale surpassed PYI in predicting AMI, and competed with the known markers and risk factors. The proposed scale allows a valid measurement of exposure to smoking and provides a useful and simple approach that can help promote a willingness to change, as well as prevention. It still needs to demonstrate its validity, taking as reference other problems associated with smoking. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.
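Worked example: the scale itself is one line of arithmetic, ETAP = 3 × years smoked + years exposed at home + years exposed at work, screened against the 20-exposure-year cut-off. A direct transcription; the function and variable names are ours, and the orientation of the cut-off is assumed.

```python
def etap_score(years_smoked, years_home_exposure, years_work_exposure):
    """ETAP exposure scale: 3x years of active smoking plus passive exposure years."""
    return 3 * years_smoked + years_home_exposure + years_work_exposure

score = etap_score(years_smoked=5, years_home_exposure=10, years_work_exposure=3)
print(score, "-> preventive action indicated" if score >= 20 else "-> below cut-off")
```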
Postoperative self-efficacy and psychological morbidity in radical prostatectomy
da Mata, Luciana Regina Ferreira; de Carvalho, Emilia Campos; Gomes, Cássia Regina Gontijo; da Silva, Ana Cristina; Pereira, Maria da Graça
2015-01-01
Objective: evaluate the general and perceived self-efficacy, psychological morbidity, and knowledge about postoperative care of patients submitted to radical prostatectomy; identify the relationships between the variables; and determine the predictors of self-efficacy. Method: descriptive, cross-sectional study, conducted with 76 hospitalized men. The scales used were the General and Perceived Self-efficacy Scale and the Hospital Anxiety and Depression Scale, in addition to sociodemographic, clinical and knowledge questionnaires. Results: a negative relationship was found for self-efficacy in relation to anxiety and depression. Psychological morbidity was a significant predictor variable for self-efficacy. An active professional situation and the waiting time for surgery also proved to be relevant variables for anxiety and knowledge, respectively. Conclusion: participants had a good level of general and perceived self-efficacy and a small percentage showed depression. With these findings, it is possible to build a profile of patients' psychological needs after radical prostatectomy and thus allow nursing professionals to act holistically, considering not only needs of a physical nature but also of a psychosocial nature. PMID:26487129
Morningness/eveningness and the need for sleep.
Taillard, J; Philip, P; Bioulac, B
1999-12-01
The purpose of this study was to determine, in a large sample of adults of all ages (17-80 years), the effect of morningness/eveningness on sleep/wake schedules, sleep needs, sleep hygiene and subjective daytime somnolence. A total of 617 subjects (219 subjects per chronotype group) matched for age, sex and employment status, completed an abridged morningness/eveningness questionnaire, a questionnaire on sleep habits and the quality of sleep, and the Epworth Sleepiness Scale. Eveningness was associated with a greater need for sleep, less time in bed during the week compared to ideal sleep needs, more time in bed at the weekend, a later bedtime and waking-up time especially at the weekend, more irregular sleep/wake habits and greater caffeine consumption. These subjects built up a sleep debt during the week and extended their duration of sleep at the weekend. They did not, however, rate themselves more sleepy than other types, despite the fact that our results showed a clear link between subjectively evaluated daytime somnolence and sleep debt. Why they were less affected by sleep deprivation is not clear. This raises the question of individual susceptibility to the modification of sleep parameters.
Mousseau, Normand; Béland, Laurent Karim; Brommer, Peter; ...
2014-12-24
The properties of materials, even at the atomic level, evolve on macroscopic time scales. Following this evolution through simulation has been a challenge for many years. For lattice-based activated diffusion, kinetic Monte Carlo has turned out to be an almost perfect solution. Various accelerated molecular dynamics schemes, for their part, have allowed the study on long time scales of relatively simple systems. There is still a desire and need, however, for methods able to handle complex materials such as alloys and disordered systems. In this paper, we review the kinetic Activation–Relaxation Technique (k-ART), one of a handful of off-lattice kinetic Monte Carlo methods, with on-the-fly cataloging, that have been proposed in the last few years.
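Background sketch: the lattice algorithm that k-ART generalizes off-lattice is the rejection-free (n-fold way) kinetic Monte Carlo step: pick an event with probability proportional to its rate, then advance the clock by an exponential waiting time. A minimal sketch with invented rates:

```python
import math
import random

def kmc_step(rates, time):
    """One rejection-free KMC step: choose event i with probability rates[i]/R,
    then advance time by an exponential waiting time with mean 1/R."""
    total = sum(rates)
    threshold = random.random() * total
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if acc >= threshold:
            break
    time += -math.log(1.0 - random.random()) / total
    return i, time

# Hypothetical activated-event rates (1/s), e.g., from cataloged saddle points.
event, t = kmc_step([1e3, 5e2, 1e1], time=0.0)
print(event, t)
```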
NASA Technical Reports Server (NTRS)
Molnar, Gyula I.; Susskind, Joel; Iredell, Lena
2011-01-01
In the beginning, a good measure of a GCM's performance was its ability to simulate the observed mean seasonal cycle. That is, a reasonable simulation of the means (i.e., small biases) and standard deviations of TODAY'S climate would suffice. Here, we argue that coupled GCM (CGCM for short) simulations of FUTURE climates should be evaluated in much more detail, both spatially and temporally. Arguably, it is not the bias, but rather the reliability of the model-generated anomaly time series, even down to the CGCM grid scale, that really matters. This statement is underlined by the societal need to address potential REGIONAL climate variability, and climate drifts/changes, in a manner suitable for policy decisions.
NASA Astrophysics Data System (ADS)
Bartlett, P. L.; Stelbovics, A. T.; Rescigno, T. N.; McCurdy, C. W.
2007-11-01
Calculations are reported for four-body electron-helium collisions and positron-hydrogen collisions, in the S-wave model, using the time-independent propagating exterior complex scaling (PECS) method. The PECS S-wave calculations for three-body processes in electron-helium collisions compare favourably with previous convergent close-coupling (CCC) and time-dependent exterior complex scaling (ECS) calculations, and exhibit smooth cross section profiles. The PECS four-body double-excitation cross sections are significantly different from CCC calculations and highlight the need for an accurate representation of the resonant helium final-state wave functions when undertaking these calculations. Results are also presented for positron-hydrogen collisions in an S-wave model using an electron-positron potential of V12 = -[8 + (r1 - r2)^2]^(-1/2). This model is representative of the full problem, and the results demonstrate that ECS-based methods can accurately calculate scattering, ionization and positronium formation cross sections in this three-body rearrangement collision.
Davoudiasl, Hooman; Hooper, Dan; McDermott, Samuel D.
2016-01-22
We describe a general scenario, dubbed "Inflatable Dark Matter", in which the density of dark matter particles can be reduced through a short period of late-time inflation in the early universe. The overproduction of dark matter that is predicted within many otherwise well-motivated models of new physics can be elegantly remedied within this context, without the need to tune underlying parameters or to appeal to anthropic considerations. Thermal relics that would otherwise be disfavored can easily be accommodated within this class of scenarios, including dark matter candidates that are very heavy or very light. Furthermore, the non-thermal abundance of GUT- or Planck-scale axions can be brought to acceptable levels, without invoking anthropic tuning of initial conditions. Additionally, a period of late-time inflation could have occurred over a wide range of scales from ~ MeV to the weak scale or above, and could have been triggered by physics within a hidden sector, with small but not necessarily negligible couplings to the Standard Model.
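Schematic relation (our summary, not quoted from the paper): N e-folds of late-time inflation stretch volumes by e^{3N}, diluting any pre-existing number density, and subsequent reheating adds entropy that lowers the relic yield further:

```latex
% Volume dilution of a pre-existing relic by N e-folds of late-time inflation:
n \;\longrightarrow\; n\, e^{-3N},
\qquad
% and the yield relative to entropy after reheating (schematic):
Y \equiv \frac{n}{s} \;\longrightarrow\; \frac{n\, e^{-3N}}{s_{\text{after reheating}}}.
```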
Selection of Opening Model for Parachute Scaling Studies
1992-03-01
[Scanned report; only OCR fragments of the documentation page, list of figures, and abstract survive: "Values of the Force F, Velocity v_L, and Radius r versus Time t in Non-Dimensional Form"; "Calculated and Measured Geometric Shape of the Canopy at..."; "...correlating opening time of flat circular parachutes, and gave fair correlation for predicting opening shock for these canopies, but more work needed to..."]
Scaling NS-3 DCE Experiments on Multi-Core Servers
2016-06-15
[Report fragments] "...that work well together." From Section 3.2, Simulation Server Details: the simulations were run on a Dell PowerEdge M520 blade server running Ubuntu Linux 14.04. To minimize the amount of time needed to complete all of the simulations, multiple simulations were run at the same time on the blade server. A MacBook ran the simulation inside a virtual machine (Ubuntu 14.04), while the blade server ran the same operating system directly on the hardware.
Multifractal analysis of geophysical time series in the urban lake of Créteil (France).
NASA Astrophysics Data System (ADS)
Mezemate, Yacine; Tchiguirinskaia, Ioulia; Bonhomme, Celine; Schertzer, Daniel; Lemaire, Bruno Jacques; Vinçon leite, Brigitte; Lovejoy, Shaun
2013-04-01
Urban water bodies take part in the environmental quality of cities. They regulate heat, contribute to the beauty of the landscape and give some space for leisure activities (aquatic sports, swimming). As they are often artificial, they are only a few meters deep, which confers on them some specific properties. Indeed, they are particularly sensitive to global environmental changes, including climate change, eutrophication and contamination by micro-pollutants due to the urbanization of the watershed. Monitoring their quality has become a major challenge for urban areas; the need for a tool for predicting the short-term proliferation of potentially toxic phytoplankton therefore arises. In lakes, the behavior of biological and physical (temperature) fields is mainly driven by the turbulence regime in the water. Turbulence is highly nonlinear, nonstationary and intermittent, which is why statistical tools are needed to characterize the evolution of the fields. Knowledge of the probability distribution of all the statistical moments of a given field is necessary to fully characterize it. This possibility is offered by multifractal analysis, based on the assumption of scale invariance. To investigate the effect of the space-time variability of temperature, chlorophyll and dissolved oxygen on cyanobacteria proliferation in the urban lake of Créteil (France), a spectral analysis is first performed on each time series (or on subsamples) to obtain an overall estimate of their scaling behaviors. Then a multifractal analysis (Trace Moment, Double Trace Moment) estimates the statistical moments of different orders. This analysis is adapted to the specific properties of the studied time series, i.e., the presence of large-scale gradients. The nonlinear behavior of the scaling functions K(q) confirms that the investigated aquatic time series are indeed multifractal and highly intermittent. Knowledge of the universal multifractal parameters is the key to calculating the different statistical moments and thus making some predictions about the fields. As a conclusion, the relationships between the fields are highlighted, with a discussion of the cross predictability of the different fields. This draws a prospective for the use of this kind of time series analysis in the field of limnology. The authors acknowledge the financial support of the R2DS-PLUMMME and Climate-KIC BlueGreenDream projects.
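Method note: the first step described, spectral analysis for an overall scaling estimate, amounts to fitting the slope β of E(f) ∝ f^(−β) on log-log axes. A sketch on a toy series; windowing and detrending refinements are simplified away.

```python
import numpy as np

def spectral_slope(series, dt=1.0):
    """Least-squares slope of log E(f) vs log f; a scaling regime shows up as
    an approximately straight line, E(f) ~ f^-beta."""
    series = np.asarray(series) - np.mean(series)
    power = np.abs(np.fft.rfft(series))**2
    freq = np.fft.rfftfreq(len(series), d=dt)
    mask = freq > 0
    slope, _ = np.polyfit(np.log(freq[mask]), np.log(power[mask]), 1)
    return -slope  # beta

# Hypothetical high-frequency temperature record from a lake buoy.
rng = np.random.default_rng(5)
temp = np.cumsum(rng.normal(size=8192))   # Brownian-like toy series, beta ~ 2
print(f"beta ~ {spectral_slope(temp):.2f}")
```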
Validation of the Personal Need for Structure Scale in Chinese.
Shi, Junqi; Wang, Lei; Chen, Yang
2009-08-01
To validate the Chinese version of the Personal Need for Structure Scale, questionnaires were administered to 1,418 individuals in three samples. Item-total correlations and internal consistency of the scale were acceptable. The test-retest reliability was .79. Confirmatory factor analysis indicated that the Chinese version comprised two dimensions, as did the original version: Desire for Structure and Response to Lack of Structure. Correlation coefficients between the Personal Need for Structure Scale and other related measures indicated that the scale has acceptable discriminant validity and convergent validity.
Getting the Big Picture: Design Considerations for a ngVLA Short Spacing Array
NASA Astrophysics Data System (ADS)
Mason, Brian Scott; Cotton, William; Condon, James; Kepley, Amanda; Selina, Rob; Murphy, Eric Joseph
2018-01-01
The Next Generation VLA (ngVLA) aims to provide a revolutionary increase in cm-wavelength collecting area and sensitivity while at the same time providing excellent image fidelity for a broad spectrum of science cases. Likely ngVLA configurations currently envisioned provide sensitivity over a very wide range of spatial scales. The antenna diameter (notionally 18 meters) fundamentally limits the largest angular scales that can be reached. One simple and powerful way to image larger angular scales is to build a complementary interferometer comprising a smaller number of smaller-diameter dishes.We have investigated the requirements that such an array would need to meet in order to usefully scientifically complement the ngVLA; this poster presents the results of our investigation.
Scaling up digital circuit computation with DNA strand displacement cascades.
Qian, Lulu; Winfree, Erik
2011-06-03
To construct sophisticated biochemical circuits from scratch, one needs to understand how simple the building blocks can be and how robustly such circuits can scale up. Using a simple DNA reaction mechanism based on a reversible strand displacement process, we experimentally demonstrated several digital logic circuits, culminating in a four-bit square-root circuit that comprises 130 DNA strands. These multilayer circuits include thresholding and catalysis within every logical operation to perform digital signal restoration, which enables fast and reliable function in large circuits with roughly constant switching time and linear signal propagation delays. The design naturally incorporates other crucial elements for large-scale circuitry, such as general debugging tools, parallel circuit preparation, and an abstraction hierarchy supported by an automated circuit compiler.
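For reference, the function the reported four-bit circuit computes, the floor of the square root of a 4-bit input, can be written as a small two-output Boolean network. The sketch below is only this target logic, derived here from the truth table; the wet-lab circuit realizes an equivalent dual-rail network out of seesaw gates and is not reproduced:

```python
def sqrt4(x3, x2, x1, x0):
    """Gate-level floor(sqrt(n)) for the 4-bit input n = 8*x3 + 4*x2 + 2*x1 + x0."""
    y1 = x3 | x2                                   # high output bit: n >= 4
    y0 = ((not x3) & (not x2) & (x1 | x0)) | (x3 & (x2 | x1 | x0))
    return int(y1), int(y0)

# Verify against the arithmetic definition for all 16 inputs
for n in range(16):
    bits = [(n >> k) & 1 for k in (3, 2, 1, 0)]
    y1, y0 = sqrt4(*bits)
    assert 2 * y1 + y0 == int(n ** 0.5)
```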
Enhancing water cycle measurements for future hydrologic research
Loescher, H.W.; Jacobs, J.M.; Wendroth, O.; Robinson, D.A.; Poulos, G.S.; McGuire, K.; Reed, P.; Mohanty, B.P.; Shanley, J.B.; Krajewski, W.
2007-01-01
The Consortium of Universities for the Advancement of Hydrologic Sciences, Inc., established the Hydrologic Measurement Facility to transform watershed-scale hydrologic research by facilitating access to advanced instrumentation and expertise that would not otherwise be available to individual investigators. We outline a committee-based process that determined which suites of instrumentation best fit the needs of the hydrological science community and a proposed mechanism for the governance and distribution of these sensors. Here, we also focus on how these proposed suites of instrumentation can be used to address key scientific challenges, including scaling water cycle science in time and space, broadening the scope of individual subdisciplines of water cycle science, and developing mechanistic linkages among these subdisciplines and spatio-temporal scales. © 2007 American Meteorological Society.
Migration of the Cratering Flow-Field Center with Implications for Scaling Oblique Impacts
NASA Technical Reports Server (NTRS)
Anderson, J. L. B.; Schultz, P. H.; Heineck, J. T.
2004-01-01
Crater-scaling relationships are used to predict many cratering phenomena such as final crater diameter and ejection speeds. Such nondimensional relationships are commonly determined from experimental impact and explosion data. Almost without exception, these crater-scaling relationships have used data from vertical impacts (90 deg. to the horizontal). The majority of impact craters, however, form by impacts at angles near 45 deg. to the horizontal. While even low impact angles result in relatively circular craters in sand targets, the effects of impact angle have been shown to extend well into the excavation stage of crater growth. Thus, the scaling of oblique impacts needs to be investigated more thoroughly in order to quantify fully how impact angle affects ejection speed and angle. In this study, ejection parameters from vertical (90 deg.) and 30 deg. oblique impacts are measured using three-dimensional particle image velocimetry (3D PIV) at the NASA Ames Vertical Gun Range (AVGR). The primary goal is to determine the horizontal migration of the cratering flow-field center (FFC). The location of the FFC at the time of ejection controls the scaling of oblique impacts. For vertical impacts the FFC coincides with the impact point (IP) and the crater center (CC). Oblique impacts reflect a more complex, horizontally migrating flow-field. A single, stationary point-source model cannot be used accurately to describe the evolution of the ejection angles from oblique impacts. The ejection speeds for oblique impacts also do not follow standard scaling relationships. The migration of the FFC needs to be understood and incorporated into any revised scaling relationships.
A first determination of the unpolarized quark TMDs from a global analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bacchetta, Alessandro; Delcarro, Filippo; Pisano, Cristian
Transverse momentum dependent distribution and fragmentation functions of unpolarized quarks inside unpolarized protons are extracted, for the first time, through a simultaneous analysis of semi-inclusive deep-inelastic scattering, Drell-Yan and Z boson hadroproduction processes. This study is performed at leading order in perturbative QCD, with energy scale evolution at the next-to-leading logarithmic accuracy. Moreover, some specific choices are made to deal with low scale evolution around 1 GeV². Since only data in the low transverse momentum region are considered, no matching to fixed-order calculations at high transverse momentum is needed.
Microlensing makes lensed quasar time delays significantly time variable
NASA Astrophysics Data System (ADS)
Tie, S. S.; Kochanek, C. S.
2018-01-01
The time delays of gravitationally lensed quasars are generally believed to be unique numbers whose measurement is limited only by the quality of the light curves and the models for the contaminating contribution of gravitational microlensing to the light curves. This belief is incorrect - gravitational microlensing also produces changes in the actual time delays on the ∼day(s) light-crossing time-scale of the emission region. This is due to a combination of the inclination of the disc relative to the line of sight and the differential magnification of the temperature fluctuations producing the variability. We demonstrate this both mathematically and with direct calculations using microlensing magnification patterns. Measuring these delay fluctuations can provide a physical scale for microlensing observations, removing the need for priors on either the microlens masses or the component velocities. That time delays in lensed quasars are themselves time variable likely explains why repeated delay measurements of individual lensed quasars appear to vary by more than their estimated uncertainties. This effect is also a new important systematic problem for attempts to use time delays in lensed quasars for cosmology or to detect substructures (satellites) in lens galaxies.
NASA Astrophysics Data System (ADS)
Jackisch, Conrad; Angermann, Lisa; Allroggen, Niklas; Sprenger, Matthias; Blume, Theresa; Tronicke, Jens; Zehe, Erwin
2017-07-01
The study deals with the identification and characterization of rapid subsurface flow structures through pedo- and geo-physical measurements and irrigation experiments at the point, plot and hillslope scale. Our investigation of flow-relevant structures and hydrological responses refers to the general interplay of form and function, respectively. To obtain a holistic picture of the subsurface, a large set of different laboratory, exploratory and experimental methods was used at the different scales. For exploration these methods included drilled soil core profiles, in situ measurements of infiltration capacity and saturated hydraulic conductivity, and laboratory analyses of soil water retention and saturated hydraulic conductivity. The irrigation experiments at the plot scale were monitored through a combination of dye tracer, salt tracer, soil moisture dynamics, and 3-D time-lapse ground penetrating radar (GPR) methods. At the hillslope scale the subsurface was explored by a 3-D GPR survey. A natural storm event and an irrigation experiment were monitored by a dense network of soil moisture observations and a cascade of 2-D time-lapse GPR trenches. We show that the shift between the activated and non-activated state of the flow paths is needed to distinguish structures from overall heterogeneity. Pedo-physical analyses of point-scale samples are the basis for sub-scale structure inference. At the plot and hillslope scale, 3-D and 2-D time-lapse GPR applications are successfully employed as non-invasive means to image subsurface response patterns and to identify flow-relevant paths. Tracer recovery and soil water responses from irrigation experiments deliver a consistent estimate of response velocities. The combined observation of form and function under active conditions provides the means to localize and characterize the structures (this study) and the hydrological processes (companion study Angermann et al., 2017, this issue).
Mandák, Bohumil; Hadincová, Věroslava; Mahelka, Václav; Wildová, Radka
2013-01-01
Background North American Pinus strobus is a highly invasive tree species in Central Europe. Using ten polymorphic microsatellite loci we compared various aspects of the large-scale genetic diversity of individuals from 30 sites in the native distribution range with those from 30 sites in the European adventive distribution range. To investigate the ascertained pattern of genetic diversity of this intercontinental comparison further, we surveyed fine-scale genetic diversity patterns and changes over time within four highly invasive populations in the adventive range. Results Our data show that at the large scale the genetic diversity found within the relatively small adventive range in Central Europe, surprisingly, equals the diversity found within the sampled area in the native range, which is about thirty times larger. Bayesian assignment grouped individuals into two genetic clusters separating North American native populations from the European, non-native populations, without any strong genetic structure shown over either range. At the fine scale, our comparison of genetic diversity parameters among the localities and age classes yielded no evidence of an increase in genetic diversity over time. We found that spatial genetic structure (SGS) differed across age classes within the populations under study. Old trees in general completely lacked any SGS, which increased over time and reached its maximum in the sapling stage. Conclusions Based on (1) the absence of difference in genetic diversity between the native and adventive ranges, together with the lack of structure in the native range, and (2) the lack of any evidence of a temporal increase in genetic diversity at four highly invasive populations in the adventive range, we conclude that population amalgamation probably first happened in the native range, prior to introduction. In that case, there would have been no need for multiple introductions from previously isolated populations, but only several introductions from genetically diverse populations. PMID:23874648
Manning-Walsh, Juanita
2005-01-01
To examine relationships between symptom distress and quality of life when religious support and personal support were introduced as mediating variables. Cross-sectional, correlational. Internet recruitment following university institutional review board approval. Mailed questionnaires from 100 women with breast cancer, mean age 46, length of time since surgery 1 to 24 months, predominantly White. Symptom Distress Scale, Religious Support Scale, FACT-B, and FACIT-Sp-12. Personal support was positively related to quality of life and partially mediated the effects of symptom distress. Religious support did not mediate symptom distress and was not directly related to quality of life. Social support from family members and friends helped to decrease the negative effects of symptoms on quality of life. This study underscores the need to continue to assess for symptom distress and adequacy of personal support throughout the cancer trajectory and to facilitate the garnering of support resources when needed.
Game management, context effects, and calibration: the case of yellow cards in soccer.
Unkelbach, Christian; Memmert, Daniel
2008-02-01
Referees in German first-league soccer games do not award as many yellow cards in the beginning of a game as should be statistically expected. One explanation for this effect is the concept of game management (Mascarenhas, Collins, & Mortimer, 2002). Alternatively, the consistency model (Haubensak, 1992) explains the effect as a necessity of the judgment situation: Referees need to calibrate a judgment scale, and, to preserve degrees of freedom in that scale, they need to avoid extreme category judgments in the beginning (i.e., yellow cards). Experiment 1 shows that referees who judge scenes in the context of a game award fewer yellow cards than referees who see the same scenes in random order. Experiment 2 shows the combined influence of game management (by explicitly providing information about the game situation) and calibration (early vs. late scenes in the time course of a game). Theoretical implications for expert refereeing and referee training are discussed.
Ariew, André
2007-03-01
Charles Darwin, James Clerk Maxwell, and Francis Galton were all aware, by various means, of Adolphe Quetelet's pioneering work in statistics. Darwin, Maxwell, and Galton all had reason to be interested in Quetelet's work: they were all working on some instance of how large-scale regularities emerge from individual events that vary from one another; all were rejecting the divine interventionist theories of their contemporaries; and Quetelet's techniques provided them with a way forward. Maxwell and Galton both explicitly endorse Quetelet's techniques in their work; Darwin does not incorporate any of the statistical ideas of Quetelet, although natural selection after the twentieth-century synthesis has. Why not Darwin? My answer is that by the time Darwin encountered Malthus's law of excess reproduction he had all he needed to answer questions about large-scale regularities in extinctions, speciation, and adaptation. He didn't need Quetelet.
Scales and scaling in turbulent ocean sciences; physics-biology coupling
NASA Astrophysics Data System (ADS)
Schmitt, Francois
2015-04-01
Geophysical fields possess huge fluctuations over many spatial and temporal scales. In the ocean, this property at smaller scales is closely linked to marine turbulence. The velocity field varies from large scales down to the Kolmogorov scale (mm), and scalar fields from large scales down to the Batchelor scale, which is often much smaller. As a consequence, it is not always simple to determine at which scale a process should be considered. The scale question is hence fundamental in marine sciences, especially when dealing with physics-biology coupling. For example, marine dynamical models typically have a grid size of a hundred meters or more, which is more than 10⁵ times larger than the smallest turbulence scale (the Kolmogorov scale). Such a scale is fine for the dynamics of a whale (around 100 m), but for a fish larva (1 cm) or a copepod (1 mm) a description at smaller scales is needed, due to the nonlinear nature of turbulence. The same holds for biogeochemical fields such as passive and active tracers (oxygen, fluorescence, nutrients, pH, turbidity, temperature, salinity...). In this framework, we will discuss the scale problem in turbulence modeling in the ocean, and the relation of Kolmogorov's and Batchelor's scales of turbulence in the ocean to the size of marine animals. We will also consider scaling laws for organism-particle Reynolds numbers (from whales to bacteria), and possible scaling laws for organisms' accelerations.
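Both dissipative scales follow directly from the kinematic viscosity ν, the dissipation rate ε and the scalar diffusivity D. A small sketch with assumed open-ocean values (orders of magnitude only):

```python
def kolmogorov_scale(nu, eps):
    """Kolmogorov length eta = (nu^3 / eps)^(1/4), in metres."""
    return (nu**3 / eps) ** 0.25

def batchelor_scale(nu, eps, D):
    """Batchelor length lambda_B = (nu * D^2 / eps)^(1/4) = eta / sqrt(Sc)."""
    return (nu * D**2 / eps) ** 0.25

# Illustrative seawater values (assumptions, order of magnitude only):
nu  = 1e-6    # kinematic viscosity, m^2/s
eps = 1e-8    # turbulent dissipation rate, W/kg
D   = 1e-9    # molecular diffusivity of a dissolved scalar, m^2/s
print(f"eta      ~ {kolmogorov_scale(nu, eps) * 1000:.2f} mm")   # ~3 mm
print(f"lambda_B ~ {batchelor_scale(nu, eps, D) * 1000:.3f} mm") # ~0.1 mm
```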
Tokkaya, Sedefnur; Karayurt, Ozgül
2010-01-01
Breast cancer is the most frequent type of cancer among women in Turkey. Because of the high incidence of breast cancer, many women have family members who have experienced breast cancer. The aim of this study was to test the validity and reliability of the Information and Support Needs Questionnaire (ISNQ) for Turkish women, which was originally developed for use in women with primary relatives who had breast cancer. The study sample included 97 women whose primary female relatives had breast cancer. Data were collected with a Demographic Questionnaire and the ISNQ. The ISNQ was developed by Chalmers et al and was composed of 2 scales: the Importance Scale and the Needs Met Scale. Linguistic validity, translation, back translation, and content validity were tested with expert opinions. Item-to-total correlation scores ranged from 0.22 to 0.72 on the Importance Scale and from 0.23 to 0.60 on the Needs Met Scale. Cronbach alpha coefficients were .81 and .83 on the Importance Scale and the Needs Met Scale, respectively. The ISNQ, adapted into Turkish, was found to have sufficient validity and reliability. The questionnaire can be used to determine the information and support needs of women whose primary relatives have breast cancer. Nurses and other health professionals can conduct interventions directed toward meeting the information and support needs of women whose primary relatives have breast cancer.
Influence of spatial and temporal scales in identifying temperature extremes
NASA Astrophysics Data System (ADS)
van Eck, Christel M.; Friedlingstein, Pierre; Mulder, Vera L.; Regnier, Pierre A. G.
2016-04-01
Extreme heat events are becoming more frequent. Notable are severe heatwaves such as the European heatwave of 2003, the Russian heatwave of 2010 and the Australian heatwave of 2013. Surface temperature is attaining new maxima not only during the summer but also during the winter. The year 2015 is reported to be a record-breaking year for both summer and winter temperatures. These extreme temperatures are taking their human and environmental toll, emphasizing the need for an accurate method to define a heat extreme in order to fully understand the spatial and temporal spread of an extreme and its impact. This research aims to explore how the use of different spatial and temporal scales influences the identification of a heat extreme. For this purpose, two near-surface temperature datasets of different temporal and spatial scales are used. The first is the daily ERA-Interim dataset at 0.25 degree with a time span of 32 years (1979-2010). The second is the daily Princeton Meteorological Forcing Dataset at 0.5 degree with a time span of 63 years (1948-2010). A temperature is considered extremely anomalous when it surpasses the 90th, 95th, or 99th percentile threshold based on the aforementioned pre-processed datasets. The analysis is conducted on a global scale, dividing the world into IPCC's so-called SREX regions developed for the analysis of extreme climate events. Pre-processing is done by detrending and/or subtracting the monthly climatology based on 32 years of data for both datasets and on 63 years of data for only the Princeton Meteorological Forcing Dataset. This results in 6 datasets of temperature anomalies from which the locations in time and space of the anomalously warm days are identified. Comparison of the differences between these 6 datasets in terms of absolute threshold temperatures for extremes and the temporal and spatial spread of the extremely anomalous warm days shows a dependence of the results on the datasets and methodology used. This stresses the need for a careful selection of data and methodology when identifying heat extremes.
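A minimal sketch of the pre-processing and thresholding described above, assuming daily data in plain numpy arrays; the detrending and climatology choices here are simplified stand-ins for the paper's exact procedure:

```python
import numpy as np

def extreme_mask(temp, months, q=95):
    """Flag days whose anomaly exceeds the q-th percentile.

    temp   : 1-D array of daily near-surface temperature
    months : 1-D array of the month (1..12) of each day
    Steps mirror the description above: remove the linear trend, subtract
    the monthly climatology, then threshold on a percentile.
    """
    t = np.arange(temp.size)
    detrended = temp - np.polyval(np.polyfit(t, temp, 1), t)
    anom = detrended.copy()
    for m in range(1, 13):
        sel = months == m
        anom[sel] -= detrended[sel].mean()
    return anom > np.percentile(anom, q)
```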
NASA Astrophysics Data System (ADS)
Hackl, Jason F.
The relative dispersion of one fluid particle with respect to another is fundamentally related to the transport and mixing of contaminant species in turbulent flows. The most basic consequence of Kolmogorov's 1941 similarity hypotheses for relative dispersion is the Richardson-Obukhov law, which states that the mean-square pair separation distance grows with the cube of time in the inertial subrange.
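In its standard form (a statement of the classical theory, not a result specific to this work), the law reads

\langle |\mathbf{r}(t)|^{2} \rangle = g \, \varepsilon \, t^{3},

where g is the Richardson constant and ε is the mean kinetic energy dissipation rate, valid for separations within the inertial subrange.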
NASA Astrophysics Data System (ADS)
Gómez, Breogán; Miguez-Macho, Gonzalo
2017-04-01
Nudging techniques are commonly used to constrain the evolution of numerical models to a reference dataset that is typically of lower resolution. The nudged model retains some of the features of the reference field while incorporating its own dynamics into the solution. These characteristics have made nudging very popular in dynamic downscaling applications, from short-range single-case studies to multi-decadal regional climate simulations. Recently, a variation of this approach called Spectral Nudging has gained popularity for its ability to maintain the higher temporal and spatial variability of the model results while forcing the large scales in the solution with a coarser resolution field. In this work, we focus on a little-explored aspect of this technique: the impact of selecting different cut-off wavenumbers and spin-up times. We perform four-day-long simulations with the WRF model, daily for three different one-month periods, that include a free run and several Spectral Nudging experiments with cut-off wavenumbers ranging from the smallest to the largest possible (full Grid Nudging). Results show that Spectral Nudging is very effective at imposing the selected scales onto the solution, while allowing the limited-area model to incorporate finer-scale features. The model error diminishes rapidly as the nudging expands over broader parts of the spectrum, but this decreasing trend ceases sharply at cut-off wavenumbers equivalent to a length scale of about 1000 km, and the error magnitude changes minimally thereafter. This scale corresponds to the Rossby radius of deformation, separating synoptic from convective scales in the flow. When nudging above this value is applied, a shifting of the synoptic patterns can occur in the solution, yielding large model errors. However, when selecting smaller scales, the fine-scale contribution of the model is damped, making 1000 km the appropriate scale threshold to nudge at in order to balance both effects. Finally, we note that longer spin-up times are needed for model errors to stabilize when using Spectral Nudging than with Grid Nudging. Our results suggest that this time is between 36 and 48 hours.
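A schematic of the core operation, relaxing only the large scales of a 2-D field toward a coarser reference, is sketched below with numpy FFTs; the cutoff, relaxation coefficient and 2-D setting are illustrative assumptions, not the WRF implementation:

```python
import numpy as np

def spectral_nudge(field, reference, cutoff_k, alpha=0.1):
    """One spectral-nudging step on a 2-D field.

    Relax only wavenumbers <= cutoff_k (cycles per grid point) toward the
    reference field, leaving the finer scales of the model untouched.
    """
    diff_hat = np.fft.fft2(reference - field)
    kx = np.fft.fftfreq(field.shape[0])[:, None]
    ky = np.fft.fftfreq(field.shape[1])[None, :]
    mask = np.sqrt(kx**2 + ky**2) <= cutoff_k   # keep only the large scales
    return field + alpha * np.real(np.fft.ifft2(diff_hat * mask))
```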
Pelentsov, Lemuel J; Fielder, Andrea L; Laws, Thomas A; Esterman, Adrian J
2016-01-01
Children and families affected by rare diseases have received scant consideration from the medical, scientific, and political communities, with parents' needs especially having received little attention. Affected parents often have limited access to information and support and appropriate health care services. While scales to measure the needs of parents of children with chronic illnesses have been developed, there have been no previous attempts to develop a scale to assess the needs of parents of children with rare diseases. To develop a scale for measuring the supportive care needs of parents of children with rare diseases. A total of 301 responses to our Parental Needs Survey were randomly divided into two halves, one for exploratory factor analysis and the other for confirmatory factor analysis (CFA). After removing unsuitable items, exploratory factor analysis was undertaken to determine the factor structure of the data. CFA using structural equation modeling was then undertaken to confirm the factor structure. Seventy-two items were entered into the CFA, with a scree plot showing a likely four-factor solution. The results provided four independent subscales of parental needs: Understanding the disease (four items); Working with health professionals (four items); Emotional issues (three items); and Financial needs (three items). The structural equation modeling confirmed the suitability of the four-factor solution and demonstrated that the four subscales could be added to provide an overall scale of parental need. This is the first scale developed to measure the supportive care needs of parents of children with rare diseases. The scale is suitable for use in surveys to develop policy, in individual clinical assessments, and, potentially, for evaluating new programs. Measuring the supportive care needs of parents caring for a child with a rare disease will hopefully lead to better physical and psychological health outcomes for parents and their affected children.
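A minimal sketch of the two-stage workflow described above (scree inspection followed by a factor fit), using synthetic data and scikit-learn's FactorAnalysis as a stand-in for the study's EFA/CFA software; the item count and four-factor target mirror the abstract, the data do not:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic stand-in for the survey responses (301 parents x 14 items)
rng = np.random.default_rng(0)
X = rng.normal(size=(301, 14))

# Scree inspection: eigenvalues of the item correlation matrix
eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
print("leading eigenvalues:", np.round(eigvals[:6], 2))

# Exploratory fit with the four factors suggested by the scree plot
fa = FactorAnalysis(n_components=4, random_state=0).fit(X)
loadings = fa.components_.T          # items x factors
```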
Pizzuto, James; Schenk, Edward R.; Hupp, Cliff R.; Gellis, Allen; Noe, Greg; Williamson, Elyse; Karwan, Diana L.; O'Neal, Michael; Marquard, Julia; Aalto, Rolf E.; Newbold, Denis
2014-01-01
Watershed Best Management Practices (BMPs) are often designed to reduce loading from particle-borne contaminants, but the temporal lag between BMP implementation and improvement in receiving water quality is difficult to assess because particles are only moved downstream episodically, resting for long periods in storage between transport events. A theory is developed that describes the downstream movement of suspended sediment particles accounting for the time particles spend in storage given sediment budget data (by grain size fraction) and information on particle transit times through storage reservoirs. The theory is used to define a suspended sediment transport length scale that describes how far particles are carried during transport events, and to estimate a downstream particle velocity that includes time spent in storage. At 5 upland watersheds of the mid-Atlantic region, transport length scales for silt-clay range from 4 to 60 km, while those for sand range from 0.4 to 113 km. Mean sediment velocities for silt-clay range from 0.0072 km/yr to 0.12 km/yr, while those for sand range from 0.0008 km/yr to 0.20 km/yr, 4–6 orders of magnitude slower than the velocity of water in the channel. These results suggest lag times of 100–1000 years between BMP implementation and effectiveness in receiving waters such as the Chesapeake Bay (where BMPs are located upstream of the characteristic transport length scale). Many particles likely travel much faster than these average values, so further research is needed to determine the complete distribution of suspended sediment velocities in real watersheds.
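Given the mean velocities reported above, the implied lag between BMP implementation and a downstream response is simply distance divided by velocity. A sketch using the paper's silt-clay velocity range and an assumed 10 km distance to the receiving waters:

```python
def lag_time_years(distance_km, velocity_km_per_yr):
    """Time for a sediment signal to propagate a given distance downstream."""
    return distance_km / velocity_km_per_yr

# Silt-clay velocities from the study; the 10 km distance is an assumption.
for v in (0.0072, 0.12):
    print(f"v = {v} km/yr -> lag ~ {lag_time_years(10.0, v):.0f} years")
```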
Timed activity performance in persons with upper limb amputation: A preliminary study.
Resnik, Linda; Borgia, Mathew; Acluche, Frantzy
55 subjects with upper limb amputation were administered the T-MAP twice within one week. The purpose was to develop a timed measure of activity performance for persons with upper limb amputation (T-MAP); examine the measure's internal consistency, test-retest reliability and validity; and compare scores by prosthesis use. Measures of activity performance for persons with upper limb amputation are needed. The time required to perform daily activities is a meaningful metric that has implications for participation in life roles. Internal consistency and test-retest reliability were evaluated. Construct validity was examined by comparing scores by amputation level. Exploratory analyses compared sub-group scores and examined correlations with other measures. Scale alpha was 0.77; ICC was 0.93. Timed scores differed by amputation level. Subjects using a prosthesis took longer to perform all tasks. T-MAP was not correlated with other measures of dexterity or activity, but was correlated with pain for non-prosthesis users. The timed scale had adequate internal consistency and excellent test-retest reliability. Analyses support the reliability and construct validity of the T-MAP. 2c "outcomes" research. Published by Elsevier Inc.
Gautestad, Arild O
2012-09-07
Animals moving under the influence of spatio-temporal scaling and long-term memory generate a kind of space-use pattern that has proved difficult to model within a coherent theoretical framework. An extended kind of statistical mechanics is needed, accounting for both the effects of spatial memory and scale-free space use, and put into a context of ecological conditions. Simulations illustrating the distinction between scale-specific and scale-free locomotion are presented. The results show how observational scale (the time lag between relocations of an individual) may critically influence the interpretation of the underlying process. In this respect, a novel protocol is proposed as a method to distinguish between some main movement classes. For example, the 'power law in disguise' paradox, arising from a composite Brownian motion consisting of a superposition of independent movement processes at different scales, may be resolved by shifting the focus from pattern analysis at one particular temporal resolution towards a more process-oriented approach involving several scales of observation. A more explicit consideration of system complexity within a statistical mechanical framework, supplementing the more traditional mechanistic modelling approach, is advocated.
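A minimal simulation of the composite process mentioned above: every step is Gaussian, but mixing a few movement scales yields a heavy-tailed step-length distribution that can pass for a power law at a single observational scale. Scales and weights are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def composite_brownian(n, sigmas=(0.1, 1.0, 10.0), weights=(0.6, 0.3, 0.1)):
    """Superpose Brownian-like moves drawn from several scales of motion."""
    which = rng.choice(len(sigmas), size=n, p=weights)   # pick a scale per step
    steps = rng.normal(0.0, np.take(sigmas, which), size=(2, n))
    return np.cumsum(steps, axis=1)                      # x, y trajectory

path = composite_brownian(100_000)
step_lengths = np.hypot(*np.diff(path, axis=1))
# A log-log histogram of step_lengths can look approximately power-law-like
# over a limited range: the 'power law in disguise'.
```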
Towards a Near Real-Time Satellite-Based Flux Monitoring System for the MENA Region
NASA Astrophysics Data System (ADS)
Ershadi, A.; Houborg, R.; McCabe, M. F.; Anderson, M. C.; Hain, C.
2013-12-01
Satellite remote sensing has the potential to offer spatially and temporally distributed information on land surface characteristics, which may be used as inputs and constraints for estimating land surface fluxes of carbon, water and energy. Enhanced satellite-based monitoring systems for aiding local water resource assessments and agricultural management activities are particularly needed for the Middle East and North Africa (MENA) region. The MENA region is an area characterized by limited fresh water resources, an often inefficient use of these, and relatively poor in-situ monitoring as a result of sparse meteorological observations. To address these issues, an integrated modeling approach for near real-time monitoring of land surface states and fluxes at fine spatio-temporal scales over the MENA region is presented. This approach is based on synergistic application of multiple sensors and wavebands in the visible to shortwave infrared and thermal infrared (TIR) domain. The multi-scale flux mapping and monitoring system uses the Atmosphere-Land Exchange Inverse (ALEXI) model and associated flux disaggregation scheme (DisALEXI), and the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) in conjunction with model reanalysis data and multi-sensor remotely sensed data from polar orbiting (e.g. Landsat and MODerate resolution Imaging Spectroradiometer (MODIS)) and geostationary (MSG; Meteosat Second Generation) satellite platforms to facilitate time-continuous (i.e. daily) estimates of field-scale water, energy and carbon fluxes. Within this modeling system, TIR satellite data provide information about the sub-surface moisture status and plant stress, obviating the need for precipitation input and a detailed soil surface characterization (i.e. for prognostic modeling of soil transport processes). The STARFM fusion methodology blends aspects of high frequency (spatially coarse) and spatially fine resolution sensors and is applied directly to flux output fields to facilitate daily mapping of fluxes at sub-field scales. A complete processing infrastructure to automatically ingest and pre-process all required input data and to execute the integrated modeling system for near real-time agricultural monitoring purposes over targeted MENA sites is being developed, and initial results from this concerted effort will be discussed.
A new battery-charging method suggested by molecular dynamics simulations.
Abou Hamad, Ibrahim; Novotny, M A; Wipf, D O; Rikvold, P A
2010-03-20
Based on large-scale molecular dynamics simulations, we propose a new charging method that should be capable of charging a lithium-ion battery in a fraction of the time needed when using traditional methods. This charging method uses an additional applied oscillatory electric field. Our simulation results show that this charging method offers a great reduction in the average intercalation time for Li⁺ ions, which dominates the charging time. The oscillating field not only increases the diffusion rate of Li⁺ ions in the electrolyte but, more importantly, also enhances intercalation by lowering the corresponding overall energy barrier.
Decontamination of chemical tracers in droplets by a submerging thin film flow
NASA Astrophysics Data System (ADS)
Landel, Julien R.; McEvoy, Harry; Dalziel, Stuart B.
2016-11-01
We investigate the decontamination of chemical tracers contained in small viscous drops by a submerging falling film. This problem has applications in the decontamination of hazardous chemicals, following accidental releases or terrorist attacks. Toxic droplets lying on surfaces are cleaned by spraying a liquid decontaminant over the surface. The decontaminant film submerges the droplets, without detaching them, in order to neutralize toxic chemicals in the droplets. The decontamination process is controlled by advection, diffusion and reaction processes near the drop-film interface. Chemical tracers dissolve into the film flow, forming a thin diffusive boundary layer at the interface. The chemical tracers are then neutralized through a reaction with a chemical decontaminant transported in the film. We assume in this work that the decontamination process occurs mainly in the film phase owing to the low solubility of the decontaminant in the drop phase. We analyze the impact of the reaction time scale, assuming a first-order reaction, in relation to the characteristic advection and diffusion time scales in the case of a single droplet. Using theoretical, numerical and experimental means, we find that the reaction time scale needs to be significantly smaller than the characteristic time scale in the diffusive boundary layer in order to noticeably enhance the decontamination of a single toxic droplet. We discuss these results in the more general case of the decontamination of a large number of droplets. This material is based upon work supported by the Defense Threat Reduction Agency under Contract No. HDTRA1-12-D-0003-0001.
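The comparison at the heart of the result can be phrased as three time scales; the helper below computes them from user-supplied values (all inputs are assumptions, and the criterion is the qualitative one stated above: t_reaction much smaller than t_diffusion):

```python
def time_scales(k_reaction, delta_m, D_m2s, U_ms, L_m):
    """Compare reaction, boundary-layer diffusion and advection time scales.

    k_reaction : first-order rate constant (1/s)
    delta_m    : diffusive boundary-layer thickness at the drop-film interface (m)
    D_m2s      : tracer diffusivity in the film (m^2/s)
    U_ms, L_m  : film velocity and drop length scale for the advection time
    """
    return {
        "t_reaction": 1.0 / k_reaction,
        "t_diffusion": delta_m**2 / D_m2s,
        "t_advection": L_m / U_ms,
    }
```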
Scaling forecast models for wind turbulence and wind turbine power intermittency
NASA Astrophysics Data System (ADS)
Duran Medina, Olmo; Schmitt, Francois G.; Calif, Rudy
2017-04-01
The intermittency of wind turbine power remains an important issue for the massive development of this renewable energy. The energy peaks injected into the electric grid produce difficulties in energy distribution management. Hence, a correct forecast of the wind power in the short and middle term is needed, due to the high unpredictability of the intermittency phenomenon. We consider a statistical approach through the analysis and characterization of stochastic fluctuations. The theoretical framework is the multifractal modeling of wind velocity fluctuations. Here, we consider data from three wind turbines, two of which use a direct-drive technology. These turbines produce energy under real operating conditions and allow us to test our forecast models of power production at different time horizons. Two forecast models were developed, based on two physical properties observed in the wind and power time series: scaling properties on the one hand, and intermittency of the wind power increments on the other. The first tool is related to intermittency through a multifractal lognormal fit of the power fluctuations. The second tool is based on an analogy between the power scaling properties and a fractional Brownian motion. Indeed, an intrinsic long-term memory is found in both time series. Both models show encouraging results, since a correct tendency of the signal is respected over different time scales. These tools are first steps in a search for efficient forecasting approaches for grid adaptation in the face of wind energy fluctuations.
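A sketch of the second ingredient, a fractional-Brownian-motion-like signal generated by spectral synthesis with power spectrum ~ f^-(2H+1); the Hurst exponent here is an arbitrary illustrative value, not the one fitted to the turbine data:

```python
import numpy as np

def fbm_spectral(n, H, rng=None):
    """Approximate fractional Brownian motion by spectral synthesis.

    Builds Gaussian noise whose power spectrum scales as f^-(2H+1), the
    spectrum of fBm with Hurst exponent H. Periodic and approximate.
    """
    rng = rng or np.random.default_rng()
    f = np.fft.rfftfreq(n)
    amp = np.zeros_like(f)
    amp[1:] = f[1:] ** (-(2 * H + 1) / 2.0)       # amplitude ~ sqrt(PSD)
    phase = rng.uniform(0, 2 * np.pi, f.size)
    x = np.fft.irfft(amp * np.exp(1j * phase), n)
    return x / x.std()

series = fbm_spectral(2**12, H=0.8)   # H > 0.5: long-term memory
```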
Craving and relapse measurement in alcoholism.
Potgieter, A S; Deckers, F; Geerlings, P
1999-01-01
This paper attempts to summarize the measurement of craving with four different craving instruments and to relate this to definitions and measurement of relapse. The definitions of relapse may vary between studies and researchers, but are usually well defined. Five commonly used methods to measure relapse are: (1) quantity/frequency of drinking; (2) cumulative duration of abstinence (CDA); (3) post-withdrawal abstinent period; (4) stable recovery period; (5) the time line follow-back method. The definition of craving is much less clear and is mostly described as an emotional-motivational state or as obsessive-compulsive behaviour. Four self-rating instruments are briefly discussed and compared: the Obsessive-Compulsive Drinking Scale, OCDS, the Lübeck Craving Scale, LCRR, the Alcohol Craving Questionnaire, ACQ-Now-SF-R, and ordinal scales (e.g. visual analogue, Likert, or verbal descriptive scales). These instruments measure different aspects or dimensions of craving over different periods. The different dimensions measured suggest that there is still a need to conceptualize a standard interpretation of the word craving. There is a need also to measure an emotional-motivational dimension, a cognitive-behavioural dimension, expectancies, and effects on positive and negative reinforcement with different instruments or with one multidimensional instrument. It is suggested that different patients are expected to have different craving profiles.
Setting the scene for SWOT: global maps of river reach hydrodynamic variables
NASA Astrophysics Data System (ADS)
Schumann, Guy J.-P.; Durand, Michael; Pavelsky, Tamlin; Lion, Christine; Allen, George
2017-04-01
Credible and reliable characterization of discharge from the Surface Water and Ocean Topography (SWOT) mission using the Manning-based algorithms needs a prior estimate constraining reach-scale channel roughness, base flow and river bathymetry. In some places, any one of these variables may exist locally or even regionally as a measurement, often only at a station, or sometimes as a basin-wide model estimate. However, to date none of them exist at the scale required for SWOT, and they thus need to be mapped at a continental scale. The prior estimates will be employed to produce initial discharge estimates, which will be used as starting guesses for the various Manning-based algorithms, to be refined using the SWOT measurements themselves. A multitude of reach-scale variables were derived, including Landsat-based width, SRTM slope and accumulation area. As a possible starting point for building the prior database of low flow, river bathymetry and channel roughness estimates, we employed a variety of sources, including data from all GRDC records, simulations from long-time runs of the global water balance model (WBM), and reach-based calculations from hydraulic geometry relationships as well as Manning's equation. Here, we present the first global maps of this prior database with some initial validation, caveats and prospective uses.
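For reference, the Manning relation underlying the reach-based calculations, in SI units, with illustrative reach values (not from the database):

```python
def manning_discharge(n, area_m2, hydraulic_radius_m, slope):
    """Manning's equation (SI): Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    return (1.0 / n) * area_m2 * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

# Illustrative reach: n = 0.03, A = 500 m^2, R = 4 m, S = 1e-4
print(f"Q ~ {manning_discharge(0.03, 500.0, 4.0, 1e-4):.0f} m^3/s")
```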
Resilience scales of a dammed tropical river
NASA Astrophysics Data System (ADS)
Calamita, Elisa; Schmid, Martin; Wehrli, Bernhard
2017-04-01
Artificial river impoundments disrupt the seasonality and dynamics of thermal, chemical, morphological and ecological regimes in river systems. These alterations affect aquatic ecosystems in space and time and specifically modify the seasonality and the longitudinal gradients of important biogeochemical processes. The resilience of river systems to anthropogenic stressors enables their recovery along the flow path; however, little is known about the longitudinal distance that rivers need to partially restore their physical, chemical and biological integrity. In this study, the concept of a "resilience scale" will be explored for different water quality parameters downstream of Kariba Dam, which impounds the largest artificial lake in the Zambezi basin (South-East Africa). The goal of this project is to develop a modelling framework to investigate and quantify the impact of large dams on downstream water quality in a tropical context. In particular, we aim to assess the degree of reversibility of the main downstream alterations (temperature, oxygen, nutrients) and consequently to quantify their longitudinal extent. Coupling in-situ measurements with hydraulic and hydrological parameters such as travel times will allow us to define a physically based parametrization of the different resilience scales for tropical rivers. The results will be used to improve future dam management at the local scale and to assess the ecological impact of planned dams at the catchment scale.
Real-Time Monitoring System for a Utility-Scale Photovoltaic Power Plant
Moreno-Garcia, Isabel M.; Palacios-Garcia, Emilio J.; Pallares-Lopez, Victor; Santiago, Isabel; Gonzalez-Redondo, Miguel J.; Varo-Martinez, Marta; Real-Calvo, Rafael J.
2016-01-01
There is, at present, considerable interest in the storage and dispatchability of photovoltaic (PV) energy, together with the need to manage power flows in real time. This paper presents a new system, PV-on time, which has been developed to supervise the operating mode of a grid-connected utility-scale PV power plant in order to ensure the reliability and continuity of its supply. The system presents an architecture of acquisition devices, including wireless sensors distributed around the plant, which measure the required information. It is also equipped with a high-precision protocol for synchronizing all data acquisition equipment, which is necessary for correctly establishing relationships among events in the plant. Moreover, a system for monitoring and supervising all of the distributed devices, as well as for the real-time treatment of all the registered information, is presented. Performance was analyzed in a 400 kW transformation center belonging to a 6.1 MW utility-scale PV power plant. In addition to monitoring the performance of all of the PV plant's components and detecting any failures or deviations in production, this system enables users to control the power quality of the injected signal and the influence of the installation on the distribution grid. PMID:27240365
NASA Astrophysics Data System (ADS)
Slater, L. D.; Robinson, J.; Weller, A.; Keating, K.; Robinson, T.; Parker, B. L.
2017-12-01
Geophysical length scales determined from complex conductivity (CC) measurements can be used to estimate permeability k when the electrical formation factor F, describing the ratio between tortuosity and porosity, is known. Two geophysical length scales have been proposed: [1] the imaginary conductivity σ" normalized by the specific polarizability cp; [2] the time constant τ multiplied by a diffusion coefficient D+. The parameters cp and D+ account for the control of fluid chemistry and/or varying mineralogy on the geophysical length scale. We evaluated the predictive capability of two recently presented CC permeability models: [1] an empirical formulation based on σ"; [2] a mechanistic formulation based on τ. The performance of the CC models was evaluated against measured permeability; this performance was also compared against that of well-established k estimation equations that use geometric length scales to represent the pore-scale properties controlling fluid flow. Both CC models predict permeability within one order of magnitude for a database of 58 sandstone samples, with the exception of those samples characterized by high pore-volume-normalized surface area Spor and more complex mineralogy including significant dolomite. Variations in cp and D+ likely contribute to the poor performance of the models for these high-Spor samples. The ultimate value of such geophysical models for permeability prediction lies in their application to field-scale geophysical datasets. Two observations favor the implementation of the σ"-based model over the τ-based model for field-scale estimation: [1] the limited range of variation in cp relative to D+; [2] σ" is readily measured using field geophysical instrumentation (at a single frequency), whereas τ requires broadband spectral measurements that are extremely challenging and time-consuming to accurately measure in the field. However, the need for a reliable estimate of F remains a major obstacle to the field-scale implementation of either of the CC permeability models for k estimation.
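A heavily hedged sketch of the mechanistic chain: interpreting τ as a diffusion time gives a length r ~ sqrt(2 D+ τ), and a capillary-bundle style estimate then maps that length and the formation factor to permeability as k ~ r²/(8F). Both the factor of 2 and the 1/(8F) mapping follow one commonly cited form and are taken here as assumptions, not as the exact models tested in the study:

```python
import numpy as np

def pore_scale_from_tau(tau_s, D_plus_m2s):
    """Diffusion-length reading of the relaxation time: r ~ sqrt(2 * D+ * tau).

    The factor of 2 is an assumption following one common convention.
    """
    return np.sqrt(2.0 * D_plus_m2s * tau_s)

def permeability_from_length(r_m, F):
    """Capillary-bundle style estimate: k ~ r^2 / (8 F), with F the formation factor."""
    return r_m**2 / (8.0 * F)
```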
NASA Astrophysics Data System (ADS)
Leutwyler, David; Fuhrer, Oliver; Cumming, Benjamin; Lapillonne, Xavier; Gysi, Tobias; Lüthi, Daniel; Osuna, Carlos; Schär, Christoph
2014-05-01
The representation of moist convection is a major shortcoming of current global and regional climate models. State-of-the-art global models usually operate at grid spacings of 10-300 km and therefore cannot fully resolve the relevant upscale and downscale energy cascades, so parametrization of the relevant sub-grid-scale processes is required. Several studies have shown that this approach entails major uncertainties for precipitation processes, which raises concerns about the models' ability to represent precipitation statistics and associated feedback processes, as well as their sensitivities to large-scale conditions. Further refining the model resolution to the kilometer scale allows these processes to be represented much closer to first principles and thus should yield an improved representation of the water cycle, including the drivers of extreme events. Although cloud-resolving simulations are very useful tools for climate simulations and numerical weather prediction, their high horizontal resolution, and consequently the small time steps needed, challenge current supercomputers to model large domains and long time scales. Recent innovations in the domain of hybrid supercomputers have led to mixed node designs with a conventional CPU and an accelerator such as a graphics processing unit (GPU). GPUs relax the necessity for cache coherency and complex memory hierarchies, but have a larger system memory bandwidth. This is highly beneficial for low-compute-intensity codes such as atmospheric stencil-based models. However, to efficiently exploit these hybrid architectures, climate models need to be ported and/or redesigned. Within the framework of the Swiss High Performance High Productivity Computing initiative (HP2C), a project to port the COSMO model to hybrid architectures has recently come to an end. The product of these efforts is a version of COSMO with improved performance on traditional x86-based clusters as well as hybrid architectures with GPUs. We present our redesign and porting approach as well as our experience and lessons learned. Furthermore, we discuss relevant performance benchmarks obtained on the new hybrid Cray XC30 system "Piz Daint" installed at the Swiss National Supercomputing Centre (CSCS), both in terms of time-to-solution and energy consumption. We will demonstrate a first set of short cloud-resolving climate simulations at the European scale using the GPU-enabled COSMO prototype and elaborate on our future plans for exploiting this new model capability.
A Scalable Multimedia Streaming Scheme with CBR-Transmission of VBR-Encoded Videos over the Internet
ERIC Educational Resources Information Center
Kabir, Md. H.; Shoja, Gholamali C.; Manning, Eric G.
2006-01-01
Streaming audio/video contents over the Internet requires large network bandwidth and timely delivery of media data. A streaming session is generally long and also needs a large I/O bandwidth at the streaming server. A streaming server, however, has limited network and I/O bandwidth. For this reason, a streaming server alone cannot scale a…
NASA Technical Reports Server (NTRS)
Macdonald, R. B.; Houston, A. G.; Heydorn, R. P.; Botkin, D. B.; Estes, J. E.; Strahler, A. H.
1981-01-01
An attempt is made to identify the need for, and the current capability of, a technology which could aid in monitoring the Earth's vegetation resource on a global scale. Vegetation is one of our most critical natural resources, and accurate timely information on its current status and temporal dynamics is essential to understand many basic and applied environmental interrelationships which exist on the small but complex planet Earth.
Systolic VLSI Reed-Solomon Decoder
NASA Technical Reports Server (NTRS)
Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.
1986-01-01
Decoder for digital communications provides high-speed, pipelined Reed-Solomon (RS) error-correction decoding of data streams. Principal new feature of proposed decoder is modification of Euclid greatest-common-divisor algorithm to avoid need for time-consuming computations of inverses of certain Galois-field quantities. Decoder architecture suitable for implementation on very-large-scale integrated (VLSI) chips with negative-channel metal-oxide/silicon circuitry.
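To see why Galois-field inverses are the expensive step, note that in GF(2^m) the inverse can be computed as a^(2^m - 2), i.e., by repeated multiplication. A small GF(2^4) sketch (the reduction polynomial x^4 + x + 1 is a standard but arbitrary choice, and this is an illustration of the cost, not the decoder's architecture):

```python
def gf16_mul(a, b, poly=0b10011):
    """Multiply in GF(2^4) with reduction polynomial x^4 + x + 1 (0b10011)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:       # reduce when degree reaches 4
            a ^= poly
        b >>= 1
    return r

def gf16_inv(a):
    """Invert via a^(2^4 - 2): Fermat-style exponentiation, 14 multiplies here.

    Illustrates why inversions are comparatively expensive and why an
    inversion-free Euclid variant is attractive for a systolic design.
    """
    r = 1
    for _ in range(14):       # r = a^14 = a^(2^4 - 2)
        r = gf16_mul(r, a)
    return r

assert all(gf16_mul(x, gf16_inv(x)) == 1 for x in range(1, 16))
```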
ERIC Educational Resources Information Center
Schockaert, Frederik
2014-01-01
School districts at times need to implement structural and programmatic changes requiring students to attend a different school, which tends to elicit strong parental emotions. This qualitative study analyzes how parents in one suburban Rhode Island district responded to a large-scale redistricting at the elementary level in order to (a) attain a…
ERIC Educational Resources Information Center
Yamashita, Lina; Hayes, Kathryn; Trexler, Cary J.
2017-01-01
In response to the increasing recognition of the need for sustainable food systems, research on students' and educators' knowledge of food systems and sustainability more broadly has grown but has generally focused on what people "fail" to understand. Moving away from this deficit approach, the present study used semi-structured…
A proper metaphysics for cognitive performance.
Van Orden, Guy C; Moreno, Miguel A; Holden, John G
2003-01-01
The general failure to individuate component causes in cognitive performance suggests the need for an alternative metaphysics. The metaphysics of control hierarchy theory accommodates the fact of self-organization in nature and the possibility that intentional actions are self-organized. One key assumption is that interactions among processes dominate their intrinsic dynamics. Scaling relations in response time variability motivate this assumption in cognitive performance.
Robert E. Keane; Rachel A. Loehman; Lisa M. Holsinger
2011-01-01
Fire management faces important emergent issues in the coming years such as climate change, fire exclusion impacts, and wildland-urban development, so new, innovative means are needed to address these challenges. Field studies, while preferable and reliable, will be problematic because of the large time and space scales involved. Therefore, landscape simulation...
Multiscale modeling and distributed computing to predict cosmesis outcome after a lumpectomy
NASA Astrophysics Data System (ADS)
Garbey, M.; Salmon, R.; Thanoon, D.; Bass, B. L.
2013-07-01
Surgery for early-stage breast carcinoma is either total mastectomy (complete breast removal) or surgical lumpectomy (removal of the tumor only). The lumpectomy, or partial mastectomy, is intended to preserve a breast that satisfies the woman's cosmetic, emotional and physical needs. But in a fairly large number of cases the cosmetic outcome is not satisfactory. Today, predicting that surgery outcome is essentially based on heuristics. Modeling such a complex process must encompass multiple scales, in space from cells to tissue, as well as in time, from minutes for the tissue mechanics to months for healing. The goal of this paper is to present a first step in the multiscale modeling of long-time-scale prediction of breast shape after tumor resection. This task requires coupling very different mechanical and biological models with very different computing needs. We provide a simple illustration of the application of heterogeneous distributed computing and modular software design to speed up model development. Our computational framework currently serves to test hypotheses on breast tissue healing in a pilot study with women who have elected to undergo BCT and are being treated at the Methodist Hospital in Houston, TX.
Fan, Kai; Zhang, Min; Mujumdar, Arun S
2018-01-10
Microwave heating has been applied in the drying of high-value solids as it affords a number of advantages, including shorter drying time and better product quality. Freeze-drying at cryogenic temperature and extremely low pressure provides the advantage of high product quality, but at very high capital and operating costs, due partly to very long drying times. Freeze-drying coupled with a microwave heat source speeds up the drying rate and yields good-quality products, provided the operating unit is designed and operated to avoid hot-spot development. This review surveys recent developments in the modeling and experimental results on microwave-assisted freeze-drying (MFD) over the past decade. Owing to the high costs involved, all applications so far are limited to small-scale operations for the drying of high-value foods such as fruits and vegetables. In order to promote industrial-scale applications for a broader range of products, further research and development efforts are needed to offset the current limitations of the process. The needs and opportunities for future research and development are outlined.
Biogeochemical Response to Mesoscale Physical Forcing in the California Current System
NASA Technical Reports Server (NTRS)
Niiler, Pearn P.; Letelier, Ricardo; Moisan, John R.; Marra, John A. (Technical Monitor)
2001-01-01
In the first part of the project, we investigated the local response of coastal ocean ecosystems (changes in chlorophyll concentration and chlorophyll fluorescence quantum yield) to physical forcing by developing and deploying Autonomous Drifting Ocean Stations (ADOS) within several mesoscale features along the U.S. west coast. We also compared the temporal and spatial variability registered by sensors mounted on the drifters to that registered by the sensors mounted on the satellites, in order to assess the scales of variability that are not resolved by the ocean color satellite. The second part of the project used the existing WOCE SVP Surface Lagrangian drifters to track individual water parcels through time. The individual drifter tracks were used to generate multivariate time series by interpolating/extracting the biological and physical data fields retrieved by remote sensors (ocean color, SST, wind speed and direction, wind stress curl, and sea level topography). The individual time series of the physical data (AVHRR, TOPEX, NCEP) were analyzed against the ocean color (SeaWiFS) time series to determine the time scale of the biological response to the physical forcing. The results from this part of the research are being used to compare the decorrelation scales of chlorophyll in a Lagrangian and an Eulerian framework. The results from both parts of this research augmented the time series data needed to investigate the interactions between ocean mesoscale features, wind, and biogeochemical processes. Using the historical Lagrangian data sets, we have completed a comparison of the decorrelation scales in both the Eulerian and Lagrangian reference frames for the SeaWiFS data set. We are continuing to investigate how these results might be used in objective mapping efforts.
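A simple version of the decorrelation-scale computation, the e-folding time of the lagged autocorrelation, applicable to either a Eulerian or an along-track Lagrangian series (a generic sketch, not the project's exact estimator):

```python
import numpy as np

def decorrelation_scale(x, dt=1.0):
    """e-folding time of the lagged autocorrelation of a time series."""
    x = np.asarray(x, float) - np.mean(x)
    acf = np.correlate(x, x, mode="full")[x.size - 1:]   # lags 0, 1, 2, ...
    acf /= acf[0]
    below = np.flatnonzero(acf < 1.0 / np.e)
    return below[0] * dt if below.size else np.nan
```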
Between-Trial Forgetting Due to Interference and Time in Motor Adaptation.
Kim, Sungshin; Oh, Youngmin; Schweighofer, Nicolas
2015-01-01
Learning a motor task with temporally spaced presentations, or with other tasks intermixed between presentations, reduces performance during training but can enhance retention post training. These two effects are known as the spacing and contextual interference effects, respectively. Here, we aimed to test a unifying hypothesis of the spacing and contextual interference effects in visuomotor adaptation, according to which forgetting between trials, due to either spaced presentations or interference by another task, will depress performance during acquisition but promote retention. We first performed an experiment with three visuomotor adaptation conditions: a short inter-trial-interval (ITI) condition (SHORT-ITI); a long ITI condition (LONG-ITI); and an alternating condition with two alternated opposite tasks (ALT), with the same single-task ITI as in LONG-ITI. In the SHORT-ITI condition, there was the fastest increase in performance during training and the largest immediate forgetting in the retention tests. In contrast, in the ALT condition, there was the slowest increase in performance during training and little immediate forgetting in the retention tests. Compared to these two conditions, in the LONG-ITI condition we found an intermediate increase in performance during training and intermediate immediate forgetting. To account for these results, we fitted to the data six possible adaptation models with one or two time scales, and with interference in the fast, the slow, or both time scales. Model comparison confirmed that two time scales and some degree of interference in either time scale are needed to account for our experimental results. In summary, our results suggest that retention following adaptation is modulated by the degree of between-trial forgetting, which is due to time-based decay in a single adaptation task and to interference in multiple adaptation tasks.
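A sketch of the classic two-time-scale state-space model that the compared model class builds on: a fast state that learns and forgets quickly plus a slow state that learns slowly and retains well. Parameter values are illustrative, not the study's fitted ones:

```python
import numpy as np

def two_state_adaptation(perturbation, Af=0.6, Bf=0.2, As=0.99, Bs=0.02):
    """Two-time-scale state-space model of trial-by-trial motor adaptation.

    Net adaptation is x_fast + x_slow; each state retains a fraction (A)
    of its value and learns from the error (B) on every trial.
    """
    xf = xs = 0.0
    out = []
    for p in perturbation:
        e = p - (xf + xs)              # error on this trial
        xf = Af * xf + Bf * e          # fast: learns and forgets quickly
        xs = As * xs + Bs * e          # slow: learns slowly, retains well
        out.append(xf + xs)
    return np.array(out)

adaptation = two_state_adaptation(np.ones(200))   # constant perturbation
```

Longer ITIs or an interfering task can be emulated by applying extra retention decay, or an opposing error, between trials; that is essentially the knob the six compared models vary.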
Semi-Automated Air-Coupled Impact-Echo Method for Large-Scale Parkade Structure.
Epp, Tyler; Svecova, Dagmar; Cha, Young-Jin
2018-03-29
Structural Health Monitoring (SHM) has moved to data-dense systems, utilizing numerous sensor types to monitor infrastructure, such as bridges and dams, more regularly. One of the issues faced in this endeavour is the scale of the inspected structures and the time it takes to carry out testing. Installing automated systems that can provide measurements in a timely manner is one way of overcoming these obstacles. This study proposes an Artificial Neural Network (ANN) application that determines intact and damaged locations from a small training sample of impact-echo data, using air-coupled microphones from a reinforced concrete beam in lab conditions and data collected from a field experiment in a parking garage. The impact-echo testing in the field is carried out in a semi-autonomous manner to expedite the front end of the in situ damage detection testing. The use of an ANN removes the need for a user-defined cutoff value for the classification of intact and damaged locations when a least-square distance approach is used. It is postulated that this may contribute significantly to testing time reduction when monitoring large-scale civil Reinforced Concrete (RC) structures.
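A hedged sketch of the classification step: spectral energy bins from microphone records fed to a small feedforward network. The synthetic resonance-shift data, the feature binning, and the scikit-learn network are illustrative stand-ins; the paper's exact architecture and features are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

def spectral_features(signal, n_bins=16):
    """Coarse energy features: magnitude spectrum averaged in n_bins bands."""
    spec = np.abs(np.fft.rfft(signal))
    return np.array([band.mean() for band in np.array_split(spec, n_bins)])

def synth_record(damaged, n=1024, fs=100_000):
    """Toy impact-echo record: damage shifts the dominant resonance down."""
    t = np.arange(n) / fs
    f0 = 8_000 if damaged else 15_000
    return (np.sin(2 * np.pi * f0 * t) * np.exp(-3_000 * t)
            + 0.1 * rng.normal(size=n))

labels = np.array([0] * 100 + [1] * 100)          # 0 = intact, 1 = damaged
X = np.array([spectral_features(synth_record(d)) for d in labels])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```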
Quasi-real-time end-to-end simulations of ELT-scale adaptive optics systems on GPUs
NASA Astrophysics Data System (ADS)
Gratadour, Damien
2011-09-01
Our team has started the development of a code dedicated to GPUs for the simulation of AO systems at the E-ELT scale. It uses the CUDA toolkit and an original binding to Yorick (an open source interpreted language) to provide the user with a comprehensive interface. In this paper we present the first performance analysis of our simulation code, showing its ability to provide Shack-Hartmann (SH) images and measurements at the kHz rate for a VLT-sized AO system and in quasi-real-time (up to 70 Hz) for ELT-sized systems on a single top-end GPU. The simulation code includes multi-layer atmospheric turbulence generation, ray tracing through these layers, image formation at the focal plane of every sub-aperture of a SH sensor using either natural or laser guide stars, and centroiding on these images using various algorithms. Turbulence is generated on the fly, giving the ability to simulate hours of observations without the need to load extremely large phase screens into global memory. Because of its performance, this code additionally provides the unique ability to test real-time controllers for future AO systems under nominal conditions.
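The centroiding step lends itself to a compact illustration. The sketch below shows the simplest variant, a center-of-gravity centroid on one sub-aperture image; the actual GPU kernels and the alternative centroiding algorithms mentioned above are not reproduced here.

```python
import numpy as np

def cog_centroid(img):
    """Center-of-gravity centroid of one Shack-Hartmann sub-aperture image,
    in pixels relative to the sub-aperture center."""
    total = img.sum()
    ys, xs = np.indices(img.shape)
    cx = (xs * img).sum() / total - (img.shape[1] - 1) / 2
    cy = (ys * img).sum() / total - (img.shape[0] - 1) / 2
    return cx, cy

# Toy sub-aperture: a Gaussian spot displaced by (+1.5, -0.8) pixels
n = 16
ys, xs = np.indices((n, n))
spot = np.exp(-(((xs - (n - 1) / 2 - 1.5) ** 2)
                + ((ys - (n - 1) / 2 + 0.8) ** 2)) / (2 * 2.0 ** 2))
print("measured centroid:", tuple(round(c, 2) for c in cog_centroid(spot)))
```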
Wikehult, B; Willebrand, M; Kildal, M; Lannerstam, K; Fugl-Meyer, A R; Ekselius, L; Gerdin, B
2005-08-05
The aim of the study was to evaluate which factors are associated with the use of healthcare a long time after severe burn injury. After a review process based on clinical reasoning, 69 former burn patients out of a consecutive group treated at the Uppsala Burn Unit from 1980 to 1995 were visited in their homes, and their use of care and support was assessed in a semi-structured interview. Post-burn health was assessed with the Burn-Specific Health Scale-Brief (BSHS-B) and personality with the Swedish universities Scales of Personality (SSP). The participants had been injured on average eight years previously. Thirty-four had current contact with healthcare due to their burn injury; they had significantly lower scores on three BSHS-B domains (Simple Abilities, Work, and Hand Function), significantly higher scores on the SSP domain Neuroticism and the SSP scales Stress Susceptibility and Lack of Assertiveness, and lower scores on Social Desirability. There was no relation to age, gender, time since injury, length of stay, or surface area burned. Routine screening of personality traits as a supplement to long-term follow-ups may help in identifying patients' need for care.
Event Horizon Telescope observations as probes for quantum structure of astrophysical black holes
NASA Astrophysics Data System (ADS)
Giddings, Steven B.; Psaltis, Dimitrios
2018-04-01
The need for a consistent quantum evolution for black holes has led to proposals that their semiclassical description is modified not just near the singularity, but at horizon or larger scales. If such modifications extend beyond the horizon, they influence regions accessible to distant observation. Natural candidates for these modifications behave like metric fluctuations, with characteristic length scales and timescales set by the horizon radius. We investigate the possibility of using the Event Horizon Telescope to observe these effects, if they have a strength sufficient to make quantum evolution consistent with unitarity, without introducing new scales. We find that such quantum fluctuations can introduce a strong time dependence for the shape and size of the shadow that a black hole casts on its surrounding emission. For the black hole in the center of the Milky Way, detecting the rapid time variability of its shadow will require nonimaging timing techniques. However, for the much larger black hole in the center of the M87 galaxy, a variable black-hole shadow, if present with these parameters, would be readily observable in the individual snapshots that will be obtained by the Event Horizon Telescope.
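The scales involved can be made concrete with a back-of-envelope calculation of the horizon radius and its light-crossing time, which sets the characteristic variability timescale; the masses below are round literature values, assumed for illustration.

```python
G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30   # SI units

def horizon_scales(mass_solar):
    """Schwarzschild radius and its light-crossing time."""
    r_s = 2 * G * mass_solar * M_SUN / c**2
    return r_s, r_s / c

for name, m in [("Sgr A*", 4.3e6), ("M87*", 6.5e9)]:  # round literature masses
    r_s, t_cross = horizon_scales(m)
    print(f"{name}: r_s ~ {r_s:.1e} m, light-crossing time ~ {t_cross/60:.1f} min")
```

The sub-minute crossing time for Sgr A* versus the many-hour scale for M87* is what separates the timing-based and snapshot-imaging regimes discussed above.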
Long-term wave measurements in a climate change perspective.
NASA Astrophysics Data System (ADS)
Pomaro, Angela; Bertotti, Luciana; Cavaleri, Luigi; Lionello, Piero; Portilla-Yandun, Jesus
2017-04-01
At present, the multi-decadal time series of wave data needed for climate studies are generally provided by long-term model simulations (hindcasts) covering the area of interest. Examples at different scales include wave hindcasts driven by the wind fields of the ERA-Interim reanalysis of the European Centre for Medium-Range Weather Forecasts (ECMWF, Reading, U.K.) at the global level, and regional re-analyses such as that for the Mediterranean Sea (Lionello and Sanna, 2006). Valuable as they are, these estimates are necessarily affected by the approximations involved, the more so because of the problems encountered when modelling small basins with coarse-resolution wind fields (Cavaleri and Bertotti, 2004). Multi-decadal observed time series, on the contrary, are rare. They have the evident advantage of representing the real evolution of the waves, without the shortcomings associated with the limitations of models in reproducing the actual processes and the real variability of the wave fields. Observed wave time series are, of course, not exempt from problems. They provide very local information, so their use to describe wave evolution at large scale is sometimes arguable and generally needs the support of model simulations to assess to what extent the local value is representative of large-scale evolution. Local effects may prevent the identification of trends that are indeed present at large scale. Moreover, regular maintenance, accurate monitoring, and metadata information are crucial to the reliability of a time series for climate applications. Where available, especially over several decades, measured data are nevertheless of great value: waves are an integrated product of the local climate, and in an area sensitive to even limited changes of the large-scale pattern they provide compact and meaningful information on those changes. In addition, the availability of a 20-year dataset of directional spectra (in frequency and direction) for the area of interest offers an independent, theoretically corresponding, and significantly long dataset that allows the wave problem to be approached from different perspectives. In particular, we investigate the contribution of the individual wave systems that modulate the variability of waves in the Adriatic Sea. A characterization of wave conditions based on wave spectra brings out a more detailed description of the different wave regimes, their associated meteorological conditions, and their variation in time and geographical space.
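As one example of the bulk statistics derivable from such spectra, the sketch below computes significant wave height from the zeroth spectral moment (Hs = 4√m0); the single-peak toy spectrum is an assumption for illustration, not Adriatic data.

```python
import numpy as np

def significant_wave_height(freqs, spectrum):
    """Hs = 4*sqrt(m0), where m0 is the zeroth moment (total variance)
    of the frequency spectrum S(f), integrated by the trapezoid rule."""
    m0 = np.sum(0.5 * (spectrum[1:] + spectrum[:-1]) * np.diff(freqs))
    return 4.0 * np.sqrt(m0)

# Toy one-dimensional spectrum: a single wind-sea peak near 0.2 Hz
freqs = np.linspace(0.05, 0.5, 200)                          # Hz
spectrum = 0.5 * np.exp(-0.5 * ((freqs - 0.2) / 0.03) ** 2)  # m^2/Hz
print(f"Hs ~ {significant_wave_height(freqs, spectrum):.2f} m")
```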
Kratz, Anna L; Chadd, Edmund; Jensen, Mark P; Kehn, Matthew; Kroll, Thilo
2015-07-01
To examine the psychometric properties of the Community Integration Questionnaire (CIQ) in large samples of individuals with spinal cord injury (SCI). Design: longitudinal 12-month survey study. Setting: nationwide, community dwelling. Participants: adults with SCI (627 at Time 1, 494 at Time 2). Interventions: not applicable. The CIQ is a 15-item measure developed to measure three domains of community integration in individuals with traumatic brain injury: home integration, social integration, and productive activity. SCI consumer input suggested the need for two additional items assessing socializing at home and internet/email activity. Exploratory factor analyses at Time 1 indicated three factors. Confirmatory factor analysis at Time 2 did not show a good fit of the three-factor model. CIQ scores were normally distributed, and only the Productive subscale demonstrated problems with high (25%) ceiling effects. Internal reliability was acceptable for the Total and Home scales, but low for the Social and Productive Activity scales. Validity of the CIQ is suggested by significant differences by sex, age, and wheelchair use. The factor structure of the CIQ was not stable over time. The CIQ may be most useful for assessing home integration, as this is the subscale with the most scale stability and internal reliability. The CIQ may be improved for use in SCI by including items that reflect higher levels of productive functioning, integration across the life span, and home- and internet-based social functioning.
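For readers wanting the internal-reliability computation made explicit, the following sketch computes Cronbach's alpha for a subscale from an item-score matrix; the simulated responses are placeholders, not the study data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
trait = rng.normal(size=(300, 1))                          # latent trait
responses = trait + rng.normal(scale=0.8, size=(300, 5))   # 5 correlated items
print(f"alpha ~ {cronbach_alpha(responses):.2f}")
```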
Lagrangian Statistics of Slightly Buoyant Droplets in Isotropic Turbulence
NASA Astrophysics Data System (ADS)
Gopalan, Balaji; Malkiel, Edwin; Katz, Joseph
2006-11-01
This project examines the dynamics of slightly buoyant diesel droplets in isotropic turbulence using high-speed in-line digital holographic PIV. A cloud of droplets with specific gravity of 0.85 is injected into the central portion of an isotropic turbulence facility. The droplet trajectories are measured in a 50×50×50 mm³ sample volume using high-speed in-line digital holography. An automated program has been developed to obtain an accurate time history of droplet velocities. Data analysis determines the PDF of velocity and acceleration in three dimensions. The time histories enable us to calculate the three-dimensional Lagrangian velocity autocorrelation functions, and from them the diffusion coefficients. Due to buoyancy, the vertical diffusion time scale exceeds the horizontal one by about 65%. The diffusion coefficients vary from 2.8 cm²/s in the horizontal direction to 5.5 cm²/s in the vertical direction. For droplets with sizes varying from 2 to 11 Kolmogorov scales there are no clear trends with size. The variation of diffusion rates with turbulence intensity and the effect of finite window size are presently being examined. For shorter time scales, when the diffusion need not be Fickian, the three-dimensional trajectories can be used to calculate the generalized dispersion tensor and measure the time elapsed for diffusion to become Fickian.
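The diffusivity estimate rests on Taylor's (1921) relation D = ⟨u′²⟩ T_L, with T_L the integral of the normalized Lagrangian velocity autocorrelation. A minimal sketch, using synthetic AR(1) velocity tracks in place of the holographic measurements:

```python
import numpy as np

def lagrangian_diffusivity(u, dt):
    """D = <u'^2> * T_L (Taylor, 1921), with T_L the integral of the
    normalized Lagrangian velocity autocorrelation up to its first zero
    crossing. u has shape (n_tracks, n_steps)."""
    u = u - u.mean(axis=1, keepdims=True)
    var = (u ** 2).mean()
    n = u.shape[1]
    acf = np.array([(u[:, : n - k] * u[:, k:]).mean() for k in range(n)])
    acf /= acf[0]
    stop = np.argmax(acf < 0) or n     # first zero crossing (or full record)
    t_l = acf[:stop].sum() * dt
    return var * t_l

# Synthetic tracks: AR(1) velocities with a 0.5 s memory, sampled at 50 Hz
rng = np.random.default_rng(3)
dt, n_tracks, n_steps = 0.02, 200, 500
u = np.zeros((n_tracks, n_steps))
for t in range(1, n_steps):
    u[:, t] = np.exp(-dt / 0.5) * u[:, t - 1] + rng.normal(size=n_tracks)
print(f"D ~ {lagrangian_diffusivity(u, dt):.1f} (velocity-variance units x s)")
```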
Feedbacks in Human-Landscape Systems
NASA Astrophysics Data System (ADS)
Chin, Anne; Florsheim, Joan L.; Wohl, Ellen; Collins, Brian D.
2014-01-01
This article identifies key questions and challenges for geomorphologists in investigating coupled feedbacks in human-landscape systems. While feedbacks occur in the absence of human influences, they are also altered by human activity. Feedbacks are a key element to understanding human-influenced geomorphic systems in ways that extend our traditional approach of considering humans as unidirectional drivers of change. Feedbacks have been increasingly identified in Earth-environmental systems, with studies of coupled human-natural systems emphasizing ecological phenomena in producing emerging concepts for social-ecological systems. Enormous gaps or uncertainties in knowledge remain with respect to understanding impact-feedback loops within geomorphic systems with significant human alterations, where the impacted geomorphic systems in turn affect humans. Geomorphology should play an important role in public policy by identifying the many diffuse and subtle feedbacks of both local- and global-scale processes. This role is urgent, while time may still be available to mitigate the impacts that limit the sustainability of human societies. Challenges for geomorphology include identification of the often weak feedbacks that occur over varied time and space scales ranging from geologic time to single isolated events and very short time periods, the lack of available data linking impact with response, the identification of multiple thresholds that trigger feedback mechanisms, the varied tools and metrics needed to represent both physical and human processes, and the need to collaborate with social scientists with expertise in the human causes of geomorphic change, as well as the human responses to such change.
Proposed Approach to Stable High Beta Plasmas in ET
NASA Astrophysics Data System (ADS)
Taylor, R. J.; Carter, T. A.; Gauvreau, J.-L.; Gourdain, P.-A.; Grossman, A.; Lafonteese, D. J.; Pace, D. C.; Schmitz, L. W.
2003-10-01
Five-second-long plasmas have been produced in ET with ease. We need these long pulses to evolve high-beta equilibria under controlled conditions. However, equilibrium control is lost to internal disruptions due to the development of giant sawteeth on the 1-second time scale. This time scale is approximately the central energy confinement time, while the central particle confinement time is much longer than 1 second. This persistent limitation is present in ohmic and ICRF-heated discharges. MHD-stable current profiles have been found using DCON (A. H. Glasser, private communication), but transport-related phenomena like giant sawteeth and uncontrolled transport barrier evolution are not yet part of a simple stability study. We advocate avoiding the evolution of giant sawteeth and the conditions responsible for MHD instabilities, as opposed to exploring their stabilization. This is equivalent to the statement that self-organized plasmas are in fact not welcome in long-pulse tokamaks. We intend to prevent self-organization by the application of a multi-faceted ICRF strategy. The in-house technology is ready, but the approach needs to be artful and not preconceived. The flexibility built into the ET hardware is likely to help us find a way to achieve global plasma control. It is essential that this work be pursued with a focus on parameter performance and configuration control; both require a significant commitment to understanding the device physics and delivering on the engineering required for control and performance.
Cross scale interactions, nonlinearities, and forecasting catastrophic events
Peters, Debra P.C.; Pielke, Roger A.; Bestelmeyer, Brandon T.; Allen, Craig D.; Munson-McGee, Stuart; Havstad, Kris M.
2004-01-01
Catastrophic events share characteristic nonlinear behaviors that are often generated by cross-scale interactions and feedbacks among system elements. These events result in surprises that cannot easily be predicted based on information obtained at a single scale. Progress on catastrophic events has focused on one of the following two areas: nonlinear dynamics through time without an explicit consideration of spatial connectivity [Holling, C. S. (1992) Ecol. Monogr. 62, 447–502] or spatial connectivity and the spread of contagious processes without a consideration of cross-scale interactions and feedbacks [Zeng, N., Neelin, J. D., Lau, K.-M. & Tucker, C. J. (1999) Science 286, 1537–1540]. These approaches rarely have ventured beyond traditional disciplinary boundaries. We provide an interdisciplinary, conceptual, and general mathematical framework for understanding and forecasting nonlinear dynamics through time and across space. We illustrate the generality and usefulness of our approach by using new data and recasting published data from ecology (wildfires and desertification), epidemiology (infectious diseases), and engineering (structural failures). We show that decisions that minimize the likelihood of catastrophic events must be based on cross-scale interactions, and such decisions will often be counterintuitive. Given the continuing challenges associated with global change, approaches that cross disciplinary boundaries to include interactions and feedbacks at multiple scales are needed to increase our ability to predict catastrophic events and develop strategies for minimizing their occurrence and impacts. Our framework is an important step in developing predictive tools and designing experiments to examine cross-scale interactions.
High-resolution downscaling for hydrological management
NASA Astrophysics Data System (ADS)
Ulbrich, Uwe; Rust, Henning; Meredith, Edmund; Kpogo-Nuwoklo, Komlan; Vagenas, Christos
2017-04-01
Hydrological modellers and water managers require high-resolution climate data to model regional hydrologies and how these may respond to future changes in the large-scale climate. The ability to successfully model such changes and, by extension, critical infrastructure planning is often impeded by a lack of suitable climate data. This typically takes the form of too-coarse data from climate models, which are not sufficiently detailed in either space or time to be able to support water management decisions and hydrological research. BINGO (Bringing INnovation in onGOing water management;
Initial validation of a healthcare needs scale for young people with congenital heart disease.
Chen, Chi-Wen; Ho, Ciao-Lin; Su, Wen-Jen; Wang, Jou-Kou; Chung, Hung-Tao; Lee, Pi-Chang; Lu, Chun-Wei; Hwang, Be-Tau
2018-01-01
To validate the initial psychometric properties of a Healthcare Needs Scale for Youth with Congenital Heart Disease. As the number of patients with congenital heart disease surviving to adulthood increases, the transitional healthcare needs of adolescents and young adults with congenital heart disease require investigation. However, few tools comprehensively identify the healthcare needs of youth with congenital heart disease. A cross-sectional study was employed to examine the psychometric properties of the Healthcare Needs Scale for Youth with Congenital Heart Disease. The sample consisted of 500 patients with congenital heart disease, aged 15-24 years, from paediatric cardiology departments, and covered the period from March to August 2015. The patients completed the 25-item Healthcare Needs Scale for Youth with Congenital Heart Disease, the questionnaire on health needs for adolescents and the WHO Quality of Life-BREF. Reliability and construct, concurrent, predictive and known-group validity were examined. The Healthcare Needs Scale for Youth with Congenital Heart Disease comprises 25 items in three dimensions: health management, health policy, and individual and interpersonal relationships. It demonstrated excellent internal consistency and sound construct, concurrent, predictive and known-group validity. The Healthcare Needs Scale for Youth with Congenital Heart Disease is a psychometrically robust measure of the healthcare needs of youth with congenital heart disease. It has the potential to provide nurses with a means to assess and identify the concerns of youth with congenital heart disease and to help them achieve a successful transition to adult care. © 2017 John Wiley & Sons Ltd.
Forecasting the forest and the trees: consequences of drought in competitive forests
NASA Astrophysics Data System (ADS)
Clark, J. S.
2015-12-01
Models that translate individual tree responses to distribution and abundance of competing populations are needed to understand forest vulnerability to drought. Currently, biodiversity predictions rely on one scale or the other, but do not combine them. Synthesis is accomplished here by modeling data together, each with their respective scale-dependent connections to the scale needed for prediction—landscape to regional biodiversity. The approach we summarize integrates three scales, i) individual growth, reproduction, and survival, ii) size-species structure of stands, and iii) regional forest biomass. Data include 24,347 USDA Forest Inventory and Analysis (FIA) plots and 135 Long-term Forest Demography plots. Climate, soil moisture, and competitive interactions are predictors. We infer and predict the four-dimensional size/species/space/time (SSST) structure of forests, where all demographic rates respond to winter temperature, growing season length, moisture deficits, local moisture status, and competition. Responses to soil moisture are highly non-linear and not strongly related to responses to climatic moisture deficits over time. In the Southeast the species that are most sensitive to drought on dry sites are not the same as those that are most sensitive on moist sites. Those that respond most to spatial moisture gradients are not the same as those that respond most to regional moisture deficits. There is little evidence of simple tradeoffs in responses. Direct responses to climate constrain the ranges of few tree species, north or south; there is little evidence that range limits are defined by fecundity or survival responses to climate. By contrast, recruitment and the interactions between competition and drought that affect growth and survival are predicted to limit ranges of many species. Taken together, results suggest a rich interaction involving demographic responses at all size classes to neighbors, landscape variation in moisture, and regional climate change.
Parashos, Sotirios A; Luo, Sheng; Biglan, Kevin M; Bodis-Wollner, Ivan; He, Bo; Liang, Grace S; Ross, G Webster; Tilley, Barbara C; Shulman, Lisa M
2014-06-01
Optimizing assessments of rate of progression in Parkinson disease (PD) is important in designing clinical trials, especially of potential disease-modifying agents. To examine the value of measures of impairment, disability, and quality of life in assessing progression in early PD. Inception cohort analysis of data from 413 patients with early, untreated PD who were enrolled in 2 multicenter, randomized, double-blind clinical trials. Participants were randomly assigned to 1 of 5 treatments (67 received creatine, 66 received minocycline, 71 received coenzyme Q10, 71 received GPI-1485, and 138 received placebo). We assessed the association between the rates of change in measures of impairment, disability, and quality of life and time to initiation of symptomatic treatment. Time between baseline assessment and need for the initiation of symptomatic pharmaceutical treatment for PD was the primary indicator of disease progression. After adjusting for baseline confounding variables with regard to the Unified Parkinson's Disease Rating Scale (UPDRS) Part II score, the UPDRS Part III score, the modified Rankin Scale score, level of education, and treatment group, we assessed the rate of change for the following measurements: the UPDRS Part II score; the UPDRS Part III score; the Schwab and England Independence Scale score (which measures activities of daily living); the Total Functional Capacity scale; the 39-item Parkinson's Disease Questionnaire, summary index, and activities of daily living subscale; and version 2 of the 12-item Short Form Health Survey Physical Summary and Mental Summary. Variables reaching the statistical threshold in univariate analysis were entered into a multivariable Cox proportional hazards model using time to symptomatic treatment as the dependent variable. More rapid change (i.e., worsening) in the UPDRS Part II score (hazard ratio, 1.15 [95% CI, 1.08-1.22] for 1 scale unit change per 6 months), the UPDRS Part III score (hazard ratio, 1.09 [95% CI, 1.06-1.13] for 1 scale unit change per 6 months), and the Schwab and England Independence Scale score (hazard ratio, 1.29 [95% CI, 1.12-1.48] for 5 percentage point change per 6 months) was associated with earlier need for symptomatic therapy. Conclusions and relevance: in early PD, the UPDRS Part II score and Part III score and the Schwab and England Independence Scale score can be used to measure disease progression, whereas the 39-item Parkinson's Disease Questionnaire and summary index, Total Functional Capacity scale, and the 12-item Short Form Health Survey Physical Summary and Mental Summary are not sensitive to change. clinicaltrials.gov Identifiers: NCT00063193 and NCT00076492.
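A hedged sketch of the core analysis step, a Cox proportional hazards fit of time to symptomatic treatment against a rate-of-change covariate, using the lifelines package on synthetic data; the variable names and effect sizes are illustrative, not the study's.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 400
updrs2_slope = rng.normal(1.0, 0.5, n)   # UPDRS-II change per 6 months
# Faster worsening -> shorter time to symptomatic therapy (synthetic link)
time_to_tx = rng.exponential(24.0 / np.exp(0.15 * updrs2_slope))
observed = rng.random(n) < 0.85          # some patients are censored

df = pd.DataFrame({"updrs2_slope": updrs2_slope,
                   "months": time_to_tx,
                   "event": observed.astype(int)})
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
print(cph.summary[["coef", "exp(coef)"]])   # exp(coef) ~ hazard ratio
```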
Wahida, Kihal-Talantikite; Padilla, Cindy M.; Denis, Zmirou-Navier; Olivier, Blanchard; Géraldine, Le Nir; Philippe, Quenel; Séverine, Deguen
2016-01-01
Many epidemiological studies examining long-term health effects of exposure to air pollutants have characterized exposure by the outdoor air concentrations at sites that may be distant to subjects’ residences at different points in time. The temporal and spatial mobility of subjects and the spatial scale of exposure assessment could thus lead to misclassification in the cumulative exposure estimation. This paper attempts to fill the gap regarding cumulative exposure assessment to air pollution at a fine spatial scale in epidemiological studies investigating long-term health effects. We propose a conceptual framework showing how major difficulties in cumulative long-term exposure assessment could be surmounted. We then illustrate this conceptual model on the case of exposure to NO2 following two steps: (i) retrospective reconstitution of NO2 concentrations at a fine spatial scale; and (ii) a novel approach to assigning the time-relevant exposure estimates at the census block level, using all available data on residential mobility throughout a 10- to 20-year period prior to that for which the health events are to be detected. Our conceptual framework is both flexible and convenient for the needs of different epidemiological study designs. PMID:26999170
Characterization of Two Ton NaI Scintillator
NASA Astrophysics Data System (ADS)
Maier, Alleta; Coherent Collaboration
2017-09-01
The COHERENT collaboration is dedicated to measuring coherent elastic neutrino-nucleus scattering (CEνNS), an interaction predicted by the standard model that ultimately serves as a background floor for dark matter detection. In pursuit of observing the predicted N² scaling, COHERENT is deploying two tons of NaI[Tl] detectors to observe CEνNS recoils of sodium nuclei. Before the two tons of NaI[Tl] scintillator are deployed, however, all crystals and PMTs must be characterized to understand the individual properties vital to precision in the measurement of CEνNS. This detector is also expected to allow COHERENT to observe charged-current and CEνNS interactions with 127I. A standard operating procedure (SOP) has been developed to characterize each detector on seven properties relevant to precision in the measurement of CEνNS: energy scale, energy resolution, low-energy light-yield non-linearity, decay-time energy dependence, position variance, time variance, and background levels. Crystals will be tested and characterized for these properties in the context of a ton-scale NaI[Tl] detector. Preliminary development of the SOP has provided a greater understanding of the optimization methods needed for characterization of the ton-scale detector. TUNL, NSF, Duke University.
Guadayol, Òscar; Silbiger, Nyssa J.; Donahue, Megan J.; Thomas, Florence I. M.
2014-01-01
Spatial and temporal environmental variability are important drivers of ecological processes at all scales. As new tools allow the in situ exploration of individual responses to fluctuations, ecologically meaningful ways of characterizing environmental variability at organism scales are needed. We investigated the fine-scale spatial heterogeneity of high-frequency temporal variability in temperature, dissolved oxygen concentration, and pH experienced by benthic organisms in a shallow coastal coral reef. We used a spatio-temporal sampling design, consisting of 21 short-term time-series located along a reef flat-to-reef slope transect, coupled to a long-term station monitoring water column changes. Spectral analyses revealed sharp gradients in variance decomposed by frequency, as well as differences between physically-driven and biologically-reactive parameters. These results highlight the importance of environmental variance at organismal scales and present a new sampling scheme for exploring this variability in situ. PMID:24416364
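A minimal sketch of the variance-by-frequency decomposition underlying such spectral analyses, using Welch's method on a synthetic record with a diel cycle; the sampling rate, record length, and band edges are assumptions for illustration.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(5)
fs = 1 / 60.0                        # one sample per minute, in Hz
t = np.arange(14 * 24 * 60)          # two weeks of minute samples
# Toy record: diel cycle plus high-frequency noise (e.g., reef-flat DO)
x = (2.0 * np.sin(2 * np.pi * t / (24 * 60))
     + rng.normal(scale=0.5, size=t.size))

freqs, psd = welch(x, fs=fs, nperseg=4096)
df = freqs[1] - freqs[0]
diel = (freqs > 0.8 / 86400) & (freqs < 1.2 / 86400)   # ~1 cycle/day band
print(f"total variance     ~ {np.sum(psd) * df:.2f}")
print(f"diel-band variance ~ {np.sum(psd[diel]) * df:.2f}")
```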
Scale-up and economic analysis of biodiesel production from municipal primary sewage sludge.
Olkiewicz, Magdalena; Torres, Carmen M; Jiménez, Laureano; Font, Josep; Bengoa, Christophe
2016-08-01
Municipal wastewater sludge is a promising lipid feedstock for biodiesel production, but the need to eliminate the high water content before lipid extraction is the main limitation for scaling up. This study evaluates the economic feasibility of biodiesel production directly from liquid primary sludge based on experimental data at laboratory scale. Computational tools were used for the modelling of the process scale-up and the different configurations of lipid extraction to optimise this step, as it is the most expensive. The operational variables with a major influence in the cost were the extraction time and the amount of solvent. The optimised extraction process had a break-even price of biodiesel of 1232 $/t, being economically competitive with the current cost of fossil diesel. The proposed biodiesel production process from waste sludge eliminates the expensive step of sludge drying, lowering the biodiesel price. Copyright © 2016 Elsevier Ltd. All rights reserved.
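As a sketch of how a break-even price of this kind can be derived, the following annualizes capital cost with an annuity factor and divides total annual cost by production; all plant numbers here are hypothetical and do not reproduce the paper's cost model.

```python
def break_even_price(capex, opex_per_year, tonnes_per_year,
                     lifetime_years=10, discount_rate=0.08):
    """Price ($/t) at which annual revenues cover annualized capital
    plus operating costs."""
    annuity = (discount_rate * (1 + discount_rate) ** lifetime_years
               / ((1 + discount_rate) ** lifetime_years - 1))
    annualized_capex = capex * annuity
    return (annualized_capex + opex_per_year) / tonnes_per_year

# Hypothetical plant: $5M capital, $1.2M/yr operating, 1500 t biodiesel/yr
print(f"break-even ~ {break_even_price(5e6, 1.2e6, 1500):.0f} $/t")
```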
Dependence of the friction strengthening of graphene on velocity.
Zeng, Xingzhong; Peng, Yitian; Liu, Lei; Lang, Haojie; Cao, Xing'an
2018-01-25
Graphene shows great potential for application as a solid lubricant in micro- and nanoelectromechanical systems (MEMS/NEMS). An atomic-scale friction-strengthening effect over the first few atomic friction periods commonly occurs on few-layer graphene. Here, velocity-dependent friction strengthening was observed in the atomic-scale frictional behavior of graphene by atomic force microscopy (AFM). The degree of friction strengthening first decreases with increasing velocity and then reaches a plateau. This can be attributed to the tip-graphene interaction potential being weaker at high velocity than at low velocity, because a strong tip-graphene contact interface needs a longer time to evolve. The subatomic-scale stick-slip behavior within the conventional stick-slip motion supports the weak interaction between the tip and graphene at high velocity. These findings provide a deeper understanding of the atomic-scale friction mechanism of graphene and other two-dimensional materials.
Scaling prospects in mechanical energy harvesting with piezo nanowires
NASA Astrophysics Data System (ADS)
Ardila, Gustavo; Hinchet, Ronan; Mouis, Mireille; Montès, Laurent
2013-07-01
The combination of 3D processing technologies, low power circuits and new materials integration makes it conceivable to build autonomous integrated systems, which would harvest their energy from the environment. In this paper, we focus on mechanical energy harvesting and discuss its scaling prospects toward the use of piezoelectric nanostructures, able to be integrated in a CMOS environment. It is shown that direct scaling of present MEMS-based methodologies would be beneficial for high-frequency applications only. For the range of applications which is presently foreseen, a different approach is needed, based on energy harvesting from direct real-time deformation instead of energy harvesting from vibration modes at or close to resonance. We discuss the prospects of such an approach based on simple scaling rules.
Wang, L.; Infante, D.; Esselman, P.; Cooper, A.; Wu, D.; Taylor, W.; Beard, D.; Whelan, G.; Ostroff, A.
2011-01-01
Fisheries management programs, such as the National Fish Habitat Action Plan (NFHAP), urgently need a nationwide spatial framework and database for health assessment and policy development to protect and improve riverine systems. To meet this need, we developed a spatial framework and database using the National Hydrography Dataset Plus (1:100,000 scale; http://www.horizon-systems.com/nhdplus). This framework uses interconfluence river reaches and their local and network catchments as fundamental spatial river units, and a series of ecological and political spatial descriptors as hierarchy structures, to allow users to extract or analyze information at spatial scales that they define. The database consists of variables describing channel characteristics, network position/connectivity, climate, elevation, gradient, and size. It contains a series of natural and human-induced catchment factors that are known to influence river characteristics. Our framework and database assemble all river reaches and their descriptors in one place for the first time for the conterminous United States, and provide users with the capability of adding data, conducting analyses, developing management scenarios and regulations, and tracking management progress at a variety of spatial scales. This database provides the essential data needed for achieving the objectives of NFHAP and other management programs. The downloadable beta version database is available at http://ec2-184-73-40-15.compute-1.amazonaws.com/nfhap/main/.
Bladergroen, Marco R.; van der Burgt, Yuri E. M.
2015-01-01
For large-scale and standardized applications in mass spectrometry- (MS-) based proteomics automation of each step is essential. Here we present high-throughput sample preparation solutions for balancing the speed of current MS-acquisitions and the time needed for analytical workup of body fluids. The discussed workflows reduce body fluid sample complexity and apply for both bottom-up proteomics experiments and top-down protein characterization approaches. Various sample preparation methods that involve solid-phase extraction (SPE) including affinity enrichment strategies have been automated. Obtained peptide and protein fractions can be mass analyzed by direct infusion into an electrospray ionization (ESI) source or by means of matrix-assisted laser desorption ionization (MALDI) without further need of time-consuming liquid chromatography (LC) separations. PMID:25692071
Generalization Technique for 2D+Scale DHE Data Model
NASA Astrophysics Data System (ADS)
Karim, Hairi; Rahman, Alias Abdul; Boguslawski, Pawel
2016-10-01
Different users or applications need models at different scales, especially in computer applications such as game visualization and GIS modelling. Issues have been raised on fulfilling the GIS requirement of retaining detail while minimizing the redundancy of the scale datasets. Previous researchers suggested and attempted to add another dimension, such as scale or/and time, into a 3D model, but the implementation of a scale dimension faces some problems due to the limitations and availability of data structures and data models. Various data structures and data models have been proposed to support a variety of applications and dimensionalities, but little research has been conducted on supporting a scale dimension. Generally, the Dual Half-Edge (DHE) data structure was designed to work with any perfect 3D spatial object, such as buildings. In this paper, we attempt to expand the capability of the DHE data structure toward integration with the scale dimension. The description of the concept and implementation of generating 3D-scale (2D spatial + scale dimension) models with the DHE data structure forms the major discussion of this paper. We strongly believe that advantages such as local modification and topological elements (navigation, query and semantic information) in the scale dimension could be used for future 3D-scale applications.
Mate, Kedar S; Ngidi, Wilbroda Hlolisile; Reddy, Jennifer; Mphatswe, Wendy; Rollins, Nigel; Barker, Pierre
2013-11-01
New approaches are needed to evaluate quality improvement (QI) within large-scale public health efforts. This case report details challenges to large-scale QI evaluation and proposes solutions relying on adaptive study design. We used two sequential evaluative methods to study a QI effort to improve delivery of HIV preventive care in public health facilities in three districts in KwaZulu-Natal, South Africa, over a 3-year period. We initially used a cluster randomised controlled trial (RCT) design. During the RCT study period, tensions arose between intervention implementation and evaluation design due to loss of integrity of the randomisation unit over time, pressure to implement changes across the randomisation unit boundaries, and use of administrative rather than functional structures for the randomisation. In response to this loss of design integrity, we switched to a more flexible intervention design and a mixed-methods quasi-experimental evaluation relying on both a qualitative analysis and an interrupted time series quantitative analysis. Cluster RCT designs may not be optimal for evaluating complex interventions to improve implementation in uncontrolled 'real world' settings. More flexible, context-sensitive evaluation designs offer a better balance of the need to adjust the intervention during the evaluation to meet implementation challenges while providing the data required to evaluate effectiveness. Our case study involved HIV care in a resource-limited setting, but these issues likely apply to complex improvement interventions in other settings.
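A minimal sketch of the quantitative arm, a segmented-regression interrupted time series fit with statsmodels; the monthly outcome series, rollout month, and effect sizes are synthetic stand-ins, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
months = np.arange(36)
post = (months >= 18).astype(int)               # QI rollout at month 18
t_post = np.where(post, months - 18, 0)         # months since rollout
# Synthetic monthly % of patients receiving the full preventive-care bundle
y = (40 + 0.2 * months + 8 * post + 0.6 * t_post
     + rng.normal(scale=3, size=months.size))

df = pd.DataFrame({"y": y, "t": months, "post": post, "t_post": t_post})
model = smf.ols("y ~ t + post + t_post", data=df).fit()
print(model.params)   # level change ('post') and slope change ('t_post')
```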
Climate and wildfires in the North American boreal forest.
Macias Fauria, Marc; Johnson, E A
2008-07-12
The area burned in the North American boreal forest is controlled by the frequency of mid-tropospheric blocking highs that cause rapid fuel drying. Climate controls the area burned through changing the dynamics of large-scale teleconnection patterns (Pacific Decadal Oscillation/El Niño Southern Oscillation and Arctic Oscillation, PDO/ENSO and AO) that control the frequency of blocking highs over the continent at different time scales. Changes in these teleconnections may be caused by the current global warming. Thus, an increase in temperature alone need not be associated with an increase in area burned in the North American boreal forest. Since the end of the Little Ice Age, the climate has been unusually moist and variable: large fire years have occurred in unusual years, fire frequency has decreased and fire-climate relationships have occurred at interannual to decadal time scales. Prolonged and severe droughts were common in the past and were partly associated with changes in the PDO/ENSO system. Under these conditions, large fire years become common, fire frequency increases and fire-climate relationships occur at decadal to centennial time scales. A suggested return to the drier climate regimes of the past would imply major changes in the temporal dynamics of fire-climate relationships and in area burned, a reduction in the mean age of the forest, and changes in species composition of the North American boreal forest.
Scientific and Technological Foundations for Scaling Production of Nanostructured Metals
NASA Astrophysics Data System (ADS)
Lowe, Terry C.; Davis, Casey F.; Rovira, Peter M.; Hayne, Mathew L.; Campbell, Gordon S.; Grzenia, Joel E.; Stock, Paige J.; Meagher, Rilee C.; Rack, Henry J.
2017-05-01
Severe Plastic Deformation (SPD) has been explored in a wide range of metals and alloys. However, there are only a few industrial scale implementations of SPD for commercial alloys. To demonstrate and evolve technology for producing ultrafine grain metals by SPD, a Nanostructured Metals Manufacturing Testbed (NMMT) has been established in Golden, Colorado. Machines for research scale and pilot scale Equal Channel Angular Pressing-Conform (ECAP-C) technology have been configured in the NMMT to systematically evaluate and evolve SPD processing and advance the foundational science and technology for manufacturing. We highlight the scientific and technological areas that are critical for scale up of continuous SPD of aluminum, copper, magnesium, titanium, and iron-based alloys. Key areas that we will address in this presentation include the need for comprehensive analysis of starting microstructures, data on operating deformation mechanisms, high pressure thermodynamics and phase transformation kinetics, tribological behaviors, temperature dependence of lubricant properties, adaptation of tolerances and shear intensity to match viscoplastic behaviors, real-time process monitoring, and mechanics of billet/tooling interactions.
Hogan, Craig
2017-12-22
Classical spacetime and quantum mass-energy form the basis of all of physics. They become inconsistent at the Planck scale, 5.4 × 10^-44 seconds, which may signify a need for reconciliation in a unified theory. Although proposals for unified theories exist, a direct experimental probe of this scale, 16 orders of magnitude above Tevatron energy, has seemed hopelessly out of reach. However, in a particular interpretation of holographic unified theories, derived from black hole evaporation physics, a world assembled out of Planck-scale waves displays effects of unification with a new kind of uncertainty in position at the Planck diffraction scale, the geometric mean of the Planck length and the apparatus size. In this case a new phenomenon may be measurable: an indeterminacy of spacetime position that appears as noise in interferometers. The colloquium will discuss the theory of the effect, and our plans to build a holographic interferometer at Fermilab to measure it.
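The quoted scaling is easy to evaluate numerically: the Planck diffraction scale is the geometric mean of the Planck length and the apparatus size. The 40 m arm below is an assumed lab-scale value, with a km-scale arm shown for comparison.

```python
import math

l_planck = 1.616e-35               # Planck length, metres
for arm in (40.0, 4000.0):         # assumed lab-scale and km-scale arms
    sigma = math.sqrt(l_planck * arm)
    print(f"L = {arm:6.0f} m -> Planck diffraction scale ~ {sigma:.1e} m")
```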
Scale and the representation of human agency in the modeling of agroecosystems
Preston, Benjamin L.; King, Anthony W.; Ernst, Kathleen M.; ...
2015-07-17
Human agency is an essential determinant of the dynamics of agroecosystems. However, the manner in which agency is represented within different approaches to agroecosystem modeling is largely contingent on the scales of analysis and the conceptualization of the system of interest. While appropriate at times, narrow conceptualizations of agroecosystems can preclude consideration for how agency manifests at different scales, thereby marginalizing processes, feedbacks, and constraints that would otherwise affect model results. Modifications to the existing modeling toolkit may therefore enable more holistic representations of human agency. Model integration can assist with the development of multi-scale agroecosystem modeling frameworks that capture different aspects of agency. In addition, expanding the use of socioeconomic scenarios and stakeholder participation can assist in explicitly defining context-dependent elements of scale and agency. Finally, such approaches should be accompanied by greater recognition of the meta-agency of model users and the need for more critical evaluation of model selection and application.
Gauge-invariance and infrared divergences in the luminosity distance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biern, Sang Gyu; Yoo, Jaiyul, E-mail: sgbiern@physik.uzh.ch, E-mail: jyoo@physik.uzh.ch
2017-04-01
Measurements of the luminosity distance have played a key role in discovering the late-time cosmic acceleration. However, when accounting for inhomogeneities in the Universe, its interpretation has been plagued with infrared divergences in its theoretical predictions, which are in some cases used to explain the cosmic acceleration without dark energy. The infrared divergences in most calculations are artificially removed by imposing an infrared cut-off scale. We show that a gauge-invariant calculation of the luminosity distance is devoid of such divergences and consistent with the equivalence principle, eliminating the need to impose a cut-off scale. We present proper numerical calculations of the luminosity distance using the gauge-invariant expression and demonstrate that the numerical results with an ad hoc cut-off scale in previous calculations have negligible systematic errors as long as the cut-off scale is larger than the horizon scale. We discuss the origin of infrared divergences and their cancellation in the luminosity distance.
Spatial adaptive sampling in multiscale simulation
NASA Astrophysics Data System (ADS)
Rouet-Leduc, Bertrand; Barros, Kipton; Cieren, Emmanuel; Elango, Venmugil; Junghans, Christoph; Lookman, Turab; Mohd-Yusof, Jamaludin; Pavel, Robert S.; Rivera, Axel Y.; Roehm, Dominic; McPherson, Allen L.; Germann, Timothy C.
2014-07-01
In a common approach to multiscale simulation, an incomplete set of macroscale equations must be supplemented with constitutive data provided by fine-scale simulation. Collecting statistics from these fine-scale simulations is typically the overwhelming computational cost. We reduce this cost by interpolating the results of fine-scale simulation over the spatial domain of the macro-solver. Unlike previous adaptive sampling strategies, we do not interpolate on the potentially very high dimensional space of inputs to the fine-scale simulation. Our approach is local in space and time, avoids the need for a central database, and is designed to parallelize well on large computer clusters. To demonstrate our method, we simulate one-dimensional elastodynamic shock propagation using the Heterogeneous Multiscale Method (HMM); we find that spatial adaptive sampling requires only ≈ 50 × N^0.14 fine-scale simulations to reconstruct the stress field at all N grid points. Related multiscale approaches, such as Equation Free methods, may also benefit from spatial adaptive sampling.
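A one-dimensional caricature of the idea, not the HMM implementation itself: the expensive fine-scale model is evaluated only where linear interpolation of previously stored results fails a tolerance test, so smooth regions of the spatial domain cost few evaluations.

```python
import numpy as np

def fine_scale_model(x):
    """Stand-in for an expensive fine-scale simulation (e.g., a
    molecular-dynamics evaluation of a constitutive law)."""
    return np.sin(3 * x) + 0.3 * x ** 2

def refine(x0, y0, x1, y1, tol, out):
    """Recursively bisect an interval until linear interpolation of the
    fine-scale result is accurate to tol; every evaluation is stored."""
    xm = 0.5 * (x0 + x1)
    ym = fine_scale_model(xm)
    out[xm] = ym
    if abs(ym - 0.5 * (y0 + y1)) > tol and (x1 - x0) > 1e-3:
        refine(x0, y0, xm, ym, tol, out)
        refine(xm, ym, x1, y1, tol, out)

a, b = 0.0, 2.0
samples = {a: fine_scale_model(a), b: fine_scale_model(b)}
refine(a, samples[a], b, samples[b], tol=1e-4, out=samples)

xs = np.array(sorted(samples))
ys = np.array([samples[x] for x in xs])
# The macro-solver now queries the interpolant instead of the fine model.
dense = np.linspace(a, b, 1001)
err = np.max(np.abs(np.interp(dense, xs, ys) - fine_scale_model(dense)))
print(f"{xs.size} fine-scale calls, max interpolation error ~ {err:.1e}")
```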
The role of ocean climate data in operational Naval oceanography
NASA Technical Reports Server (NTRS)
Chesbrough, Radm G.
1992-01-01
Local application of global-scale models describes the U.S. Navy's basic philosophy for operational oceanography in support of fleet operations. Real-time data, climatologies, coupled air/ocean models, and large scale computers are the essential components of the Navy's system for providing the war fighters with the performance predictions and tactical decision aids they need to operate safely and efficiently. In peacetime, these oceanographic predictions are important for safety of navigation and flight. The paucity and uneven distribution of real-time data mean we have to fall back on climatology to provide the basic data to operate our models. The Navy is both a producer and user of climatologies; it provides observations to the national archives and in turn employs data from these archives to establish data bases. Suggestions for future improvements to ocean climate data are offered.
Plasmon mass scale and quantum fluctuations of classical fields on a real time lattice
NASA Astrophysics Data System (ADS)
Kurkela, Aleksi; Lappi, Tuomas; Peuron, Jarkko
2018-03-01
Classical real-time lattice simulations play an important role in understanding non-equilibrium phenomena in gauge theories and are used in particular to model the prethermal evolution of heavy-ion collisions. Above the Debye scale, the classical Yang-Mills (CYM) theory can be matched smoothly to kinetic theory. First, we study the limits of the quasiparticle picture of the CYM fields by determining the plasmon mass of the system using three different methods. Then we argue that one needs a numerical calculation of a system of classical gauge fields and small linearized fluctuations, corresponding to quantum fluctuations, in a way that keeps the separation between the two manifest. We demonstrate and test an implementation of an algorithm with linearized fluctuations, showing that the linearization indeed works and that Gauss's law is conserved.
Optimal output fast feedback in two-time scale control of flexible arms
NASA Technical Reports Server (NTRS)
Siciliano, B.; Calise, A. J.; Jonnalagadda, V. R. P.
1986-01-01
Control of lightweight flexible arms moving along predefined paths can be successfully synthesized on the basis of a two-time-scale approach. A model-following control can be designed for the reduced-order slow subsystem. The fast subsystem is a linear system in which the slow variables act as parameters. The flexible fast variables, which model the deflections of the arm along the trajectory, can be sensed through strain gage measurements. For full-state feedback design, the derivatives of the deflections need to be estimated. The main contribution of this work is the design of an output feedback controller which includes a fixed-order dynamic compensator, based on a recent convergent numerical algorithm for calculating LQ optimal gains. The design procedure is tested by means of simulation results for the one-link flexible arm prototype in the laboratory.
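The LQ-gain computation at the heart of such designs can be sketched with a continuous algebraic Riccati solve; the one-mode fast subsystem below (frequency, damping, weights) is an assumed toy model, and the paper's fixed-order output-feedback compensator is more involved than this full-state-feedback step.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy fast subsystem: one lightly damped flexural mode
# states: [deflection, deflection_rate], input: torque correction
omega, zeta = 12.0, 0.02                     # rad/s, damping ratio (assumed)
A = np.array([[0.0, 1.0],
              [-omega**2, -2 * zeta * omega]])
B = np.array([[0.0], [1.0]])
Q = np.diag([100.0, 1.0])                    # penalize deflection heavily
R = np.array([[0.1]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)              # optimal full-state gain, u = -K x
closed = A - B @ K
print("LQ gain K:", np.round(K, 2))
print("closed-loop eigenvalues:", np.round(np.linalg.eigvals(closed), 2))
```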
NASA Astrophysics Data System (ADS)
Vanclooster, Marnik
2010-05-01
The current societal demand for sustainable soil and water management is very large. The drivers of global and climate change exert many pressures on soil and water ecosystems, endangering appropriate ecosystem functioning. Unsaturated soil transport processes play a key role in the functioning of the soil-water system, as they control the fluxes of water and nutrients from the soil to plants (the pedo-biosphere link), the infiltration flux of precipitated water to groundwater, and the evaporative flux, and hence the feedback from the soil to the climate system. Yet, unsaturated soil transport processes are difficult to quantify, since they are affected by huge variability of the governing properties at different space-time scales and by the intrinsic non-linearity of the transport processes. The incompatibility between the scale at which processes can reasonably be characterized, the scale at which the theoretical process can correctly be described, and the scale at which the soil and water system needs to be managed calls for further development of scaling procedures in unsaturated zone science. It also calls for a better integration of theoretical and modelling approaches to elucidate transport processes at the appropriate scales, compatible with the objective of sustainable soil and water management. Moditoring science, i.e. the interdisciplinary research domain where modelling and monitoring science are linked, is currently evolving significantly in the area of unsaturated zone hydrology. In this presentation, a review of current moditoring strategies and techniques will be given and illustrated for solving large-scale soil and water management problems. This will also allow us to identify research needs in the interdisciplinary domain of modelling and monitoring and to improve the integration of unsaturated zone science in solving soil and water management issues. A focus will be given to examples of large-scale soil and water management problems in Europe.
Tiwari, Nishidha; Tiwari, Shilpi; Thakur, Ruchi; Agrawal, Nikita; Shashikiran, N D; Singla, Shilpy
2015-01-01
Dental treatment is usually a distressing experience for children. Projective scales are preferred over psychometric scales for recognizing this distress and for obtaining self-reports from children. The aims were to evaluate treatment-related fear using a newly developed fear scale for children, the fear assessment picture scale (FAPS), and anxiety with the colored version of the modified facial affective scale (MFAS) - three faces - along with physiologic responses (pulse rate and oxygen saturation) obtained by pulse oximeter before and during the pulpectomy procedure. Sixty children aged 6-8 years who were visiting the dental hospital for the first time and needed pulpectomy treatment were selected. The children selected were of sound physical, physiological, and mental condition. Two projective scales were used: one to assess fear (FAPS) and one to assess anxiety (the colored version of the MFAS - three faces). These were correlated with the physiological responses (oxygen saturation and pulse rate) of the children obtained by pulse oximeter before and during the pulpectomy procedure. The Shapiro-Wilk test, McNemar's test, Wilcoxon signed-rank test, Kruskal-Wallis test, and Mann-Whitney test were applied in the study. The physiological responses showed an association with FAPS and MFAS, though not a significant one. However, oxygen saturation with MFAS showed a significant difference between "no anxiety" and "some anxiety", as quantified by a Kruskal-Wallis test value of 6.287, P = 0.043 (<0.05), before the pulpectomy procedure. The FAPS can prove to be a pragmatic tool for spotting fear among young children. The test is easy and fast to apply to children and reduces chair-side time.
Van Meter, Kimberly J.; Basu, Nandita B.
2015-01-01
Nutrient legacies in anthropogenic landscapes, accumulated over decades of fertilizer application, lead to time lags between implementation of conservation measures and improvements in water quality. Quantification of such time lags has remained difficult, however, due to an incomplete understanding of controls on nutrient depletion trajectories after changes in land-use or management practices. In this study, we have developed a parsimonious watershed model for quantifying catchment-scale time lags based on both soil nutrient accumulations (biogeochemical legacy) and groundwater travel time distributions (hydrologic legacy). The model accurately predicted the time lags observed in an Iowa watershed that had undergone a 41% conversion of area from row crop to native prairie. We explored the time scales of change for stream nutrient concentrations as a function of both natural and anthropogenic controls, from topography to spatial patterns of land-use change. Our results demonstrate that the existence of biogeochemical nutrient legacies increases time lags beyond those due to hydrologic legacy alone. In addition, we show that the maximum concentration reduction benefits vary according to the spatial pattern of intervention, with preferential conversion of land parcels having the shortest catchment-scale travel times providing proportionally greater concentration reductions as well as faster response times. In contrast, a random pattern of conversion results in a 1:1 relationship between percent land conversion and percent concentration reduction, irrespective of denitrification rates within the landscape. Our modeling framework allows for the quantification of tradeoffs between costs associated with implementation of conservation measures and the time needed to see the desired concentration reductions, making it of great value to decision makers regarding optimal implementation of watershed conservation measures. PMID:25985290
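A minimal sketch of the two-legacy idea, under assumed (not the authors' calibrated) parameters: a soil nutrient store that depletes exponentially after land conversion (biogeochemical legacy), convolved with an exponential groundwater travel-time distribution (hydrologic legacy), yields a lagged stream response.

```python
# Hypothetical parameters, for illustration of the concept only.
import numpy as np

years = np.arange(0, 100)
k_soil = 0.05            # 1/yr, soil-store depletion rate (assumed)
tau = 15.0               # yr, mean groundwater travel time (assumed)

leaching = np.exp(-k_soil * years)                # soil-zone output after conversion
ttd = np.exp(-years / tau) / tau                  # exponential travel-time density
stream = np.convolve(leaching, ttd)[:len(years)]  # stream response, 1-yr steps
stream /= stream.max()                            # normalize for comparison

# Time lag: years after the peak until the stream response falls below half
peak = int(stream.argmax())
lag = years[peak + int(np.argmax(stream[peak:] < 0.5))]
print(f"half-response time lag ~ {lag} years after conversion")
```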
Medical physics staffing for radiation oncology: a decade of experience in Ontario, Canada.
Battista, Jerry J; Clark, Brenda G; Patterson, Michael S; Beaulieu, Luc; Sharpe, Michael B; Schreiner, L John; MacPherson, Miller S; Van Dyk, Jacob
2012-01-05
The January 2010 articles in The New York Times generated intense focus on patient safety in radiation treatment, with physics staffing identified frequently as a critical factor for consistent quality assurance. The purpose of this work is to review our experience with medical physics staffing, and to propose a transparent and flexible staffing algorithm for general use. Guided by documented times required per routine procedure, we have developed a robust algorithm to estimate physics staffing needs according to center-specific workload for medical physicists and associated support staff, in a manner we believe is adaptable to an evolving radiotherapy practice. We calculate requirements for each staffing type based on caseload, equipment inventory, quality assurance, educational programs, and administration. Average per-case staffing ratios were also determined for larger-scale human resource planning and used to model staffing needs for Ontario, Canada over the next 10 years. The workload-specific algorithm was tested through a survey of Canadian cancer centers. For center-specific human resource planning, we propose a grid of coefficients addressing specific workload factors for each staff group. For larger-scale forecasting of human resource requirements, values of 260, 700, 300, 600, 1200, and 2000 treated cases per full-time equivalent (FTE) were determined for medical physicists, physics assistants, dosimetrists, electronics technologists, mechanical technologists, and information technology specialists, respectively.
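The quoted per-case ratios translate directly into a staffing estimate for a given caseload; the sketch below applies them to a hypothetical 3000-case program (the ratios are from the abstract, the caseload is illustrative).

```python
# Treated cases per FTE, as quoted above; the 3000-case program is hypothetical.
CASES_PER_FTE = {
    "medical physicist": 260,
    "physics assistant": 700,
    "dosimetrist": 300,
    "electronics technologist": 600,
    "mechanical technologist": 1200,
    "information technology specialist": 2000,
}

def staffing_estimate(annual_cases):
    """Estimated FTEs per staff group for a given annual treated caseload."""
    return {role: annual_cases / ratio for role, ratio in CASES_PER_FTE.items()}

for role, fte in staffing_estimate(3000).items():
    print(f"{role:34s} {fte:5.1f} FTE")
```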
SCOPE : Future Formation-Flying Magnetospheric Satellite Mission
NASA Astrophysics Data System (ADS)
Saito, Yoshifumi
A formation-flight satellite mission, SCOPE, is now under study, aiming at a launch in 2017. SCOPE stands for "cross Scale COupling in the Plasma universE". The main purpose of this mission is to investigate the dynamic behavior of plasma in the terrestrial magnetosphere over many orders of magnitude in both temporal and spatial scales. The basic idea of the SCOPE mission is to distinguish temporal and spatial variations of physical processes by putting five formation-flight spacecraft into the key regions of the Earth's magnetosphere. The formation consists of one large mother satellite and four small daughter satellites. Three of the four daughter satellites surround the mother satellite three-dimensionally, maintaining mutual distances in variable ranges between 5 km and 5000 km. The fourth daughter satellite stays near the mother satellite at a distance between 5 km and 100 km. With this configuration, we can obtain both macro-scale (1000 km - 5000 km) and micro-scale (< 100 km) information about plasma disturbances at the same time. The launcher for SCOPE had been assumed to be the M-V rocket (or its successor) of JAXA; however, due to the termination of the M-V rocket, we are now considering using the H-IIA. The orbits of the SCOPE satellites are all highly elliptical, with apogee at 30 Re from the Earth's center. The inter-satellite link is used for telemetry/command operation as well as for ranging to determine the relative orbits of the five satellites at small distances. The SCOPE mission is designed such that observational studies from a new perspective, cross-scale coupling, can be conducted. The formation-flight orbit is designed such that the spacecraft will visit most of the key regions in the magnetosphere, including the bow shock, the magnetospheric boundary, the inner magnetosphere, and the near-Earth magnetotail. The key issues for the realization of this mission are: (1) the need for high temporal resolution of electron measurements and quantitative wave field measurements at electron scales; (2) the need for full coverage over the energy range of interest with mass spectroscopy; (3) the need for coordinated space plasma observations by intercommunicating formation-flying satellites; and (4) the need to resolve more than one scale simultaneously. In order to cover multiple (more than two) scales simultaneously, SCOPE and ESA's Cross-Scale have started detailed discussions on future collaboration. Through this collaboration, SCOPE can reduce the number of daughter satellites that must stay within 100 km throughout the mission life.
Neuhauser, Daniel; Gao, Yi; Arntsen, Christopher; Karshenas, Cyrus; Rabani, Eran; Baer, Roi
2014-08-15
We develop a formalism to calculate the quasiparticle energy within the GW many-body perturbation correction to the density functional theory. The occupied and virtual orbitals of the Kohn-Sham Hamiltonian are replaced by stochastic orbitals used to evaluate the Green function G, the polarization potential W, and, thereby, the GW self-energy. The stochastic GW (sGW) formalism relies on novel theoretical concepts such as stochastic time-dependent Hartree propagation, stochastic matrix compression, and spatial or temporal stochastic decoupling techniques. Beyond the theoretical interest, the formalism enables linear scaling GW calculations breaking the theoretical scaling limit for GW as well as circumventing the need for energy cutoff approximations. We illustrate the method for silicon nanocrystals of varying sizes with N_{e}>3000 electrons.
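The core stochastic-resolution trick behind such methods can be conveyed with a Hutchinson-type trace estimator (an illustration of the general idea, not the authors' sGW code): random vectors replace explicit sums over orbitals, and the statistical error decays as the inverse square root of the number of samples, independent of system size.

```python
# Illustration of the general idea (not the authors' sGW code): estimate the
# trace of a large matrix with random +/-1 vectors instead of an explicit sum
# over all states; the error decays as 1/sqrt(n_samples), independent of size.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
A = rng.standard_normal((n, n))
A = 0.5 * (A + A.T)  # a symmetric, Hamiltonian-like test matrix

def stochastic_trace(A, n_samples):
    total = 0.0
    for _ in range(n_samples):
        chi = rng.choice([-1.0, 1.0], size=A.shape[0])  # stochastic "orbital"
        total += chi @ (A @ chi)                        # <chi|A|chi>
    return total / n_samples

print("exact trace      :", round(np.trace(A), 2))
print("stochastic (500) :", round(stochastic_trace(A, 500), 2))  # agrees statistically
```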
Scaling in Ecosystems and the Linkage of Macroecological Laws
NASA Astrophysics Data System (ADS)
Rinaldo, A.
2007-12-01
Are there predictable linkages among macroecological laws regulating the size and abundance of organisms, laws that are ubiquitously supported by empirical observations and that ecologists traditionally treat as independent? Do fragmentation of habitats, or reduced supply of energy and matter, result in predictable changes in whole ecosystems as a function of their size? Using a coherent theoretical framework based on scaling theory, it is argued that the answer to both questions is affirmative. The talk is concerned with the comparatively simple situation of the steady-state behavior of a fully developed ecosystem in which, over evolutionary time, resources are exploited in full, individual and collective metabolic needs are met, and enough time has elapsed to produce a rough balance between speciation and extinction and among ecological fluxes. While ecological patterns and processes often show great variation when viewed at different scales of space, time, organismic size and organizational complexity, there is also widespread evidence for scaling regularities embedded in macroecological "laws" or rules. These laws have commanded considerable attention from the ecological community. Indeed, they are central to ecological theory, as they describe features of complex adaptive systems shown by a number of biological systems, and are perhaps relevant to investigating the dynamic origin of scale invariance of natural forms in general. The species-area and relative species-abundance relations, the scaling of community and species size spectra, the scaling of population densities with mean body mass, and the scaling of the largest organism with ecosystem size are examples of such laws. Borrowing heavily from earlier successes in physics, it will be shown how simple mathematical scaling arguments, following from dimensional and finite-size scaling analyses, provide theoretical predictions of the inter-relationships among the species-abundance relationship, the species-area relationship and community size spectra, in excellent accord with empirical data. The main conclusion is that the proposed scaling framework, along with the questions and predictions it provides, serves as a starting point for a novel approach to macroecological analysis.
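As a concrete example of the scaling relations involved, the exponent z of a species-area relationship S = c * A**z can be estimated by log-log regression; the data below are synthetic, for illustration only.

```python
# Synthetic species-area data; the fitted slope on log-log axes is the exponent z.
import numpy as np

area = np.array([1, 10, 100, 1e3, 1e4, 1e5])    # habitat areas (arbitrary units)
species = np.array([12, 21, 38, 70, 120, 215])  # species counts (synthetic)

z, log_c = np.polyfit(np.log(area), np.log(species), 1)
print(f"z ~ {z:.2f}, c ~ {np.exp(log_c):.1f}")  # z near 0.25 is typical for SARs
```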
NASA Astrophysics Data System (ADS)
Wang, Yonggang; Liu, Chong
2016-10-01
The common solution for a field-programmable gate array (FPGA)-based time-to-digital converter (TDC) is to construct a tapped delay line (TDL) for time interpolation to yield sub-clock time resolution. The granularity and uniformity of the TDL's delay elements determine the TDC time resolution. In this paper, we propose a dual-sampling TDL architecture and a bin decimation method that make the effective delay elements as small and uniform as possible, so that the implemented TDCs can achieve a time resolution beyond the intrinsic cell delay. Two identical, fully hardware-based TDCs were implemented in a Xilinx UltraScale FPGA for performance evaluation. For fixed time intervals in the range from 0 to 440 ns, the average time-interval RMS resolution measured by the two TDCs is 4.2 ps; the timestamp resolution of a single TDC is thus derived as 2.97 ps. The maximum hit rate of the TDC is as high as half the FPGA system clock rate, namely 250 MHz in our demo prototype. Because conventional online bin-by-bin calibration is not needed, the implementation of the proposed TDC is straightforward and relatively resource-saving.
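The single-TDC figure follows from the two-TDC measurement under the usual assumption of independent, identically distributed channel jitter, in which case the interval RMS is the square root of two times the single-channel RMS:

```python
# Two independent, identical TDC channels add their jitter in quadrature, so
# the single-channel RMS is the measured interval RMS divided by sqrt(2).
import math

rms_pair = 4.2                        # ps, measured interval RMS with two TDCs
rms_single = rms_pair / math.sqrt(2)
print(f"single-TDC timestamp resolution ~ {rms_single:.2f} ps")  # ~2.97 ps
```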
NASA Astrophysics Data System (ADS)
To, Albert C.; Liu, Wing Kam; Olson, Gregory B.; Belytschko, Ted; Chen, Wei; Shephard, Mark S.; Chung, Yip-Wah; Ghanem, Roger; Voorhees, Peter W.; Seidman, David N.; Wolverton, Chris; Chen, J. S.; Moran, Brian; Freeman, Arthur J.; Tian, Rong; Luo, Xiaojuan; Lautenschlager, Eric; Challoner, A. Dorian
2008-09-01
Microsystems have become an integral part of our lives and can be found in homeland security, medical science, aerospace applications and beyond. Many critical microsystem applications are in harsh environments, in which long-term reliability needs to be guaranteed and repair is not feasible. For example, gyroscope microsystems on satellites need to function for over 20 years under severe radiation, thermal cycling, and shock loading. Hence, predictive-science-based, verified and validated computational models and algorithms to predict the performance and materials integrity of microsystems in these situations are needed. Confidence in these predictions is improved by quantifying uncertainties and approximation errors. With no full-system testing and limited sub-system testing, petascale computing is certainly necessary to span both time and space scales and to reduce the uncertainty in the prediction of long-term reliability. This paper presents the necessary steps to develop a predictive-science-based multiscale modeling and simulation system. The development of this system will be focused on the prediction of the long-term performance of a gyroscope microsystem. The environmental effects to be considered include radiation, thermo-mechanical cycling and shock. Since there will be many material performance issues, attention is restricted to creep resulting from thermal aging and radiation-enhanced mass diffusion, material instability due to radiation and thermo-mechanical cycling, and damage and fracture due to shock. To meet these challenges, we aim to develop an integrated multiscale software analysis system that spans length scales from the atomistic scale to the scale of the device. The proposed software system will include molecular mechanics, phase field evolution, micromechanics and continuum mechanics software, and state-of-the-art model identification strategies where atomistic properties are calibrated by quantum calculations. We aim to predict the long-term (in excess of 20 years) integrity of the resonator, electrode base, multilayer metallic bonding pads, and vacuum seals in a prescribed mission. Although multiscale simulations are efficient in the sense that they focus the most computationally intensive models and methods on only the portions of the space-time domain needed, the execution of the multiscale simulations associated with evaluating materials and device integrity for aerospace microsystems will require the application of petascale computing. A component-based software strategy will be used in the development of our massively parallel multiscale simulation system. This approach will allow us to take full advantage of existing single-scale modeling components. An extensive, pervasive thrust in the software system development is verification, validation, and uncertainty quantification (UQ). Each component and the integrated software system need to be carefully verified, and a UQ methodology is needed that determines the quality of predictive information available from experimental measurements and packages that information in a form suitable for UQ at various scales. Experiments to validate the model at the nanoscale, microscale, and macroscale are proposed.
The development of a petascale predictive-science-based multiscale modeling and simulation system will advance the field of predictive multiscale science so that it can be used to reliably analyze problems of unprecedented complexity, where limited testing resources can be adequately replaced by petascale computational power, advanced verification, validation, and UQ methodologies.
Braun, Tobias; Grüneberg, Christian; Thiel, Christian
2018-04-01
Routine screening for frailty could be used to identify, in a timely manner, older people with increased vulnerability and corresponding medical needs. The aim of this study was the translation and cross-cultural adaptation of the PRISMA-7 questionnaire, the FRAIL scale and the Groningen Frailty Indicator (GFI) into German, as well as a preliminary analysis of the diagnostic test accuracy of these instruments when used to screen for frailty. A diagnostic cross-sectional study was performed. The instrument translation into German followed a standardized process. Prefinal versions were clinically tested on older adults, who gave structured in-depth feedback on the scales in order to compile a final revision of the German-language scale versions. For the analysis of diagnostic test accuracy (criterion validity), the PRISMA-7, FRAIL scale and GFI were considered the index tests. Two reference tests were applied to assess frailty, based either on Fried's model of a Physical Frailty Phenotype or on the model of deficit accumulation, expressed in a Frailty Index. Prefinal versions of the German translations of each instrument were produced and completed by 52 older participants (mean age: 73 ± 6 years). Some minor issues concerning comprehensibility and semantics of the scales were identified and resolved. Using the Physical Frailty Phenotype criteria (frailty prevalence: 4%) as the reference standard, the accuracy of the instruments was excellent (area under the curve, AUC > 0.90). Taking the Frailty Index (frailty prevalence: 23%) as the reference standard, the accuracy was good (AUC between 0.73 and 0.88). German-language versions of the PRISMA-7, FRAIL scale and GFI have been established, and preliminary results indicate sufficient diagnostic test accuracy, which needs to be further established.
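The reported accuracy analysis amounts to computing the area under the ROC curve of each index test against a binary frailty reference; a minimal sketch with hypothetical data (not the study's) follows.

```python
# Hypothetical data, not the study's: AUC of an index-test score (e.g., a GFI
# sum score) against a binary frailty reference standard.
from sklearn.metrics import roc_auc_score

reference = [0, 0, 0, 1, 0, 1, 0, 0, 1, 0]    # frail yes/no (reference test)
index_score = [2, 5, 1, 7, 4, 6, 2, 3, 3, 1]  # screening-instrument scores

print(f"AUC = {roc_auc_score(reference, index_score):.2f}")  # 0.88 here
```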
Liang, Liang; Schwartz, Mark D
2014-10-01
Variation in the timing of plant phenology caused by phenotypic plasticity is a sensitive measure of how organisms respond to weather and climate variability. Although continental-scale gradients in climate and consequential patterns in plant phenology are well recognized, the contribution of underlying genotypic difference to the geography of phenology is less well understood. We hypothesize that different temperate plant genotypes require varying amounts of heat energy for resuming annual growth and reproduction as a result of adaptation and other ecological and evolutionary processes along climatic gradients. In particular, at least for some species, the growing degree days (GDD) needed to trigger the same spring phenology events (e.g., budburst and flower bloom) may be less for individuals originating from colder climates than for those from warmer climates. This variable intrinsic heat energy requirement in plants can be characterized by the term growth efficiency and is quantitatively reflected in the timing of phenophases: earlier timing indicates higher efficiency (i.e., less heat energy needed to trigger phenophase transitions) and vice versa, compared to a standard reference (i.e., either a uniform climate or a uniform genotype). In this study, we tested our hypothesis by comparing variations of budburst and bloom timing of two widely documented plants from the USA National Phenology Network (red maple, Acer rubrum, and forsythia, Forsythia spp.) with cloned indicator plants (lilac, Syringa x chinensis 'Red Rothomagensis') at multiple eastern US sites. Our results indicate that across the accumulated temperature gradient, the two non-clonal plants showed significantly more gradual changes than the cloned plants, manifested by earlier phenology in colder climates and later phenology in warmer climates relative to the baseline clone phenological response. This finding provides initial evidence supporting the growth efficiency hypothesis and suggests more work is warranted. More studies investigating genotype-determined phenological variations will be useful for better understanding and prediction of the continental-scale patterns of biospheric responses to climate change.
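For reference, GDD in phenology studies is commonly accumulated as the daily mean temperature above a base threshold; the base temperature and daily values below are illustrative assumptions.

```python
# Common GDD accumulation: daily mean temperature above a base, summed over days.
def growing_degree_days(tmax, tmin, t_base=0.0):
    """Accumulate GDD from paired daily max/min temperatures (deg C)."""
    return sum(max(0.0, (hi + lo) / 2.0 - t_base) for hi, lo in zip(tmax, tmin))

tmax = [5.2, 8.1, 12.4, 10.0, 15.3]  # hypothetical daily maxima
tmin = [-1.0, 0.5, 3.2, 2.1, 6.0]    # hypothetical daily minima
print(f"GDD = {growing_degree_days(tmax, tmin):.1f} degree-days")
```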
NASA Astrophysics Data System (ADS)
Afeyan, Bedros
2013-10-01
We have recently introduced and extensively studied a new adaptive method of LPI control. It promises to extend the effectiveness of lasers as inertial fusion drivers by allowing active control of stimulated Raman and Brillouin scattering and crossed-beam energy transfer. It breaks multi-nanosecond pulses into a series of picosecond (ps) time-scale spikes with comparable gaps in between. The height and width of each spike, as well as their separations, are optimization parameters. In addition, the spatial speckle patterns are changed after a number of successive spikes as needed (from every spike to never). The combination of these parameters allows the taming of parametric instabilities to conform to any desired reduced reflectivity profile, within the bounds of the performance limitations of the lasers. Instead of pulse shaping on hydrodynamical time scales, far faster (from 1 ps to 10 ps) modulations of the laser profile will be needed to implement the STUD pulse program for full LPI control. We will show theoretical and computational evidence for the effectiveness of the STUD pulse program to control LPI. The physics of why STUD pulses work, and how optimization can be implemented efficiently using statistical nonlinear optical models and techniques, will be explained. We will also discuss a novel diagnostic system employing STUD pulses that will allow boosted measurement of velocity distribution function slopes on a ps time scale in the small crossing volume of a pump and a probe beam. Various regimes from weak to strong coupling and weak to strong damping will be treated. Novel pulse modulation schemes and diagnostic tools based on time-lenses used in both microscope and telescope modes will be suggested for the execution of the STUD pulse program. Work supported by the DOE NNSA-OFES Joint Program on HEDLP and DOE OFES SBIR Phase I grants.
Water mass changes inferred by gravity field variations with GRACE
NASA Astrophysics Data System (ADS)
Fagiolini, Elisa; Gruber, Christian; Apel, Heiko; Viet Dung, Nguyen; Güntner, Andreas
2013-04-01
Since 2002, the Gravity Recovery And Climate Experiment (GRACE) mission has been measuring temporal variations of Earth's gravity field, depicting with great accuracy how mass is distributed and varies around the globe. Advanced signal separation techniques make it possible to isolate different sources of mass, such as atmospheric and oceanic circulation or land hydrology. Thanks to GRACE, monitoring of floods, droughts, and water resources is now possible on a global scale. Scientists at GFZ Potsdam have been involved since 2000 in the initiation and launch of the GRACE precursor CHAMP satellite mission, since 2002 in the GRACE Science Data System, and since 2009 in the frame of ESA's GOCE High-level Processing Facility, as well as in the projected GRACE Follow-On mission for the continuation of time-variable gravity field determination. Recently, GFZ has reprocessed the complete GRACE time series of monthly gravity field spherical harmonic solutions with improved standards and background models. This new release (RL05) already shows significantly less noise and fewer spurious artifacts. In order to monitor water mass redistribution and fast-moving water, we still need to reach a higher resolution in both time and space. Moreover, in view of disaster management applications, we need to act with a shorter latency (the current latency standard is 2 months). For this purpose, we developed a regional method based on radial basis functions that is capable of computing models in regional and global representations. This new method localizes the gravity observations to the closest regions and omits spatial correlations with farther regions. Additionally, we succeeded in increasing the temporal resolution to sub-monthly time scales. Innovative concepts such as Kalman filtering and regularization, along with sophisticated regional modeling, have shifted temporal and spatial resolution towards new frontiers. We expect global hydrological models such as WGHM to profit from such accurate outcomes. First results comparing the mass changes over the Mekong Delta observed with GRACE against spatially explicit hydraulic simulations of the large-scale annual inundation volume during the flood season are presented and discussed.
Climate Local Information over the Mediterranean to Respond User Needs
NASA Astrophysics Data System (ADS)
Ruti, P.
2012-12-01
CLIM-RUN aims at developing a protocol for applying new methodologies and improved modeling and downscaling tools for the provision of adequate climate information at regional to local scales that is relevant to and usable by different sectors of society (policymakers, industry, cities, etc.). Unlike current approaches, CLIM-RUN will develop a bottom-up protocol that directly involves stakeholders early in the process, with the aim of identifying well-defined needs at the regional to local scale. The improved modeling and downscaling tools will then be used to respond optimally to these specific needs. The protocol is assessed by application to relevant case studies involving interdependent sectors, primarily tourism and energy, and natural hazards (wildfires) for representative target areas (mountainous regions, coastal areas, islands). The region of interest for the project is the Greater Mediterranean area, which is particularly important for two reasons. First, the Mediterranean is a recognized climate change hot-spot, i.e., a region particularly sensitive and vulnerable to global warming. Second, while a number of countries in Central and Northern Europe already have well-developed climate service networks in place (e.g., the United Kingdom and Germany), no such network is available in the Mediterranean. CLIM-RUN is thus also intended to provide the seed for the formation of a Mediterranean basin-wide climate service network which would eventually converge into a pan-European network. The general time horizon of interest for the project is the future period 2010-2050, a horizon that encompasses the contributions of both inter-decadal variability and greenhouse-forced climate change. In particular, this time horizon places CLIM-RUN within the context of a new emerging area of research, decadal prediction, which provides strong potential for novel research.
Scaling laws of strategic behavior and size heterogeneity in agent dynamics
NASA Astrophysics Data System (ADS)
Vaglica, Gabriella; Lillo, Fabrizio; Moro, Esteban; Mantegna, Rosario N.
2008-03-01
We consider the financial market as a model system and study empirically how agents strategically adjust the properties of large orders in order to meet their preferences and minimize their impact. We quantify this strategic behavior by detecting scaling relations between the variables characterizing the trading activity of different institutions. We also observe power-law distributions in the investment time horizon, in the number of transactions needed to execute a large order, and in the traded value exchanged by large institutions, and we show that the heterogeneity of agents is a key ingredient for the emergence of some aggregate properties characterizing this complex system.
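Tail exponents of power-law distributions like those reported here are commonly fitted with the continuous maximum-likelihood (Hill-type) estimator; the sketch below applies it to synthetic data, not the trading dataset.

```python
# Synthetic illustration of the continuous maximum-likelihood estimator
# alpha_hat = 1 + n / sum(ln(x_i / x_min)) for a power-law tail exponent.
import numpy as np

rng = np.random.default_rng(1)
alpha_true, x_min = 2.5, 1.0
u = rng.random(10_000)
x = x_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))  # inverse-CDF power-law sample

alpha_hat = 1.0 + x.size / np.sum(np.log(x / x_min))
print(f"estimated exponent {alpha_hat:.2f} (true {alpha_true})")
```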
NASA Astrophysics Data System (ADS)
De Domenico, Manlio
2018-03-01
Biological systems, from a cell to the human brain, are inherently complex. A powerful representation of such systems, described by an intricate web of relationships across multiple scales, is provided by complex networks. Recently, several studies have highlighted how simple networks - obtained by aggregating or neglecting the temporal or categorical description of biological data - are not able to account for the richness of information characterizing biological systems. More complex models, namely multilayer networks, are needed to account for interdependencies, often varying across time, of biological interacting units within a cell, a tissue or parts of an organism.
D'Orazio, Alessia; Dragonetti, Antonella; Finiguerra, Ivana; Simone, Paola
2015-01-01
The measurement of nursing workload in a sub-intensive unit with the Nine Equivalents of Nursing Manpower Scale. The need to match nursing manpower to patient complexity requires a careful assessment of the nursing workload. The aims were to measure the nursing workload in a sub-intensive care unit and to assess the impact of patients isolated for multidrug-resistant (MDR) microorganisms and of patients with delirium on the nursing workload. From December 1, 2014 to March 31, 2015, the nursing workload of patients admitted to a sub-intensive care unit of a Turin hospital was measured with the Nine Equivalents of Nursing Manpower (NEMS) scale, both the original version and a modified version adding 1 point for isolated patients and for patients with delirium (assessed with the Richmond Agitation-Sedation Scale). Admission and discharge times, and the activities performed in and out of the unit, were registered. Two hundred thirty patients were assessed daily, and no differences were observed in mean NEMS scores between the original and modified scales (December 17.3 vs 18.5; January 19.4 vs 20.2; February 19.9 vs 20.6; March 19.5 vs 20.1). Mean scores did not change across shifts, although on average 8 days a month the scores exceeded 21, identifying an excess workload and the need for a 2:1 patient/nurse ratio. The maximum workload was concentrated between 12:00 and 18:00. The NEMS scale allows the nursing workload to be measured. Isolated patients and patients with delirium apparently did not significantly impact the nursing workload.
Richter, Linda M; Daelmans, Bernadette; Lombardi, Joan; Heymann, Jody; Boo, Florencia Lopez; Behrman, Jere R; Lu, Chunling; Lucas, Jane E; Perez-Escamilla, Rafael; Dua, Tarun; Bhutta, Zulfiqar A; Stenberg, Karin; Gertler, Paul; Darmstadt, Gary L
2018-01-01
Building on long-term benefits of early intervention (Paper 2 of this Series) and increasing commitment to early childhood development (Paper 1 of this Series), scaled up support for the youngest children is essential to improving health, human capital, and wellbeing across the life course. In this third paper, new analyses show that the burden of poor development is higher than estimated, taking into account additional risk factors. National programmes are needed. Greater political prioritisation is core to scale-up, as are policies that afford families time and financial resources to provide nurturing care for young children. Effective and feasible programmes to support early child development are now available. All sectors, particularly education, and social and child protection, must play a role to meet the holistic needs of young children. However, health provides a critical starting point for scaling up, given its reach to pregnant women, families, and young children. Starting at conception, interventions to promote nurturing care can feasibly build on existing health and nutrition services at limited additional cost. Failure to scale up has severe personal and social consequences. Children at elevated risk for compromised development due to stunting and poverty are likely to forgo about a quarter of average adult income per year, and the cost of inaction to gross domestic product can be double what some countries currently spend on health. Services and interventions to support early childhood development are essential to realising the vision of the Sustainable Development Goals. PMID:27717610
An S_N Algorithm for Modern Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Randal Scott
2016-08-29
LANL discrete ordinates transport packages are required to perform large, computationally intensive, time-dependent calculations on massively parallel architectures, where even a single such calculation may need many months to complete. While KBA methods scale out well to very large numbers of compute nodes, we are limited by practical constraints on the number of such nodes we can actually apply to any given calculation. Instead, we describe a modified KBA algorithm that allows realization of the reductions in solution time offered by both current and future architectural changes within a compute node.
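The wavefront ordering at the heart of KBA-style sweeps can be shown in a toy two-dimensional setting (an illustration, not the LANL package): for one octant, each cell depends on its upwind neighbors, so cells on the same anti-diagonal are independent and can be processed concurrently.

```python
# Toy 2D upwind sweep in KBA-style wavefront order: cell (i, j) needs its west
# and south neighbors, so all cells with the same i + j are independent.
import numpy as np

nx, ny = 6, 4
sigma_t, dx = 1.0, 0.5        # assumed cross section and cell size (toy values)
psi = np.zeros((nx, ny))      # cell-averaged angular flux (toy model)
inflow_w = np.ones(ny)        # boundary inflow on the west face
inflow_s = np.ones(nx)        # boundary inflow on the south face

for wave in range(nx + ny - 1):                       # anti-diagonals in order
    for i in range(max(0, wave - ny + 1), min(nx, wave + 1)):
        j = wave - i
        w = psi[i - 1, j] if i > 0 else inflow_w[j]
        s = psi[i, j - 1] if j > 0 else inflow_s[i]
        # toy upwind update: attenuate the average of the incoming fluxes
        psi[i, j] = 0.5 * (w + s) * np.exp(-sigma_t * dx)

print(psi.round(3))
```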
Environmental stochasticity controls soil erosion variability
Kim, Jongho; Ivanov, Valeriy Y.; Fatichi, Simone
2016-01-01
Understanding soil erosion by water is essential for a range of research areas, but the predictive skill of prognostic models has been repeatedly questioned because of the scale limitations of empirical data and the high variability of soil loss across space and time scales. An improved understanding of the underlying processes and their interactions is needed to infer the scaling properties of soil loss and better inform predictive methods. This study uses data from multiple environments to highlight the temporal-scale dependency of soil loss: erosion variability decreases at larger scales, but the reduction rate varies with environment. The reduction of variability of the geomorphic response is attributed to a 'compensation effect': temporal alternation of events that exhibit either source-limited or transport-limited regimes. The rate of reduction is related to environmental stochasticity, and a novel index is derived to reflect the level of variability of intra- and inter-event hydrometeorologic conditions. A higher stochasticity index implies a larger reduction of soil loss variability (enhanced predictability at aggregated temporal scales) with respect to the mean hydrologic forcing, offering a promising indicator for estimating the degree of uncertainty of erosion assessments. PMID:26925542
Stoicea, Nicoleta; Baddigam, Ramya; Wajahn, Jennifer; Sipes, Angela C; Arias-Morales, Carlos E; Gastaldo, Nicholas; Bergese, Sergio D
2016-01-01
The elderly population in the United States is increasing exponentially, and with it the risk of frailty. Frailty is described as a clinically significant state in which a patient is at risk of developing complications requiring increased assistance in daily activities. Frailty syndrome, studied in geriatric patients, is responsible for an increased risk of falls and increased mortality. In order to prepare for and intervene in perioperative complications and general frailty, a universal scale to measure frailty is necessary. Many methods for determining frailty have been developed, yet there remains a need to define clinical frailty and, therefore, the most effective way to measure it. This article reviews six popular scales for measuring frailty and evaluates their clinical effectiveness as demonstrated in previous studies. By identifying the most time-efficient, criteria-comprehensive, and clinically effective scale, a universal scale can be implemented into the standard of care to reduce complications from frailty in both non-surgical and surgical settings, especially as applied to the perioperative surgical home model. We suggest further evaluation of the Edmonton Frailty Scale for inclusion in patient care.
Menke, Hannah P.; Andrew, Matthew G.; Vila-Comamala, Joan; Rau, Christoph; Blunt, Martin J.; Bijeljic, Branko
2017-01-01
Underground storage permanence is a major concern for carbon capture and storage. Pumping CO2 into carbonate reservoirs has the potential to dissolve geologic seals and allow CO2 to escape. However, the dissolution processes at reservoir conditions are poorly understood. Thus, time-resolved experiments are needed to observe and predict the nature and rate of dissolution at the pore scale. Synchrotron fast tomography is a method of taking high-resolution time-resolved images of complex pore structures much more quickly than traditional µ-CT. The Diamond Lightsource Pink Beam was used to dynamically image dissolution of limestone in the presence of CO2-saturated brine at reservoir conditions. 100 scans were taken at a 6.1 µm resolution over a period of 2 hours. The images were segmented and the porosity and permeability were measured using image analysis and network extraction. Porosity increased uniformly along the length of the sample; however, the rate of increase of both porosity and permeability slowed at later times. PMID:28287529
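For context, porosity is typically extracted from such segmented tomograms as the pore-voxel fraction of the binary volume; the sketch below uses a randomly generated stand-in for a real scan.

```python
# Porosity from a segmented binary tomogram: pore voxels / total voxels.
# The volume here is a random stand-in, not the study's scan data.
import numpy as np

rng = np.random.default_rng(3)
volume = (rng.random((200, 200, 200)) < 0.12).astype(np.uint8)  # 1 = pore, 0 = grain

porosity = volume.mean()
print(f"porosity = {porosity:.3f}")
# Repeating this per time step over the 100 scans would trace the
# dissolution-driven porosity evolution described above.
```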
Time-Varying, Multi-Scale Adaptive System Reliability Analysis of Lifeline Infrastructure Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gearhart, Jared Lee; Kurtz, Nolan Scot
2014-09-01
The majority of current societal and economic needs world-wide are met by the existing networked civil infrastructure. Because the cost of managing such infrastructure is high and increases with time, risk-informed decision making is essential for those with management responsibilities for these systems. To address such concerns, a methodology that accounts for new information, deterioration, component models, component importance, group importance, network reliability, hierarchical structure organization, and efficiency concerns has been developed. This methodology analyzes the use of new information through the lens of adaptive importance sampling for structural reliability problems. Deterioration, multi-scale bridge models, and time-variant component importance are investigated for a specific network. Furthermore, both bridge and pipeline networks are studied for group and component importance, as well as for hierarchical structures, in the context of specific networks. Efficiency is the primary driver throughout this study. With this risk-informed approach, those responsible for management can address deteriorating infrastructure networks in an organized manner.
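The flavor of importance sampling for reliability can be conveyed with a one-dimensional sketch (a generic illustration, not the report's methodology): sample from a density shifted toward the failure region and reweight by the likelihood ratio.

```python
# Generic importance-sampling estimate of a small failure probability
# P[X > beta] for standard-normal X, using a sampling density centered on the
# failure region and likelihood-ratio weights f/h.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
beta = 4.0
x = rng.normal(loc=beta, scale=1.0, size=20_000)  # biased sampling density h
weights = norm.pdf(x) / norm.pdf(x, loc=beta)     # likelihood ratio f/h
p_fail = np.mean((x > beta) * weights)

print(f"IS estimate: {p_fail:.2e}   exact: {norm.sf(beta):.2e}")  # ~3.17e-05
```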
A fragmentation model of earthquake-like behavior in internet access activity
NASA Astrophysics Data System (ADS)
Paguirigan, Antonino A.; Angco, Marc Jordan G.; Bantang, Johnrob Y.
We present a fragmentation model that generates almost any inverse power-law size distribution, including dual-scaled versions, consistent with the underlying dynamics of systems with earthquake-like behavior. We apply the model to explain the dual-scaled power-law statistics observed in an Internet access dataset that covers more than 32 million requests. The non-Poissonian statistics of the requested data sizes m and the amount of time τ needed for complete processing are consistent with the Gutenberg-Richter law. Inter-event times δt between subsequent requests are also shown to exhibit power-law distributions consistent with the generalized Omori law. Thus, the dataset is similar to earthquake data, except that two power-law regimes are observed. Using the proposed model, we are able to identify the underlying dynamics responsible for generating the observed dual power-law distributions. The model is universal enough to apply to any physical or human dynamics limited by finite resources such as space, energy, time or opportunity.
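A toy fragmentation cascade conveys the flavor of such models (an illustration, not the authors' specific construction): pieces split recursively at uniform random points, producing a broad, approximately power-law fragment-size distribution.

```python
# Toy fragmentation cascade: each piece splits at a uniform random point with
# probability p, recursively, until it survives intact or reaches min_size.
import random

def fragment(size, p=0.9, min_size=1e-4, out=None):
    if out is None:
        out = []
    if size > min_size and random.random() < p:
        cut = random.uniform(0.0, 1.0)
        fragment(size * cut, p, min_size, out)
        fragment(size * (1.0 - cut), p, min_size, out)
    else:
        out.append(size)  # this piece survives intact
    return out

random.seed(7)
sizes = fragment(1.0)
print(f"{len(sizes)} fragments; largest {max(sizes):.3g}, smallest {min(sizes):.3g}")
```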
Horst, Fabian; Eekhoff, Alexander; Newell, Karl M; Schöllhorn, Wolfgang I
2017-01-01
Traditionally, gait analysis has been centered on the idea of average behavior and normality. On the one hand, clinical diagnoses and therapeutic interventions typically assume that average gait patterns remain constant over time. On the other hand, it is well known that all our movements are accompanied by a certain amount of variability, which does not allow us to make two identical steps. The purpose of this study was to examine changes in intra-individual gait patterns across different time scales (i.e., tens of minutes, tens of hours). Nine healthy subjects performed 15 gait trials at a self-selected speed in 6 sessions within one day (with 10 to 90 minutes between subsequent sessions). For each trial, time-continuous ground reaction forces and lower-body joint angles were measured. A supervised learning model using kernel-based discriminant regression was applied to classify sessions within individual gait patterns. Discernable characteristics of intra-individual gait patterns could be distinguished between repeated sessions, with classification rates of 67.8 ± 8.8% and 86.3 ± 7.9% for the six-session classification of ground reaction forces and lower-body joint angles, respectively. Furthermore, the one-on-one classification showed that classification rates increase with the time between two sessions, indicating that changes of gait patterns appear on different time scales. Discernable characteristics between repeated sessions indicate continuous intrinsic changes in intra-individual gait patterns and suggest a predominant role of deterministic processes in human motor control and learning. Natural changes of gait patterns without any externally induced injury or intervention may reflect continuous adaptations of the motor system over several time scales. Accordingly, the modelling of walking by means of average gait patterns that are assumed to be near constant over time needs to be reconsidered in the context of these findings, especially towards more individualized and situational diagnoses and therapy.
Andelic, Nada; Soberg, Helene L; Berntsen, Svein; Sigurdardottir, Solrun; Roe, Cecilie
2014-11-01
To describe the self-perceived health care needs of patients with moderate-to-severe traumatic brain injury (TBI) and to assess the impact of the functional level at 1 year after injury on patients' unmet needs at the 5-year follow-up. A prospective follow-up study. Clinical research. A total of 93 patients participated in the 5-year follow-up. We registered demographic and injury-related data at the time of admission and the scores for the Disability Rating Scale, Glasgow Outcome Scale-Extended, and Short Form 36 subscales for physical functioning and mental health at 1 and 5 years. The patients' self-perceived health care needs and use of health care services at 5 years were the main outcome measurements. At the 5-year follow-up, 70% of patients reported at least 1 perceived need. The self-perceived health care needs were met for 39% of the patients. The patients with unmet needs (n = 29 [31%]) reported frequent needs in emotional (65%), vocational (62%), and cognitive (58%) domains. These patients were significantly more likely to present a less severe disability on the Disability Rating Scale at the 1-year follow-up (odds ratio [OR] 0.11 [95% confidence interval {CI}, 0.02-0.7]; P = .02). Worse mental health at the 1-year follow-up and a younger age (16-29 years) largely predicted unmet needs at the 5-year follow-up (OR 3.28 [95% CI, 1.1-10.04], P = .04; and OR 4.93 [95% CI, 0.16-15.2], P = .005, respectively). Gaps between self-perceived health care needs and health care services received at the 5-year follow-up were found. An important message to clinicians who provide health care services in the late TBI phase is that they should be aware of patients' long-term needs regarding cognitive and emotional difficulties. Of equal importance is an emphasis on long-term vocational rehabilitation services. To ensure the appropriateness of health care service delivery, health care services after TBI should be better targeted at less-severe TBI population as well. Copyright © 2014 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
A perspective on sustained marine observations for climate modelling and prediction.
Dunstone, Nick J
2014-09-28
Here, I examine some of the many varied ways in which sustained global ocean observations are used in numerical modelling activities. In particular, I focus on the use of ocean observations to initialize predictions in ocean and climate models. Examples are also shown of how models can be used to assess the impact of both current ocean observations and to simulate that of potential new ocean observing platforms. The ocean has never been better observed than it is today and similarly ocean models have never been as capable at representing the real ocean as they are now. However, there remain important unanswered questions that can likely only be addressed via future improvements in ocean observations. In particular, ocean observing systems need to respond to the needs of the burgeoning field of near-term climate predictions. Although new ocean observing platforms promise exciting new discoveries, there is a delicate balance to be made between their funding and that of the current ocean observing system. Here, I identify the need to secure long-term funding for ocean observing platforms as they mature, from a mainly research exercise to an operational system for sustained observation over climate change time scales. At the same time, considerable progress continues to be made via ship-based observing campaigns and I highlight some that are dedicated to addressing uncertainties in key ocean model parametrizations. The use of ocean observations to understand the prominent long time scale changes observed in the North Atlantic is another focus of this paper. The exciting first decade of monitoring of the Atlantic meridional overturning circulation by the RAPID-MOCHA array is highlighted. The use of ocean and climate models as tools to further probe the drivers of variability seen in such time series is another exciting development. I also discuss the need for a concerted combined effort from climate models and ocean observations in order to understand the current slow-down in surface global warming. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
USDA-ARS?s Scientific Manuscript database
There is a need to develop a scale-explicit understanding of erosion to overcome existing conceptual and methodological flaws in our modelling methods currently applied to understand the processes of erosion, transport and deposition at the catchment scale. These models need to be based on a sound under...
ERIC Educational Resources Information Center
Truijen, K. J. P.; Sleegers, P. J. C.; Meelissen, M. R. M.; Nieuwenhuis, A. F. M.
2013-01-01
Purpose: At a time when secondary vocational education is implementing competence-based education (CBE) on a large scale, to adapt to the needs of students and of the labour market in a modern society, many vocational schools have recognised that interdisciplinary teacher teams are an important condition for this implementation. In order to…
Aircraft and Engine Development Testing
1986-09-01
Control in Flight * Integrated Inlet-Engine * Power/Weight Exceeds Unity F-111 * Advanced Engines * Augmented Turbofan * High Turbine Temperature ... residence times). Also, fabrication of a small-scale "hot" engine with rotating components, such as compressors and turbines with cooled blades, is ... capabilities are essential to meet the needs of current and projected aircraft and engine programs. The required free-jet nozzles should be capable of
ERIC Educational Resources Information Center
Singer, Judith D.; Willett, John B.
The National Center for Education Statistics (NCES) is exploring the possibility of conducting a large-scale multi-year study of teachers' careers. The proposed new study is intended to follow a national probability sample of teachers over an extended period of time. A number of methodological issues need to be addressed before the study can be…
Ten Minutes Wide: Human Walking Capacities and the Experiential Quality of Campus Design
ERIC Educational Resources Information Center
Spooner, David
2011-01-01
Whether a campus is large or small, the idea of a 10-minute walk is an important human-scaled design standard that affects an institution in significant ways beyond just getting students to class on time. Designing a 10-minute walk seems like a simple exercise. Based on earlier information, all one needs to do is provide a walking surface and make…
McCannon, Melinda; O'Neal, Pamela V
2003-08-01
A national survey was conducted to determine the information technology skills nurse administrators consider critical for new nurses entering the work force. The sample consisted of 2,000 randomly selected members of the American Organization of Nurse Executives. Seven hundred fifty-two usable questionnaires were returned, for a response rate of 38%. The questionnaire used a 5-point Likert scale and consisted of 17 items that assessed various technology skills and demographic information. The questionnaire was developed and pilot tested with content experts to establish content validity. Descriptive analysis of the data revealed that using e-mail effectively, operating basic Windows applications, and searching databases were critical information technology skills. The most critical information technology skill involved knowing nursing-specific software, such as bedside charting and computer-activated medication dispensers. To effectively prepare nursing students with technology skills needed at the time of entry into practice, nursing faculty need to incorporate information technology skills into undergraduate nursing curricula.
NASA Astrophysics Data System (ADS)
Prasad, Guru; Jayaram, Sanjay; Ward, Jami; Gupta, Pankaj
2004-08-01
In this paper, Aximetric proposes a decentralized Command and Control (C2) architecture for distributed control of a cluster of on-board health monitoring and software-enabled control systems, called SimBOX, that will use some of the real-time infrastructure (RTI) functionality from current military real-time simulation architectures. The uniqueness of the approach is to provide a "plug and play" environment for various system components that run at various data rates (Hz), together with the ability to replicate or transfer C2 operations to various subsystems in a scalable manner. This is made possible by a communication bus called the "Distributed Shared Data Bus" and a distributed computing environment that scales to the control needs by providing a self-contained computing, data logging and control function module that can be rapidly reconfigured to perform different functions. This kind of software-enabled control is essential to meeting the needs of future aerospace command and control functions.
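A hypothetical sketch of the "plug and play" idea behind such a shared data bus (not Aximetric's actual design): components publish and subscribe to named topics, so modules running at different rates can be added or swapped without rewiring the rest of the system.

```python
# Minimal publish/subscribe data bus: subscribers register callbacks on named
# topics; publishers push data without knowing who consumes it.
from collections import defaultdict

class SharedDataBus:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, data):
        for callback in self._subs[topic]:
            callback(data)

def alarm(temp_k):
    if temp_k > 900:
        print("over-temperature alarm")

bus = SharedDataBus()
bus.subscribe("health/engine_temp", alarm)
bus.subscribe("health/engine_temp", lambda t: print(f"logger recorded {t} K"))
bus.publish("health/engine_temp", 935)  # both subscribers react independently
```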
Solar EUV irradiance for space weather applications
NASA Astrophysics Data System (ADS)
Viereck, R. A.
2015-12-01
Solar EUV irradiance is an important driver of space weather models. Large changes in EUV and X-ray irradiances create large variability in the ionosphere and thermosphere. Proxies such as the F10.7 cm radio flux have provided reasonable estimates of the EUV flux, but as space weather models become more accurate and the demands of customers become more stringent, proxies are no longer adequate. Furthermore, proxies are often provided only on a daily basis, while shorter time scales are becoming important. There is also a growing need for multi-day forecasts of solar EUV irradiance to drive space weather forecast models. In this presentation, we will describe the needs and requirements for solar EUV irradiance information from the space weather modeler's perspective. We will then translate these requirements into solar observational requirements such as spectral resolution and irradiance accuracy. We will also describe the activities at NOAA to provide the long-term solar EUV irradiance observations and derived products that are needed for real-time space weather modeling.
GOBLET: The Global Organisation for Bioinformatics Learning, Education and Training
Atwood, Teresa K.; Bongcam-Rudloff, Erik; Brazas, Michelle E.; Corpas, Manuel; Gaudet, Pascale; Lewitter, Fran; Mulder, Nicola; Palagi, Patricia M.; Schneider, Maria Victoria; van Gelder, Celia W. G.
2015-01-01
In recent years, high-throughput technologies have brought big data to the life sciences. The march of progress has been rapid, leaving in its wake a demand for courses in data analysis, data stewardship, computing fundamentals, etc., a need that universities have not yet been able to satisfy—paradoxically, many are actually closing “niche” bioinformatics courses at a time of critical need. The impact of this is being felt across continents, as many students and early-stage researchers are being left without appropriate skills to manage, analyse, and interpret their data with confidence. This situation has galvanised a group of scientists to address the problems on an international scale. For the first time, bioinformatics educators and trainers across the globe have come together to address common needs, rising above institutional and international boundaries to cooperate in sharing bioinformatics training expertise, experience, and resources, aiming to put ad hoc training practices on a more professional footing for the benefit of all. PMID:25856076
Gravo-Aeroelastic Scaling for Extreme-Scale Wind Turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fingersh, Lee J; Loth, Eric; Kaminski, Meghan
2017-06-09
A scaling methodology is described in the present paper for extreme-scale wind turbines (rated at 10 MW or more) that allows sub-scale turbines to capture the key blade dynamics and aeroelastic deflections of the full-scale design. For extreme-scale turbines, such deflections and dynamics can be substantial and are primarily driven by centrifugal, thrust and gravity forces as well as the net torque. Each of these is in turn a function of various wind conditions, including turbulence levels that cause shear, veer, and gust loads. The 13.2 MW rated SNL100-03 rotor design, having a blade length of 100 meters, is herein scaled to the CART3 wind turbine at NREL using 25% geometric scaling and blade mass and wind speed scaled by gravo-aeroelastic constraints. In order to mimic the ultralight structure of the advanced-concept extreme-scale design, the scaling results indicate that the gravo-aeroelastically scaled blades for the CART3 would be three times lighter and 25% longer than the current CART3 blades. A benefit of this scaling approach is that the scaled wind speeds needed for testing are reduced (in this case by a factor of two), allowing testing under extreme gust conditions to be much more easily achieved. Most importantly, this scaling approach can investigate extreme-scale concepts including dynamic behaviors and aeroelastic deflections (including flutter) at an extremely small fraction of the full-scale cost.
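The factor-of-two reduction in test wind speeds quoted above is what Froude-type similarity predicts for gravity-dominated scaling. A quick sanity check (the exponents are the standard Froude relations, not taken from the paper, and the mass ratio is before the paper's structural adjustments):

```python
# Froude-type similarity for a geometric ratio n: gravity-dominated
# dynamics are preserved when velocity and time scale as n**0.5 and
# mass scales as n**3 (before material/structural adjustments).
n = 0.25  # 25% geometric scale, as in the abstract
print(f"velocity ratio: {n**0.5:.2f}")  # 0.50 -> test wind speeds halved
print(f"time ratio    : {n**0.5:.2f}")
print(f"mass ratio    : {n**3:.4f}")
```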
Towards Remotely Sensed Composite Global Drought Risk Modelling
NASA Astrophysics Data System (ADS)
Dercas, Nicholas; Dalezios, Nicolas
2015-04-01
Drought is a multi-faceted issue and requires a multi-faceted assessment. Droughts may originate from precipitation deficits, which, over different time and space scales, may sequentially impact soil moisture, plant wilting, stream flow, wildfire, ground water levels, famine and social impacts. There is a need to monitor drought even at a global scale. Key variables for monitoring drought include climate data, soil moisture, stream flow, ground water, reservoir and lake levels, snow pack, short-, medium- and long-range forecasts, vegetation health and fire danger. However, there is no single definition of drought, and there are different drought indicators and indices even for each drought type. There are already four operational global drought risk monitoring systems, namely the U.S. Drought Monitor, the European Drought Observatory (EDO), and the African and Australian systems. These systems require further research to improve the level of accuracy and the time and space scales, to consider all types of drought and, eventually, to achieve operational efficiency. This paper attempts to contribute to the above-mentioned objectives. Based on a similar general methodology, the multi-indicator approach is considered. This has resulted from previous research in the Mediterranean region, an agriculturally vulnerable region, using several drought indices separately, namely RDI and VHI. The proposed scheme attempts to consider different space scaling based on agroclimatic zoning through remotely sensed techniques and several indices. Needless to say, the agroclimatic potential of agricultural areas has to be assessed in order to achieve sustainable and efficient use of natural resources in combination with production maximization. Similarly, the time scale is also considered by addressing drought-related impacts of precipitation deficits on time scales ranging from a few days to a few months, such as non-irrigated agriculture, topsoil moisture, wildfire danger, range and pasture conditions and unregulated stream flows. Keywords: Remote sensing; Composite Drought Indicators; Global Drought Risk Monitoring.
Zarzycki, Piotr; Rosso, Kevin M
2009-06-16
Replica kinetic Monte Carlo simulations were used to study the characteristic time scales of potentiometric titration of metal oxides and (oxy)hydroxides. The effect of surface heterogeneity and surface transformation on the titration kinetics was also examined. Two characteristic relaxation times are often observed experimentally, with the trailing slower part attributed to surface nonuniformity, porosity, polymerization, amorphization, and other dynamic surface processes induced by unbalanced surface charge. However, our simulations show that these two characteristic relaxation times are intrinsic to the proton-binding reaction for energetically homogeneous surfaces, and therefore surface heterogeneity or transformation does not necessarily need to be invoked. However, all such second-order surface processes are found to intensify the separation and distinction of the two kinetic regimes. The effects of surface energetic-topographic nonuniformity, as well as dynamic surface transformation and interface roughening/smoothing, were described in a statistical fashion. Furthermore, our simulations show that a shift in the point-of-zero charge is expected from increased titration speed, and the pH-dependence of the titration measurement error is in excellent agreement with experimental studies.
Time-Domain Evaluation of Fractional Order Controllers’ Direct Discretization Methods
NASA Astrophysics Data System (ADS)
Ma, Chengbin; Hori, Yoichi
Fractional Order Control (FOC), in which the controlled systems and/or controllers are described by fractional order differential equations, has been applied to various control problems. Though it is not difficult to understand FOC's theoretical superiority, the realization issue remains problematic. Since fractional order systems have an infinite dimension, a proper approximation by finite difference equations is needed to realize the designed fractional order controllers. In this paper, the existing direct discretization methods are evaluated by their convergence and by time-domain comparison with a baseline case. The proposed sampling-time scaling property is used to calculate the baseline case with full memory length. This novel discretization method is based on the classical trapezoidal rule but with scaled sampling time. Comparative studies show that its good performance and simple algorithm make the Short Memory Principle method the most practical. FOC research is still at an early stage, but its applications in modeling and its robustness against non-linearities are promising. Parallel to the development of FOC theories, applying FOC to various control problems is also crucially important and a top priority.
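As a concrete reference for the kind of discretization being evaluated, here is a minimal sketch of the Grünwald-Letnikov finite-difference approximation combined with the Short Memory Principle named above (a generic textbook construction, not the paper's trapezoidal sampling-time-scaling method; function names and the test signal are illustrative):

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights c_k = (-1)**k * binom(alpha, k), via the
    stable recurrence c_k = c_{k-1} * (1 - (alpha + 1) / k)."""
    c = np.empty(n + 1)
    c[0] = 1.0
    for k in range(1, n + 1):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    return c

def gl_derivative(y, alpha, h, memory=None):
    """Order-alpha derivative of samples y (spacing h). If memory (in
    seconds) is given, the Short Memory Principle truncates the infinite
    history to the most recent memory/h samples."""
    n = len(y)
    L = n - 1 if memory is None else min(n - 1, int(memory / h))
    c = gl_weights(alpha, L)
    d = np.zeros(n)
    for i in range(n):
        k = min(i, L)
        d[i] = np.dot(c[: k + 1], y[i::-1][: k + 1]) / h**alpha
    return d

# half-derivative of f(t) = t; the exact answer is 2*sqrt(t/pi)
h = 0.01
t = np.arange(0.0, 5.0, h)
approx = gl_derivative(t, 0.5, h, memory=1.0)  # keep 1 s of memory
print(np.max(np.abs(approx[200:] - 2.0 * np.sqrt(t[200:] / np.pi))))
```

The truncation error printed at the end is the price the Short Memory Principle pays for a bounded per-step cost; the baseline with full memory corresponds to memory=None.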
Integrated Modeling of Time Evolving 3D Kinetic MHD Equilibria and NTV Torque
NASA Astrophysics Data System (ADS)
Logan, N. C.; Park, J.-K.; Grierson, B. A.; Haskey, S. R.; Nazikian, R.; Cui, L.; Smith, S. P.; Meneghini, O.
2016-10-01
New analysis tools and integrated modeling of plasma dynamics developed in the OMFIT framework are used to study kinetic MHD equilibria evolution on the transport time scale. The experimentally observed profile dynamics following the application of 3D error fields are described using a new OMFITprofiles workflow that directly addresses the need for rapid and comprehensive analysis of dynamic equilibria for next-step theory validation. The workflow treats all diagnostic data as fundamentally time dependent, provides physics-based manipulations such as ELM phase data selection, and is consistent across multiple machines - including DIII-D and NSTX-U. The seamless integration of tokamak data and simulation is demonstrated by using the self-consistent kinetic EFIT equilibria and profiles as input into 2D particle, momentum and energy transport calculations using TRANSP as well as 3D kinetic MHD equilibrium stability and neoclassical transport modeling using General Perturbed Equilibrium Code (GPEC). The result is a smooth kinetic stability and NTV torque evolution over transport time scales. Work supported by DE-AC02-09CH11466.
Comment on "Time needed to board an airplane: a power law and the structure behind it".
Bernstein, Noam
2012-08-01
Frette and Hemmer [Phys. Rev. E 85, 011130 (2012)] recently showed that for a simple model for the boarding of an airplane, the mean time to board scales as a power law with the number of passengers N and the exponent is less than 1. They note that this scaling leads to the prediction that the "back-to-front" strategy, where passengers are divided into groups from contiguous ranges of rows and each group is allowed to board in turn from back to front once the previous group has found their seats, has a longer boarding time than would a single group. Here I extend their results to a larger number of passengers using a sampling approach and explore a scenario where the queue is presorted into groups from back to front, but allowed to enter the plane as soon as they can. I show that the power law dependence on passenger numbers is different for large N and that there is a boarding time reduction for presorted groups, with a power law dependence on the number of presorted groups.
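To make the scaling discussion concrete, here is a hedged Monte Carlo sketch of a simplified blocking model in the spirit of (but not identical to) the Frette-Hemmer aisle model: in each step, passengers walk forward as far as the person ahead allows, and everyone standing at their own row takes one full step to sit. The fitted exponent of this toy variant need not match the published values:

```python
import numpy as np

def board_once(n, rng):
    """One boarding realization: n rows, one seat per row, random queue.
    Returns the number of time steps until everyone is seated."""
    targets = list(rng.permutation(n))   # seat rows in queue order, front first
    t = 0
    while targets:
        t += 1
        ahead = n                        # aisle position of the person in front
        remaining = []
        for tgt in targets:
            pos = min(tgt, ahead - 1)    # walk up to own row or the blocker
            if pos != tgt:               # blocked short of own row: wait
                remaining.append(tgt)
            ahead = pos                  # this cell stays occupied this step
        targets = remaining              # those who reached their row sit down
    return t

rng = np.random.default_rng(0)
sizes = [100, 200, 400, 800, 1600]
means = [np.mean([board_once(n, rng) for _ in range(50)]) for n in sizes]
slope = np.polyfit(np.log(sizes), np.log(means), 1)[0]
print(f"fitted power-law exponent of the toy model: {slope:.2f}")
```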
Highly multiplexed targeted proteomics using precise control of peptide retention time.
Gallien, Sebastien; Peterman, Scott; Kiyonami, Reiko; Souady, Jamal; Duriez, Elodie; Schoen, Alan; Domon, Bruno
2012-04-01
Large-scale proteomics applications using SRM analysis on triple quadrupole mass spectrometers present new challenges to LC-MS/MS experimental design. Despite the automation of building large-scale LC-SRM methods, the increased numbers of targeted peptides can compromise the balance between sensitivity and selectivity. To facilitate large target numbers, time-scheduled SRM transition acquisition is performed. Previously published results have demonstrated that incorporation of a well-characterized set of synthetic peptides enabled chromatographic characterization of the elution profile for most endogenous peptides. We have extended this application of peptide trainer kits not only to build SRM methods but also to facilitate real-time elution profile characterization that enables automated adjustment of the scheduled detection windows. Incorporation of dynamic retention time adjustments facilitates targeted assays lasting several days without the need for constant supervision. This paper provides an overview of how the dynamic retention correction approach identifies and corrects for commonly observed LC variations. This adjustment dramatically improves robustness in targeted discovery experiments as well as routine quantification experiments. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
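A minimal sketch of the kind of dynamic retention-time correction described above (a generic linear re-centering against spiked standards; the peptide values and the linear drift form are illustrative assumptions, not the vendor's algorithm):

```python
import numpy as np

# library retention times of spiked standard peptides vs. times observed
# in the current run (minutes; hypothetical values)
library_rt = np.array([8.2, 14.5, 21.3, 29.8, 36.4])
observed_rt = np.array([8.9, 15.4, 22.5, 31.2, 38.1])

# fit a linear drift model and re-center every scheduled SRM window
slope, intercept = np.polyfit(library_rt, observed_rt, 1)
recenter = lambda rt: slope * rt + intercept
print(f"window scheduled at 25.0 min is re-centered to {recenter(25.0):.1f} min")
```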
Tan, Seok Hong
2015-07-01
Planning and evaluation of health care services for children with disabilities requires information on their caregivers' needs. This paper aims to present the development and psychometric properties of the Caregiver Needs Scale (CNS), a scale assessing the needs of caregivers of children with disabilities aged 0-12 years in Malaysia. Development of the scale went through a multistage process of literature review, modification of an existing instrument, input from experts and feedback from service users. The literature review identified content domains and response options. An exploratory factor analysis (EFA) was undertaken to identify subscales of caregiver needs. The internal consistency reliability, convergent validity and discriminant validity of the new scale were examined. 273 caregivers of children with disabilities completed the fielded questionnaire. EFA revealed 4 subscales of caregiver needs: need for 'Help getting information and services for the child,' 'Help coping with the child,' 'Help getting child care' and 'Help with finances.' Three items with factor loadings <0.4 were dropped. Cronbach's alpha coefficients of the subscales ranged from 0.813 to 0.903. The total CNS score correlated with the number of the child's needs and unmet needs. The score was also higher in families with financial and employment problems. A new instrument was developed to assess the needs of caregivers of children with disabilities for use in the Malaysian population. The CNS showed satisfactory psychometric properties, but further examination is warranted to confirm its validity. Copyright © 2015 Elsevier Inc. All rights reserved.
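The internal-consistency statistic reported above is standard to compute; a minimal sketch (the formula is the usual Cronbach's alpha; the score matrix is synthetic):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# synthetic 5-point responses for one subscale (273 caregivers, 6 items)
rng = np.random.default_rng(7)
latent = rng.normal(size=(273, 1))                   # shared caregiver factor
items = np.clip(np.round(3 + latent + 0.7 * rng.normal(size=(273, 6))), 1, 5)
print(f"alpha = {cronbach_alpha(items):.2f}")
```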
Cheon, Chunhoo; Yoo, Jeong-Eun; Yoo, Hwa-Seung; Cho, Chong-Kwan; Kang, Sohyeon; Kim, Mia; Jang, Bo-Hyoung; Shin, Yong-Cheol; Ko, Seong-Gyu
2017-01-01
Anorexia occurs in about half of cancer patients and is associated with a high mortality rate. However, a safe anorexia treatment suitable for long-term use remains an unmet need. The purpose of the present study was to examine the feasibility of Sipjeondaebo-tang (Juzen-taiho-to, Shi-Quan-Da-Bu-Tang) for cancer-related anorexia. A total of 32 participants with cancer anorexia were randomized to either a Sipjeondaebo-tang group or a placebo group. Participants were given 3 g of Sipjeondaebo-tang or placebo 3 times a day for 4 weeks. The primary outcome was the change in the Anorexia/Cachexia Subscale of the Functional Assessment of Anorexia/Cachexia Therapy (FAACT). The secondary outcomes included a Visual Analogue Scale (VAS) of anorexia, the FAACT scale, and laboratory tests. Anorexia and quality of life measured by FAACT and VAS were improved after 4 weeks of Sipjeondaebo-tang treatment. However, there was no significant difference between the changes in the Sipjeondaebo-tang and placebo groups. Sipjeondaebo-tang appears to have potential benefit for anorexia management in patients with cancer. Further large-scale studies are needed to confirm its efficacy. This trial is registered with ClinicalTrials.gov NCT02468141.
Connectivity in Agricultural Landscapes: Do We Need More than a DEM?
NASA Astrophysics Data System (ADS)
Foster, I.; Boardman, J.; Favis-Mortlock, D.
2017-12-01
DEMs at scales of metres to kilometres form the basis for many erosion models, in part because such data have long been available and published by national mapping agencies, such as the UK Ordnance Survey, and also because modelling gradient and flow pathways relative to topography is often simply executed within a GIS. That most landscape connectivity is not driven by topography is a simple issue that modellers appear reluctant to accept, or find too challenging to model; yet there is an urgent need to rethink how landscapes function and what drives connectivity laterally and longitudinally at different spatial and temporal scales within agricultural landscapes. Landscape connectivity is driven by a combination of natural and anthropogenic factors that can enhance, reduce or eliminate connectivity at different timescales. In this paper we explore the use of a range of data sources that can be used to build a detailed picture of landscape connectivity at different scales. From a number of case studies we combine the use of maps, lidar data, field mapping, lake and floodplain coring, fingerprinting and process monitoring to identify lateral and longitudinal connectivity and the way in which these have changed through time.
Very Long Baseline Interferometry: Dependencies on Frequency Stability
NASA Astrophysics Data System (ADS)
Nothnagel, Axel; Nilsson, Tobias; Schuh, Harald
2018-04-01
Very Long Baseline Interferometry (VLBI) is a differential technique observing radiation of compact extra-galactic radio sources with pairs of radio telescopes. For these observations, the frequency standards at the telescopes need to have very high stability. In this article we discuss why this is, and we investigate exactly how precise the frequency standards need to be. Four areas where good clock performance is needed are considered: coherence, geodetic parameter estimation, correlator synchronization, and UT1 determination. We show that in order to ensure the highest accuracy of VLBI, stability similar to that of a hydrogen maser is needed for time scales up to a few hours. In the article, we consider both traditional VLBI, where extra-galactic radio sources are observed, and the observation of artificial radio sources emitted by satellites or spacecraft.
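Clock stability on a given time scale is conventionally quantified by the Allan deviation; a minimal sketch of the overlapping estimator (a standard construction, with synthetic white frequency noise as the test input):

```python
import numpy as np

def allan_deviation(y, tau0, m_list):
    """Overlapping Allan deviation of fractional-frequency samples y taken
    every tau0 seconds, at averaging times m * tau0 for each m in m_list."""
    y = np.asarray(y, dtype=float)
    adev = []
    for m in m_list:
        ybar = np.convolve(y, np.ones(m) / m, mode="valid")  # m-sample means
        d = ybar[m:] - ybar[:-m]                             # adjacent averages
        adev.append(np.sqrt(0.5 * np.mean(d**2)))
    return np.array([m * tau0 for m in m_list]), np.array(adev)

# white frequency noise should fall off as tau**-0.5
rng = np.random.default_rng(0)
y = 1e-13 * rng.normal(size=200_000)
taus, adev = allan_deviation(y, 1.0, [1, 4, 16, 64, 256])
for tau, a in zip(taus, adev):
    print(f"tau = {tau:6.0f} s   sigma_y = {a:.2e}")
```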
The electroosmotic droplet switch: countering capillarity with electrokinetics.
Vogel, Michael J; Ehrhard, Peter; Steen, Paul H
2005-08-23
Electroosmosis, originating in the double layer of a small liquid-filled pore (size R) and driven by a voltage V, is shown to be effective in pumping against the capillary pressure of a larger liquid droplet (size B) provided the dimensionless parameter σR²/(ε|ζ|VB) is small enough. Here σ is the surface tension of the droplet liquid/gas interface, ε is the liquid dielectric constant, and ζ is the zeta potential of the solid/liquid pair. As droplet size diminishes, the voltage required to pump electroosmotically scales as V ~ R²/B. Accordingly, the voltage needed to pump against smaller, higher-pressure droplets can actually decrease provided the pump pore size scales down with droplet size appropriately. The technological implication of this favorable scaling is that electromechanical transducers made of moving droplets, so-called "droplet transducers," become feasible. To illustrate, we demonstrate a switch whose bistable energy landscape derives from the surface energy of a droplet-droplet system and whose triggering derives from the electroosmosis effect. The switch is an electromechanical transducer characterized by individual addressability, fast switching time with low voltage, and no moving solid parts. We report experimental results for millimeter-scale droplets to verify key predictions of a mathematical model of the switch. With millimeter-size water droplets and micrometer-size pores, 5 V can yield switching times of 1 s. The switching time scales as B³/(VR²). Two possible "grab-and-release" applications of arrays of switches are described. One mimics the controlled adhesion of an insect, the palm beetle; the other uses wettability to move a particle along a trajectory.
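A back-of-envelope evaluation of the dimensionless group for the quoted millimeter-droplet, micrometer-pore, 5 V example (the water properties and zeta potential below are assumed values, not taken from the paper):

```python
eps0 = 8.854e-12           # vacuum permittivity, F/m
sigma = 0.072              # water/air surface tension, N/m
eps = 80.0 * eps0          # dielectric constant of water, F/m
zeta = 0.05                # assumed zeta potential, V
R, B, V = 1e-6, 1e-3, 5.0  # pore size, droplet size, voltage (abstract's example)

group = sigma * R**2 / (eps * abs(zeta) * V * B)
print(f"sigma R^2 / (eps |zeta| V B) = {group:.2f}")  # < 1: the pump wins

# the quoted switching-time scaling, in arbitrary units
print(f"t_switch ~ B^3 / (V R^2) = {B**3 / (V * R**2):.1e} (arb. units)")
```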
Scale-up of industrial biodiesel production to 40 m(3) using a liquid lipase formulation.
Price, Jason; Nordblad, Mathias; Martel, Hannah H; Chrabas, Brent; Wang, Huali; Nielsen, Per Munk; Woodley, John M
2016-08-01
In this work, we demonstrate the scale-up from an 80 L fed-batch scale to 40 m³, along with the design of a 4 m³ continuous process, for enzymatic biodiesel production catalyzed by NS-40116 (a liquid formulation of a modified Thermomyces lanuginosus lipase). Based on the analysis of actual pilot plant data for the transesterification of used cooking oil and brown grease, we propose a method applying first-order integral analysis to fed-batch data based on either the bound glycerol or free fatty acid content in the oil. This method greatly simplifies the modeling process and gives an indication of the effect of mixing at the various scales (80 L to 40 m³), along with the prediction of the residence time needed to reach a desired conversion in a CSTR. Suitable process metrics reflecting commercial performance, such as the reaction time, enzyme efficiency, and reactor productivity, were evaluated for both the fed-batch and CSTR cases. Given similar operating conditions, the CSTR operation, on average, has a reaction time 1.3 times greater than the fed-batch operation. We also showed how the process metrics can be used to quickly estimate the selling price of the enzyme. Assuming a biodiesel selling price of 0.6 USD/kg and a one-time use of the enzyme (0.1% (w/w oil) enzyme dosage), the enzyme can then be sold for 30 USD/kg, which ensures that the enzyme cost is not more than 5% of the biodiesel revenue. Biotechnol. Bioeng. 2016;113: 1719-1728. © 2016 Wiley Periodicals, Inc.
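A hedged sketch of the first-order integral analysis named above: fit ln(FFA) against time from fed-batch samples, then use the ideal first-order reactor relations to estimate the residence time for a target conversion. The data points are invented for illustration, and the ideal-reactor formulas will not reproduce the paper's empirical 1.3x factor, which reflects their actual operating conditions:

```python
import numpy as np

t_h = np.array([0.0, 2.0, 4.0, 8.0, 16.0])      # h, hypothetical sample times
ffa = np.array([12.0, 8.1, 5.6, 2.6, 0.55])     # % free fatty acid, hypothetical

# first-order integral analysis: ln(FFA) = ln(FFA0) - k t
k = -np.polyfit(t_h, np.log(ffa), 1)[0]
print(f"fitted rate constant k = {k:.3f} 1/h")

X = 0.90                                        # target conversion
tau_cstr = X / (k * (1.0 - X))                  # ideal CSTR: X = k tau / (1 + k tau)
t_batch = -np.log(1.0 - X) / k                  # ideal batch: X = 1 - exp(-k t)
print(f"ideal CSTR residence time: {tau_cstr:.1f} h")
print(f"ideal batch reaction time: {t_batch:.1f} h")  # the CSTR always needs longer
```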
The Role of Time-Scales in Socio-hydrology
NASA Astrophysics Data System (ADS)
Blöschl, Günter; Sivapalan, Murugesu
2016-04-01
Much of the interest in hydrological modeling in the past decades revolved around resolving spatial variability. With the rapid changes brought about by human impacts on the hydrologic cycle, there is now an increasing need to refocus on time dependency. We present a co-evolutionary view of hydrologic systems, in which every part of the system including human systems, co-evolve, albeit at different rates. The resulting coupled human-nature system is framed as a dynamical system, characterized by interactions of fast and slow time scales and feedbacks between environmental and social processes. This gives rise to emergent phenomena such as the levee effect, adaptation to change and system collapse due to resource depletion. Changing human values play a key role in the emergence of these phenomena and should therefore be considered as internal to the system in a dynamic way. The co-evolutionary approach differs from the traditional view of water resource systems analysis as it allows for path dependence, multiple equilibria, lock-in situations and emergent phenomena. The approach may assist strategic water management for long time scales through facilitating stakeholder participation, exploring the possibility space of alternative futures, and helping to synthesise the observed dynamics of different case studies. Future research opportunities include the study of how changes in human values are connected to human-water interactions, historical analyses of trajectories of system co-evolution in individual places and comparative analyses of contrasting human-water systems in different climate and socio-economic settings. Reference Sivapalan, M. and G. Blöschl (2015) Time scale interactions and the coevolution of humans and water. Water Resour. Res., 51, 6988-7022, doi:10.1002/2015WR017896.
HYDROSCAPE: A SCAlable and ParallelizablE Rainfall Runoff Model for Hydrological Applications
NASA Astrophysics Data System (ADS)
Piccolroaz, S.; Di Lazzaro, M.; Zarlenga, A.; Majone, B.; Bellin, A.; Fiori, A.
2015-12-01
In this work we present HYDROSCAPE, an innovative streamflow routing method based on the travel time approach and modeled through a fine-scale geomorphological description of hydrological flow paths. The model is designed to be easily coupled with weather forecast or climate models providing the hydrological forcing, while at the same time preserving the geomorphological dispersion of the river network, which is kept unchanged independently of the grid size of the rainfall input. This makes HYDROSCAPE particularly suitable for multi-scale applications, ranging from medium-size catchments up to the continental scale, and for investigating the effects of extreme rainfall events that require an accurate description of basin response timing. A key feature of the model is its computational efficiency, which allows performing a large number of simulations for sensitivity/uncertainty analyses in a Monte Carlo framework. Further, the model is highly parsimonious, involving the calibration of only three parameters: one defining the residence time of the hillslope response, one for channel velocity, and a multiplicative factor accounting for uncertainties in the identification of the potential maximum soil moisture retention in the SCS-CN method. HYDROSCAPE is designed with a simple and flexible modular structure, which makes it particularly amenable to massive parallelization, customization according to specific user needs and preferences (e.g., the rainfall-runoff model), and continuous development and improvement. Finally, the possibility to specify the desired computational time step and to evaluate streamflow at any location in the domain makes HYDROSCAPE an attractive tool for many hydrological applications, and a valuable alternative to more complex and highly parametrized large-scale hydrological models.
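To make the travel-time idea concrete, here is a minimal routing sketch in its spirit (not the authors' code): each cell's runoff is delayed by a channel advection time L/v and convolved with an exponential hillslope residence-time kernel. All numbers are illustrative:

```python
import numpy as np

dt = 3600.0                              # time step, s
t = np.arange(200) * dt
k_hill = 6.0 * 3600.0                    # hillslope residence time, s
v_chan = 1.0                             # channel velocity, m/s

# (distance to outlet in m, runoff series in m3/s) for two example cells
pulse = np.where((t > 5 * dt) & (t < 15 * dt), 1.0, 0.0)
cells = [(5_000.0, 2.0 * pulse), (20_000.0, 1.0 * pulse)]

kernel = np.exp(-t / k_hill)
kernel /= kernel.sum()                   # discrete hillslope travel-time PDF

Q = np.zeros_like(t)
for L, runoff in cells:
    lag = int(round(L / v_chan / dt))    # channel advection delay, steps
    routed = np.convolve(runoff, kernel)[: len(t)]
    Q[lag:] += routed[: len(t) - lag]
print(f"peak outlet flow: {Q.max():.2f} m3/s")
```

Because the travel-time kernels are fixed by the geomorphology, the convolution structure is what makes this kind of model embarrassingly parallel across cells.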
NASA Astrophysics Data System (ADS)
Hamlet, A. F.; Chiu, C. M.; Sharma, A.; Byun, K.; Hanson, Z.
2016-12-01
Physically based hydrologic models of surface and groundwater resources that can be flexibly and efficiently applied to support water resources policy/planning/management decisions at a wide range of spatial and temporal scales are greatly needed in the Midwest, where stakeholder access to such tools is currently a fundamental barrier to basic climate change assessment and adaptation efforts, and also to the co-production of useful products to support detailed decision making. Based on earlier pilot studies in the Pacific Northwest Region, we are currently assembling a suite of end-to-end tools and resources to support various kinds of water resources planning and management applications across the region. One of the key aspects of these integrated tools is that the user community can access gridded products at any point along the end-to-end chain of models, looking backwards in time about 100 years (1915-2015), and forwards in time about 85 years using CMIP5 climate model projections. The integrated model is composed of historical and projected future meteorological data based on station observations and on statistically and dynamically downscaled climate model output, respectively. These gridded meteorological data sets serve as forcing data for the macro-scale VIC hydrologic model implemented over the Midwest at 1/16 degree resolution. High-resolution climate model (4 km WRF) output provides inputs for the analyses of urban impacts, hydrologic extremes, agricultural impacts, and impacts to the Great Lakes. Groundwater recharge estimated by the surface water model provides input data for the fine-scale and macro-scale groundwater models needed for specific applications. To highlight the multi-scale use of the integrated models in support of co-production of scientific information for decision making, we briefly describe three current case studies addressing different spatial scales of analysis: 1) effects of climate change on the water balance of the Great Lakes, 2) future hydropower resources in the St. Joseph River basin, and 3) effects of climate change on carbon cycling in small lakes in the Northern Highland Lakes District.
NASA Astrophysics Data System (ADS)
Marzadri, A.; Tonina, D.; Bellin, A.
2012-12-01
We introduce a new Damköhler number, Da, to quantify the biogeochemical status of the hyporheic zone and to upscale local hyporheic processes to the reach scale. Da is defined as the ratio between the median hyporheic residence time, τ_up,50, which is a representative time scale of the hyporheic flow, and a representative time scale of the biogeochemical reactions, which we define as the time τ_lim needed to consume dissolved oxygen down to a prescribed threshold concentration below which reducing reactions are activated: Da = τ_up,50/τ_lim. This approach accounts for streambed topography and surface hydraulics via the hyporheic residence time, and for biogeochemical reactions via the time limit τ_lim. Da can readily evaluate the redox status of the hyporheic zone. Values of Da larger than 1 indicate prevailing anaerobic conditions, whereas values smaller than 1 indicate prevailing aerobic conditions. This new Damköhler number can quantify the efficiency of the hyporheic zone in transforming dissolved inorganic nitrogen species such as ammonium and nitrate, whose transformation depends on the redox condition of the hyporheic zone. We define a particular value of Da, Da_s, that indicates when the hyporheic zone is a source or a sink of nitrate. This index depends only on the relative abundance of ammonium and nitrate. The approach can be applied to any hyporheic zone for which the median hyporheic residence time is known. Application to streams with pool-riffle morphology shows that Da increases passing from small to large streams, implying that the fraction of the hyporheic zone in anaerobic conditions increases with stream size.
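A minimal numerical illustration of the index (the residence-time distribution and oxygen-depletion time below are invented; only the definition Da = τ_up,50/τ_lim comes from the abstract):

```python
import numpy as np

# hypothetical lognormal hyporheic residence times, seconds
rng = np.random.default_rng(2)
residence_times = rng.lognormal(mean=8.0, sigma=1.2, size=10_000)
tau_up50 = np.median(residence_times)          # representative flow time scale

tau_lim = 3.0 * 3600.0                         # assumed O2 depletion time, s
Da = tau_up50 / tau_lim
regime = "prevailing anaerobic" if Da > 1 else "prevailing aerobic"
print(f"Da = {Da:.2f} -> {regime}")
```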
Time Correlations of Lightning Flash Sequences in Thunderstorms Revealed by Fractal Analysis
NASA Astrophysics Data System (ADS)
Gou, Xueqiang; Chen, Mingli; Zhang, Guangshu
2018-01-01
Using data from the lightning detection and ranging system at the Kennedy Space Center, the temporal fractal properties and correlations of interevent time series of lightning flash sequences in thunderstorms have been investigated with the Allan factor (AF), Fano factor (FF), and detrended fluctuation analysis (DFA) methods. AF, FF, and DFA are powerful tools for detecting time-scaling structures and correlations in point processes. In total, 40 thunderstorms with the distinguishing features of a single-cell storm and an apparent increase and decrease in the total flash rate were selected for the analysis. It is found that the time-scaling exponents for AF (
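For reference, the Allan factor of a point process compares counts in adjacent windows; a minimal sketch (the standard definition, with a Poisson process as a sanity check, not the authors' code):

```python
import numpy as np

def allan_factor(event_times, T):
    """Allan factor AF(T) = <(N_{k+1} - N_k)^2> / (2 <N_k>), where N_k
    counts events in the k-th contiguous window of length T. AF stays
    near 1 for a Poisson process; AF(T) ~ T^alpha signals clustering."""
    event_times = np.sort(np.asarray(event_times, dtype=float))
    n_win = int(event_times[-1] // T)
    counts, _ = np.histogram(event_times, bins=n_win, range=(0.0, n_win * T))
    d = np.diff(counts)
    return np.mean(d**2) / (2.0 * np.mean(counts))

# sanity check on a homogeneous Poisson process (expect AF ~ 1 at all T)
rng = np.random.default_rng(0)
times = np.cumsum(rng.exponential(scale=1.0, size=50_000))
for T in (1.0, 10.0, 100.0):
    print(f"T = {T:6.1f}   AF = {allan_factor(times, T):.2f}")
```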
Potential climatic impacts and reliability of very large-scale wind farms
NASA Astrophysics Data System (ADS)
Wang, C.; Prinn, R. G.
2010-02-01
Meeting future world energy needs while addressing climate change requires large-scale deployment of low or zero greenhouse gas (GHG) emission technologies such as wind energy. The widespread availability of wind power has fueled substantial interest in this renewable energy source as one of the needed technologies. For very large-scale utilization of this resource, there are however potential environmental impacts, and also problems arising from its inherent intermittency, in addition to the present need to lower unit costs. To explore some of these issues, we use a three-dimensional climate model to simulate the potential climate effects associated with installation of wind-powered generators over vast areas of land or coastal ocean. Using wind turbines to meet 10% or more of global energy demand in 2100 could cause surface warming exceeding 1 °C over land installations. In contrast, surface cooling exceeding 1 °C is computed over ocean installations, but the validity of simulating the impacts of wind turbines by simply increasing the ocean surface drag needs further study. Significant warming or cooling remote from both the land and ocean installations, and alterations of the global distributions of rainfall and clouds also occur. These results are influenced by the competing effects of increases in roughness and decreases in wind speed on near-surface turbulent heat fluxes, the differing nature of land and ocean surface friction, and the dimensions of the installations parallel and perpendicular to the prevailing winds. These results are also dependent on the accuracy of the model used, and the realism of the methods applied to simulate wind turbines. Additional theory and new field observations will be required for their ultimate validation. Intermittency of wind power on daily, monthly and longer time scales as computed in these simulations and inferred from meteorological observations, poses a demand for one or more options to ensure reliability, including backup generation capacity, very long distance power transmission lines, and onsite energy storage, each with specific economic and/or technological challenges.
Kohlmann, Thomas; Wang, Cheng; Lipinski, Jens; Hadker, Nandini; Caffrey, Elizabeth; Epstein, Michael; Sadasivan, Ravi; Gondek, Kathleen
2013-06-01
Leading multiple sclerosis (MS) therapies have patient support programs (PSPs) aimed at improving patients' lives. There is limited knowledge about what drives patient satisfaction with PSPs and little evidence about their impact on patient-reported health status or health-related quality of life. The aims of this study were to evaluate patient needs and the PSP's role in meeting those needs; understand the drivers of PSP satisfaction and loyalty; and assess whether an MS PSP provides quantifiable, incremental benefit to patients, as measured by patient-reported health status, health state utility, and/or health-related quality of life. An Internet survey was conducted among 1,123 adult German MS patients currently enrolled in Bayer's German BETAPLUS PSP. Health status, health state utility, and health-related quality of life were measured using the EQ-5D Visual Analog Scale, the EQ-5D Index, and the Short Form-12 Health Survey, respectively. MS patient needs vary by disease severity, duration of disease, and gender. Patients with greater self-reported needs and lower health status, health state utility, and health-related quality of life value and use the PSP more than other patients. Drivers of PSP satisfaction include use of the patient hotline, nurse telephone calls, and mail education. Patients estimate that their health status would be 15 points lower if the PSP ceased to exist (translating to 0.15 on the time trade-off utility scale). This impact is significant, as it is nearly two times the minimally important difference. MS patients place inherent value on PSPs. From a patient's viewpoint, PSPs provide real incremental benefit in patient-reported health status at all stages of MS.
Visually Exploring Transportation Schedules.
Palomo, Cesar; Guo, Zhan; Silva, Cláudio T; Freire, Juliana
2016-01-01
Public transportation schedules are designed by agencies to optimize service quality under multiple constraints. However, real service usually deviates from the plan. Therefore, transportation analysts need to identify, compare and explain both eventual and systemic performance issues that must be addressed so that better timetables can be created. The purely statistical tools commonly used by analysts pose many difficulties due to the large number of attributes at trip- and station-level for planned and real service. Also challenging is the need for models at multiple scales to search for patterns at different times and stations, since analysts do not know exactly where or when relevant patterns might emerge and need to compute statistical summaries for multiple attributes at different granularities. To aid in this analysis, we worked in close collaboration with a transportation expert to design TR-EX, a visual exploration tool developed to identify, inspect and compare spatio-temporal patterns for planned and real transportation service. TR-EX combines two new visual encodings inspired by Marey's Train Schedule: Trips Explorer for trip-level analysis of frequency, deviation and speed; and Stops Explorer for station-level study of delay, wait time, reliability and performance deficiencies such as bunching. To tackle overplotting and to provide a robust representation for a large number of trips and stops at multiple scales, the system supports variable kernel bandwidths to achieve the level of detail required by users for different tasks. We justify our design decisions based on specific analysis needs of transportation analysts. We provide anecdotal evidence of the efficacy of TR-EX through a series of case studies that explore NYC subway service, which illustrate how TR-EX can be used to confirm hypotheses and derive new insights through visual exploration.
Fractals and Spatial Methods for Mining Remote Sensing Imagery
NASA Technical Reports Server (NTRS)
Lam, Nina; Emerson, Charles; Quattrochi, Dale
2003-01-01
The rapid increase in digital remote sensing and GIS data raises a critical problem -- how can such an enormous amount of data be handled and analyzed so that useful information can be derived quickly? Efficient handling and analysis of large spatial data sets is central to environmental research, particularly in global change studies that employ time series. Advances in large-scale environmental monitoring and modeling require not only high-quality data, but also reliable tools to analyze the various types of data. A major difficulty facing geographers and environmental scientists in environmental assessment and monitoring is that spatial analytical tools are not easily accessible. Although many spatial techniques have been described recently in the literature, they are typically presented in an analytical form and are difficult to transform into a numerical algorithm. Moreover, these spatial techniques are not necessarily designed for remote sensing and GIS applications, and research must be conducted to examine their applicability and effectiveness in different types of environmental applications. This poses a chicken-and-egg problem: on one hand we need more research to examine the usability of the newer techniques and tools, yet on the other hand, this type of research is difficult to conduct if the tools to be explored are not accessible. Another problem that is fundamental to environmental research is the issue of spatial scale. The scale issue is especially acute in the context of global change studies because of the need to integrate remote-sensing and other spatial data that are collected at different scales and resolutions. Extrapolation of results across broad spatial scales remains the most difficult problem in global environmental research. There is a need for basic characterization of the effects of scale on image data, and the techniques used to measure these effects must be developed and implemented to allow for a multiple-scale assessment of the data before any useful process-oriented modeling involving scale-dependent data can be conducted. Through the support of research grants from NASA, we have developed a software module called ICAMS (Image Characterization And Modeling System) to address the need to develop innovative spatial techniques and make them available to the broader scientific communities. ICAMS provides new spatial techniques, such as fractal analysis, geostatistical functions, and multiscale analysis, that are not easily available in commercial GIS/image processing software. By bundling newer spatial methods in a user-friendly software module, researchers can begin to test and experiment with the new spatial analysis methods, and they can gauge scale effects using a variety of remote sensing imagery. In the following, we describe briefly the development of ICAMS and present application examples.
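Of the spatial techniques named above, fractal analysis is the easiest to sketch; below is a generic box-counting dimension estimator for a binary image (a textbook construction, not necessarily one of the specific estimators implemented in ICAMS):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Box-counting dimension of a binary image: count occupied s x s
    boxes N(s), then fit the slope of log N against log(1/s)."""
    counts = []
    for s in sizes:
        h = mask.shape[0] // s * s
        w = mask.shape[1] // s * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# sanity checks: a filled plane should give D ~ 2, a single line D ~ 1
img = np.ones((256, 256), dtype=bool)
print(f"filled image: D ~ {box_counting_dimension(img):.2f}")
img[:] = False
img[128, :] = True
print(f"single line : D ~ {box_counting_dimension(img):.2f}")
```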
Perspective of Micro Process Engineering for Thermal Food Treatment
Mathys, Alexander
2018-01-01
Micro process engineering as a process synthesis and intensification tool enables an ultra-short thermal treatment of foods within milliseconds (ms) using very high surface-area-to-volume ratios. The innovative application of ultra-short pasteurization and sterilization at high temperatures, but with holding times within the range of ms would allow the preservation of liquid foods with higher qualities, thereby avoiding many unwanted reactions with different temperature–time characteristics. Process challenges, such as fouling, clogging, and potential temperature gradients during such conditions need to be assessed on a case by case basis and optimized accordingly. Owing to the modularity, flexibility, and continuous operation of micro process engineering, thermal processes from the lab to the pilot and industrial scales can be more effectively upscaled. A case study on thermal inactivation demonstrated the feasibility of transferring lab results to the pilot scale. It was shown that micro process engineering applications in thermal food treatment may be relevant to both research and industrial operations. Scaling of micro structured devices is made possible through the use of numbering-up approaches; however, reduced investment costs and a hygienic design must be assured. PMID:29686990
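The feasibility of millisecond holding times can be seen from standard first-order inactivation kinetics with D- and z-values; a hedged sketch with illustrative parameters (not the paper's data):

```python
D_REF = 0.3    # s, decimal reduction time at the reference temperature (assumed)
T_REF = 121.1  # deg C, reference temperature
Z = 10.0       # deg C per tenfold change in D (assumed)

def log_reduction(temp_c, hold_s):
    """log10 reductions from a hold of hold_s seconds at temp_c, using
    the classical D/z model: D(T) = D_ref * 10**((T_ref - T) / z)."""
    d_value = D_REF * 10 ** ((T_REF - temp_c) / Z)
    return hold_s / d_value

# at 150 C, even a 5 ms hold achieves a large log reduction with these values
print(f"{log_reduction(150.0, 0.005):.1f} log10 reductions at 150 C for 5 ms")
```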
Kinetic energy budget during strong jet stream activity over the eastern United States
NASA Technical Reports Server (NTRS)
Fuelberg, H. E.; Scoggins, J. R.
1980-01-01
Kinetic energy budgets are computed during a cold air outbreak in association with strong jet stream activity over the eastern United States. The period is characterized by large generation of kinetic energy due to cross-contour flow. Horizontal export and dissipation of energy to subgrid scales of motion constitute the important energy sinks. Rawinsonde data at 3 and 6 h intervals during a 36 h period are used in the analysis and reveal that energy fluctuations on a time scale of less than 12 h are generally small even though the overall energy balance does change considerably during the period in conjunction with an upper level trough which moves through the region. An error analysis of the energy budget terms suggests that this major change in the budget is not due to random errors in the input data but is caused by the changing synoptic situation. The study illustrates the need to consider the time and space scales of associated weather phenomena in interpreting energy budgets obtained through use of higher frequency data.
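For reference, budgets of this kind are typically cast in a form like the following vertically integrated equation (a standard formulation with assumed notation, not quoted from the paper), where the term containing V·∇Φ is the cross-contour generation and D is the residual dissipation to subgrid scales:

```latex
% k: kinetic energy per unit mass, V: horizontal wind, omega: vertical
% p-velocity, Phi: geopotential, g: gravity, D: subgrid dissipation residual
\frac{\partial K}{\partial t}
  = -\nabla \cdot \int k\,\mathbf{V}\,\frac{dp}{g}
    - \int \frac{\partial (k\,\omega)}{\partial p}\,\frac{dp}{g}
    - \int \mathbf{V} \cdot \nabla \Phi\,\frac{dp}{g}
    + D
```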
NASA Astrophysics Data System (ADS)
Gray, A. B.
2017-12-01
Watersheds with sufficient monitoring data have been predominantly found to display nonstationary suspended sediment dynamics, whereby the relationship between suspended sediment concentration and discharge changes over time. Despite the importance of suspended sediment as a keystone of geophysical and biochemical processes, and as a primary mediator of water quality, stationary behavior remains largely assumed in the context of these applications. This study presents an investigation into the time-dependent behavior of small mountainous rivers draining the coastal ranges of the western continental US over interannual to interdecadal time scales. Of the 250+ small coastal (drainage area < 2×10⁴ km²) watersheds in this region, only 23 have discharge-associated suspended sediment concentration time series with base periods of 10 years or more. Event- to interdecadal-scale nonstationary suspended sediment dynamics were identified throughout these systems. Temporal patterns of nonstationary behavior provided some evidence for spatial coherence, which may be related to synoptic hydro-meteorological patterns and regional-scale changes in land use patterns. However, the results also highlight the complex, integrative nature of watershed-scale fluvial suspended sediment dynamics. This underscores the need for in-depth, forensic approaches to initial process identification, which require long-term, high-resolution monitoring efforts in order to adequately inform management. The societal implications of nonstationary sediment dynamics and their controls were further explored through the case of California, USA, where over 150 impairment listings have resulted in more than 50 sediment TMDLs, only 3 of which are flux based, and none of which account for nonstationary behavior.
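One common way to expose such nonstationarity is to fit the sediment rating curve C = aQ^b in moving windows and watch the coefficients drift; a hedged sketch on synthetic data (not the study's records):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3000
Q = rng.lognormal(2.0, 1.0, n)                    # discharge record
b_true = np.linspace(1.2, 0.7, n)                 # exponent drifting over time
C = 5.0 * Q**b_true * rng.lognormal(0.0, 0.3, n)  # concentration with noise

# log-log regression in non-overlapping windows: a drifting b flags
# a nonstationary concentration-discharge relationship
win = 500
for start in range(0, n, win):
    sl = slice(start, start + win)
    b, log_a = np.polyfit(np.log(Q[sl]), np.log(C[sl]), 1)
    print(f"samples {start:4d}-{start + win:4d}: b = {b:.2f}, a = {np.exp(log_a):.1f}")
```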
Memory Maintenance in Synapses with Calcium-Based Plasticity in the Presence of Background Activity
Higgins, David; Graupner, Michael; Brunel, Nicolas
2014-01-01
Most models of learning and memory assume that memories are maintained in neuronal circuits by persistent synaptic modifications induced by specific patterns of pre- and postsynaptic activity. For this scenario to be viable, synaptic modifications must survive the ubiquitous ongoing activity present in neural circuits in vivo. In this paper, we investigate the time scales of memory maintenance in a calcium-based synaptic plasticity model that has been shown recently to be able to fit different experimental data-sets from hippocampal and neocortical preparations. We find that in the presence of background activity on the order of 1 Hz parameters that fit pyramidal layer 5 neocortical data lead to a very fast decay of synaptic efficacy, with time scales of minutes. We then identify two ways in which this memory time scale can be extended: (i) the extracellular calcium concentration in the experiments used to fit the model are larger than estimated concentrations in vivo. Lowering extracellular calcium concentration to in vivo levels leads to an increase in memory time scales of several orders of magnitude; (ii) adding a bistability mechanism so that each synapse has two stable states at sufficiently low background activity leads to a further boost in memory time scale, since memory decay is no longer described by an exponential decay from an initial state, but by an escape from a potential well. We argue that both features are expected to be present in synapses in vivo. These results are obtained first in a single synapse connecting two independent Poisson neurons, and then in simulations of a large network of excitatory and inhibitory integrate-and-fire neurons. Our results emphasise the need for studying plasticity at physiological extracellular calcium concentration, and highlight the role of synaptic bi- or multistability in the stability of learned synaptic structures. PMID:25275319
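The two decay modes contrasted above, exponential forgetting versus escape from a potential well, can be illustrated with a toy Langevin simulation; everything below is an illustrative caricature, not the authors' calcium-based model:

```python
import numpy as np

rng = np.random.default_rng(6)
dt, t_max, sigma = 0.01, 1000.0, 0.3
n_steps = int(t_max / dt)

def first_passage(drift, w0=1.0, threshold=0.4):
    """Time for the synaptic weight w to first fall below threshold under
    dw = drift(w) dt + sigma dW; capped at t_max."""
    w = w0
    for i in range(n_steps):
        w += drift(w) * dt + sigma * np.sqrt(dt) * rng.normal()
        if w < threshold:
            return i * dt
    return t_max

# exponential relaxation toward 0 versus a double well with minima at 0 and 1
# (drift = -U'(w) with U = 4 w^2 (w - 1)^2, barrier at w = 0.5)
tau_exp = np.median([first_passage(lambda w: -w / 5.0) for _ in range(10)])
tau_well = np.median([first_passage(lambda w: -8.0 * w * (w - 1.0) * (2.0 * w - 1.0))
                      for _ in range(10)])
print(f"exponential forgetting: ~{tau_exp:.0f}   bistable escape: ~{tau_well:.0f}")
```

The escape time grows roughly exponentially with barrier height over noise variance, which is why adding bistability boosts memory time scales far beyond simple relaxation.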
A new large-scale manufacturing platform for complex biopharmaceuticals.
Vogel, Jens H; Nguyen, Huong; Giovannini, Roberto; Ignowski, Jolene; Garger, Steve; Salgotra, Anil; Tom, Jennifer
2012-12-01
Complex biopharmaceuticals, such as recombinant blood coagulation factors, are addressing critical medical needs and represent a growing multibillion-dollar market. For commercial manufacturing of such sometimes inherently unstable molecules, it is important to minimize product residence time in non-ideal milieu in order to obtain acceptable yields and consistently high product quality. Continuous perfusion cell culture allows minimization of residence time in the bioreactor, but also brings unique challenges in product recovery, which require innovative solutions. In order to maximize yield, process efficiency, and facility and equipment utilization, we have developed, scaled up and successfully implemented a new integrated manufacturing platform at commercial scale. This platform consists of a (semi-)continuous cell separation process based on a disposable flow path and integrated with the upstream perfusion operation, followed by membrane chromatography on large-scale adsorber capsules in rapid cycling mode. Implementation of the platform at commercial scale for a new product candidate led to a yield improvement of 40% compared to the conventional process technology, while product quality has been shown to be more consistently high. Over 1,000,000 L of cell culture harvest have been processed with a 100% success rate to date, demonstrating the robustness of the new platform process in GMP manufacturing. While membrane chromatography is well established for polishing in flow-through mode, this is its first commercial-scale application for bind/elute chromatography in the biopharmaceutical industry, and it demonstrates its potential in particular for manufacturing of potent, low-dose biopharmaceuticals. Copyright © 2012 Wiley Periodicals, Inc.
Moreno-Murcia, Juan A; Martínez-Galindo, Celestina; Moreno-Pérez, Víctor; Marcos, Pablo J.; Borges, Fernanda
2012-01-01
This study aimed to cross-validate the psychometric properties of the Basic Psychological Needs in Exercise Scale (BPNES) by Vlachopoulos and Michailidou (2006) in a Spanish context. Two studies were conducted. Confirmatory factor analysis results confirmed the hypothesized three-factor solution. In addition, we documented evidence of reliability, analysed as internal consistency and temporal stability. Future studies should analyse the scale's validity and reliability with different populations and check its experimental effect. Key points: The Basic Psychological Needs in Exercise Scale (BPNES) is valid and reliable for measuring basic psychological needs in healthy physical exercise in the Spanish context. The factor structure of three correlated factors has shown minimal invariance across gender. PMID:24149130
Janeslätt, Gunnel; Lindstedt, Helena; Adolfsson, Päivi
2015-01-01
To describe daily time management in adults with and without mental disability, to examine differences in their level of daily time management, and to describe the possession and use of electronic planning devices (EPDs) in activities and how environmental factors influence the use of EPDs in adults with mental disability. In a descriptive and cross-sectional design, 32 participants using EPDs and a matched comparison group of 32 healthy adults were recruited. The Time-S self-rating scale measuring daily time management was adapted for adults. A study-specific questionnaire was applied to collect data on five ICF environmental factors. Rasch modelling and descriptive and non-parametric statistics were applied. Time-S has acceptable psychometric properties for use with adults with mental disability. People with mental disability and a low level of daily time management who use advanced EPDs are more influenced by environmental factors. The study group perceived that encouragement and support from professionals, as well as services, influence their use of EPDs. Time-S can safely be used for people with mental disability. EPDs do not fully compensate for the needs of the target group. Prescribers need to give consideration to this, and therefore they should be provided with more knowledge about this matter. Implications for Rehabilitation: The Time-S can be applied for measuring daily time management in adults. Adults with mental disability provided with EPDs are not fully compensated in daily time management. Professional support and encouragement, as well as backing from the services, are important factors for the use of EPDs. Because smart phones are not prescribed as assistive technology, the need for help from professionals to facilitate daily life is stressed. Therefore, the professionals should be provided with more knowledge about the use of EPDs.
Mapping Brazilian savanna vegetation gradients with Landsat time series
NASA Astrophysics Data System (ADS)
Schwieder, Marcel; Leitão, Pedro J.; da Cunha Bustamante, Mercedes Maria; Ferreira, Laerte Guimarães; Rabe, Andreas; Hostert, Patrick
2016-10-01
Global change has tremendous impacts on savanna systems around the world. Processes related to climate change or agricultural expansion threaten the ecosystem's state, function and the services it provides. A prominent example is the Brazilian Cerrado, which has an extent of around 2 million km² and features high biodiversity with many endemic species. It is characterized by landscape patterns from open grasslands to dense forests, defining a heterogeneous gradient in vegetation structure throughout the biome. While it is undisputed that the Cerrado provides a multitude of valuable ecosystem services, it is exposed to changes, e.g. through large-scale land conversions or climatic changes. Monitoring of the Cerrado is thus urgently needed to assess the state of the system as well as to analyze and further understand ecosystem responses and adaptations to ongoing changes. We therefore explored the potential of dense Landsat time series to derive phenological information for mapping vegetation gradients in the Cerrado. Frequent data gaps, e.g. due to cloud contamination, impose a serious challenge for such time series analyses. We synthetically filled data gaps based on Radial Basis Function convolution filters to derive continuous pixel-wise temporal profiles capable of representing Land Surface Phenology (LSP). Derived phenological parameters revealed differences in the seasonal cycle between the main Cerrado physiognomies and could thus be used to calibrate a Support Vector Classification model to map their spatial distribution. Our results show that it is possible to map the main spatial patterns of the observed physiognomies based on their phenological differences, although inaccuracies occurred especially between similar classes and in data-scarce areas. The outcome emphasizes the need for remote sensing based time series analyses at fine scales. Mapping heterogeneous ecosystems such as savannas requires spatial detail, as well as the ability to derive important phenological parameters for monitoring habitats or ecosystem responses to climate change. The open Landsat and Sentinel-2 archives provide the satellite data needed for improved analyses of savanna ecosystems globally.
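Gap filling with a Radial Basis Function convolution filter amounts to a distance-weighted mean of the valid observations; a minimal single-kernel sketch (the published approach may combine several kernel widths, which is omitted here, and all numbers are illustrative):

```python
import numpy as np

def rbf_fill(t_obs, y_obs, t_out, width):
    """Gaussian-kernel convolution filter: each output time gets the
    distance-weighted mean of all valid (finite) observations."""
    t_obs, y_obs = np.asarray(t_obs, float), np.asarray(y_obs, float)
    ok = np.isfinite(y_obs)
    t_ok, y_ok = t_obs[ok], y_obs[ok]
    w = np.exp(-0.5 * ((t_out[:, None] - t_ok[None, :]) / width) ** 2)
    return (w @ y_ok) / w.sum(axis=1)

# sparse, cloud-gapped samples of a seasonal signal, filled to a daily grid
rng = np.random.default_rng(4)
t = np.sort(rng.choice(365, size=40, replace=False)).astype(float)
y = np.sin(2 * np.pi * t / 365.0) + 0.1 * rng.normal(size=t.size)
grid = np.arange(365.0)
filled = rbf_fill(t, y, grid, width=15.0)
print(f"reconstructed {filled.size} daily values from {t.size} samples")
```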
Tegel, Hanna; Yderland, Louise; Boström, Tove; Eriksson, Cecilia; Ukkonen, Kaisa; Vasala, Antti; Neubauer, Peter; Ottosson, Jenny; Hober, Sophia
2011-08-01
Protein production and analysis in a parallel fashion is today applied in laboratories worldwide, and there is a great need to improve the techniques and systems used for this purpose. In order to save time and money, a fast and reliable screening method for the analysis of protein production, and also for verification of the protein product, is desired. Here, a micro-scale protocol for the parallel production and screening of 96 proteins in plate format is described. Protein capture was achieved using immobilized metal affinity chromatography, and the product was verified using matrix-assisted laser desorption ionization time-of-flight MS. In order to obtain sufficiently high cell densities and product yields in the small-volume cultivations, the EnBase® cultivation technology was applied, which enables cultivation in volumes as small as 150 μL. Here, the efficiency of the method is demonstrated by producing 96 human recombinant proteins, both in micro-scale and using a standard full-scale protocol, and comparing the results with regard to both protein identity and sample purity. The results obtained are highly comparable to those acquired through standard full-scale purification protocols, thus validating this method as a successful initial screening step before protein production at a larger scale. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Imaging of forced-imbibition in carbonate rocks using synchrotron X-ray micro-tomography
NASA Astrophysics Data System (ADS)
Singh, K.; Menke, H. P.; Andrew, M. G.; Lin, Q.; Saif, T.; Al-Khulaifi, Y.; Reynolds, C. A.; Bijeljic, B.; Rau, C.; Blunt, M. J.
2016-12-01
We have investigated the pore-scale behavior of brine-oil systems and oil trapping during forced imbibition in a water-wet carbonate rock in a capillary-dominated flow regime at reservoir pressure conditions. To capture the dynamics of the brine-oil front progression and the snap-off process, real-time tomograms with a time resolution of 38 s (24 s for imaging and 14 s for recording the data) and a spatial resolution of 3.28 µm were acquired at the Diamond Light Source (UK). The data were first analyzed at the global scale (the complete imaged rock) for overall front behavior. From the saturation profiles, we obtained the location of the tail of the desaturation front, which progresses with a velocity of 13 µm/min. This velocity is smaller than the average flow velocity of 16.88 µm/min, which explains why slightly more than 1 pore volume of brine injection is needed to reach the residual oil saturation in a water-wet rock. The data were further analyzed at the local scale to investigate the pore-scale mechanisms of oil trapping during brine flooding. We isolated various trapping events which resulted in the creation of discrete oil ganglia occupying one to several pore bodies. We performed pore-scale curvature analysis of the brine-oil interfaces to obtain the local capillary pressure, which can be related to the shape and size of the throats in which ganglia were trapped.
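The curvature-to-capillary-pressure step rests on the Young-Laplace relation; a minimal sketch under a spherical-cap interface assumption, with an illustrative interfacial tension (the study's measured values are not reproduced here):

```python
import numpy as np

def capillary_pressure(mean_curvature, ift=0.025):
    """Young-Laplace: Pc = 2 * sigma * kappa, where kappa is the mean
    curvature (1/m) measured on the brine-oil interface. The interfacial
    tension `ift` (N/m) is an illustrative value, not from the study."""
    return 2.0 * ift * mean_curvature

# A ganglion trapped in a throat of effective radius r has kappa ~ 1/r:
r_throat = 10e-6                              # 10 micrometres
print(capillary_pressure(1.0 / r_throat))     # ~5000 Pa
```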
L-band Soil Moisture Mapping using Small UnManned Aerial Systems
NASA Astrophysics Data System (ADS)
Dai, E.
2015-12-01
Soil moisture is of fundamental importance to many hydrological, biological and biogeochemical processes, plays an important role in the development and evolution of convective weather and precipitation, and impacts water resource management, agriculture, and flood runoff prediction. The launch of NASA's Soil Moisture Active/Passive (SMAP) mission in 2015 promises to provide global measurements of soil moisture and surface freeze/thaw state at fixed crossing times and spatial resolutions as low as 5 km for some products. However, there exists a need for measurements of soil moisture on smaller spatial scales and at arbitrary diurnal times for SMAP validation, precision agriculture, and evaporation and transpiration studies of boundary layer heat transport. The Lobe Differencing Correlation Radiometer (LDCR) provides a means of mapping soil moisture on spatial scales as small as several meters (i.e., the height of the platform). Compared with other proposed validation methods based on either in situ measurements [1,2] or existing airborne sensors suitable for manned aircraft deployment [3], the integrated design of the LDCR on a lightweight small UAS (sUAS) is capable of providing sub-watershed (~km scale) coverage at very high spatial resolution (~15 m) suitable for scaling studies, and at comparatively low operator cost. The LDCR on the Tempest sUAS can supply soil moisture mapping at different resolutions, of the order of the Tempest's altitude.
EDITORIAL: Special issue on time scale algorithms
NASA Astrophysics Data System (ADS)
Matsakis, Demetrios; Tavella, Patrizia
2008-12-01
This special issue of Metrologia presents selected papers from the Fifth International Time Scale Algorithm Symposium (VITSAS), including some of the tutorials presented on the first day. The symposium was attended by 76 persons, from every continent except Antarctica, by students as well as senior scientists, and hosted by the Real Instituto y Observatorio de la Armada (ROA) in San Fernando, Spain, whose staff further enhanced their nation's high reputation for hospitality. Although a timescale can be simply defined as a weighted average of clocks, whose purpose is to measure time better than any individual clock, timescale theory has long been and continues to be a vibrant field of research that has both followed and helped to create advances in the art of timekeeping. There is no perfect timescale algorithm, because every one embodies a compromise involving user needs. Some users wish to generate a constant frequency, perhaps not necessarily one that is well-defined with respect to the definition of a second. Other users might want a clock which is as close to UTC or a particular reference clock as possible, or perhaps wish to minimize the maximum variation from that standard. In contrast to the steered timescales that would be required by those users, other users may need free-running timescales, which are independent of external information. While no algorithm can meet all these needs, every algorithm can benefit from some form of tuning. The optimal tuning, and even the optimal algorithm, can depend on the noise characteristics of the frequency standards, or of their comparison systems, the most precise and accurate of which are currently Two Way Satellite Time and Frequency Transfer (TWSTFT) and GPS carrier phase time transfer. Interest in time scale algorithms and their associated statistical methodology began around 40 years ago, when the Allan variance appeared and when the metrological institutions started realizing ensemble atomic time using more than one atomic clock. An international symposium dedicated to these topics was initiated in 1972 as the first International Symposium on Atomic Time Scale Algorithms, the beginning of a series:
1st Symposium: organized at the NIST (NBS at that epoch) in 1972;
2nd Symposium: again at the NIST, in 1982;
3rd Symposium: in Italy at the INRIM (IEN at that epoch) in 1988;
4th Symposium: in Paris at the BIPM in 2002 (see Metrologia 40 (3), 2003);
5th Symposium: in San Fernando, Spain, at the ROA in 2008.
The early symposia were concerned with establishing the basics of how to estimate and characterize the behavior of an atomic frequency standard in an unambiguous and clearly identifiable way, and how to combine the readings of different clocks to form an optimal time scale within a laboratory. Later, as atomic frequency standards began to be used as components in larger systems, interest grew in understanding the impact of a clock in a more complex environment. For example, the use of clocks in telecommunication networks in a Synchronous Digital Hierarchy created a need to measure the maximum time error spanned by a clock in a certain interval. Timekeeping metrologists became interested in estimating time deviations and time stability, so they had to find ways to convert their common frequency characteristics to time characteristics.
Tests of fundamental physics provided a motivation for launching atomic frequency standards into space in long-lasting missions, whose high-precision measurements might be available for only a few hours a day, yielding a series of clock data with many gaps and outliers for which a suitable statistical analysis was necessary to extract as much information as possible from the data. In the 21st century, the field has been transformed by the advent of atomic-clock-based Global Navigation Satellite Systems (GNSS), the steady increase in precision brought about by rapidly improving clocks and measurement systems, and the growing number of relatively inexpensive small clock ensembles. Although technological transformations have raised the intensity and changed the details of the debates, the VITSAS conference showed that even the issues raised by the early symposia are still current. This selection of papers encompasses the full breadth of the VITSAS, including tutorials, laboratory-specific innovations and practices, GNSS applications, UTC generation, TWSTFT applications, GPS applications, small-ensemble applications, robust algorithms, and statistical measures that are either robust themselves or which reflect nonstationarity and robustness characteristics of the clocks. The Editors of this special issue of Metrologia would like to express their thanks to the referees of the papers published here for all their hard work, to Drs Juan Palacio and Javier Galindo and the people of the ROA, and to all the attendees for the excellent symposium they have created.
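Since the editorial credits the Allan variance with starting the field's statistical methodology, a minimal sketch of the standard overlapping Allan deviation estimator from phase data may be useful; the synthetic white-frequency-noise clock is illustrative, and real analyses would typically use a dedicated library such as allantools:

```python
import numpy as np

def overlapping_adev(phase, tau0, m):
    """Overlapping Allan deviation from phase data `phase` (seconds),
    sampled every `tau0` seconds, at averaging factor `m`."""
    x = np.asarray(phase, float)
    n = x.size
    tau = m * tau0
    # second differences of phase at lag m, fully overlapped
    d = x[2 * m:] - 2 * x[m:n - m] + x[:n - 2 * m]
    avar = np.sum(d ** 2) / (2.0 * (n - 2 * m) * tau ** 2)
    return np.sqrt(avar)

# White-FM clock noise: ADEV should fall roughly as tau^(-1/2)
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(0, 1e-11, 100_000))   # phase from white frequency noise
for m in (1, 10, 100):
    print(m, overlapping_adev(x, 1.0, m))
```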
NASA Astrophysics Data System (ADS)
Blume, Theresa; Weiler, Markus; Angermann, Lisa; Beiter, Daniel; Hassler, Sibylle; Kaplan, Nils; Lieder, Ernestine; Sprenger, Matthias
2017-04-01
Sustainable water resources management needs to be based on sound process understanding. This is especially true in a changing world, where boundary conditions change and models calibrated to the status quo are no longer helpful. There is general agreement in the hydrologic community that we need better process understanding and that one of the most promising ways to achieve this is by using nested experimental designs that cover a range of scales. In the study presented here, we argue that while we might be able to investigate a certain process at a plot or hillslope in detail, the real power for advancing our understanding lies in site intercomparison and, where possible, knowledge transfer and generalization. The experimental design of the CAOS observatory is based on sensor clusters measuring ground-, soil- and stream water, sap flow and climate variables in 45 hydrological functional units, which were chosen from a matrix of site characteristics (geology, land use, hillslope aspect, and topographic position). This design allows for site intercomparisons that are based on more than one member per class and thus not only characterizes between-class differences but also attempts to identify within-class variability. These distributed plot-scale investigations offer a large amount of information on plot-scale processes and their variability in space and time (e.g. water storage dynamics and patterns, vertical flow processes and vadose zone transit times, transpiration dynamics and patterns). However, if we want to improve our understanding of runoff generation (and thus also of nutrient and contaminant transport and export to the stream), we also need to understand how these plots link up within hillslopes and how and when these hillslopes are connected to the stream. This, again, is most helpful if we do not focus on single sites but pursue experimental designs that aim at intercomparison and generalization. At the same time, the investigation of hillslope-stream connectivity is extremely challenging, because the involved processes vary strongly in all four dimensions and most of them are hidden from view in the subsurface. To tackle this challenge we employed a number of different field methods, ranging from hillslope-scale irrigation and flow-through experiments, to in-depth analyses of near-stream piezometer responses and stream-reach tracer experiments, and on to the mesoscale catchment, with network-wide investigations of spatial patterns of stream temperature and electrical conductivity as well as of the expansion and shrinkage of the network itself. In this presentation we provide an overview of the rationale, approach, experimental design and ongoing work, the challenges we encountered, and a synthesis of exemplary results.
An Integrated Knowledge Framework to Characterize and Scaffold Size and Scale Cognition (FS2C)
NASA Astrophysics Data System (ADS)
Magana, Alejandra J.; Brophy, Sean P.; Bryan, Lynn A.
2012-09-01
Size and scale cognition is a critical ability associated with reasoning about concepts in different disciplines of science, technology, engineering, and mathematics. As such, researchers and educators have identified the need for young learners and their educators to become scale-literate. Informed by the developmental psychology literature and recent findings in nanoscale science and engineering education, we propose an integrated knowledge framework for characterizing and scaffolding size and scale cognition, called the FS2C framework. Five ad hoc assessment tasks were designed, informed by the FS2C framework, with the goal of identifying participants' understandings of size and scale. Findings revealed participants' difficulties in discerning different sizes of microscale and nanoscale objects, and a low level of sophistication in identifying scale worlds. Results also indicated that the larger the difference between the sizes of two objects, the more difficult it was for participants to identify how many times one object is bigger or smaller than the other. Similarly, participants showed difficulty in estimating the approximate sizes of sub-macroscopic objects as well as the sizes of very large objects. Accurately locating objects on a logarithmic scale was also challenging for participants.
NASA Astrophysics Data System (ADS)
Yang, X.; Scheibe, T. D.; Chen, X.; Hammond, G. E.; Song, X.
2015-12-01
The zone in which river water and groundwater mix plays an important role in natural ecosystems, as it regulates the mixing of nutrients that control biogeochemical transformations. Subsurface heterogeneity leads to local hotspots of microbial activity that are important to system function yet difficult to resolve computationally. To address this challenge, we are testing a hybrid multiscale approach that couples models at two distinct scales, based on field research at the U.S. Department of Energy's Hanford Site. The region of interest is a 400 x 400 x 20 m macroscale domain that intersects the aquifer and the river and contains a contaminant plume. However, biogeochemical activity is high in a thin zone (a mud layer, <1 m thick) immediately adjacent to the river. This microscale domain is highly heterogeneous and requires fine spatial resolution to adequately represent the effects of local mixing on reactions. It is not computationally feasible to resolve the full macroscale domain at the fine resolution needed in the mud layer, and the reaction network needed in the mud layer is much more complex than that needed in the rest of the macroscale domain. Hence, a hybrid multiscale approach is used to efficiently and accurately predict flow and reactive transport at both scales. In our simulations, models at both scales are run using the PFLOTRAN code. Multiple microscale simulations in dynamically defined sub-domains (fine resolution, complex reaction network) are executed and coupled with a macroscale simulation over the entire domain (coarse resolution, simpler reaction network). The objectives of the research include: 1) comparing the accuracy and computing cost of the hybrid multiscale simulation with a single-scale simulation; 2) identifying hot spots of microbial activity; and 3) defining macroscopic quantities such as fluxes, residence times and effective reaction rates.
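As a conceptual toy of the hybrid coupling described above (not the actual PFLOTRAN driver), the sketch below refines one coarse "hot spot" cell, applies a stiffer nonlinear reaction there at fine resolution, and passes the upscaled result back to the coarse grid; all numbers are illustrative:

```python
import numpy as np

nx, dx, dt, u = 40, 10.0, 1.0, 1.0          # coarse 1-D grid, advection speed
c = np.zeros(nx); c[0] = 1.0                # concentration, fixed inlet
hot = 20                                    # index of the refined subdomain
sub = np.zeros(8)                           # 8 fine cells inside the hot cell

for step in range(200):
    # macroscale: coarse upwind advection over the whole domain
    c[1:] -= u * dt / dx * (c[1:] - c[:-1])
    # microscale: refine the hot cell, apply a nonlinear (Monod-like) sink
    sub[:] = c[hot]                         # boundary condition from coarse model
    sub -= dt * 0.5 * sub / (0.1 + sub)     # fine-scale reaction network stand-in
    # upscale: replace the coarse value by the fine-cell average
    c[hot] = sub.mean()

print(c.round(3))
```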
Variability in vegetation effects on density and nesting success of grassland birds
Winter, Maiken; Johnson, Douglas H.; Shaffer, Jill A.
2005-01-01
The structure of vegetation in grassland systems, unlike that in forest systems, varies dramatically among years on the same sites, and among regions with similar vegetation. The influence of this variation in vegetation structure on the density and nesting success of grassland birds is poorly understood, primarily because few studies have included sufficiently large temporal and spatial scales to capture the variation in vegetation structure, bird density, or nesting success. To date, no large-scale study on grassland birds has investigated whether grassland bird density and nesting success respond similarly to changes in vegetation structure. However, reliable management recommendations require investigations into the distribution and nesting success of grassland birds over larger temporal and spatial scales. In addition, studies need to examine whether bird density and nesting success respond similarly to changing environmental conditions. We investigated the effect of vegetation structure on the density and nesting success of 3 grassland-nesting birds, clay-colored sparrow (Spizella pallida), Savannah sparrow (Passerculus sandwichensis), and bobolink (Dolichonyx oryzivorus), in 3 regions of the northern tallgrass prairie in 1998-2001. Few vegetation features influenced the densities of our study species, and each species responded differently to those vegetation variables. We could identify only 1 variable that clearly influenced nesting success of 1 species: clay-colored sparrow nesting success increased with an increasing percentage of nest cover from the surrounding vegetation. Because responses of avian density and nesting success to vegetation measures varied among regions, years, and species, land managers need to provide grasslands with different types of vegetation structure at all times. Management guidelines developed from small-scale, short-term studies may misrepresent the needs of grassland-nesting birds.
Adaptive LES Methodology for Turbulent Flow Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oleg V. Vasilyev
2008-06-12
Although turbulent flows are common in the world around us, a solution to the fundamental equations that govern turbulence still eludes the scientific community. Turbulence has often been called one of the last unsolved problems in classical physics, yet it is clear that the need to accurately predict the effect of turbulent flows impacts virtually every field of science and engineering. As an example, a critical step in making modern computational tools useful in designing aircraft is to be able to accurately predict the lift, drag, and other aerodynamic characteristics in numerical simulations in a reasonable amount of time. Simulations that take months to years to complete are much less useful to the design cycle. Much work has been done toward this goal (Lee-Rausch et al. 2003, Jameson 2003) and as cost-effective, accurate tools for simulating turbulent flows evolve, we will all benefit from new scientific and engineering breakthroughs. The problem of simulating high Reynolds number (Re) turbulent flows of engineering and scientific interest would have been solved with the advent of Direct Numerical Simulation (DNS) techniques if unlimited computing power, memory, and time could be applied to each particular problem. Yet, given the current and near-future computational resources that exist and a reasonable limit on the amount of time an engineer or scientist can wait for a result, the DNS technique will not be useful for more than 'unit' problems for the foreseeable future (Moin & Kim 1997, Jimenez & Moin 1991). The high computational cost of DNS of three-dimensional turbulent flows results from the fact that they have eddies of significant energy in a range of scales from the characteristic length scale of the flow all the way down to the Kolmogorov length scale. The actual cost of doing a three-dimensional DNS scales as Re^(9/4) due to the large disparity in scales that need to be fully resolved. State-of-the-art DNS calculations of isotropic turbulence have recently been completed at the Japanese Earth Simulator (Yokokawa et al. 2002, Kaneda et al. 2003) using a resolution of 4096^3 (approximately 10^11) grid points with a Taylor-scale Reynolds number of 1217 (Re ~ 10^6). Impressive as these calculations are, performed on one of the world's fastest supercomputers, more brute computational power would be needed to simulate the flow over the fuselage of a commercial aircraft at cruising speed. Such a calculation would require on the order of 10^16 grid points and would have a Reynolds number in the range of 10^8. It would take several thousand years to simulate one minute of flight time on today's fastest supercomputers (Moin & Kim 1997). Even using state-of-the-art zonal approaches, which allow DNS calculations that resolve the necessary range of scales within predefined 'zones' in the flow domain, this calculation would take far too long for the result to be of engineering interest when it is finally obtained. Since computing power, memory, and time are all scarce resources, the problem of simulating turbulent flows has become one of how to abstract or simplify the complexity of the physics represented in the full Navier-Stokes (NS) equations in such a way that the 'important' physics of the problem is captured at a lower cost. To do this, a portion of the modes of the turbulent flow field needs to be approximated by a low-order model that is cheaper than the full NS calculation. This model can then be used along with a numerical simulation of the 'important' modes of the problem that cannot be well represented by the model. The decision of what part of the physics to model, and what kind of model to use, has to be based on what physical properties are considered 'important' for the problem. It should be noted that 'nothing is free', so any use of a low-order model will by definition lose some information about the original flow.
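A back-of-envelope check of the quoted scaling, calibrating the prefactor in N ~ Re^(9/4) to the Earth Simulator run and extrapolating to an aircraft-scale Reynolds number; the prefactor is implied by the reference run, not taken from the report:

```python
# N_points ~ Re^(9/4); calibrate to the Earth Simulator benchmark
# (Re ~ 1e6, ~1e11 grid points) and extrapolate to Re ~ 1e8.
Re_ref, N_ref = 1e6, 1e11
k = N_ref / Re_ref ** 2.25            # prefactor implied by the reference run
print(f"{k * (1e8) ** 2.25:.1e}")     # ~3e15 points, consistent with the
                                      # order-10^16 estimate quoted above
```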
Richardson, J K; Sandman, D; Vela, S
2001-02-01
To determine the effect of a specific exercise regimen on clinical measures of postural stability and confidence in a population with peripheral neuropathy (PN). Prospective, controlled, single-blind study. Outpatient clinic of a university hospital. Twenty subjects with diabetes mellitus and electrodiagnostically confirmed PN. Ten subjects underwent a 3-week intervention exercise regimen designed to increase rapidly available distal strength and balance. The other 10 subjects performed a control exercise regimen. Outcome measures were unipedal stance time, functional reach, tandem stance time, and score on the Activities-specific Balance Confidence (ABC) scale. The intervention subjects, but not the control subjects, showed significant improvement in all 3 clinical measures of balance and nonsignificant improvement on the ABC scale. A brief, specific exercise regimen improved clinical measures of balance in patients with diabetic PN. Further studies are needed to determine whether this result translates into a lower fall frequency in this high-risk population.
Global patterns of phytoplankton dynamics in coastal ecosystems
Paerl, H.; Yin, Kedong; Cloern, J.
2011-01-01
Scientific Committee on Ocean Research Working Group 137 Meeting; Hangzhou, China, 17-21 October 2010. Phytoplankton biomass and community structure have undergone dramatic changes in coastal ecosystems over the past several decades in response to climate variability and human disturbance. These changes have short- and long-term impacts on global carbon and nutrient cycling, food web structure and productivity, and coastal ecosystem services. There is a need to identify the underlying processes and measure the rates at which they alter coastal ecosystems on a global scale. Hence, the Scientific Committee on Ocean Research (SCOR) formed Working Group 137 (WG 137), "Global Patterns of Phytoplankton Dynamics in Coastal Ecosystems: A Comparative Analysis of Time Series Observations" (http://wg137.net/). This group evolved from a 2007 AGU-sponsored Chapman Conference entitled "Long Time-Series Observations in Coastal Ecosystems: Comparative Analyses of Phytoplankton Dynamics on Regional to Global Scales."
Optimally stopped variational quantum algorithms
NASA Astrophysics Data System (ADS)
Vinci, Walter; Shabani, Alireza
2018-04-01
Quantum processors promise a paradigm shift in high-performance computing that needs to be assessed by accurate benchmarking measures. In this article, we introduce a benchmark for the variational quantum algorithm (VQA), recently proposed as a heuristic algorithm for small-scale quantum processors. In VQA, a classical optimization algorithm guides the processor's quantum dynamics to yield the best solution for a given problem. A complete assessment of the scalability and competitiveness of VQA should take into account both the quality and the time of dynamics optimization. The method of optimal stopping, employed here, provides such an assessment by explicitly including time as a cost factor. Here, we showcase this measure for benchmarking VQA as a solver for quadratic unconstrained binary optimization (QUBO) problems. Moreover, we show that a better choice for the cost function of the classical routine can significantly improve the performance of VQA and even improve its scaling properties.
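A minimal sketch of the optimal-stopping idea of charging for time: stop iterating once the expected one-step improvement falls below the per-iteration time price. The cost model and the synthetic improvement curve are assumptions for illustration, not the authors' formulation:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 0.02                                # price of one iteration's wall time
best = np.inf
for t in range(1, 200):
    e = rng.normal(1.0 / t, 0.05)         # noisy, improving objective values
    best = min(best, e)
    # one-step-lookahead rule: stop when the expected improvement of the
    # assumed 1/t quality curve no longer pays for the extra iteration
    if 1.0 / t - 1.0 / (t + 1) < lam:
        break

total_cost = best + lam * t               # quality-plus-time figure of merit
print(t, best, total_cost)
```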
Global Change: A Biogeochemical Perspective
NASA Technical Reports Server (NTRS)
Mcelroy, M.
1983-01-01
A research program designed to enhance our understanding of the Earth as the support system for life is described. The program studies change, both natural and anthropogenic, that might affect the habitability of the planet on a time scale roughly equal to that of a human life. On this time scale the atmosphere, biosphere, and upper ocean are treated as a single coupled system. The need for understanding the processes affecting the distribution of essential nutrients (carbon, nitrogen, phosphorus, sulfur, and water) within this coupled system is examined. The importance of subtle interactions among chemical, biological, and physical effects is emphasized. The specific objectives are to define the present state of the planetary life-support system; to elucidate the underlying physical, chemical, and biological controls; and to provide the body of knowledge required to assess changes that might impact the future habitability of the Earth.
Observational data needs for plasma phenomena
NASA Technical Reports Server (NTRS)
Niedner, M. B., Jr.
1981-01-01
Bright comets display a rich variety of interesting plasma phenomena which occur over an enormous range of spatial scales, and which require different observational techniques to be studied effectively. Wide-angle photography of high time resolution is probably the best method of studying the phenomenon of largest known scale: the plasma tail disconnection event (DE), which has been attributed to magnetic reconnection at interplanetary sector boundary crossings. These structures usually accelerate as they recede from the head region, with observed velocities typically of the order of 50 km/s. They are often visible for several days following the time of disconnection, and are sometimes seen out past 0.2 AU from the cometary head. The following areas pertaining to plasma phenomena in the ionosphere are addressed: the existence, size, and heliocentric distance variations of the contact surface, and the observational signatures of magnetic reconnection at sector boundary crossings.
NASA Technical Reports Server (NTRS)
Frouin, Robert
1993-01-01
Current satellite algorithms to estimate photosynthetically available radiation (PAR) at the earth's surface are reviewed. PAR is deduced either from an insolation estimate or obtained directly from top-of-atmosphere solar radiances. The characteristics of both approaches are contrasted and typical results are presented. The inaccuracies reported, about 10 percent and 6 percent on daily and monthly time scales, respectively, are useful for modeling oceanic and terrestrial primary productivity. At those time scales, variability due to clouds in the ratio of PAR to insolation is reduced, making it possible to deduce PAR directly from insolation climatologies (satellite or other) that are currently available or being produced. Improvements, however, are needed in conditions of broken cloudiness and over ice/snow. If not addressed properly, calibration/validation issues may prevent quantitative use of the PAR estimates in studies of climatic change. The prospects are good for an accurate, long-term climatology of PAR over the globe.
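A minimal sketch of the insolation-based route, assuming a near-constant PAR fraction; the 0.45-0.50 range is a typical literature value, not a number from this review:

```python
def par_from_insolation(insolation_wm2, par_fraction=0.47):
    """Return PAR (W/m^2) from broadband surface insolation (W/m^2).
    The par_fraction default is a typical literature value; on daily and
    monthly time scales the ratio is nearly constant, as argued above."""
    return par_fraction * insolation_wm2

print(par_from_insolation(250.0))   # ~117 W/m^2 for a typical daily mean
```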
Measuring Maslow's hierarchy of needs.
Lester, David
2013-08-01
Two scales have been proposed to measure Maslow's hierarchy of needs in college students, one by Lester (1990) and one by Strong and Fiebert (1987). In a sample of 51 college students, scores on the corresponding scales for the five needs did not correlate significantly and positively, except for the measures of physiological needs. Furthermore, there was limited support for Maslow's hypothesis that need deprivation would predict psychopathology (specifically, mania and depression).
NASA Astrophysics Data System (ADS)
Vrieling, Anton; Hoedjes, Joost C. B.; van der Velde, Marijn
2015-04-01
Efforts to map and monitor soil erosion need to account for the erratic nature of the soil erosion process. Soil erosion by water occurs on sloped terrain when erosive rainfall and consequent surface runoff impact soils that are not well protected by vegetation or other soil-protective measures. Both rainfall erosivity and vegetation cover are highly variable through space and time. Due to data paucity and the relative ease of spatially overlaying geographical data layers in existing models like the USLE (Universal Soil Loss Equation), many studies and mapping efforts merely use average annual values for erosivity and vegetation cover as input. We first show that rainfall erosivity can be estimated from satellite precipitation data. We obtained average annual erosivity estimates from 15 years of 3-hourly TRMM Multi-satellite Precipitation Analysis (TMPA) data (1998-2012) using intensity-erosivity relationships. Our estimates showed a positive correlation (r = 0.84) with long-term annual erosivity values of 37 stations obtained from the literature. Using these TMPA erosivity retrievals, we demonstrate the large interannual variability, with maximum annual erosivity often exceeding two to three times the mean value, especially in semi-arid areas. We then calculate erosivity at a 10-day time step and combine this with vegetation cover development for selected locations in Africa, using NDVI (normalized difference vegetation index) time series from SPOT VEGETATION. Although we do not integrate the data at this point, the joint analysis of both variables stresses the need to account jointly for erosivity and vegetation cover in large-scale erosion assessment and monitoring.
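A sketch of applying an intensity-erosivity relationship to coarse 3-hourly rainfall, using a Brown-and-Foster-type unit kinetic energy equation; with 3-hourly data the peak-intensity term is only a crude stand-in for the usual I30, so real use requires calibration against stations, as the authors did:

```python
import numpy as np

def storm_erosivity(p_3hr_mm):
    """EI-style erosivity proxy from 3-hourly rainfall depths (mm)."""
    p = np.asarray(p_3hr_mm, float)
    i = p / 3.0                                    # mean intensity, mm/h
    e = 0.29 * (1.0 - 0.72 * np.exp(-0.05 * i))    # unit energy, MJ/(ha*mm)
    return np.sum(e * p) * i.max()                 # energy times peak intensity

print(storm_erosivity([0.0, 12.0, 30.0, 6.0, 0.0]))  # MJ*mm/(ha*h), illustrative
```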
Sub-seasonal-to-seasonal Reservoir Inflow Forecast using Bayesian Hierarchical Hidden Markov Model
NASA Astrophysics Data System (ADS)
Mukhopadhyay, S.; Arumugam, S.
2017-12-01
Sub-seasonal-to-seasonal (S2S) (15-90 days) streamflow forecasting is an emerging area of research that provides seamless information for reservoir operation from weather time scales to seasonal time scales. From an operational perspective, sub-seasonal inflow forecasts are highly valuable, as they enable water managers to decide short-term releases (15-30 days) while holding water for seasonal needs (e.g., irrigation and municipal supply) and meeting end-of-season target storage at a desired level. We propose a Bayesian Hierarchical Hidden Markov Model (BHHMM) to develop S2S inflow forecasts for the Tennessee Valley Area (TVA) reservoir system. Here, the hidden states are predicted by relevant indices that influence inflows at the S2S time scale. The hidden Markov model also captures both the spatial and temporal hierarchy in predictors that operate at the S2S time scale, with model parameters estimated as a posterior distribution in a Bayesian framework. We present our work in two steps, namely a single-site model and a multi-site model. For proof of concept, we consider inflows to Douglas Dam, Tennessee, in the single-site model. For the multi-site model, we consider reservoirs in the upper Tennessee valley. Streamflow forecasts are issued and updated continuously every day at the S2S time scale. We considered precipitation forecasts obtained from the NOAA Climate Forecast System (CFSv2) GCM as predictors for developing S2S streamflow forecasts, along with relevant indices for predicting hidden states. Spatial dependence of the inflow series of the reservoirs is also preserved in the multi-site model. To circumvent the non-normality of the data, we consider the HMM in a Generalized Linear Model setting. Skill of the proposed approach is tested using split-sample validation against a traditional multi-site canonical correlation model developed using the same set of predictors. From the posterior distribution of the inflow forecasts, we also highlight different system behavior under varied global- and local-scale climatic influences from the developed BHHMM.
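As a minimal, non-Bayesian stand-in for the hidden-state idea (the full BHHMM with climate-index predictors and a GLM observation model is not reproduced here), one can fit a two-state Gaussian HMM to log inflows and decode wet/dry regimes; hmmlearn and the synthetic inflows are assumptions of this sketch:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # assumed available; not the authors' code

rng = np.random.default_rng(0)
wet = rng.lognormal(3.0, 0.4, 300)     # synthetic wet-regime inflows
dry = rng.lognormal(2.2, 0.3, 300)     # synthetic dry-regime inflows
X = np.log(np.concatenate([wet, dry])).reshape(-1, 1)

hmm = GaussianHMM(n_components=2, covariance_type="diag", random_state=0)
hmm.fit(X)
states = hmm.predict(X)                # decoded hidden regimes per day
print(np.exp(hmm.means_.ravel()))      # regime-mean inflows
print(np.bincount(states))             # days assigned to each regime
```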
A strategic approach for Water Safety Plans implementation in Portugal.
Vieira, Jose M P
2011-03-01
Effective risk assessment and risk management approaches in public drinking water systems can benefit from a systematic process for hazards identification and effective management control based on the Water Safety Plan (WSP) concept. Good results from WSP development and implementation in a small number of Portuguese water utilities have shown that a more ambitious nationwide strategic approach to disseminate this methodology is needed. However, the establishment of strategic frameworks for systematic and organic scaling-up of WSP implementation at a national level requires major constraints to be overcome: lack of legislation and policies and the need for appropriate monitoring tools. This study presents a framework to inform future policy making by understanding the key constraints and needs related to institutional, organizational and research issues for WSP development and implementation in Portugal. This methodological contribution for WSP implementation can be replicated at a global scale. National health authorities and the Regulator may promote changes in legislation and policies. Independent global monitoring and benchmarking are adequate tools for measuring the progress over time and for comparing the performance of water utilities. Water utilities self-assessment must include performance improvement, operational monitoring and verification. Research and education and resources dissemination ensure knowledge acquisition and transfer.
Time Horizon and Social Scale in Communication
NASA Astrophysics Data System (ADS)
Krantz, D. H.
2010-12-01
In 2009 our center (CRED) published a first version of The Psychology of Climate Change Communication. In it, we attempted to summarize facts and concepts from psychological research that could help guide communication. While this work focused on climate change, most of the ideas are at least partly applicable for communication about a variety of natural hazards. Of the many examples in this guide, I mention three. Single-action bias is the human tendency to stop considering further actions that might be needed to deal with a given hazard, once a single action has been taken. Another example is the importance of group affiliation in motivating voluntary contributions to joint action. A third concerns the finding that group participation enhances understanding of probabilistic concepts and promotes action in the face of uncertainty. One current research direction, which goes beyond those included in the above publication, focuses on how time horizons arise in the thinking of individuals and groups, and how these time horizons might influence hazard preparedness. On the one hand, individuals sometimes appear impatient, organizations look for immediate results, and officials fail to look beyond the next election cycle. Yet under some laboratory conditions and in some subcultures, a longer time horizon is adopted. We are interested in how time horizon is influenced by group identity and by the very architecture of planning and decision making. Institutional changes, involving long-term contractual relationships among communities, developers, insurers, and governments, could greatly increase resilience in the face of natural hazards. Communication about hazards, in the context of such long-term contractual relationships might look very different from communication that is first initiated by immediate threat. Another new direction concerns the social scale of institutions and of communication about hazards. Traditionally, insurance contracts share risk among a large number of insurees: each contributes a small premium toward a fund that is adequate to cover the large losses that occasionally occur. Participatory processes are needed that extend risk sharing to larger social scales and that reduce adversarial relationships between insurers, insurees, insurance regulators, and governments that intervene or fail to intervene on an ad hoc rather than a contractual basis.
Cascading events in linked ecological and socioeconomic systems
Peters, Debra P.C.; Sala, O.E.; Allen, Craig D.; Covich, A.; Brunson, M.
2007-01-01
Cascading events that start at small spatial scales and propagate non-linearly through time to influence larger areas often have major impacts on ecosystem goods and services. Events such as wildfires and hurricanes are increasing in frequency and magnitude as systems become more connected through globalization processes. We need to improve our understanding of these events in order to predict their occurrence, minimize potential impacts, and allow for strategic recovery. Here, we synthesize information about cascading events in systems located throughout the Americas. We discuss a variety of examples of cascading events that share a common feature: they are often driven by linked ecological and human processes across scales. In this era of globalization, we recommend studies that explicitly examine connections across scales and examine the role of connectivity among non-contiguous as well as contiguous areas.
Ferdinand, R F; Verhulst, F C
1994-06-01
The ability of the Young Adult Self-Report (YASR), the Symptom Checklist (SCL-90) and the General Health Questionnaire (GHQ-28) to predict maladjustment across a 2-year time-span was assessed in a general population sample of 528 18- to 22-year-olds. Referral for mental health services and need for professional help were predicted by total problem scores of the YASR, the GHQ-28 and the SCL-90 and by the internalizing scale of the YASR. Furthermore, the internalizing scale predicted suicide attempts or suicidal ideation, whereas the externalizing scale predicted police contacts. The YASR delinquent behavior syndrome was the only significant predictor of alcohol abuse. The findings supported the validity of the YASR as an instrument for the assessment of psychopathology in young adults.
Criticality and Phase Transition in Stock-Price Fluctuations
NASA Astrophysics Data System (ADS)
Kiyono, Ken; Struzik, Zbigniew R.; Yamamoto, Yoshiharu
2006-02-01
We analyze the behavior of the U.S. S&P 500 index from 1984 to 1995, and characterize the non-Gaussian probability density functions (PDF) of the log returns. The temporal dependence of fat tails in the PDF of ten-minute log returns shows a gradual, systematic increase in the probability of the appearance of large increments on approaching Black Monday in October 1987, reminiscent of parameter tuning towards criticality. On the occurrence of the Black Monday crash, this culminates in an abrupt transition of the scale dependence of the non-Gaussian PDF towards the scale invariance characteristic of critical behavior. These facts suggest the need for revisiting the turbulent cascade paradigm recently proposed for modeling the underlying dynamics of the financial index, to account for time-varying, phase-transition-like and scale-invariant, critical-like behavior.
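A minimal sketch of the basic quantities involved, log returns and a simple fat-tail diagnostic (excess kurtosis) over sub-periods; heavy-tailed synthetic prices stand in for the S&P 500 data, which are not distributed with the abstract:

```python
import numpy as np

rng = np.random.default_rng(42)
# heavy-tailed synthetic price path (Student-t increments)
price = 100 * np.exp(np.cumsum(rng.standard_t(df=3, size=5000) * 1e-3))
r = np.diff(np.log(price))                 # ten-minute-style log returns

def excess_kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2)**2 - 3.0

for w in np.array_split(r, 5):             # coarse view of temporal dependence
    print(round(excess_kurtosis(w), 2))    # > 0 signals non-Gaussian fat tails
```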
Krasny-Pacini, A; Pauly, F; Hiebel, J; Godon, S; Isner-Horobeti, M-E; Chevignard, M
2017-07-01
Goal Attainment Scaling (GAS) is a method for writing personalized evaluation scales to quantify progress toward defined goals. It is useful in rehabilitation but is hampered by the experience required to adequately "predict" the possible outcomes relating to a particular goal before treatment and the time needed to describe all 5 levels of the scale. Here we aimed to investigate the feasibility of using GAS in a clinical setting of a pediatric spasticity clinic with a shorter method, the "3-milestones" GAS (goal setting with 3 levels and goal rating with the classical 5 levels). Secondary aims were to (1) analyze the types of goals children's therapists set for botulinum toxin treatment and (2) compare the score distribution (and therefore the ability to predict outcome) by goal type. Therapists were trained in GAS writing and prepared GAS scales in the regional spasticity-management clinic they attended with their patients and families. The study included all GAS scales written during a 2-year period. GAS score distribution across the 5 GAS levels was examined to assess whether the therapist could reliably predict outcome and whether the 3-milestones GAS yielded similar distributions as the original GAS method. In total, 541 GAS scales were written and showed the expected score distribution. Most scales (55%) referred to movement quality goals and fewer (29%) to family goals and activity domains. The 3-milestones GAS method was feasible within the time constraints of the spasticity clinic and could be used by local therapists in cooperation with the hospital team. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Bakken, Tor Haakon; Aase, Anne Guri; Hagen, Dagmar; Sundt, Håkon; Barton, David N; Lujala, Päivi
2014-07-01
Climate change and the needed reductions in the use of fossil fuels call for the development of renewable energy sources. However, renewable energy production, such as hydropower (both small- and large-scale) and wind power, has adverse impacts on the local environment by causing reductions in biodiversity and loss of habitats and species. This paper compares the environmental impacts of many small-scale hydropower plants with those of a few large-scale hydropower projects and one wind power farm, based on the same set of environmental parameters: land occupation, reduction in wilderness areas (INON), visibility, and impacts on red-listed species. Our basis for comparison was similar energy volumes produced, without considering the quality of the energy services provided. The results show that small-scale hydropower performs less favourably on all parameters except land occupation. The land occupation of large hydropower and wind power is in the range of 45-50 m²/MWh, which is more than twice that of small-scale hydropower; the large land occupation of large hydropower is explained by the extent of the reservoirs. On all three other parameters, small-scale hydropower performs more than twice as poorly as both large hydropower and wind power. Wind power compares similarly to large-scale hydropower regarding land occupation, much better on the reduction in INON areas, and in the same range regarding red-listed species. Our results demonstrate that the selected four parameters provide a basis for further development of a fair and consistent comparison of impacts between the analysed renewable technologies. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, X.; Tu, C. Y.; He, J.; Wang, L.
2017-12-01
The spectral break at the ion scale of solar wind magnetic fluctuations is considered to give an important clue to the turbulence dissipation mechanism. Among several possible mechanisms, the most notable are the two related, respectively, to the proton thermal gyro-radius and the proton inertial length. However, no definite conclusion has been reached as to which is more reasonable, because the two parameters have similar values in the normal plasma beta range. Here we carry out a statistical study, for the first time, to see whether the two predictions depend differently on the solar wind velocity and on the plasma beta in the normal plasma beta range in the solar wind at 1 AU. From magnetic measurements by Wind, Ulysses and MESSENGER, we selected 60 data sets with durations longer than 8 hours. We found that the ratio between the proton inertial scale and the spectral break scale does not change considerably as the solar wind speed varies from 300 km/s to 800 km/s or as the plasma beta varies from 0.2 to 1.4. The average value of the ratio times 2π is 0.46 ± 0.08. However, the ratio between the proton gyro-radius and the break scale changes clearly. This new result shows that the proton inertial scale could be the single factor that determines the break length scale, and hence gives strong evidence supporting the dissipation mechanism related to it in the normal plasma beta range. The value of the constant ratio may relate to the dissipation mechanism, but further theoretical study is needed to give a detailed explanation.
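The two candidate scales are easy to evaluate for typical 1 AU conditions; a minimal sketch with illustrative plasma parameters (not the study's data sets) shows why the two lengths are comparable in the normal beta range:

```python
import numpy as np

# SI constants; T given in eV for convenience
e, m_p, eps0, c = 1.602e-19, 1.673e-27, 8.854e-12, 2.998e8
eV = 1.602e-19

n = 5e6      # proton density, m^-3 (~5 cm^-3, illustrative)
B = 5e-9     # magnetic field, T (~5 nT, illustrative)
T = 10.0     # proton temperature, eV (illustrative)

d_i = c / np.sqrt(n * e**2 / (eps0 * m_p))            # proton inertial length
rho_i = np.sqrt(2 * T * eV / m_p) / (e * B / m_p)     # thermal gyro-radius

print(f"d_i ~ {d_i/1e3:.0f} km, rho_i ~ {rho_i/1e3:.0f} km")
# Comparable magnitudes, which is why the speed and beta trends in the
# statistics are needed to separate the two candidate mechanisms.
```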
NASA Astrophysics Data System (ADS)
Mathur, R.
2009-12-01
Emerging regional-scale atmospheric simulation models must address the increasing complexity arising from new model applications that treat multi-pollutant interactions. Sophisticated air quality modeling systems are needed to develop effective abatement strategies that focus on simultaneously controlling multiple criteria pollutants, as well as for use in providing short-term air quality forecasts. In recent years the application of such models has been continuously extended to address atmospheric pollution phenomena from local to hemispheric spatial scales, over time scales ranging from episodic to annual. The need to represent interactions between physical and chemical atmospheric processes occurring at these disparate spatial and temporal scales requires the use of observation data beyond traditional in-situ networks, so that the model simulations can be reasonably constrained. Preliminary applications of the assimilation of remote sensing and aloft observations within a comprehensive regional-scale atmospheric chemistry-transport modeling system will be presented: (1) A methodology is developed to assimilate MODIS aerosol optical depths in the model to represent the impacts of long-range transport associated with the summer 2004 Alaskan fires on surface-level regional fine particulate matter (PM2.5) concentrations across the Eastern U.S. The episodic impact of this pollution transport event on PM2.5 concentrations over the eastern U.S. during mid-July 2004 is quantified through the complementary use of the model with remotely sensed, aloft, and surface measurements; (2) Simple nudging experiments with limited aloft measurements are performed to identify uncertainties in model representations of physical processes and to assess the potential use of such measurements in improving the predictive capability of atmospheric chemistry-transport models. The results from these early applications will be discussed in the context of uncertainties in the model and in the remote sensing data, and of the needs for defining a future optimum observing strategy.
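A minimal sketch of Newtonian relaxation ("nudging"), the simple assimilation device mentioned in item (2); the time scales and PM2.5 values are illustrative, and the study's MODIS AOD assimilation within a full chemistry-transport model is far more involved:

```python
import numpy as np

def nudge(model, obs, dt=300.0, tau=3600.0):
    """Relax model state toward observations, where they exist, with
    relaxation time scale `tau` (s) over a model step `dt` (s)."""
    model = np.asarray(model, float).copy()
    valid = ~np.isnan(obs)
    model[valid] += dt / tau * (obs[valid] - model[valid])
    return model

state = np.array([12.0, 30.0, 8.0, 15.0])            # modeled PM2.5, ug/m^3
observed = np.array([np.nan, 42.0, np.nan, 10.0])    # sparse aloft/satellite data
print(nudge(state, observed))
```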
NASA Astrophysics Data System (ADS)
Singh, Surya P. N.; Thayer, Scott M.
2002-02-01
This paper presents a novel algorithmic architecture for the coordination and control of large-scale distributed robot teams, derived from the constructs found within the human immune system. Using this as a guide, the Immunology-derived Distributed Autonomous Robotics Architecture (IDARA) distributes tasks so that broad, all-purpose actions are refined and followed by specific and mediated responses based on each unit's utility and capability to address the system's perceived need(s) in a timely manner. This method improves on initial developments in this area by including the often-overlooked interactions of the innate immune system, resulting in a stronger first-order, general response mechanism. This allows for rapid reactions in dynamic environments, especially those lacking significant a priori information. As characterized via computer simulation of a self-healing mobile minefield having up to 7,500 mines and 2,750 robots, IDARA provides an efficient, communications-light, and scalable architecture that yields significant operation and performance improvements for large-scale multi-robot coordination and control.
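An illustrative sketch of the two-layer immune metaphor: a broad innate reaction from any minimally capable unit, refined by committing the highest-"affinity" (utility times capability) units to the specific response. The scoring rule is an assumption for illustration, not IDARA's actual mechanics:

```python
def respond(robots, need):
    """robots: list of (name, utility, capability) tuples; need: units required."""
    # innate layer: every unit above a low capability floor reacts at once
    innate = [r for r in robots if r[2] > 0.2]
    # adaptive layer: commit the highest-affinity units to the specific task
    ranked = sorted(innate, key=lambda r: r[1] * r[2], reverse=True)
    return [r[0] for r in ranked[:need]]

team = [("r1", 0.9, 0.8), ("r2", 0.4, 0.9), ("r3", 0.7, 0.1)]
print(respond(team, need=1))   # -> ['r1']
```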
Native fish conservation areas: A vision for large-scale conservation of native fish communities
Williams, Jack E.; Williams, Richard N.; Thurow, Russell F.; Elwell, Leah; Philipp, David P.; Harris, Fred A.; Kershner, Jeffrey L.; Martinez, Patrick J.; Miller, Dirk; Reeves, Gordon H.; Frissell, Christopher A.; Sedell, James R.
2011-01-01
The status of freshwater fishes continues to decline despite substantial conservation efforts to reverse this trend and recover threatened and endangered aquatic species. Lack of success is partially due to working at smaller spatial scales and focusing on habitats and species that are already degraded. Protecting entire watersheds and aquatic communities, which we term "native fish conservation areas" (NFCAs), would complement existing conservation efforts by protecting intact aquatic communities while allowing compatible uses. Four critical elements need to be met within an NFCA: (1) maintain processes that create habitat complexity, diversity, and connectivity; (2) nurture all of the life history stages of the fishes being protected; (3) include a large enough watershed to provide long-term persistence of native fish populations; and (4) provide management that is sustainable over time. We describe how a network of protected watersheds could be created that would anchor aquatic conservation needs in river basins across the country.
European monitoring for raptors and owls: state of the art and future needs.
Kovács, András; Mammen, Ubbo C C; Wernham, Chris V
2008-09-01
Sixty-four percent of the 56 raptor and owl species that occur in Europe have an unfavorable conservation status. As well as requiring conservation measures in their own right, raptors and owls function as useful sentinels of wider environmental "health," because they are widespread top predators, relatively easy to monitor, and sensitive to environmental changes at a range of geographical scales. At a time of global acknowledgment of an increasing speed of biodiversity loss, and new, forward-looking and related European Union biodiversity policy, there is an urgent need to improve coordination at a pan-European scale of national initiatives that seek to monitor raptor populations. Here we describe current initiatives that make a contribution to this aim, particularly the current "MEROS" program, the results of a questionnaire survey on the current state of national raptor monitoring across 22 BirdLife Partners in Europe, the challenges faced by any enhanced pan-European monitoring scheme for raptors, and some suggested pathways for efficiently tapping expertise to contribute to such an initiative.
Kastello, Jennifer C; Jacobsen, Kathryn H; Gaffney, Kathleen F; Kodadek, Marie P; Bullock, Linda C; Sharps, Phyllis W
2015-11-01
The purpose of the current study was to evaluate the validity of a single-item, self-rated mental health (SRMH) measure in the identification of women at risk for depression and posttraumatic stress disorder (PTSD). Baseline data of 239 low-income women participating in an intimate partner violence (IPV) intervention study were analyzed. PTSD was measured with the Davidson Trauma Scale. Risk for depression was determined using the Edinburgh Postnatal Depression Scale. SRMH was assessed with a single item asking participants to rate their mental health at the time of the baseline interview. Single-item measures can be an efficient way to increase the proportion of patients screened for mental health disorders. Although SRMH is not a strong indicator of PTSD, it may be useful in identifying pregnant women who are at increased risk for depression and need further comprehensive assessment in the clinical setting. Future research examining the use of SRMH among high-risk populations is needed. Copyright 2015, SLACK Incorporated.
Carpentieri, Matteo; Kumar, Prashant; Robins, Alan
2011-03-01
Understanding the transformation of nanoparticles emitted from vehicles is essential for developing appropriate methods for treating fine-scale particle dynamics in dispersion models. This article provides an overview of significant research work relevant to modelling the dispersion of pollutants, especially nanoparticles, in the wake of vehicles. Literature on vehicle wakes and nanoparticle dispersion is reviewed, taking into account field measurements, wind tunnel experiments and mathematical approaches. Field measurements and modelling studies highlight the very short time scales associated with nanoparticle transformations in the first stages after emission. These transformations strongly interact with the flow and turbulence fields immediately behind the vehicle, hence the need to characterise in detail the mixing processes in the vehicle wake. Very few studies have analysed this interaction, and more research is needed to build a basis for model development. A possible approach is proposed and areas of further investigation are identified. Copyright © 2010 Elsevier Ltd. All rights reserved.
Deterministic object tracking using Gaussian ringlet and directional edge features
NASA Astrophysics Data System (ADS)
Krieger, Evan W.; Sidike, Paheding; Aspiras, Theus; Asari, Vijayan K.
2017-10-01
Challenges currently facing intensity-based histogram feature tracking methods in wide area motion imagery (WAMI) data include object structural information distortions, background variations, and object scale change. These issues are caused by different pavement or ground types and by changes in sensor or altitude. All of these challenges need to be overcome in order to have a robust object tracker, while attaining a computation time appropriate for real-time processing. To achieve this, we present a novel method, the Directional Ringlet Intensity Feature Transform (DRIFT), which employs Kirsch kernel filtering for edge features and a ringlet feature mapping for rotational invariance. The method also includes an automatic scale change component to obtain accurate object boundaries, and improvements for lowering computation times. We evaluated the DRIFT algorithm on two challenging WAMI datasets, namely Columbus Large Image Format (CLIF) and Large Area Image Recorder (LAIR), to evaluate its robustness and efficiency. Additional evaluations on general tracking video sequences were performed using the Visual Tracker Benchmark and Visual Object Tracking 2014 databases to demonstrate the algorithm's ability to handle additional challenges in long, complex sequences including scale change. Experimental results show that the proposed approach yields competitive results compared to state-of-the-art object tracking methods on the testing datasets.
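A minimal sketch of the Kirsch directional edge filtering underlying the DRIFT features: convolve with the eight rotations of the Kirsch compass kernel and keep the per-pixel maximum response and arg-max direction; the ringlet histogram construction itself is not shown:

```python
import numpy as np
from scipy.ndimage import convolve

base = np.array([[5, 5, 5],
                 [-3, 0, -3],
                 [-3, -3, -3]], float)       # Kirsch compass kernel (north)

def kirsch_rotations(k):
    """Return the 8 compass rotations by cycling the kernel's border."""
    ring = [k[0, 0], k[0, 1], k[0, 2], k[1, 2],
            k[2, 2], k[2, 1], k[2, 0], k[1, 0]]
    kernels = []
    for s in range(8):
        r = ring[s:] + ring[:s]
        kernels.append(np.array([[r[0], r[1], r[2]],
                                 [r[7], 0.0, r[3]],
                                 [r[6], r[5], r[4]]]))
    return kernels

img = np.random.rand(64, 64)                 # stand-in for a WAMI chip
responses = np.stack([convolve(img, k) for k in kirsch_rotations(base)])
edges = responses.max(axis=0)                # edge magnitude
direction = responses.argmax(axis=0)         # dominant edge direction (0-7)
```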
Adaptive and accelerated tracking-learning-detection
NASA Astrophysics Data System (ADS)
Guo, Pengyu; Li, Xin; Ding, Shaowen; Tian, Zunhua; Zhang, Xiaohu
2013-08-01
An improved online long-term visual tracking algorithm, named adaptive and accelerated TLD (AA-TLD) and based on the novel Tracking-Learning-Detection (TLD) framework, is introduced in this paper. The improvement focuses on two aspects. One is adaptation: the algorithm does not depend on pre-defined scanning grids, since it generates the scale space online. The other is efficiency: it uses algorithm-level acceleration, namely scale prediction that employs an auto-regression and moving average (ARMA) model to learn the object motion and lessen the detector's searching range, and a fixed number of positive and negative samples that ensures a constant retrieval time, as well as CPU and GPU parallel technology for hardware acceleration. In addition, in order to obtain a better effect, some TLD details are redesigned: results are integrated with a weight that includes both the normalized correlation coefficient and the scale size, and distance metric thresholds are adjusted online. A contrastive experiment on success rate, center location error and execution time is carried out to show a performance and efficiency upgrade over the state-of-the-art TLD, using partial TLD datasets and Shenzhou IX return capsule image sequences. The algorithm can be used in the field of video surveillance to meet the need for real-time video tracking.
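A minimal sketch of the ARMA-based scale prediction used to shrink the detector's search range, assuming statsmodels is available; the model order and the synthetic scale history are illustrative:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA   # assumed available

# Fit a low-order ARMA model to the recent track of the object's scale
# and forecast the next value; the detector then scans only near it.
history = 1.0 + 0.05 * np.sin(np.arange(30) / 4.0)   # recent scale estimates
fit = ARIMA(history, order=(2, 0, 1)).fit()          # ARMA(2,1), d = 0
next_scale = fit.forecast(steps=1)[0]
print(next_scale)    # centre the next scanning window on this prediction
```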
NASA Astrophysics Data System (ADS)
Band, L. E.; Lin, L.; Duncan, J. M.
2017-12-01
A major challenge in understanding and managing freshwater volumes and quality in mixed land use catchments is the detailed heterogeneity of topography, soils, canopy, and inputs of water and biogeochemicals. The short space and time scale dynamics of sources, transport and processing of water, carbon and nitrogen in natural and built environments can have a strong influence on the timing and magnitude of watershed runoff and nutrient production, ecosystem cycling and export. Hydroclimate variability induces a functional interchange of terrestrial and aquatic environments across their transition zone with the temporal and spatial expansion and contraction of soil wetness, standing and flowing water over seasonal, diurnal and storm event time scales. Variation in sources and retention of nutrients at these scales needs to be understood and represented to design optimal mitigation strategies. This paper discusses the conceptual framework used to design both simulation and measurement approaches, and explores these dynamics using an integrated terrestrial-aquatic watershed model of coupled water-carbon-nitrogen processes at resolutions necessary to resolve "hot spot/hot moment" phenomena in two well-studied catchments in Long Term Ecological Research sites.
Sadhasivam, Senthilkumar; Cohen, Lindsey L; Hosu, Liana; Gorman, Kristin L; Wang, Yu; Nick, Todd G; Jou, Jing Fang; Samol, Nancy; Szabova, Alexandra; Hagerman, Nancy; Hein, Elizabeth; Boat, Anne; Varughese, Anna; Kurth, Charles Dean; Willging, J Paul; Gunter, Joel B
2010-04-01
Behavior in response to distressful events during outpatient pediatric surgery can contribute to postoperative maladaptive behaviors, such as temper tantrums, nightmares, bed-wetting, and attention seeking. Currently available perioperative behavioral assessment tools have limited utility in guiding interventions to ameliorate maladaptive behaviors because they cannot be used in real time, are intended to be used during only one phase of the experience (e.g., perioperative), or provide only a static assessment of the child (e.g., level of anxiety). A simple, reliable, real-time tool is needed to appropriately identify children and parents whose behaviors in response to distressful events at any point in the perioperative continuum could benefit from timely behavioral intervention. Our specific aims were to (1) refine the Perioperative Adult Child Behavioral Interaction Scale (PACBIS) to improve its reliability in identifying perioperative behaviors and (2) validate the refined PACBIS against several established instruments. The PACBIS was used to assess the perioperative behaviors of 89 children aged 3 to 12 years presenting for adenotonsillectomy and their parents. Assessments using the PACBIS were made during perioperative events likely to prove distressing to children and/or parents (perioperative measurement of blood pressure, induction of anesthesia, and removal of the IV catheter before discharge). Static measurements of perioperative anxiety and behavioral compliance during anesthetic induction were made using the modified Yale Preoperative Anxiety Scale and the Induction Compliance Checklist (ICC). Each event was videotaped for later scoring using the Child-Adult Medical Procedure Interaction Scale-Short Form (CAMPIS-SF) and Observational Scale of Behavioral Distress (OSBD). Interrater reliability using linear weighted kappa (κw) and multiple validations using Spearman correlation coefficients were analyzed. The PACBIS demonstrated good to excellent interrater reliability, with κw ranging from 0.62 to 0.94. The Child Coping and Child Distress subscores of the PACBIS demonstrated strong concurrent correlations with the modified Yale Preoperative Anxiety Scale, ICC, CAMPIS-SF, and OSBD. The Parent Positive subscore of the PACBIS correlated strongly with the CAMPIS-SF and OSBD, whereas the Parent Negative subscore showed significant correlation with the ICC. The PACBIS has strong construct and predictive validities. The PACBIS is a simple, easy-to-use, real-time instrument to evaluate perioperative behaviors of both children and parents. It has good to excellent interrater reliability and strong concurrent validity against currently accepted scales. The PACBIS offers a means to identify maladaptive child or parental behaviors in real time, making it possible to intervene to modify such behaviors in a timely fashion.
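The κw statistic reported here is Cohen's kappa with linear weights, which credits near-misses between two raters on an ordinal scale rather than scoring only exact agreement. A minimal sketch using scikit-learn follows; the rater codes are invented for illustration, not PACBIS data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal behavior codes (0-3) assigned by two raters to
# the same ten observed events.
rater_a = [0, 2, 1, 3, 2, 2, 0, 1, 3, 2]
rater_b = [0, 2, 2, 3, 2, 1, 0, 1, 3, 3]

# Linear weights penalize disagreements in proportion to their ordinal
# distance, matching the linear weighted kappa used in the study.
kappa_w = cohen_kappa_score(rater_a, rater_b, weights="linear")
print(f"linear weighted kappa = {kappa_w:.2f}")
```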
NASA Astrophysics Data System (ADS)
Antonenko, I.; Osinski, G. R.; Battler, M.; Beauchamp, M.; Cupelli, L.; Chanou, A.; Francis, R.; Mader, M. M.; Marion, C.; McCullough, E.; Pickersgill, A. E.; Preston, L. J.; Shankar, B.; Unrau, T.; Veillette, D.
2013-07-01
Remote robotic data provides different information than that obtained from immersion in the field. This significantly affects the geological situational awareness experienced by members of a mission control science team. In order to optimize science return from planetary robotic missions, these limitations must be understood and their effects mitigated to fully leverage the field experience of scientists at mission control. Results from a 13-day analogue deployment at the Mistastin Lake impact structure in Labrador, Canada, suggest that scale, relief, geological detail, and time are intertwined issues that impact the mission control science team's effectiveness in interpreting the geology of an area. These issues are evaluated and several mitigation options are suggested. Scale was found to be difficult to interpret without the reference of known objects, even when numerical scale data were available. For this reason, embedding intuitive scale-indicating features into image data is recommended. Since relief is not conveyed in 2D images, both 3D data and observations from multiple angles are required. Furthermore, the 3D data must be observed in animation or as anaglyphs, since without such assistance much of the relief information in 3D data is not communicated. Geological detail may also be missed due to the time required to collect, analyze, and request data. We also suggest that these issues can be addressed, in part, by an improved understanding of the operational time costs and benefits of scientific data collection. Robotic activities operate on inherently slow time scales. This fact needs to be embraced and accommodated. Instead of focusing too quickly on the details of a target of interest, thereby potentially minimizing science return, time should be allocated at first to broader data collection at that target, including preliminary surveys, multiple observations from various vantage points, and a progressively smaller scale of focus. This operational model more closely follows techniques employed by field geologists and is fundamental to the geologic interpretation of an area. Even so, an operational time cost/benefit analysis should be carefully considered in each situation, to determine when such comprehensive data collection would maximize the science return. Finally, it should be recognized that analogue deployments cannot faithfully model the time scales of robotic planetary missions. Analogue missions are limited by the difficulty and expense of fieldwork. Thus, analogue deployments should focus on smaller aspects of robotic missions and test components in a modular way (e.g., dropping communications constraints, limiting mission scope, focusing on a specific problem, spreading the mission over several field seasons, etc.).
Karakoç, Mehmet; Batmaz, İbrahim; Sariyildiz, Mustafa Akif; Yazmalar, Levent; Aydin, Abdülkadir; Em, Serda
2017-08-01
Patients with amputation need a prosthesis to move around comfortably. One of the most important parts of a good prosthesis is the socket. Currently, the most commonly used approach is the traditional socket manufacturing method, which involves manual work; however, computer-aided design/computer-aided manufacturing (CAD/CAM) has also come into use in recent years. The present study aimed to investigate the effects of sockets manufactured by the traditional and CAD/CAM methods on clinical characteristics and quality of life of patients with transtibial amputation. The study included 72 patients with transtibial amputation using prostheses, 36 of whom had CAD/CAM prosthetic sockets (group 1) and 36 of whom had traditional prosthetic sockets (group 2). Amputation reason, prosthesis lifetime, walking time and distance with prosthesis, pain-free walking time with prosthesis, production time of the prosthesis, and adaptation time to the prosthesis were questioned. Quality of life was assessed using the 36-item Short Form Health Survey questionnaire and the Trinity Amputation and Prosthesis Experience Scales. Walking time and distance and pain-free walking time with prosthesis were significantly better in group 1 than in group 2. Furthermore, the prosthesis was applied in a significantly shorter time, and socket adaptation time was significantly shorter, in group 1. Except for emotional role limitation, all 36-item Short Form Health Survey questionnaire parameters were significantly better in group 1 than in group 2. Trinity Amputation and Prosthesis Experience Scales activity limitation scores of group 1 were lower, and Trinity Amputation and Prosthesis Experience Scales satisfaction with the prosthesis scores were higher, than those in group 2. Our study demonstrated that sockets manufactured by the CAD/CAM method yield better outcomes in quality of life of patients with transtibial amputation than sockets manufactured by the traditional method.
NASA Astrophysics Data System (ADS)
Moradi, A.
2015-12-01
To properly model soil thermal performance in unsaturated porous media, for applications such as soil-borehole thermal energy storage (SBTES) systems, knowledge of soil hydraulic and thermal properties, and of how they change in space and time, is needed. Knowledge obtained from pore-scale to macroscopic-scale studies can help us better understand these systems and contribute to the state of knowledge, which can then be translated to engineering applications in the field (i.e., implementation of SBTES systems at the field scale). One important thermal property that varies with soil water content, the effective thermal conductivity, is often included in numerical models through empirical relationships and simplified mathematical formulations developed from experimental data obtained at small laboratory or field scales. These models assume that there is local thermodynamic equilibrium between the air and water phases for a representative elementary volume. However, this assumption may not always be valid at the pore scale, thus questioning the validity of current modeling approaches. The purpose of this work is to evaluate the validity of the local thermodynamic equilibrium assumption as related to the effective thermal conductivity at the pore scale. A numerical model based on the coupled Cahn-Hilliard and heat transfer equations was developed to solve for liquid flow and heat transfer through variably saturated porous media. In this model, the evolution of phases and the interfaces between phases are related to a functional form of the total free energy of the system. A unique solution for the system is obtained by solving the Navier-Stokes equations through free energy minimization. Preliminary results demonstrate a correlation between soil temperature/degree of saturation and equivalent thermal conductivity/heat flux. Results also confirm the correlation between the magnitude of the pressure differential and the equilibration time for multiphase flow to reach steady-state conditions. Based on these results, the equivalent time for steady-state heat transfer is much larger than that for steady-state multiphase flow for a given pressure differential. Moreover, the wetting-phase flow, and consequently the heat transfer, appear to be sensitive to the contact angle and porosity of the domain.
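For orientation, a generic form of the coupled phase-field/heat system described above is sketched below: c is the phase variable, μ the chemical potential derived from the total free energy F, M a mobility, u the velocity field from the Navier-Stokes equations, and λ_eff(c) the phase-dependent effective thermal conductivity. These symbols and the exact coupling are assumptions for illustration; the abstract does not specify the study's functional forms.

```latex
% Generic Cahn--Hilliard phase field coupled to heat conduction; the
% study's exact free-energy functional and coupling terms are not given
% in the abstract, so this is an illustrative form only.
\begin{align}
  \frac{\partial c}{\partial t} + \mathbf{u}\cdot\nabla c
    &= \nabla\cdot\bigl(M\,\nabla\mu\bigr),
  \qquad
  \mu = \frac{\delta F}{\delta c} = f'(c) - \kappa\,\nabla^{2} c, \\
  \rho c_{p}\Bigl(\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T\Bigr)
    &= \nabla\cdot\bigl(\lambda_{\mathrm{eff}}(c)\,\nabla T\bigr).
\end{align}
```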
Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah
Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a scaling study that compares instrumented ROSS simulations with their noninstrumented counterparts in order to determine the amount of perturbation when running at different simulation scales.
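As a flavor of the kind of metric such instrumentation supports, the sketch below computes a simple Time Warp efficiency (committed events as a fraction of all processed events) from per-processing-element counters. The counter names and helper are illustrative stand-ins, not the actual ROSS instrumentation API.

```python
def rollback_stats(committed, rolled_back):
    """Summarize optimistic-simulation health from per-PE event counters.

    committed / rolled_back: lists with one entry per processing element,
    counting events committed vs. undone by rollbacks (hypothetical
    counters; ROSS exposes similar statistics via its instrumentation).
    """
    total = sum(committed) + sum(rolled_back)
    efficiency = sum(committed) / total if total else 1.0
    # PE with the worst ratio of rolled-back to committed events is a
    # natural candidate for load-balancing or parameter tuning.
    worst_pe = max(range(len(rolled_back)),
                   key=lambda i: rolled_back[i] / max(1, committed[i]))
    return {"efficiency": efficiency, "worst_pe": worst_pe}

print(rollback_stats(committed=[9000, 8700, 9100],
                     rolled_back=[400, 2100, 350]))
```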
ERIC Educational Resources Information Center
Zhang, Xuesong; Dorn, Bradley
2012-01-01
Agile development has received increasing interest both in industry and academia due to its benefits in developing software quickly, meeting customer needs, and keeping pace with rapidly changing requirements. However, agile practices, and Scrum in particular, have been mainly tested in mid- to large-size projects. In this paper, we present…
ERIC Educational Resources Information Center
Nelson, Brian C.; Bowman, Cassie; Bowman, Judd
2017-01-01
Ask Dr. Discovery is an NSF-funded study addressing the need for ongoing, large-scale museum evaluation while investigating new ways to encourage museum visitors to engage deeply with museum content. To realize these aims, we are developing and implementing a mobile app with two parts: (1) a front-end virtual scientist called Dr. Discovery (Dr. D)…
A unified gas-kinetic scheme for continuum and rarefied flows IV: Full Boltzmann and model equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Chang, E-mail: cliuaa@ust.hk; Xu, Kun, E-mail: makxu@ust.hk; Sun, Quanhua, E-mail: qsun@imech.ac.cn
Fluid dynamic equations are valid in their respective modeling scales, such as the particle mean free path scale of the Boltzmann equation and the hydrodynamic scale of the Navier–Stokes (NS) equations. With a variation of the modeling scales, theoretically there should be a continuous spectrum of fluid dynamic equations. Even though the Boltzmann equation is claimed to be valid at all scales, many Boltzmann solvers, including the direct simulation Monte Carlo method, require a cell resolution on the order of the particle mean free path; therefore, they are still single-scale methods. In order to study multiscale flow evolution efficiently, the dynamics in the computational fluid has to change with the scales. A direct modeling of flow physics with a changeable scale may become an appropriate approach. The unified gas-kinetic scheme (UGKS) is a direct modeling method in the mesh size scale, and its underlying flow physics depends on the resolution of the cell size relative to the particle mean free path. The cell size of UGKS is not limited by the particle mean free path. With the variation of the ratio between the numerical cell size and the local particle mean free path, the UGKS recovers the flow dynamics from particle transport and collision in the kinetic scale to wave propagation in the hydrodynamic scale. The previous UGKS is mostly constructed from the evolution solution of kinetic model equations. Even though the UGKS is very accurate and effective in the low-transition and continuum flow regimes, where the time step is much larger than the particle mean free time, it still has room for a more accurate flow solver in the regime where the time step is comparable to the local particle mean free time. At such a scale, there is a dynamic difference between the full Boltzmann collision term and the model equations. This work is about the further development of the UGKS with the implementation of the full Boltzmann collision term in the region where it is needed. The central ingredient of the UGKS is the coupled treatment of particle transport and collision in the flux evaluation across a cell interface, where a continuous flow dynamics from kinetic to hydrodynamic scales is modeled. The newly developed UGKS has the asymptotic preserving (AP) property of recovering the NS solutions in the continuum flow regime and the full Boltzmann solution in the rarefied regime. In the mostly unexplored transition regime, the UGKS itself provides a valuable tool for non-equilibrium flow study. The mathematical properties of the scheme, such as stability, accuracy, and asymptotic preservation, are analyzed in this paper as well.
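The scale adaptivity rests on the integral solution of the kinetic model equation along a particle characteristic, which the UGKS uses to build the interface flux. A schematic one-dimensional BGK form is shown below for illustration; it is not the paper's full-Boltzmann construction, which replaces the model collision term where needed.

```latex
% Integral solution of the BGK model along a characteristic, used
% schematically in the UGKS interface flux: g is the local equilibrium,
% f_0 the initial distribution, and tau the relaxation time.
f(x_{j+1/2}, t, u) = \frac{1}{\tau}\int_{0}^{t}
    g\bigl(x_{j+1/2} - u\,(t - t'),\, t',\, u\bigr)\,
    e^{-(t - t')/\tau}\,\mathrm{d}t'
  + e^{-t/\tau}\, f_{0}\bigl(x_{j+1/2} - u\,t,\, u\bigr).
```

For a time step much larger than τ the equilibrium (hydrodynamic) term dominates and NS behavior is recovered; for a time step much smaller than τ the free-transport term dominates and kinetic behavior is retained, which is the sense in which the cell size and time step select the modeled physics.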
NASA Astrophysics Data System (ADS)
Sidle, R. C.
2013-12-01
Hydrologic, pedologic, and geomorphic processes are strongly interrelated and affected by scale. These interactions exert important controls on runoff generation, preferential flow, contaminant transport, surface erosion, and mass wasting. Measurement of hydraulic conductivity (K) and infiltration capacity at small scales generally underestimates these values for application at larger field, hillslope, or catchment scales. Both vertical and slope-parallel saturated flow and related contaminant transport are often influenced by interconnected networks of preferential flow paths, which are not captured in K measurements derived from soil cores. Using such K values in models may underestimate water and contaminant fluxes and runoff peaks. As shown in small-scale runoff plot studies, infiltration rates are typically lower than integrated infiltration across a hillslope or in headwater catchments. The resultant greater infiltration-excess overland flow in small plots compared with larger landscapes is attributed to the lack of preferential flow continuity; plot border effects; greater homogeneity of rainfall inputs, topography and soil physical properties; and magnified effects of hydrophobicity in small plots. At the hillslope scale, isolated areas with high infiltration capacity can greatly reduce surface runoff and surface erosion. These hydropedologic and hydrogeomorphic processes are also relevant to both the occurrence and timing of landslides. The focus of many landslide studies has typically been either on small-scale vadose zone processes and how these affect soil mechanical properties, or on larger-scale, more descriptive geomorphic studies. One of the issues in translating laboratory-based investigations of the geotechnical behavior of soils to the field scales where landslides occur is the characterization of large-scale hydrological processes and flow paths in heterogeneous and anisotropic porous media. These processes are affected not only by the spatial distribution of soil physical properties and bioturbations, but also by geomorphic attributes. Interactions among preferential flow paths can induce rapid pore water pressure response within soil mantles and trigger landslides during storm peaks. Alternatively, in poorly developed and unstructured soils, infiltration occurs mainly through the soil matrix, and a lag exists between the rainfall peak and the development of pore water pressures at depth. Deep, slow-moving mass failures are also strongly controlled by secondary porosity within the regolith, with the timing of activation linked to recharge dynamics. As such, understanding both small- and larger-scale processes is needed to estimate geomorphic impacts, as well as streamflow generation and contaminant migration.
Advanced Algorithms for Local Routing Strategy on Complex Networks
Lin, Benchuan; Chen, Bokui; Gao, Yachun; Tse, Chi K.; Dong, Chuanfei; Miao, Lixin; Wang, Binghong
2016-01-01
Despite the significant improvement in network performance provided by global routing strategies, their application is still limited to small-scale networks, owing to the need to acquire global information about the network, which grows and changes rapidly with time. Local routing strategies, however, need much less local information, though their transmission efficiency and network capacity are much lower than those of global routing strategies. In view of this, three algorithms are proposed and a thorough investigation is conducted in this paper. These algorithms include a node duplication avoidance algorithm, a next-nearest-neighbor algorithm and a restrictive queue length algorithm. After applying them to typical local routing strategies, the critical generation rate of information packets Rc increases more than ten-fold and the average transmission time 〈T〉 decreases by 70–90 percent, both of which are key physical quantities for assessing the efficiency of routing strategies on complex networks. More importantly, in comparison with global routing strategies, the improved local routing strategies can yield better network performance under certain circumstances. This is a revolutionary leap for communication networks, because local routing strategy enjoys great superiority over global routing strategy not only in terms of the reduction of computational expense, but also in terms of the flexibility of implementation, especially for large-scale networks. PMID:27434502
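To make the three heuristics concrete, the sketch below combines them in a single local routing decision that consults only a node's neighborhood: deliver directly or via a next-nearest neighbor when possible, otherwise avoid revisited nodes and congested queues. This is a schematic combination under assumed data structures; the paper's actual preference function and parameters are not reproduced.

```python
import random

def local_route_step(graph, queues, packet, q_max=20):
    """One local routing decision. graph: dict node -> set of neighbors;
    queues: dict node -> current queue length; packet carries its current
    node ('at'), destination ('dst'), and the set of visited nodes."""
    here, dest, visited = packet["at"], packet["dst"], packet["visited"]
    if dest in graph[here]:                       # deliver in one hop
        return dest
    for n in graph[here]:                         # next-nearest-neighbor
        if dest in graph[n]:
            return n
    candidates = [n for n in graph[here]
                  if n not in visited             # duplication avoidance
                  and queues.get(n, 0) < q_max]   # queue-length restriction
    if not candidates:                            # fall back, don't drop
        candidates = list(graph[here])
    return random.choice(candidates)
```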
NASA Astrophysics Data System (ADS)
Barnes, Cris W.
DOE and NNSA are recognizing a mission need for flexible and reduced-cost product-based solutions to materials through accelerated qualification, certification, and assessment. The science challenge lies between the nanoscale of materials and the integral device scale, at the middle or "mesoscale", where interfaces, defects, and microstructure determine the performance of the materials over the lifecycle of the intended use. Time-dependent control of the processing, structure and properties of materials at this scale lies at the heart of qualifying and certifying additively manufactured parts; experimental data of high fidelity and high resolution are necessary to discover the right physical mechanisms to model and to validate and calibrate those reduced-order models in codes on exascale computers. The scientific requirements for doing this are aided by a revolution in coherent imaging of non-periodic features that can be combined with scattering off periodic structures. This drives the need for a coherent x-ray source, brilliant and of high repetition rate, of sufficiently high energy to see into and through the mesoscale. The Matter-Radiation Interactions in Extremes (MaRIE) Project is a proposal to build such a very-high-energy X-ray Free Electron Laser.
Nanosystem self-assembly pathways discovered via all-atom multiscale analysis.
Pankavich, Stephen D; Ortoleva, Peter J
2012-07-26
We consider the self-assembly of composite structures from a group of nanocomponents, each consisting of particles within an N-atom system. Self-assembly pathways and rates for nanocomposites are derived via a multiscale analysis of the classical Liouville equation. From a reduced statistical framework, rigorous stochastic equations for population levels of beginning, intermediate, and final aggregates are also derived. It is shown that the definition of an assembly type is a self-consistency criterion that must strike a balance between precision and the need for population levels to be slowly varying relative to the time scale of atomic motion. The deductive multiscale approach is complemented by a qualitative notion of multicomponent association and the ensemble of exact atomic-level configurations consistent with them. In processes such as viral self-assembly from proteins and RNA or DNA, there are many possible intermediates, so that it is usually difficult to predict the most efficient assembly pathway. However, in the current study, rates of assembly of each possible intermediate can be predicted. This avoids the need, as in a phenomenological approach, for recalibration with each new application. The method accounts for the feedback across scales in space and time that is fundamental to nanosystem self-assembly. The theory has applications to bionanostructures, geomaterials, engineered composites, and nanocapsule therapeutic delivery systems.
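The population-level description can be pictured as a birth-death master equation for the probability P_n(t) of finding n aggregates of a given type, with aggregation and fragmentation rates W±. The generic form below is shown only for orientation; the paper derives its stochastic population equations rigorously from the Liouville equation rather than postulating them.

```latex
% Generic birth--death master equation (illustrative form only):
\frac{\mathrm{d}P_{n}}{\mathrm{d}t}
  = W^{+}_{n-1}\,P_{n-1} + W^{-}_{n+1}\,P_{n+1}
  - \bigl(W^{+}_{n} + W^{-}_{n}\bigr)\,P_{n}.
```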
From intrusive to oscillating thoughts.
Peirce, Anne Griswold
2007-10-01
This paper focused on the possibility that intrusive thoughts (ITs) are a form of an evolutionary, adaptive, and complex strategy to prepare for and resolve stressful life events through schema formation. Intrusive thoughts have been studied in relation to individual conditions, such as traumatic stress disorder and obsessive-compulsive disorder. They have also been documented in the average person experiencing everyday stress. In many descriptions of thought intrusion, it is accompanied by thought suppression. Several theories have been put forth to describe ITs, although none provides a satisfactory explanation as to whether ITs are a normal process, a normal process gone astray, or a sign of pathology. There is also no consistent view of the role that thought suppression plays in the process. I propose that thought intrusion and thought suppression may be better understood by examining them together as a complex and adaptive mechanism capable of escalating in times of need. The ability of a biological mechanism to scale up in times of need is one hallmark of a complex and adaptive system. Other hallmarks of complexity, including self-similarity across scales, sensitivity to initial conditions, presence of feedback loops, and system oscillation, are also discussed in this article. Finally, I propose that thought intrusion and thought suppression are better described together as an oscillatory cycle.
Satellite Imagery Production and Processing Using Apache Hadoop
NASA Astrophysics Data System (ADS)
Hill, D. V.; Werpy, J.
2011-12-01
The United States Geological Survey's (USGS) Earth Resources Observation and Science (EROS) Center Land Science Research and Development (LSRD) project has devised a method to fulfill its processing needs for Essential Climate Variable (ECV) production from the Landsat archive using Apache Hadoop. Apache Hadoop is the distributed processing technology at the heart of many large-scale processing solutions implemented at well-known companies such as Yahoo, Amazon, and Facebook. It is a proven framework that can be used to process petabytes of data on thousands of processors concurrently. It is a natural fit for producing satellite imagery and requires only a few simple modifications to serve the needs of science data processing. This presentation provides an invaluable learning opportunity and should be heard by anyone doing large-scale image processing today. The session will cover a description of the problem space, evaluation of alternatives, a feature set overview, configuration of Hadoop for satellite image processing, real-world performance results, tuning recommendations, and finally challenges and ongoing activities. It will also present how the LSRD project built a 102-core processing cluster with no financial hardware investment and achieved ten times the initial daily throughput requirements with a full-time staff of only one engineer. Satellite Imagery Production and Processing Using Apache Hadoop is presented by David V. Hill, Principal Software Architect for USGS LSRD.
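The pattern is embarrassingly parallel: each map task processes one scene independently, and Hadoop handles distribution, retries, and data locality. A minimal Hadoop Streaming mapper in this spirit is sketched below; process_scene and its command are hypothetical stand-ins, not the LSRD implementation.

```python
#!/usr/bin/env python
"""Hypothetical Hadoop Streaming mapper: reads one Landsat scene ID per
input line and processes each scene independently."""
import subprocess
import sys

def process_scene(scene_id):
    # Placeholder for per-scene science processing (e.g., an ECV recipe);
    # returns True on success.
    return subprocess.call(["echo", "processing", scene_id]) == 0

for line in sys.stdin:
    scene_id = line.strip()
    if not scene_id:
        continue
    status = "ok" if process_scene(scene_id) else "failed"
    # Emit tab-separated key/value pairs for an optional reducer.
    print(f"{scene_id}\t{status}")
```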
NASA Astrophysics Data System (ADS)
2012-05-01
Education: Physics Education Networks meeting has global scale
Competition: Competition seeks the next Brian Cox
Experiment: New measurement of neutrino time-of-flight consistent with the speed of light
Event: A day for all those who teach physics
Conference: Students attend first Anglo-Japanese international science conference
Celebration: Will 2015 be the 'Year of Light'?
Teachers: Challenging our intuition in spectacular fashion: the fascinating world of quantum physics awaits
Research: Science sharpens up sport
Learning: Kittinger and Baumgartner: on a mission to the edge of space
International: London International Youth Science Forum calls for leading young scientists
Competition: Physics paralympian challenge needs inquisitive, analytical, artistic and eloquent pupils
Forthcoming events
Bagattoni, Simone; D'Alessandro, Giovanni; Sadotti, Agnese; Alkhamis, Nadia; Piana, Gabriela
2018-01-01
Audiovisual distraction using video eyeglasses is useful in managing distress and reducing fear and anxiety in healthy children during dental treatments. The aim was to evaluate the effect of audiovisual distraction on the behavior and self-reported pain of children with special healthcare needs (SHCN) without intellectual disability during dental restorations, and its influence on operator stress and the time of the appointment. This randomized controlled crossover trial comprised 48 children with SHCN requiring at least two dental restorations. One restoration was done wearing the video eyeglasses and one using conventional behavior management techniques. Subjective and objective pain were evaluated using the Faces Pain Scale - Revised (FPS-R) and the revised Face, Leg, Activity, Cry, and Consolability scale (r-FLACC). Operator stress (using a VAS), the time of the appointment, and child satisfaction were recorded. The use of video eyeglasses significantly reduced operator stress. The bivariate analysis showed that the mean FPS-R score and the mean r-FLACC score were significantly lower with the video eyeglasses only during the second clinical session. Audiovisual distraction could be useful in managing distress in SHCN children without intellectual disability but cannot replace conventional behavior management techniques. © 2017 BSPD, IAPD and John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators
Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; ...
2017-10-17
Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver. In this paper we present the accurate and efficient envelope solver used in the code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF&RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.
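For reference, a standard comoving envelope equation of the kind such solvers advance is sketched below, with â the normalized laser envelope, ζ = z − ct the comoving coordinate, τ the time, ω₀ the laser frequency, k_p the plasma wavenumber, and χ the plasma susceptibility (χ ≈ n/(γ n₀) in the fluid picture). This generic form is shown for illustration; the discretization and exact operators used in INF&RNO are specified in the paper itself.

```latex
% A standard ponderomotive-guiding-center envelope equation in comoving
% coordinates (schematic; not the INF&RNO discretization):
\left[\nabla_{\perp}^{2}
    + \frac{2 i\,\omega_{0}}{c^{2}}\frac{\partial}{\partial \tau}
    + \frac{2}{c}\frac{\partial^{2}}{\partial \zeta\,\partial \tau}\right]
\hat{a} = k_{p}^{2}\,\chi\,\hat{a},
\qquad \zeta = z - c\,t .
```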
The 2-MEV model: Constancy of adolescent environmental values within an 8-year time frame
NASA Astrophysics Data System (ADS)
Bogner, F. X.; Johnson, B.; Buxner, S.; Felix, L.
2015-08-01
The 2-MEV model is a widely used tool for monitoring children's environmental perception by scoring individual values. Although the scale's validity has been confirmed repeatedly and independently, and the scale is in use in more than two dozen language versions around the world, its longitudinal properties still need clarification. The purpose of the present study therefore was to validate the 2-MEV scale on a large database of 10,676 children collected over an eight-year period. Cohorts from three different US states contributed to the sample by responding to a paper-and-pencil questionnaire within their pre-test initiatives in the context of field center programs. Since we used only the pre-program 2-MEV scale results (that is, before participation in education programs), the data were clearly unspoiled by any follow-up interventions. The purpose of the analysis was fourfold: first, to test and confirm the hypothesized factor structure for the large data set and for the subsample of each of the three states; second, to analyze the scoring pattern across the eight-year time range for both preservation and utilitarian preferences; third, to investigate any age effects in the extracted factors; and finally, to extract suitable recommendations for educational implementation efforts.
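As a rough illustration of the first step (checking a hypothesized two-factor structure, preservation versus utilization), an exploratory factor analysis can be run on item responses. The sketch below uses scikit-learn on synthetic Likert-style data, since the real 2-MEV items and responses are not reproduced here; the study's own validation is more rigorous than this.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical item responses: rows = students, columns = Likert-scored
# items (synthetic placeholder data, not the 2-MEV instrument).
rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(500, 10)).astype(float)

# Exploratory check of a two-factor solution with varimax rotation;
# loadings close to a simple structure would support the 2-MEV model.
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
loadings = fa.components_.T  # items x factors
print(np.round(loadings, 2))
```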