Nonlinearly driven harmonics of Alfvén modes
NASA Astrophysics Data System (ADS)
Zhang, B.; Breizman, B. N.; Zheng, L. J.; Berk, H. L.
2014-01-01
In order to study the leading order nonlinear magneto-hydrodynamic (MHD) harmonic response of a plasma in realistic geometry, the AEGIS code has been generalized to account for inhomogeneous source terms. These source terms are expressed in terms of the quadratic corrections that depend on the functional form of a linear MHD eigenmode, such as the Toroidal Alfvén Eigenmode. The solution of the resultant equation gives the second order harmonic response. Preliminary results are presented here.
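Schematically, the generalization amounts to driving the linearized MHD operator at twice the mode frequency with an inhomogeneous source that is quadratic in the linear eigenmode; the notation below (operator L, displacement ξ, source functional S) is assumed for illustration and is not taken from the paper:

```latex
% Linear TAE eigenmode at the fundamental frequency \omega:
\mathcal{L}(\omega)\,\boldsymbol{\xi}_1 = 0 ,
% and the second-order harmonic response at 2\omega, driven by a source
% quadratic in the linear eigenmode:
\mathcal{L}(2\omega)\,\boldsymbol{\xi}_2 = \mathbf{S}\!\left[\boldsymbol{\xi}_1,\boldsymbol{\xi}_1\right].
```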
10 CFR 960.3-1-5 - Basis for site evaluations.
Code of Federal Regulations, 2013 CFR
2013-01-01
... comparative evaluations of sites in terms of the capabilities of the natural barriers for waste isolation and.... Comparative site evaluations shall place primary importance on the natural barriers of the site. In such... only to the extent necessary to obtain realistic source terms for comparative site evaluations based on...
10 CFR 960.3-1-5 - Basis for site evaluations.
Code of Federal Regulations, 2010 CFR
2010-01-01
... comparative evaluations of sites in terms of the capabilities of the natural barriers for waste isolation and.... Comparative site evaluations shall place primary importance on the natural barriers of the site. In such... only to the extent necessary to obtain realistic source terms for comparative site evaluations based on...
10 CFR 960.3-1-5 - Basis for site evaluations.
Code of Federal Regulations, 2011 CFR
2011-01-01
... comparative evaluations of sites in terms of the capabilities of the natural barriers for waste isolation and.... Comparative site evaluations shall place primary importance on the natural barriers of the site. In such... only to the extent necessary to obtain realistic source terms for comparative site evaluations based on...
10 CFR 960.3-1-5 - Basis for site evaluations.
Code of Federal Regulations, 2012 CFR
2012-01-01
... comparative evaluations of sites in terms of the capabilities of the natural barriers for waste isolation and.... Comparative site evaluations shall place primary importance on the natural barriers of the site. In such... only to the extent necessary to obtain realistic source terms for comparative site evaluations based on...
10 CFR 960.3-1-5 - Basis for site evaluations.
Code of Federal Regulations, 2014 CFR
2014-01-01
... comparative evaluations of sites in terms of the capabilities of the natural barriers for waste isolation and.... Comparative site evaluations shall place primary importance on the natural barriers of the site. In such... only to the extent necessary to obtain realistic source terms for comparative site evaluations based on...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Genn Saji
2006-07-01
The term 'ultimate risk' is used here to describe the probabilities and radiological consequences that should be incorporated in siting, containment design and accident management of nuclear power plants for hypothetical accidents. It is closely related to the source terms specified in siting criteria which assure an adequate separation of radioactive inventories of the plants from the public, in the event of a hypothetical and severe accident situation. The author would like to point out that current source terms which are based on the information from the Windscale accident (1957) through TID-14844 are very outdated and do not incorporate lessons learned from either the Three Mile Island (TMI, 1979) or the Chernobyl (1986) accidents, two of the most severe accidents ever experienced. As a result of the observations of benign radionuclides released at TMI, the technical community in the US felt that a more realistic evaluation of severe reactor accident source terms was necessary. Against this background, the 'source term research project' was organized in 1984 to respond to these challenges. Unfortunately, soon after the final report from this project was released, the Chernobyl accident occurred. Due to the enormous consequences induced by the accident, the once-optimistic perspectives in establishing a more realistic source term were completely shattered. The Chernobyl accident, with its human death toll and dispersion of a large part of the fission fragments inventories into the environment, created a significant degradation in the public's acceptance of nuclear energy throughout the world. In spite of this, nuclear communities have been prudent in responding to the public's anxiety towards the ultimate safety of nuclear plants, since there still remained many unknown points revolving around the mechanism of the Chernobyl accident. In order to resolve some of these mysteries, the author has performed a scoping study of the dispersion and deposition mechanisms of fuel particles and fission fragments during the initial phase of the Chernobyl accident. Through this study, it is now possible to generally reconstruct the radiological consequences by using a dispersion calculation technique, combined with the meteorological data at the time of the accident and land contamination densities of ¹³⁷Cs measured and reported around the Chernobyl area. Although it is challenging to incorporate lessons learned from the Chernobyl accident into the source term issues, the author has already developed an example of safety goals by incorporating the radiological consequences of the accident. The example provides safety goals by specifying source term releases in a graded approach in combination with probabilities, i.e. risks. The author believes that the future source term specification should be directly linked with safety goals. (author)
EXPERIENCES FROM THE SOURCE-TERM ANALYSIS OF A LOW AND INTERMEDIATE LEVEL RADWASTE DISPOSAL FACILITY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Jin Beak; Park, Joo-Wan; Lee, Eun-Young
2003-02-27
Enhancement of a computer code SAGE for evaluation of the Korean concept for a LILW waste disposal facility is discussed. Several features of source term analysis are embedded into SAGE to analyze: (1) effects of degradation mode of an engineered barrier, (2) effects of dispersion phenomena in the unsaturated zone and (3) effects of time dependent sorption coefficient in the unsaturated zone. IAEA's Vault Safety Case (VSC) approach is used to demonstrate the ability of this assessment code. Results of MASCOT are used for comparison purposes. These enhancements of the safety assessment code, SAGE, can contribute to realistic evaluation of the Korean concept of the LILW disposal project in the near future.
NASA Astrophysics Data System (ADS)
Anita, G.; Selva, J.; Laura, S.
2011-12-01
We develop a comprehensive and total probabilistic tsunami hazard assessment (TotPTHA), in which many different possible source types contribute to the definition of the total tsunami hazard at given target sites. In a multi-hazard and multi-risk perspective, such an innovative approach allows, in principle, consideration of all possible tsunamigenic sources, from seismic events to slides, asteroids, volcanic eruptions, etc. In this respect, we also formally introduce and discuss the treatment of interaction/cascade effects in the TotPTHA analysis. We demonstrate how external triggering events may induce significant temporary variations in the tsunami hazard. Because of this, such effects should always be considered, at least in short-term applications, to obtain unbiased analyses. Finally, we prove the feasibility of the TotPTHA and of the treatment of interaction/cascade effects by applying this methodology to an ideal region with realistic characteristics (Neverland).
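If, as is standard in probabilistic hazard aggregation, the individual tsunamigenic source types are treated as independent Poisson processes, the per-source exceedance rates simply add; a schematic of this aggregation (notation assumed here, not quoted from the abstract) is:

```latex
\lambda_{\mathrm{tot}}(h) \;=\; \sum_{s \,\in\, \{\text{seismic, slides, volcanic, asteroid, }\dots\}} \lambda_s(h),
\qquad
P\!\left(H > h \mid \Delta T\right) \;=\; 1 - e^{-\lambda_{\mathrm{tot}}(h)\,\Delta T},
```

where λ_s(h) is the mean annual rate at which source type s produces a tsunami intensity exceeding h at the target site; interaction/cascade effects would enter as time-dependent perturbations of the individual λ_s.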
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, David; Bucknor, Matthew; Jerden, James
2016-10-01
The potential release of radioactive material during a plant incident, referred to as the source term, is a vital design metric and will be a major focus of advanced reactor licensing. The U.S. Nuclear Regulatory Commission has stated an expectation for advanced reactor vendors to present a mechanistic assessment of the potential source term in their license applications. The mechanistic source term presents an opportunity for vendors to realistically assess the radiological consequences of an incident, and may allow reduced emergency planning zones and smaller plant sites. However, the development of a mechanistic source term for advanced reactors is not without challenges, as there are often numerous phenomena impacting the transportation and retention of radionuclides. This project sought to evaluate U.S. capabilities regarding the mechanistic assessment of radionuclide release from core damage incidents at metal fueled, pool-type sodium fast reactors (SFRs). The purpose of the analysis was to identify, and prioritize, any gaps regarding computational tools or data necessary for the modeling of radionuclide transport and retention phenomena. To accomplish this task, a parallel-path analysis approach was utilized. One path, led by Argonne and Sandia National Laboratories, sought to perform a mechanistic source term assessment using available codes, data, and models, with the goal of identifying gaps in the current knowledge base. The second path, performed by an independent contractor, performed sensitivity analyses to determine the importance of particular radionuclides and transport phenomena with regard to offsite consequences. The results of the two pathways were combined to prioritize gaps in current capabilities.
NASA Astrophysics Data System (ADS)
Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong
2018-03-01
Full-waveform inversion (FWI) reconstructs the subsurface properties from acquired seismic data via minimization of the misfit between observed and simulated data. However, FWI suffers from considerable computational costs resulting from the numerical solution of the wave equation for each source at each iteration. To reduce the computational burden, constructing supershots by combining several sources (aka source encoding) allows mitigation of the number of simulations at each iteration, but it gives rise to crosstalk artifacts because of interference between the individual sources of the supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that as long as the difference between the initial and true models permits a sparse representation, the ℓ₁-norm constrained model updates suppress subsampling-related artifacts. However, the spectral-projected gradient ℓ₁ (SPGℓ₁) algorithm employed by MGNFWI is rather complicated, which makes its implementation difficult. To facilitate realistic applications, we adapt a linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI) because of the efficiency and simplicity of LB in the framework of ℓ₁-norm constrained optimization and compressive sensing. Numerical experiments performed with the BP Salt model, the Marmousi model and the BG Compass model verify the following points. The FWI result obtained with LB solving the ℓ₁-norm sparsity-promoting problem for the model update outperforms that generated by solving the ℓ₂-norm problem in terms of crosstalk elimination and high-fidelity results. The simpler LB method performs comparably, and even superiorly, to the more complicated SPGℓ₁ method in terms of computational efficiency and model quality, making the LB method a viable alternative for realistic implementations of SPFWI.
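The linearized Bregman iteration mentioned here is simple enough to state in a few lines; the sketch below (a toy dense recovery problem with all sizes and parameters assumed, not the authors' FWI implementation) recovers a sparse model update x from underdetermined data y = A x:

```python
import numpy as np

def linearized_bregman(A, y, mu=10.0, delta=1.0, n_iter=2000):
    """Toy linearized Bregman iteration for sparse recovery from y = A x.

    Approximately solves min ||x||_1 subject to A x = y, provided
    delta < 2 / ||A||_2^2 and mu is a few times the expected |x| scale.
    """
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v += A.T @ (y - A @ x)                                    # residual back-projection
        x = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)  # soft threshold (shrinkage)
    return x

# Small synthetic test with orthonormal measurement rows (so ||A||_2 = 1).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((200, 60)))
A = Q.T                                    # 60 x 200, underdetermined
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
x_rec = linearized_bregman(A, A @ x_true)
print("nonzeros:", np.count_nonzero(x_rec),
      "relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```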
Two-micron Laser Atmospheric Wind Sounder (LAWS) pointing/tracking study
NASA Technical Reports Server (NTRS)
Manlief, Scott
1995-01-01
The objective of the study was to identify and model major sources of short-term pointing jitter for a free-flying, full performance 2 micron LAWS system and evaluate the impact of the short-term jitter on wind-measurement performance. A fast steering mirror controls system was designed for the short-term jitter compensation. The performance analysis showed that the short-term jitter performance of the controls system over the 5.2 msec round-trip time for a realistic spacecraft environment was approximately 0.3 microradians rms, within the specified value of less than 0.5 microradians rms derived in a 2 micron LAWS System Study. Disturbance modes were defined for: (1) the Bearing and Power Transfer Assembly (BAPTA) scan bearing, (2) the spacecraft reaction wheel torques, and (3) the solar array drive torques. The scan bearing disturbance was found to be the greatest contributing noise source to the jitter performance. Disturbances from the fast steering mirror reaction torques and a boom-mounted cross-link antenna clocking were also considered, but were judged to be small compared to the three principal disturbance sources above and were not included in the final controls analysis.
Binaural Processing of Multiple Sound Sources
2016-08-18
Sound Source Localization Identification, and Sound Source Localization When Listeners Move. The CI research was also supported by an NIH grant ("Cochlear Implant Performance in Realistic Listening Environments," Dr. Michael Dorman, Principal Investigator; Dr. William Yost, unpaid advisor).
Source term evaluation model for high-level radioactive waste repository with decay chain build-up.
Chopra, Manish; Sunny, Faby; Oza, R B
2016-09-18
A source term model based on two-component leach flux concept is developed for a high-level radioactive waste repository. The long-lived radionuclides associated with high-level waste may give rise to the build-up of activity because of radioactive decay chains. The ingrowths of progeny are incorporated in the model using Bateman decay chain build-up equations. The model is applied to different radionuclides present in the high-level radioactive waste, which form a part of decay chains (4n to 4n + 3 series), and the activity of the parent and daughter radionuclides leaching out of the waste matrix is estimated. Two cases are considered: one when only parent is present initially in the waste and another where daughters are also initially present in the waste matrix. The incorporation of in situ production of daughter radionuclides in the source is important to carry out realistic estimates. It is shown that the inclusion of decay chain build-up is essential to avoid underestimation of the radiological impact assessment of the repository. The model can be a useful tool for evaluating the source term of the radionuclide transport models used for the radiological impact assessment of high-level radioactive waste repositories.
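The Bateman build-up equations referenced here have a convenient closed form when the decay constants in a chain are all distinct; the following sketch (a hypothetical three-member chain with invented half-lives and initial inventory, not the repository inventory analysed in the paper) illustrates the kind of ingrowth calculation involved:

```python
import numpy as np

def chain_activities(N1_0, lams, t):
    """Bateman solution: activity lambda_n * N_n(t) of each member of a linear
    decay chain, assuming only the parent is present at t = 0 and that all
    decay constants are distinct."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    activities = []
    for n in range(1, len(lams) + 1):
        lam = lams[:n]
        branch = N1_0 * np.prod(lam[:-1])          # product of lambda_1 .. lambda_(n-1)
        Nn = np.zeros_like(t)
        for i in range(n):
            denom = np.prod([lam[j] - lam[i] for j in range(n) if j != i])
            Nn += np.exp(-lam[i] * t) / denom      # empty product -> denom = 1 when n = 1
        activities.append(lam[-1] * branch * Nn)
    return np.array(activities)

# Hypothetical 3-member chain with half-lives of 1000, 100 and 10 years,
# starting from 1e20 atoms of the parent (values chosen purely for illustration).
lams = np.log(2.0) / np.array([1000.0, 100.0, 10.0])    # decay constants [1/yr]
times = np.array([1.0, 10.0, 100.0, 1000.0, 10000.0])   # years
print(chain_activities(1.0e20, lams, times))             # rows = chain members, columns = times
```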
Effect of conductor geometry on source localization: Implications for epilepsy studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlitt, H.; Heller, L.; Best, E.
1994-07-01
We shall discuss the effects of conductor geometry on source localization for applications in epilepsy studies. The most popular conductor model for clinical MEG studies is a homogeneous sphere. However, several studies have indicated that a sphere is a poor model for the head when the sources are deep, as is the case for epileptic foci in the mesial temporal lobe. We believe that replacing the spherical model with a more realistic one in the inverse fitting procedure will improve the accuracy of localizing epileptic sources. In order to include a realistic head model in the inverse problem, we must first solve the forward problem for the realistic conductor geometry. We create a conductor geometry model from MR images, and then solve the forward problem via a boundary integral equation for the electric potential due to a specified primary source. Once the electric potential is known, the magnetic field can be calculated directly. The most time-intensive part of the problem is generating the conductor model; fortunately, this needs to be done only once for each patient. It takes little time to change the primary current and calculate a new magnetic field for use in the inverse fitting procedure. We present the results of a series of computer simulations in which we investigate the localization accuracy due to replacing the spherical model with the realistic head model in the inverse fitting procedure. The data to be fit consist of a computer-generated magnetic field due to a known current dipole in a realistic head model, with added noise. We compare the localization errors when this field is fit using a spherical model to the fit using a realistic head model. Using a spherical model is comparable to what is usually done when localizing epileptic sources in humans, where the conductor model used in the inverse fitting procedure does not correspond to the actual head.
Finite-element solutions for geothermal systems
NASA Technical Reports Server (NTRS)
Chen, J. C.; Conel, J. E.
1977-01-01
Vector potential and scalar potential are used to formulate the governing equations for a single-component and single-phase geothermal system. By assuming an initial temperature field, the fluid velocity can be determined which, in turn, is used to calculate the convective heat transfer. The energy equation is then solved by considering convected heat as a distributed source. Using the resulting temperature to compute new source terms, the final results are obtained by iterations of the procedure. Finite-element methods are proposed for modeling of realistic geothermal systems; the advantages of such methods are discussed. The developed methodology is then applied to a sample problem. Favorable agreement is obtained by comparisons with a previous study.
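As a deliberately crude illustration of the iteration the abstract describes (velocity from an assumed temperature field, convection folded into the energy equation as a distributed source, then repeat until self-consistent), here is a 1-D finite-difference analogue; the buoyancy closure, parameter values and boundary conditions are all invented for the sketch and are not the paper's finite-element formulation:

```python
import numpy as np

# 1-D steady advection-diffusion:  -k T'' + rho_c * u * T' = q,  T(0)=T0, T(L)=TL,
# with a crude closure u = beta * (mean(T) - T_ref) standing in for buoyancy-driven flow.
n, L = 101, 100.0                      # grid points, domain length [m]
x = np.linspace(0.0, L, n); h = x[1] - x[0]
k, rho_c, q = 2.5, 4.2e6, 1e-3         # conductivity [W/m/K], heat capacity [J/m^3/K], source [W/m^3]
beta, T_ref = 1e-9, 20.0               # assumed buoyancy closure
T0, TL = 80.0, 20.0                    # boundary temperatures [deg C]

T = np.linspace(T0, TL, n)             # initial temperature guess
for it in range(50):
    u = beta * (T.mean() - T_ref)      # step 1: velocity from the current temperature field
    # step 2: rebuild and solve the discrete energy equation (central differences)
    A = np.zeros((n, n)); b = np.full(n, q)
    A[0, 0] = A[-1, -1] = 1.0; b[0], b[-1] = T0, TL
    for i in range(1, n - 1):
        A[i, i - 1] = -k / h**2 - rho_c * u / (2 * h)
        A[i, i]     =  2 * k / h**2
        A[i, i + 1] = -k / h**2 + rho_c * u / (2 * h)
    T_new = np.linalg.solve(A, b)
    if np.max(np.abs(T_new - T)) < 1e-8:   # step 3: iterate to a fixed point
        T = T_new; break
    T = T_new
print("iterations:", it + 1, "  u =", u, "m/s", "  mid-domain T =", T[n // 2])
```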
NASA Astrophysics Data System (ADS)
Long, GuiLu; Qin, Wei; Yang, Zhe; Li, Jun-Lin
2018-06-01
The article Realistic interpretation of quantum mechanics and encounter-delayed-choice experiment, written by GuiLu Long, Wei Qin, Zhe Yang, and Jun-Lin Li, was originally published online without open access. After publication in volume 61, issue 3: 030311 the author decided to opt for Open Choice and to make the article an open access publication. Therefore, the copyright of the article has been changed to The Author(s) 2017 and the article is forthwith distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, duplication, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The original article has been corrected.
NASA Astrophysics Data System (ADS)
Ross, Z. E.; Ben-Zion, Y.; Zhu, L.
2015-02-01
We analyse source tensor properties of seven Mw > 4.2 earthquakes in the complex trifurcation area of the San Jacinto Fault Zone, CA, with a focus on isotropic radiation that may be produced by rock damage in the source volumes. The earthquake mechanisms are derived with generalized 'Cut and Paste' (gCAP) inversions of three-component waveforms typically recorded by >70 stations at regional distances. The gCAP method includes parameters ζ and χ representing, respectively, the relative strength of the isotropic and CLVD source terms. The possible errors in the isotropic and CLVD components due to station variability are quantified with bootstrap resampling for each event. The results indicate statistically significant explosive isotropic components for at least six of the events, corresponding to ~0.4-8 per cent of the total potency/moment of the sources. In contrast, the CLVD components for most events are not found to be statistically significant. Trade-off and correlation between the isotropic and CLVD components are studied using synthetic tests with realistic station configurations. The associated uncertainties are found to be generally smaller than the observed isotropic components. Two different tests with velocity model perturbation are conducted to quantify the uncertainty due to inaccuracies in the Green's functions. Applications of the Mann-Whitney U test indicate statistically significant explosive isotropic terms for most events, consistent with brittle damage production at the source.
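Purely as a schematic of the statistical machinery mentioned (bootstrap resampling and a one-sided Mann-Whitney U test), the snippet below operates on made-up per-station estimates of the isotropic fraction rather than on actual gCAP re-inversions:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

# Hypothetical per-station estimates of the isotropic fraction (per cent) for one
# event, and a reference sample standing in for a zero-isotropic synthetic test.
iso_obs = rng.normal(loc=3.0, scale=1.5, size=70)
iso_null = rng.normal(loc=0.0, scale=1.5, size=70)

# Bootstrap over stations: resample with replacement and recompute the mean.
boot = np.array([rng.choice(iso_obs, size=iso_obs.size, replace=True).mean()
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"isotropic fraction: {iso_obs.mean():.2f}%  (95% bootstrap CI {lo:.2f}-{hi:.2f}%)")

# One-sided Mann-Whitney U test: are the observed values systematically larger
# than expected under the no-isotropic-component null?
stat, p = mannwhitneyu(iso_obs, iso_null, alternative="greater")
print(f"Mann-Whitney U = {stat:.0f}, p = {p:.2e}")
```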
Solar-powered irrigation systems. Technical progress report, July 1977--January 1978
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1978-02-28
Dispersed solar thermal power systems applied to farm irrigation energy needs are analyzed. The 17 western states, containing 84% of nationwide irrigated croplands and consuming 93% of nationwide irrigation energy, have been selected to determine where solar irrigation systems can compete most favorably with conventional energy sources. Financial analysis of farms, according to size and ownership, was accomplished to permit realistic comparative analyses of system lifetime costs. Market potential of optimized systems has been estimated for the 17-state region for near-term (1985) and intermediate-term (2000) applications. Technical, economic, and institutional factors bearing on penetration and capture of this market are being identified.
Toxicity of aged gasoline exhaust particles to normal and diseased airway epithelia
NASA Astrophysics Data System (ADS)
Künzi, Lisa; Krapf, Manuel; Daher, Nancy; Dommen, Josef; Jeannet, Natalie; Schneider, Sarah; Platt, Stephen; Slowik, Jay G.; Baumlin, Nathalie; Salathe, Matthias; Prévôt, André S. H.; Kalberer, Markus; Strähl, Christof; Dümbgen, Lutz; Sioutas, Constantinos; Baltensperger, Urs; Geiser, Marianne
2015-06-01
Particulate matter (PM) pollution is a leading cause of premature death, particularly in those with pre-existing lung disease. A causative link between particle properties and adverse health effects remains unestablished mainly due to complex and variable physico-chemical PM parameters. Controlled laboratory experiments are required. Generating atmospherically realistic aerosols and performing cell-exposure studies at relevant particle-doses are challenging. Here we examine gasoline-exhaust particle toxicity from a Euro-5 passenger car in a uniquely realistic exposure scenario, combining a smog chamber simulating atmospheric ageing, an aerosol enrichment system varying particle number concentration independent of particle chemistry, and an aerosol deposition chamber physiologically delivering particles on air-liquid interface (ALI) cultures reproducing normal and susceptible health status. Gasoline-exhaust is an important PM source with largely unknown health effects. We investigated acute responses of fully-differentiated normal, distressed (antibiotics-treated) normal, and cystic fibrosis human bronchial epithelia (HBE), and a proliferating, single-cell type bronchial epithelial cell-line (BEAS-2B). We show that a single, short-term exposure to realistic doses of atmospherically-aged gasoline-exhaust particles impairs epithelial key-defence mechanisms, rendering it more vulnerable to subsequent hazards. We establish dose-response curves at realistic particle-concentration levels. Significant differences between cell models suggest the use of fully-differentiated HBE is most appropriate in future toxicity studies.
Exposure Render: An Interactive Photo-Realistic Volume Rendering Framework
Kroes, Thomas; Post, Frits H.; Botha, Charl P.
2012-01-01
The field of volume visualization has undergone rapid development during the past years, both due to advances in suitable computing hardware and due to the increasing availability of large volume datasets. Recent work has focused on increasing the visual realism in Direct Volume Rendering (DVR) by integrating a number of visually plausible but often effect-specific rendering techniques, for instance modeling of light occlusion and depth of field. Besides yielding more attractive renderings, especially the more realistic lighting has a positive effect on perceptual tasks. Although these new rendering techniques yield impressive results, they exhibit limitations in terms of their flexibility and their performance. Monte Carlo ray tracing (MCRT), coupled with physically based light transport, is the de facto standard for synthesizing highly realistic images in the graphics domain, although usually not from volumetric data. Due to the stochastic sampling of MCRT algorithms, numerous effects can be achieved in a relatively straightforward fashion. For this reason, we have developed a practical framework that applies MCRT techniques also to direct volume rendering (DVR). With this work, we demonstrate that a host of realistic effects, including physically based lighting, can be simulated in a generic and flexible fashion, leading to interactive DVR with improved realism. In the hope that this improved approach to DVR will see more use in practice, we have made available our framework under a permissive open source license. PMID:22768292
A Space-Time-Frequency Dictionary for Sparse Cortical Source Localization.
Korats, Gundars; Le Cam, Steven; Ranta, Radu; Louis-Dorr, Valerie
2016-09-01
Cortical source imaging aims at identifying activated cortical areas on the surface of the cortex from the raw electroencephalogram (EEG) data. This problem is ill-posed, the number of channels being very low compared to the number of possible source positions. In some realistic physiological situations, the active areas are sparse in space and of short time durations, and the amount of spatio-temporal data to carry the inversion is then limited. In this study, we propose an original data-driven space-time-frequency (STF) dictionary which takes into account simultaneously both spatial and time-frequency sparseness while preserving smoothness in the time frequency (i.e., nonstationary smooth time courses in sparse locations). Based on these assumptions, we take benefit of the matching pursuit (MP) framework for selecting the most relevant atoms in this highly redundant dictionary. We apply two recent MP algorithms, single best replacement (SBR) and source deflated matching pursuit, and we compare the results using a spatial dictionary and the proposed STF dictionary to demonstrate the improvements of our multidimensional approach. We also provide comparison using well-established inversion methods, FOCUSS and RAP-MUSIC, analyzing performances under different degrees of nonstationarity and signal-to-noise ratio. Our STF dictionary combined with the SBR approach provides robust performances on realistic simulations. From a computational point of view, the algorithm is embedded in the wavelet domain, ensuring high efficiency in terms of computation time. The proposed approach ensures fast and accurate sparse cortical localizations on highly nonstationary and noisy data.
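The greedy atom-selection step at the core of matching pursuit is compact enough to sketch; the loop below uses a toy random dictionary (sizes and parameters assumed), not the authors' STF dictionary or the SBR/source-deflated variants:

```python
import numpy as np

def matching_pursuit(D, y, n_atoms=5):
    """Greedy matching pursuit: repeatedly pick the dictionary atom most
    correlated with the residual and subtract its contribution.

    D : (n_samples, n_total_atoms) dictionary with unit-norm columns
    y : (n_samples,) measurement vector
    """
    residual = y.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual              # correlation with every atom
        k = np.argmax(np.abs(corr))        # most relevant atom
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]      # deflate the residual
    return coeffs, residual

# Toy example: 64-sample signal, highly redundant dictionary of 512 random atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 512))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
y = 2.0 * D[:, 10] - 1.5 * D[:, 200] + 0.01 * rng.standard_normal(64)
coeffs, res = matching_pursuit(D, y, n_atoms=5)
print("selected atoms:", np.nonzero(coeffs)[0], " residual norm:", np.linalg.norm(res))
```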
An evaluation of differences due to changing source directivity in room acoustic computer modeling
NASA Astrophysics Data System (ADS)
Vigeant, Michelle C.; Wang, Lily M.
2004-05-01
This project examines the effects of changing source directivity in room acoustic computer models on objective parameters and subjective perception. Acoustic parameters and auralizations calculated from omnidirectional versus directional sources were compared. Three realistic directional sources were used, measured in a limited number of octave bands from a piano, singing voice, and violin. A highly directional source that beams only within one sixteenth of a sphere was also tested. Objectively, there were differences of 5% or more in reverberation time (RT) between the realistic directional and omnidirectional sources. Between the beaming directional and omnidirectional sources, differences in clarity were close to the just-noticeable-difference (jnd) criterion of 1 dB. Subjectively, participants had great difficulty distinguishing between the realistic and omnidirectional sources; very few could discern the differences in RTs. However, a larger percentage (32% vs 20%) could differentiate between the beaming and omnidirectional sources, as well as the respective differences in clarity. Further studies of the objective results from different beaming sources have been pursued. The direction of the beaming source in the room is changed, as well as the beamwidth. The objective results are analyzed to determine if differences fall within the jnd of sound-pressure level, RT, and clarity.
NASA Astrophysics Data System (ADS)
Saide, P. E.; Steinhoff, D.; Kosovic, B.; Weil, J.; Smith, N.; Blewitt, D.; Delle Monache, L.
2017-12-01
There are a wide variety of methods that have been proposed and used to estimate methane emissions from oil and gas production by using air composition and meteorology observations in conjunction with dispersion models. Although there has been some verification of these methodologies using controlled releases and concurrent atmospheric measurements, it is difficult to assess the accuracy of these methods for more realistic scenarios considering factors such as terrain, emissions from multiple components within a well pad, and time-varying emissions representative of typical operations. In this work we use a large-eddy simulation (LES) to generate controlled but realistic synthetic observations, which can be used to test multiple source term estimation methods, also known as an Observing System Simulation Experiment (OSSE). The LES is based on idealized simulations of the Weather Research & Forecasting (WRF) model at 10 m horizontal grid-spacing covering an 8 km by 7 km domain with terrain representative of a region located in the Barnett shale. Well pads are setup in the domain following a realistic distribution and emissions are prescribed every second for the components of each well pad (e.g., chemical injection pump, pneumatics, compressor, tanks, and dehydrator) using a simulator driven by oil and gas production volume, composition and realistic operational conditions. The system is setup to allow assessments under different scenarios such as normal operations, during liquids unloading events, or during other prescribed operational upset events. Methane and meteorology model output are sampled following the specifications of the emission estimation methodologies and considering typical instrument uncertainties, resulting in realistic observations (see Figure 1). We will show the evaluation of several emission estimation methods including the EPA Other Test Method 33A and estimates using the EPA AERMOD regulatory model. We will also show source estimation results from advanced methods such as variational inverse modeling, and Bayesian inference and stochastic sampling techniques. Future directions including other types of observations, other hydrocarbons being considered, and assessment of additional emission estimation methods will be discussed.
Evaluation of the communications impact of a low power arcjet thruster
NASA Technical Reports Server (NTRS)
Carney, Lynnette M.
1988-01-01
The interaction of a 1 kW arcjet thruster plume with a communications signal is evaluated. A two-parameter, source flow equation has been used to represent the far flow field distribution of the arcjet plume in a realistic spacecraft configuration. Modelling the plume as a plasma slab, the interaction of the plume with a 4 GHz communications signal is then evaluated in terms of signal attenuation and phase shift between transmitting and receiving antennas. Except for propagation paths which pass very near the arcjet source, the impacts to transmission appear to be negligible. The dominant signal loss mechanism is refraction of the beam rather than absorption losses due to collisions. However, significant reflection of the signal at the sharp vacuum-plasma boundary may also occur for propagation paths which pass near the source.
Leptogenesis from gravity waves in models of inflation.
Alexander, Stephon H S; Peskin, Michael E; Sheikh-Jabbari, M M
2006-03-03
We present a new mechanism for creating the observed cosmic matter-antimatter asymmetry which satisfies all three Sakharov conditions from one common thread, gravitational waves. We generate lepton number through the gravitational anomaly in the lepton number current. The source term comes from elliptically polarized gravity waves that are produced during inflation if the inflaton field contains a CP-odd component. The amount of matter asymmetry generated in our model can be of realistic size for the parameters within the range of some inflationary scenarios and grand unified theories.
Leptogenesis from Gravitational Waves and CP Violation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexander, S
2004-03-05
We present a new mechanism for creating the observed cosmic matter-antimatter asymmetry which satisfies all three Sakharov conditions from one common thread, gravitational waves. We generate lepton number through the gravitational anomaly in the lepton number current. The source term comes from elliptically polarized gravity waves that are produced during inflation if the inflaton field contains a CP-odd component. In simple inflationary scenarios, the generated matter asymmetry is very small. We describe some special conditions in which our mechanism can give a matter asymmetry of realistic size.
NASA Astrophysics Data System (ADS)
Sharifian, Mohammad Kazem; Kesserwani, Georges; Hassanzadeh, Yousef
2018-05-01
This work extends a robust second-order Runge-Kutta Discontinuous Galerkin (RKDG2) method to solve fully nonlinear and weakly dispersive flows, with the aim of simultaneously addressing accuracy, conservation, cost-efficiency and practical needs. The mathematical model governing such flows is based on a variant form of the Green-Naghdi (GN) equations decomposed as a hyperbolic shallow water system with an elliptic source term. Practical features of relevance (i.e. conservative modeling over irregular terrain with wetting and drying and local slope limiting) have been carried over from an RKDG2 solver for the Nonlinear Shallow Water (NSW) equations, alongside new considerations to integrate elliptic source terms (i.e. via a fourth-order local discretization of the topography) and to enable local capturing of breaking waves (i.e. via adding a detector for switching off the dispersive terms). Numerical results are presented, demonstrating the overall capability of the proposed approach in achieving realistic prediction of nearshore wave processes involving both nonlinearity and dispersion effects within a single model.
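Schematically, and with notation assumed here rather than taken from the paper, the splitting described amounts to advancing the nonlinear shallow water system with an extra dispersive source D that is recovered at each step from an elliptic problem:

```latex
\partial_t h + \nabla\!\cdot\!(h\mathbf{u}) = 0, \qquad
\partial_t (h\mathbf{u}) + \nabla\!\cdot\!\left( h\,\mathbf{u}\otimes\mathbf{u}
 + \tfrac{1}{2}\,g\,h^{2}\,\mathbf{I} \right) = -\,g\,h\,\nabla z_b + h\,\mathbf{D},
```

where z_b is the bed elevation and D collects the Green-Naghdi dispersive corrections obtained from an elliptic (second-order in space) operator acting on the flow variables; the breaking-wave detector simply switches D off locally where waves are flagged as breaking.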
Life-cycle energy impacts for adapting an urban water supply system to droughts.
Lam, Ka Leung; Stokes-Draut, Jennifer R; Horvath, Arpad; Lane, Joe L; Kenway, Steven J; Lant, Paul A
2017-12-15
In recent years, cities in some water stressed regions have explored alternative water sources such as seawater desalination and potable water recycling in spite of concerns over increasing energy consumption. In this study, we evaluate the current and future life-cycle energy impacts of four alternative water supply strategies introduced during a decade-long drought in South East Queensland (SEQ), Australia. These strategies were: seawater desalination, indirect potable water recycling, network integration, and rainwater tanks. Our work highlights the energy burden of alternative water supply strategies which added approximately 24% life-cycle energy use to the existing supply system (with surface water sources) in SEQ even for a current post-drought low utilisation status. Over half of this additional life-cycle energy use was from the centralised alternative supply strategies. Rainwater tanks contributed an estimated 3% to regional water supply, but added over 10% life-cycle energy use to the existing system. In the future scenario analysis, we compare the life-cycle energy use between "Normal", "Dry", "High water demand" and "Design capacity" scenarios. In the "Normal" scenario, a long-term low utilisation of the desalination system and the water recycling system has greatly reduced the energy burden of these centralised strategies to only 13%. In contrast, higher utilisation in the unlikely "Dry" and "Design capacity" scenarios add 86% and 140% to life-cycle energy use of the existing system respectively. In the "High water demand" scenario, a 20% increase in per capita water use over 20 years "consumes" more energy than is used by the four alternative strategies in the "Normal" scenario. This research provides insight for developing more realistic long-term scenarios to evaluate and compare life-cycle energy impacts of drought-adaptation infrastructure and regional decentralised water sources. Scenario building for life-cycle assessments of water supply systems should consider i) climate variability and, therefore, infrastructure utilisation rate, ii) potential under-utilisation for both installed centralised and decentralised sources, and iii) the potential energy penalty for operating infrastructure well below its design capacity (e.g., the operational energy intensity of the desalination system is three times higher at low utilisation rates). This study illustrates that evaluating the life-cycle energy use and intensity of these type of supply sources without considering their realistic long-term operating scenario(s) can potentially distort and overemphasise their energy implications. To other water stressed regions, this work shows that managing long-term water demand is also important, in addition to acknowledging the energy-intensive nature of some alternative water sources. Copyright © 2017 Elsevier Ltd. All rights reserved.
Modeling of Nonlinear Beat Signals of TAE's
NASA Astrophysics Data System (ADS)
Zhang, Bo; Berk, Herbert; Breizman, Boris; Zheng, Linjin
2012-03-01
Experiments on Alcator C-Mod reveal Toroidal Alfven Eigenmodes (TAE) together with signals at various beat frequencies, including those at twice the mode frequency. The beat frequencies are sidebands driven by quadratic nonlinear terms in the MHD equations. These nonlinear sidebands have not yet been quantified by any existing codes. We extend the AEGIS code to capture nonlinear effects by treating the nonlinear terms as a driving source in the linear MHD solver. Our goal is to compute the spatial structure of the sidebands for realistic geometry and q-profile, which can be directly compared with experiment in order to interpret the phase contrast imaging diagnostic measurements and to enable the quantitative determination of the Alfven wave amplitude in the plasma core
NASA Astrophysics Data System (ADS)
Winiarek, Victor; Vira, Julius; Bocquet, Marc; Sofiev, Mikhail; Saunier, Olivier
2011-06-01
In the event of an accidental atmospheric release of radionuclides from a nuclear power plant, accurate real-time forecasting of the activity concentrations of radionuclides is required by the decision makers for the preparation of adequate countermeasures. The accuracy of the forecast plume is highly dependent on the source term estimation. On several academic test cases, including real data, inverse modelling and data assimilation techniques were proven to help in the assessment of the source term. In this paper, a semi-automatic method is proposed for the sequential reconstruction of the plume, by implementing a sequential data assimilation algorithm based on inverse modelling, with care taken to develop realistic methods for operational risk agencies. The performance of the assimilation scheme has been assessed through an intercomparison between French and Finnish frameworks. Two dispersion models have been used: Polair3D and Silam, developed in two different research centres. Different release locations, as well as different meteorological situations, are tested. The existing and newly planned surveillance networks are used and realistically large multiplicative observational errors are assumed. The inverse modelling scheme accounts for the strong error bias encountered with such errors. The efficiency of the data assimilation system is tested via statistical indicators. For France and Finland, the average performance of the data assimilation system is strong. However, there are outlying situations where the inversion fails because the observability is too poor. In addition, in the case where the power plant responsible for the accidental release is not known, robust statistical tools are developed and tested to discriminate candidate release sites.
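The inverse-modelling building block behind such a reconstruction can be sketched as regularized least squares linking unknown release rates to activity-concentration observations through a source-receptor (Jacobian) matrix; everything in the toy example below (the transport kernel, error statistics and release profile) is invented and is not the Polair3D/Silam implementation:

```python
import numpy as np

def estimate_source(H, y, R_diag, B_diag):
    """Minimize (y - H s)^T R^-1 (y - H s) + s^T B^-1 s for the release rates s.

    H      : (n_obs, n_src) source-receptor matrix from the dispersion model
    y      : (n_obs,) observed activity concentrations
    R_diag : observation-error variances;  B_diag : prior (background) variances
    """
    Rinv = np.diag(1.0 / R_diag)
    Binv = np.diag(1.0 / B_diag)
    lhs = H.T @ Rinv @ H + Binv
    rhs = H.T @ Rinv @ y
    return np.linalg.solve(lhs, rhs)

# Toy setup: 12 hourly release rates observed through 40 noisy measurements.
rng = np.random.default_rng(3)
H = np.abs(rng.standard_normal((40, 12))) * 1e-6                        # invented transport kernel
s_true = np.concatenate([np.zeros(4), 5e9 * np.ones(5), np.zeros(3)])   # release rates [Bq/h]
y = H @ s_true * rng.lognormal(0.0, 0.3, size=40)                       # multiplicative obs error
s_hat = estimate_source(H, y, R_diag=(0.3 * y + 1e-3) ** 2,
                        B_diag=np.full(12, (1e10) ** 2))
print(np.round(s_hat / 1e9, 2))   # reconstructed release rates in GBq/h
```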
2.5D Modeling of TEM Data Applied to Hydrogeological Studies in the Paraná Basin, Brazil
NASA Astrophysics Data System (ADS)
Bortolozo, C. A.; Porsani, J. L.; Santos, F. M.
2013-12-01
The transient electromagnetic method (TEM) is used all over the world and has shown great potential in hydrological studies, hazardous waste site characterization, mineral exploration, general geological mapping, and geophysical reconnaissance. However, the behavior of TEM fields is very complex and not yet fully understood. Forward modeling is one of the most common and effective methods to understand the physical behavior and significance of the electromagnetic responses of a TEM sounding. Until now, only a limited number of solutions exist for the 2D forward problem for TEM. Rarer still are descriptions of the three-component response of a 3D source over a 2D earth, the so-called 2.5D case. The 2.5D approach is more realistic than the conventional 2D source previously used, since the source normally cannot be realistically represented by a 2D approximation (sources are normally square loops). At present the 2.5D model represents the only way of interpreting TEM data in terms of a complex earth, due to the prohibitive amount of computer time and storage required for a full 3D model. In this work we developed a TEM modeling program for understanding the different responses and how the magnetic and electric fields, produced by loop sources at the air-earth interface, behave in different geoelectrical distributions. The models used in the examples are proposed with hydrogeological studies in mind, since the main objective of this work is to detect different kinds of aquifers in the Paraná sedimentary basin, São Paulo State, Brazil. The program was developed in MATLAB, a widespread language very common in the scientific community.
Local tsunamis and earthquake source parameters
Geist, Eric L.; Dmowska, Renata; Saltzman, Barry
1999-01-01
This chapter establishes the relationship among earthquake source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during the earthquake rupture is modeled using the elastic dislocation theory for which the displacement field is dependent on the slip distribution, fault geometry, and the elastic response and properties of the medium. Specifically, nonlinear long-wave theory governs the propagation and run-up of tsunamis. A parametric study is devised to examine the relative importance of individual earthquake source parameters on local tsunamis, because the physics that describes tsunamis from generation through run-up is complex. Analysis of the source parameters of various tsunamigenic earthquakes have indicated that the details of the earthquake source, namely, nonuniform distribution of slip along the fault plane, have a significant effect on the local tsunami run-up. Numerical methods have been developed to address the realistic bathymetric and shoreline conditions. The accuracy of determining the run-up on shore is directly dependent on the source parameters of the earthquake, which provide the initial conditions used for the hydrodynamic models.
Harmony: EEG/MEG Linear Inverse Source Reconstruction in the Anatomical Basis of Spherical Harmonics
Petrov, Yury
2012-01-01
EEG/MEG source localization based on a “distributed solution” is severely underdetermined, because the number of sources is much larger than the number of measurements. In particular, this makes the solution strongly affected by sensor noise. A new way to constrain the problem is presented. By using the anatomical basis of spherical harmonics (or spherical splines) instead of single dipoles, the dimensionality of the inverse solution is greatly reduced without sacrificing the quality of the data fit. The smoothness of the resulting solution reduces the surface bias and scatter of the sources (incoherency) compared to the popular minimum-norm algorithms where a single-dipole basis is used (MNE, depth-weighted MNE, dSPM, sLORETA, LORETA, IBF) and allows the effect of sensor noise to be reduced efficiently. This approach, termed Harmony, performed well when applied to experimental data (two exemplars of early evoked potentials) and showed better localization precision and solution coherence than the other tested algorithms when applied to realistically simulated data. PMID:23071497
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunett, Acacia J.; Bucknor, Matthew; Grabaskas, David
A vital component of the U.S. reactor licensing process is an integrated safety analysis in which a source term representing the release of radionuclides during normal operation and accident sequences is analyzed. Historically, source term analyses have utilized bounding, deterministic assumptions regarding radionuclide release. However, advancements in technical capabilities and the knowledge state have enabled the development of more realistic and best-estimate retention and release models such that a mechanistic source term assessment can be expected to be a required component of future licensing of advanced reactors. Recently, as part of a Regulatory Technology Development Plan effort for sodium-cooled fast reactors (SFRs), Argonne National Laboratory has investigated the current state of knowledge of potential source terms in an SFR via an extensive review of previous domestic experiments, accidents, and operation. As part of this work, the significant sources and transport processes of radionuclides in an SFR have been identified and characterized. This effort examines all stages of release and source term evolution, beginning with release from the fuel pin and ending with retention in containment. Radionuclide sources considered in this effort include releases originating both in-vessel (e.g. in-core fuel, primary sodium, cover gas cleanup system, etc.) and ex-vessel (e.g. spent fuel storage, handling, and movement). Releases resulting from a primary sodium fire are also considered as a potential source. For each release group, dominant transport phenomena are identified and qualitatively discussed. The key product of this effort was the development of concise, inclusive diagrams that illustrate the release and retention mechanisms at a high level, where unique schematics have been developed for in-vessel, ex-vessel and sodium fire releases. This review effort has also found that despite the substantial range of phenomena affecting radionuclide release, the current state of knowledge is extensive, and in most areas may be sufficient. Several knowledge gaps were identified, such as uncertainty in release from molten fuel and availability of thermodynamic data for lanthanides and actinides in liquid sodium. However, the overall findings suggest that high retention rates can be expected within the fuel and primary sodium for all radionuclides other than noble gases.
Nonlinear synthesis of infrasound propagation through an inhomogeneous, absorbing atmosphere.
de Groot-Hedlin, C D
2012-08-01
An accurate and efficient method to predict infrasound amplitudes from large explosions in the atmosphere is required for diverse source types, including bolides, volcanic eruptions, and nuclear and chemical explosions. A finite-difference, time-domain approach is developed to solve a set of nonlinear fluid dynamic equations for total pressure, temperature, and density fields rather than acoustic perturbations. Three key features for the purpose of synthesizing nonlinear infrasound propagation in realistic media are that it includes gravitational terms, it allows for acoustic absorption, including molecular vibration losses at frequencies well below the molecular vibration frequencies, and the environmental models are constrained to have axial symmetry, allowing a three-dimensional simulation to be reduced to two dimensions. Numerical experiments are performed to assess the algorithm's accuracy and the effect of source amplitudes and atmospheric variability on infrasound waveforms and shock formation. Results show that infrasound waveforms steepen and their associated spectra are shifted to higher frequencies for nonlinear sources, leading to enhanced infrasound attenuation. Results also indicate that nonlinear infrasound amplitudes depend strongly on atmospheric temperature and pressure variations. The solution for total field variables and insertion of gravitational terms also allows for the computation of other disturbances generated by explosions, including gravity waves.
Environmental performance of bio-based and biodegradable plastics: the road ahead.
Lambert, Scott; Wagner, Martin
2017-11-13
Future plastic materials will be very different from those that are used today. The increasing importance of sustainability promotes the development of bio-based and biodegradable polymers, sometimes misleadingly referred to as 'bioplastics'. Because both terms imply "green" sources and "clean" removal, this paper aims at critically discussing the sometimes-conflicting terminology as well as renewable sources, with a special focus on the degradation of these polymers in natural environments. With regard to the former, we review innovations in feedstock development (e.g. microalgae and food wastes). In terms of the latter, we highlight the effects that polymer structure, additives, and environmental variables have on plastic biodegradability. We argue that a 'biodegradable' end-product does not necessarily degrade once emitted to the environment, because the chemical additives used to make it fit for purpose will increase its longevity. In the future, this trend may continue as the plastics industry is also expected to be a major user of nanocomposites. Overall, there is a need to assess the performance of polymer innovations in terms of their biodegradability, especially under realistic waste management and environmental conditions, to avoid the unwanted release of plastic degradation products into receiving environments.
NASA Astrophysics Data System (ADS)
Ding, Lei; Lai, Yuan; He, Bin
2005-01-01
It is of importance to localize neural sources from scalp recorded EEG. Low resolution brain electromagnetic tomography (LORETA) has received considerable attention for localizing brain electrical sources. However, most such efforts have used spherical head models in representing the head volume conductor. Investigation of the performance of LORETA in a realistic geometry head model, as compared with the spherical model, will provide useful information guiding interpretation of data obtained by using the spherical head model. The performance of LORETA was evaluated by means of computer simulations. The boundary element method was used to solve the forward problem. A three-shell realistic geometry (RG) head model was constructed from MRI scans of a human subject. Dipole source configurations of a single dipole located at different regions of the brain with varying depth were used to assess the performance of LORETA in different regions of the brain. A three-sphere head model was also used to approximate the RG head model, and similar simulations performed, and results compared with the RG-LORETA with reference to the locations of the simulated sources. Multi-source localizations were discussed and examples given in the RG head model. Localization errors employing the spherical LORETA, with reference to the source locations within the realistic geometry head, were about 20-30 mm, for four brain regions evaluated: frontal, parietal, temporal and occipital regions. Localization errors employing the RG head model were about 10 mm over the same four brain regions. The present simulation results suggest that the use of the RG head model reduces the localization error of LORETA, and that the RG head model based LORETA is desirable if high localization accuracy is needed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faby, Sebastian, E-mail: sebastian.faby@dkfz.de; Kuchenbecker, Stefan; Sawall, Stefan
2015-07-15
Purpose: To study the performance of different dual energy computed tomography (DECT) techniques, which are available today, and future multi energy CT (MECT) employing novel photon counting detectors in an image-based material decomposition task. Methods: The material decomposition performance of different energy-resolved CT acquisition techniques is assessed and compared in a simulation study of virtual non-contrast imaging and iodine quantification. The material-specific images are obtained via a statistically optimal image-based material decomposition. A projection-based maximum likelihood approach was used for comparison with the authors’ image-based method. The different dedicated dual energy CT techniques are simulated employing realistic noise models and x-ray spectra. The authors compare dual source DECT with fast kV switching DECT and the dual layer sandwich detector DECT approach. Subsequent scanning and a subtraction method are studied as well. Further, the authors benchmark future MECT with novel photon counting detectors in a dedicated DECT application against the performance of today’s DECT using a realistic model. Additionally, possible dual source concepts employing photon counting detectors are studied. Results: The DECT comparison study shows that dual source DECT has the best performance, followed by the fast kV switching technique and the sandwich detector approach. Comparing DECT with future MECT, the authors found noticeable material image quality improvements for an ideal photon counting detector; however, a realistic detector model with multiple energy bins predicts a performance on the level of dual source DECT at 100 kV/Sn 140 kV. Employing photon counting detectors in dual source concepts can improve the performance again above the level of a single realistic photon counting detector and also above the level of dual source DECT. Conclusions: Substantial differences in the performance of today’s DECT approaches were found for the application of virtual non-contrast and iodine imaging. Future MECT with realistic photon counting detectors currently can only perform comparably to dual source DECT at 100 kV/Sn 140 kV. Dual source concepts with photon counting detectors could be a solution to this problem, promising a better performance.
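As a minimal illustration of what an image-based two-material decomposition involves (a per-voxel linear unmixing of the two reconstructed attenuation images), the sketch below uses invented basis attenuation values and no statistical noise weighting, unlike the statistically optimal method used in the paper:

```python
import numpy as np

# Stand-in reconstructed attenuation images (1/cm) at low- and high-energy settings.
rng = np.random.default_rng(0)
mu_low = 0.25 + 0.05 * rng.random((4, 4))
mu_high = 0.20 + 0.03 * rng.random((4, 4))

# Assumed basis-material attenuation values (water, iodine) at the two energies;
# the numbers are arbitrary and purely illustrative.
M = np.array([[0.227, 5.0],    # low energy
              [0.184, 2.0]])   # high energy

# Per-voxel decomposition: solve M @ [c_water, c_iodine] = [mu_low, mu_high].
stack = np.stack([mu_low.ravel(), mu_high.ravel()])   # shape (2, n_voxels)
coeffs = np.linalg.solve(M, stack)                    # shape (2, n_voxels)
water_img = coeffs[0].reshape(mu_low.shape)
iodine_img = coeffs[1].reshape(mu_low.shape)          # "iodine map"
vnc_img = water_img * M[0, 0]                         # crude virtual non-contrast image (iodine removed)
print("mean iodine signal:", iodine_img.mean())
```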
Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong
2008-12-01
How to localize neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and an iterative re-weighted strategy, a new maximum-neighbor-weight-based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution), is proposed. Unlike the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the source solution of the previous iteration at both the point and its neighbors. With this new weight, the next iteration has a better chance of rectifying the local source location bias present in the previous iteration's solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimulation experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
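A minimal sketch of a FOCUSS-style reweighted minimum-norm iteration with a neighbor-based weight, in the spirit of the method described above. The lead field L, the neighbor lists, and the maximum-over-neighborhood weighting rule are illustrative assumptions; the published CMOSS weight and its charge source model are not reproduced exactly.

```python
import numpy as np

def neighbor_weighted_focuss(L, b, neighbors, n_iter=20, eps=1e-12):
    """FOCUSS-style iteration in which each point's weight is built from the
    previous solution at the point AND its neighbors (here the maximum over
    the neighborhood; the published rule may differ)."""
    n = L.shape[1]
    x = np.ones(n)
    for _ in range(n_iter):
        w = np.array([max(abs(x[j]) for j in [i] + neighbors[i]) for i in range(n)])
        W = np.diag(w + eps)
        # weighted minimum-norm step: x = W L^T (L W L^T)^-1 b
        x = W @ L.T @ np.linalg.solve(L @ W @ L.T + eps * np.eye(L.shape[0]), b)
    return x

rng = np.random.default_rng(0)
L = rng.standard_normal((16, 40))                 # toy lead-field matrix
x_true = np.zeros(40); x_true[5] = 1.0            # single active point
b = L @ x_true
neighbors = [[max(i - 1, 0), min(i + 1, 39)] for i in range(40)]
x_hat = neighbor_weighted_focuss(L, b, neighbors)
print(int(np.argmax(np.abs(x_hat))))              # ideally near index 5
```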
Importance of Including Topography in Numerical Simulations of Venus' Atmospheric Circulation
NASA Astrophysics Data System (ADS)
Parish, H. F.; Schubert, G.; Lebonnois, S.; Covey, C. C.; Walterscheid, R. L.; Grossman, A.
2012-12-01
Venus' atmosphere is characterized by strong superrotation, in which the wind velocities at cloud heights are around 60 times faster than the surface rotation rate. The reasons for this strong superrotation are still not well understood. Since the surface of the planet is both a source and sink of atmospheric angular momentum it is important to understand and properly account for the interactions at the surface-atmosphere boundary. A key aspect of the surface-atmosphere interaction is the topography. Topography has been introduced into different general circulation models (GCMs) of Venus' atmosphere, producing significant, but widely varying effects on the atmospheric circulation. The reasons for the inconsistencies among model results are not well known, but our studies suggest they might be related to the influences of different dynamical cores. In our recent study, we have analyzed the angular momentum budget for two Venus GCMs, the Venus Community Atmosphere model (Venus CAM) and the Laboratoire de Meteorologie Dynamique (LMD) Venus GCM. Because of Venus' low magnitude surface winds, surface friction alone supplies only a relatively weak angular momentum forcing to the atmosphere. We find that if surface friction is introduced without including surface topography, the angular momentum balance of the atmosphere may be dominated by effects such as numerical diffusion, a sponge layer, or other numerical residuals that are generally included in all GCMs, and can themselves be sources of angular momentum. However, we find the mountain torque associated with realistic Venus surface topography supplies a much larger source of angular momentum than the surface friction, and dominates nonphysical numerical terms. (A similar effect occurs for rapidly rotating planets like Earth, but in this case numerical errors in the angular momentum budget are relatively small even in the absence of mountain torque). Even if surface friction dominates numerical terms in the angular momentum budgets of simulations without realistic topography, it must be remembered that there are no observational constraints on model parameterizations of the real surface friction on Venus. It is essential for a planet such as Venus, for which surface friction alone supplies only weak angular momentum forcing, to include surface topography to generate realistic forcing of angular momentum and avoid the influences of numerical artifacts, which can be significant. Venus' topography, as mapped using measurements from the Magellan mission, shows significant hemispheric asymmetry. In this work we examine the impact of this asymmetry using simulations of Venus' circulation with and without topography, within the latest version of the Venus CAM GCM.
Personal assistance services in the workplace: A literature review.
Dowler, Denetta L; Solovieva, Tatiana I; Walls, Richard T
2011-10-01
Personal assistance services (PAS) can be valuable adjuncts to the complement of accommodations that support workers with disabilities. This literature review explored the professional literature on the use of PAS in the workplace. Bibliographic sources were used to locate relevant research studies on the use of PAS in the workplace. The studies in this review used both qualitative and quantitative methods to identify current definitions of work-related and personal care-related PAS, agency-directed versus consumer-directed PAS, long-term and short-term funding issues, development of PAS policy, and barriers to successful implementation of PAS. The studies uncovered issues related to (a) recruiting, training, and retaining personal assistants, (b) employer concerns, (c) costs and benefits of workplace PAS, (d) wages and incentives for personal assistants, and (e) sources for financing PAS as a workplace accommodation. The findings reveal the value and benefits of effective PAS on the job. PAS can lead to successful employment of people with disabilities when other accommodations cannot provide adequate workplace support. Additionally, the evolution of workplace PAS is dependent on development of realistic PAS policy and funding options. Published by Elsevier Inc.
Methods for nuclear air-cleaning-system accident-consequence assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrae, R.W.; Bolstad, J.W.; Gregory, W.S.
1982-01-01
This paper describes a multilaboratory research program that is directed toward addressing many questions that analysts face when performing air cleaning accident consequence assessments. The program involves developing analytical tools and supportive experimental data that will be useful in making more realistic assessments of accident source terms within and up to the atmospheric boundaries of nuclear fuel cycle facilities. The types of accidents considered in this study include fires, explosions, spills, tornadoes, criticalities, and equipment failures. The main focus of the program is developing an accident analysis handbook (AAH). We will describe the contents of the AAH, which include descriptions of selected nuclear fuel cycle facilities, process unit operations, source-term development, and accident consequence analyses. Three computer codes designed to predict gas and material propagation through facility air cleaning systems are described. These computer codes address accidents involving fires (FIRAC), explosions (EXPAC), and tornadoes (TORAC). The handbook relies on many illustrative examples to show the analyst how to approach accident consequence assessments. We will use the FIRAC code and a hypothetical fire scenario to illustrate the accident analysis capability.
Chiavassa, S; Lemosquet, A; Aubineau-Lanièce, I; de Carlan, L; Clairand, I; Ferrer, L; Bardiès, M; Franck, D; Zankl, M
2005-01-01
This paper aims at comparing dosimetric assessments performed with three Monte Carlo codes: EGS4, MCNP4c2 and MCNPX2.5e, using a realistic voxel phantom, namely the Zubal phantom, in two configurations of exposure. The first one deals with an external irradiation corresponding to the example of a radiological accident. The results are obtained using the EGS4 and the MCNP4c2 codes and expressed in terms of the mean absorbed dose (in Gy per source particle) for brain, lungs, liver and spleen. The second one deals with an internal exposure corresponding to the treatment of a medullary thyroid cancer by 131I-labelled radiopharmaceutical. The results are obtained by EGS4 and MCNPX2.5e and compared in terms of S-values (expressed in mGy per kBq and per hour) for liver, kidney, whole body and thyroid. The results of these two studies are presented and differences between the codes are analysed and discussed.
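As a reminder of what the reported S-values represent, the sketch below converts a Monte Carlo tally of mean energy deposited per source decay into mGy per kBq per hour; the organ mass and energy figures are hypothetical, not values from the study.

```python
MEV_TO_J = 1.602176634e-13
DECAYS_PER_KBQ_HOUR = 1e3 * 3600.0   # 3.6e6 decays per kBq per hour

def s_value_mGy_per_kBq_h(edep_mev_per_decay, organ_mass_kg):
    """Convert a Monte Carlo tally of mean energy deposited in a target organ
    per source decay (MeV) into an S-value in mGy per kBq per hour."""
    gy_per_decay = edep_mev_per_decay * MEV_TO_J / organ_mass_kg
    return gy_per_decay * DECAYS_PER_KBQ_HOUR * 1e3   # Gy -> mGy

# Hypothetical numbers: 0.05 MeV deposited in a 1.8 kg liver per 131I decay.
print(s_value_mGy_per_kBq_h(0.05, 1.8))
```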
Wennberg, Richard; Cheyne, Douglas
2014-05-01
To assess the reliability of MEG source imaging (MSI) of anterior temporal spikes through detailed analysis of the localization and orientation of source solutions obtained for a large number of spikes that were separately confirmed by intracranial EEG to be focally generated within a single, well-characterized spike focus. MSI was performed on 64 identical right anterior temporal spikes from an anterolateral temporal neocortical spike focus. The effects of different volume conductors (sphere and realistic head model), removal of noise with low frequency filters (LFFs) and averaging multiple spikes were assessed in terms of the reliability of the source solutions. MSI of single spikes resulted in scattered dipole source solutions that showed reasonable reliability for localization at the lobar level, but only for solutions with a goodness-of-fit exceeding 80% using a LFF of 3 Hz. Reliability at a finer level of intralobar localization was limited. Spike averaging significantly improved the reliability of source solutions and averaging 8 or more spikes reduced dependency on goodness-of-fit and data filtering. MSI performed on topographically identical individual spikes from an intracranially defined classical anterior temporal lobe spike focus was limited by low reliability (i.e., scattered source solutions) in terms of fine, sublobar localization within the ipsilateral temporal lobe. Spike averaging significantly improved reliability. MSI performed on individual anterior temporal spikes is limited by low reliability. Reduction of background noise through spike averaging significantly improves the reliability of MSI solutions. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
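The benefit of spike averaging can be illustrated with a toy calculation: averaging N aligned epochs reduces uncorrelated background noise roughly as 1/sqrt(N). The template shape, noise level, and epoch counts below are arbitrary illustrations, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_spikes, n_samples = 64, 300
template = np.sin(np.linspace(0, np.pi, n_samples))                # idealized spike shape
epochs = template + rng.normal(0.0, 0.5, (n_spikes, n_samples))    # noisy single spikes

for n in (1, 8, 64):
    avg = epochs[:n].mean(axis=0)
    residual_noise = np.std(avg - template)
    print(n, residual_noise)   # residual noise shrinks roughly as 1/sqrt(n)
```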
NASA Astrophysics Data System (ADS)
Restrepo, Doriam; Bielak, Jacobo; Serrano, Ricardo; Gómez, Juan; Jaramillo, Juan
2016-03-01
This paper presents a set of deterministic 3-D ground motion simulations for the greater metropolitan area of Medellín in the Aburrá Valley, an earthquake-prone region of the Colombian Andes that exhibits moderate-to-strong topographic irregularities. We created the velocity model of the Aburrá Valley region (version 1) using the geological structures as a basis for determining the shear wave velocity. The irregular surficial topography is considered by means of a fictitious domain strategy. The simulations cover a 50 × 50 × 25 km3 volume, and four Mw = 5 rupture scenarios along a segment of the Romeral fault, a significant source of seismic activity in Colombia. In order to examine the sensitivity of ground motion to the irregular topography and the 3-D effects of the valley, each earthquake scenario was simulated with three different models: (i) realistic 3-D velocity structure plus realistic topography, (ii) realistic 3-D velocity structure without topography, and (iii) homogeneous half-space with realistic topography. Our results show how surface topography affects the ground response. In particular, our findings highlight the importance of the combined interaction between source-effects, source-directivity, focusing, soft-soil conditions, and 3-D topography. We provide quantitative evidence of this interaction and show that topographic amplification factors can be as high as 500 per cent at some locations. In other areas within the valley, the topographic effects result in relative reductions, but these lie in the 0-150 per cent range.
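One plausible way to express the topographic amplification quoted above is as a percent change of a ground-motion measure relative to the flat-surface simulation; the definition actually used in the paper (e.g., spectral ratios) may differ, and the peak-ground-acceleration values below are hypothetical.

```python
import numpy as np

def amplification_percent(pga_with_topo, pga_without_topo):
    """Topographic amplification at a site, expressed as a percent change of
    peak ground acceleration relative to the flat-surface simulation."""
    return 100.0 * (pga_with_topo - pga_without_topo) / pga_without_topo

print(amplification_percent(np.array([0.30, 0.12]), np.array([0.05, 0.15])))
# -> [500., -20.]  (amplification at one site, relative reduction at another)
```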
Screening for Cellulase Encoding Clones in Metagenomic Libraries.
Ilmberger, Nele; Streit, Wolfgang R
2017-01-01
For modern biotechnology there is a steady need to identify novel enzymes. In biotechnological applications, however, enzymes often must function under extreme and nonnatural conditions (i.e., in the presence of solvents, high temperature and/or at extreme pH values). Cellulases have many industrial applications from the generation of bioethanol, a realistic long-term energy source, to the finishing of textiles. These industrial processes require cellulolytic activity under a wide range of pH, temperature, and ionic conditions, and they are usually carried out by mixtures of cellulases. Investigation of the broad diversity of cellulolytic enzymes involved in the natural degradation of cellulose is necessary for optimizing these processes.
Computation of nonlinear ultrasound fields using a linearized contrast source method.
Verweij, Martin D; Demi, Libertario; van Dongen, Koen W A
2013-08-01
Nonlinear ultrasound is important in medical diagnostics because imaging of the higher harmonics improves resolution and reduces scattering artifacts. Second harmonic imaging is currently standard, and higher harmonic imaging is under investigation. The efficient development of novel imaging modalities and equipment requires accurate simulations of nonlinear wave fields in large volumes of realistic (lossy, inhomogeneous) media. The Iterative Nonlinear Contrast Source (INCS) method has been developed to deal with spatiotemporal domains measuring hundreds of wavelengths and periods. This full wave method considers the nonlinear term of the Westervelt equation as a nonlinear contrast source, and solves the equivalent integral equation via the Neumann iterative solution. Recently, the method has been extended with a contrast source that accounts for spatially varying attenuation. The current paper addresses the problem that the Neumann iterative solution converges badly for strong contrast sources. The remedy is linearization of the nonlinear contrast source, combined with application of more advanced methods for solving the resulting integral equation. Numerical results show that linearization in combination with a Bi-Conjugate Gradient Stabilized method allows the INCS method to deal with fairly strong, inhomogeneous attenuation, while the error due to the linearization can be eliminated by restarting the iterative scheme.
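A toy illustration of the remedy described above: cast the (linearized) contrast-source integral equation as a linear operator and hand it to a Krylov solver such as Bi-CGSTAB. The smoothing kernel and contrast profile below are placeholders, not the INCS Green's function or a real tissue map.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, bicgstab

# Toy stand-in for the discretized operator (I - G*V) acting on the field
# vector; in INCS, G is a Green's function convolution and V a (linearized)
# contrast, both far more elaborate than this sketch.
n = 200
rng = np.random.default_rng(1)
V = 0.3 * rng.standard_normal(n)                              # toy contrast profile

def apply_operator(p):
    smoothed = np.convolve(p, np.ones(5) / 5.0, mode="same")  # toy "Green's" smoothing
    return p - V * smoothed

A = LinearOperator((n, n), matvec=apply_operator)
p_incident = np.sin(np.linspace(0, 4 * np.pi, n))             # toy incident field
p_total, info = bicgstab(A, p_incident)
print("converged" if info == 0 else f"bicgstab exit code {info}")
```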
Modeling TAE Response To Nonlinear Drives
NASA Astrophysics Data System (ADS)
Zhang, Bo; Berk, Herbert; Breizman, Boris; Zheng, Linjin
2012-10-01
Experiments have detected Toroidal Alfven Eigenmodes (TAE) with signals at twice the eigenfrequency. These harmonic modes arise from the second-order perturbation in amplitude of the MHD equation for the linear modes that are driven by the energetic particle free energy. The structure of the TAE in realistic geometry can be calculated by generalizing the linear numerical solver (AEGIS package). We have inserted all the nonlinear MHD source terms, which are quadratic in the linear amplitudes, into the AEGIS code. We then invert the linear MHD equation at the second harmonic frequency. The ratio of the amplitudes of the first and second harmonic terms is used to determine the internal field amplitude. The spatial structures of the energy and density distributions are investigated. The results can be directly compared with experiments to determine the Alfven wave amplitude in the plasma region.
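The amplitude determination mentioned above rests on the quadratic scaling of the driven harmonic: if A2 = c·A1², then the measured ratio A2/A1 fixes A1 once the coupling coefficient c is known from the code. The numbers below are purely illustrative.

```python
def infer_mode_amplitude(measured_ratio, coupling_coeff):
    """If the second-harmonic amplitude scales as A2 = c * A1**2, the measured
    ratio r = A2 / A1 gives the absolute first-harmonic amplitude A1 = r / c."""
    return measured_ratio / coupling_coeff

# Hypothetical values: harmonic ratio of 0.02 and a model coupling coefficient
# of 4.0 (in whatever normalized field units the code reports).
print(infer_mode_amplitude(0.02, 4.0))   # -> 0.005
```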
Marsden, O; Bogey, C; Bailly, C
2014-03-01
The feasibility of using numerical simulation of fluid dynamics equations for the detailed description of long-range infrasound propagation in the atmosphere is investigated. The two-dimensional (2D) Navier-Stokes equations are solved via high-fidelity spatial finite differences and Runge-Kutta time integration, coupled with a shock-capturing filter procedure allowing large amplitudes to be studied. The accuracy of acoustic prediction over long distances with this approach is first assessed in the linear regime using two test cases featuring an acoustic source placed above a reflective ground in a homogeneous and a weakly inhomogeneous medium, solved for a range of grid resolutions. An atmospheric model which can account for realistic features affecting acoustic propagation is then described. A 2D study of the effect of source amplitude on signals recorded at ground level at varying distances from the source is carried out. Modifications in terms of both waveforms and arrival times are described.
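For reference, a classical fourth-order Runge-Kutta step of the kind used for time integration in such solvers is sketched below with a toy advection right-hand side; the paper's optimized finite-difference stencils and shock-capturing filter are not reproduced.

```python
import numpy as np

def rk4_step(u, t, dt, rhs):
    """One classical fourth-order Runge-Kutta step for du/dt = rhs(u, t),
    where u holds the discretized flow variables and rhs the finite-difference
    spatial operator (a toy linear advection operator is used below)."""
    k1 = rhs(u, t)
    k2 = rhs(u + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = rhs(u + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = rhs(u + dt * k3, t + dt)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Toy right-hand side: periodic advection du/dt = -c du/dx with central differences.
c, dx = 340.0, 1.0
advect = lambda u, t: -c * (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
u = np.exp(-0.5 * ((np.arange(200) - 100.0) / 5.0) ** 2)   # Gaussian pulse
u = rk4_step(u, 0.0, 1e-3, advect)
```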
NASA Astrophysics Data System (ADS)
Mochalskyy, S.; Wünderlich, D.; Ruf, B.; Franzen, P.; Fantz, U.; Minea, T.
2014-02-01
Decreasing the co-extracted electron current while simultaneously keeping the negative ion (NI) current sufficiently high is a crucial issue in the development of the plasma source system for the ITER Neutral Beam Injector. To support finding the best extraction conditions, the 3D Particle-in-Cell Monte Carlo Collision electrostatic code ONIX (Orsay Negative Ion eXtraction) has been developed. Close collaboration with experiments and other numerical models allows realistic simulations with relevant input parameters: plasma properties, geometry of the extraction aperture, full 3D magnetic field map, etc. For the first time, ONIX has been benchmarked against the commercial positive ion tracing code KOBRA3D, with very good agreement in terms of the meniscus position and depth. Simulations of NI extraction with different e/NI ratios in the bulk plasma show the high relevance of direct extraction of surface-produced NI for obtaining extracted NI currents comparable to the experimental results from the BATMAN testbed.
Toward developing more realistic groundwater models using big data
NASA Astrophysics Data System (ADS)
Vahdat Aboueshagh, H.; Tsai, F. T. C.; Bhatta, D.; Paudel, K.
2017-12-01
Rich geological data is the backbone of developing realistic groundwater models for groundwater resources management. However, constructing realistic groundwater models can be challenging due to inconsistency between different sources of geological, hydrogeological and geophysical data and difficulty in processing big data to characterize the subsurface environment. This study develops a framework to utilize a big geological dataset to create a groundwater model for the Chicot Aquifer in southwestern Louisiana, which borders the Gulf of Mexico to the south. The Chicot Aquifer is the principal source of fresh water in southwest Louisiana, underlying an area of about 9,000 square miles. Agriculture is the largest groundwater consumer in this region, and overpumping has caused significant groundwater head decline and saltwater intrusion from the Gulf and deep formations. A hydrostratigraphy model was constructed from around 29,000 electrical logs and drillers' logs, as well as screen lengths of pumping wells, through a natural neighbor interpolation method. These sources of information carry different weights in terms of accuracy and trustworthiness. A data prioritization procedure was developed to filter untrustworthy log information, eliminate redundant data, and establish a consensus among the various lithological information. The constructed hydrostratigraphy model shows 40% sand facies, which is consistent with the well log data. The hydrostratigraphy model confirms outcrop areas of the Chicot Aquifer in the north of the study region. The aquifer sand formation thins eastward to merge into the Atchafalaya River alluvial aquifer and coalesces with the underlying Evangeline aquifer. A grid generator was used to convert the hydrostratigraphy model into a MODFLOW grid with 57 layers. A Chicot groundwater model was constructed using the available hydrologic and hydrogeological data for 2004-2015. Pumping rates for irrigation wells were estimated using the crop type and acreage data from the USDA CropScape. Groundwater level data obtained from the USGS were used to determine the model boundary and initial conditions. Recharge rates were approximated based on surficial lithology and rainfall data. The Chicot aquifer model will be used to understand groundwater availability in southwest Louisiana.
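A minimal sketch of the kind of filter-then-interpolate workflow described above, with hypothetical trust weights and log records; SciPy's 'linear' interpolation stands in for the natural neighbor method used in the study, which SciPy does not provide directly.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical log records: (x, y, sand_fraction, source), where the source tag
# carries a trust weight used to drop the least reliable entries.
trust = {"electrical_log": 1.0, "drillers_log": 0.6, "screen_length": 0.3}
logs = [
    (0.0, 0.0, 0.45, "electrical_log"),
    (1.0, 0.0, 0.50, "drillers_log"),
    (0.0, 1.0, 0.30, "drillers_log"),
    (1.0, 1.0, 0.45, "screen_length"),
    (0.5, 0.5, 0.90, "screen_length"),   # outlier from a low-trust source
]
kept = [(x, y, f) for x, y, f, src in logs if trust[src] >= 0.5]

pts = np.array([(x, y) for x, y, _ in kept])
vals = np.array([f for _, _, f in kept])
grid_x, grid_y = np.meshgrid(np.linspace(0, 1, 11), np.linspace(0, 1, 11))
# 'linear' stands in for the natural neighbor interpolation used in the study.
sand_fraction = griddata(pts, vals, (grid_x, grid_y), method="linear")
```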
Fletcher, Adam; Jamal, Farah; Moore, Graham; Evans, Rhiannon E.; Murphy, Simon; Bonell, Chris
2016-01-01
The integration of realist evaluation principles within randomised controlled trials (‘realist RCTs’) enables evaluations of complex interventions to answer questions about what works, for whom and under what circumstances. This allows evaluators to better develop and refine mid-level programme theories. However, this is only one phase in the process of developing and evaluating complex interventions. We describe and exemplify how social scientists can integrate realist principles across all phases of the Medical Research Council framework. Intervention development, modelling, and feasibility and pilot studies need to theorise the contextual conditions necessary for intervention mechanisms to be activated. Where interventions are scaled up and translated into routine practice, realist principles also have much to offer in facilitating knowledge about longer-term sustainability, benefits and harms. Integrating a realist approach across all phases of complex intervention science is vital for considering the feasibility and likely effects of interventions for different localities and population subgroups. PMID:27478401
Seismic Waves, 4th order accurate
DOE Office of Scientific and Technical Information (OSTI.GOV)
2013-08-16
SW4 is a program for simulating seismic wave propagation on parallel computers. SW4 solves the seismic wave equations in Cartesian coordinates. It is therefore appropriate for regional simulations, where the curvature of the earth can be neglected. SW4 implements a free surface boundary condition on a realistic topography, absorbing super-grid conditions on the far-field boundaries, and a kinematic source model consisting of point force and/or point moment tensor source terms. SW4 supports a fully 3-D heterogeneous material model that can be specified in several formats. SW4 can output synthetic seismograms in an ASCII text format, or in the SAC binary format. It can also present simulation information as GMT scripts, which can be used to create annotated maps. Furthermore, SW4 can output the solution as well as the material model along 2-D grid planes.
NASA Astrophysics Data System (ADS)
Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan
2016-03-01
Most analytical methods for describing light propagation in a turbid medium exhibit low effectiveness in the near field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we have reported an improved explicit model, referred to as the "Virtual Source" (VS) diffuse approximation (DA), which inherits the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in a low-albedo medium. In this model, the collimated light in the standard DA is analogously approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. The proposed VS-DA model is validated by comparison with Monte Carlo simulations, and is further applied to image reconstruction for the Laminar Optical Tomography system.
Becker, H; Albera, L; Comon, P; Nunes, J-C; Gribonval, R; Fleureau, J; Guillotel, P; Merlet, I
2017-08-15
Over the past decades, a multitude of different brain source imaging algorithms have been developed to identify the neural generators underlying surface electroencephalography measurements. While most of these techniques focus on determining the source positions, only a small number of recently developed algorithms provide an indication of the spatial extent of the distributed sources. In a recent comparison of brain source imaging approaches, the VB-SCCD algorithm has been shown to be one of the most promising algorithms among these methods. However, this technique suffers from several problems: it leads to amplitude-biased source estimates, it has difficulties in separating close sources, and it has a high computational complexity due to its implementation using second-order cone programming. To overcome these problems, we propose to include an additional regularization term that imposes sparsity in the original source domain and to solve the resulting optimization problem using the alternating direction method of multipliers. Furthermore, we show that the algorithm yields more robust solutions by taking into account the temporal structure of the data. We also propose a new method to automatically threshold the estimated source distribution, which permits delineation of the active brain regions. The new algorithm, called Source Imaging based on Structured Sparsity (SISSY), is analyzed by means of realistic computer simulations and is validated on the clinical data of four patients. Copyright © 2017 Elsevier Inc. All rights reserved.
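The core of the proposed approach, an L1-type sparsity penalty solved with the alternating direction method of multipliers, can be sketched generically as below; SISSY's additional structured (spatial-difference) penalty, temporal modeling, and automatic thresholding are omitted, and the lead-field matrix is random toy data.

```python
import numpy as np

def soft_threshold(v, kappa):
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(L, b, lam=0.1, rho=1.0, n_iter=200):
    """Minimal ADMM for  min_x 0.5*||L x - b||^2 + lam*||x||_1 .
    SISSY adds a second, spatial-difference penalty on top of this plain
    sparsity term; only the generic ADMM mechanics are sketched here."""
    m, n = L.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    Q = np.linalg.inv(L.T @ L + rho * np.eye(n))   # cached factor for the x-update
    Ltb = L.T @ b
    for _ in range(n_iter):
        x = Q @ (Ltb + rho * (z - u))
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
    return z

rng = np.random.default_rng(0)
L = rng.standard_normal((32, 100))                 # toy lead-field matrix
x_true = np.zeros(100); x_true[[10, 40]] = [1.0, -0.5]
b = L @ x_true + 0.01 * rng.standard_normal(32)
print(np.flatnonzero(np.abs(admm_lasso(L, b)) > 0.1))   # recovered support
```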
Tang, M X; Zhang, Y Y; E, J C; Luo, S N
2018-05-01
Polychromatic synchrotron undulator X-ray sources are useful for ultrafast single-crystal diffraction under shock compression. Here, simulations of X-ray diffraction of shock-compressed single-crystal tantalum with realistic undulator sources are reported, based on large-scale molecular dynamics simulations. Purely elastic deformation, elastic-plastic two-wave structure, and severe plastic deformation under different impact velocities are explored, as well as an edge release case. Transmission-mode diffraction simulations consider crystallographic orientation, loading direction, incident beam direction, X-ray spectrum bandwidth and realistic detector size. Diffraction patterns and reciprocal space nodes are obtained from atomic configurations for different loading (elastic and plastic) and detection conditions, and interpretation of the diffraction patterns is discussed.
Hope in Janusz Korczak's Pedagogy of Realistic Idealism
ERIC Educational Resources Information Center
Silverman, Marc
2017-01-01
This article explores the approach of "Realistic Idealism" to moral education developed by the humanist-progressive moral educator Janusz Korczak, and the role hope plays in it. This pair of terms seems to be an oxymoron. However, their employment is intentional and the article will demonstrate their dialectical interdependence:…
The effectiveness and cost-effectiveness of shared care: protocol for a realist review.
Hardwick, Rebecca; Pearson, Mark; Byng, Richard; Anderson, Rob
2013-02-12
Shared care (an enhanced information exchange over and above routine outpatient letters) is commonly used to improve care coordination and communication between a specialist and primary care services for people with long-term conditions. Evidence of the effectiveness and cost-effectiveness of shared care is mixed. Informed decision-making for targeting shared care requires a greater understanding of how it works, for whom it works, in what contexts and why. This protocol outlines how realist review methods can be used to synthesise evidence on shared care for long-term conditions. A further aim of the review is to explore economic evaluations of shared care. Economic evaluations are difficult to synthesise due to problems in accounting for contextual differences that impact on resource use and opportunity costs. Realist review methods have been suggested as a way to overcome some of these issues, so this review will also assess whether realist review methods are amenable to synthesising economic evidence. Database and web searching will be carried out in order to find relevant evidence to develop and test programme theories about how shared care works. The review will have two phases. Phase 1 will concentrate on the contextual conditions and mechanisms that influence how shared care works, in order to develop programme theories, which partially explain how it works. Phase 2 will focus on testing these programme theories. A Project Reference Group made up of health service professionals and people with actual experience of long-term conditions will be used to ground the study in real-life experience. Review findings will be disseminated through local and sub-national networks for integrated care and long-term conditions. This realist review will explore why and for whom shared care works, in order to support decision-makers working to improve the effectiveness of care for people outside hospital. The development of realist review methods to take into account cost and cost-effectiveness evidence is particularly innovative and challenging, and if successful will offer a new approach to synthesising economic evidence. This systematic review protocol is registered on the PROSPERO database (registration number: CRD42012002842).
Cosandier-Rimélé, D; Ramantani, G; Zentner, J; Schulze-Bonhage, A; Dümpelmann, M
2017-10-01
Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.
NASA Astrophysics Data System (ADS)
Cosandier-Rimélé, D.; Ramantani, G.; Zentner, J.; Schulze-Bonhage, A.; Dümpelmann, M.
2017-10-01
Objective. Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. Approach. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. Main results. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. Significance. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.
Experiments on Linguistically-Based Term Associations.
ERIC Educational Resources Information Center
Ruge, Gerda
1992-01-01
Describes the hyperterm system REALIST (Retrieval Aids by Linguistics and Statistics) with emphasis on its semantic component, which generates term relations from free-text input. Experiments with various similarity measures are discussed, and the quality of the associated terms is evaluated using term recall and term precision measures. (22…
Preliminary investigation of processes that affect source term identification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wickliff, D.S.; Solomon, D.K.; Farrow, N.D.
Solid Waste Storage Area (SWSA) 5 is known to be a significant source of contaminants, especially tritium (³H), to the White Oak Creek (WOC) watershed. For example, Solomon et al. (1991) estimated the total ³H discharge in Melton Branch (most of which originates in SWSA 5) for the 1988 water year to be 1210 Ci. A critical issue for making decisions concerning remedial actions at SWSA 5 is knowing whether the annual contaminant discharge is increasing or decreasing. Because (1) the magnitude of the annual contaminant discharge is highly correlated to the amount of annual precipitation (Solomon et al., 1991) and (2) a significant lag may exist between the time of peak contaminant release from primary sources (i.e., waste trenches) and the time of peak discharge into streams, short-term stream monitoring by itself is not sufficient for predicting future contaminant discharges. In this study we use ³H to examine the link between contaminant release from primary waste sources and contaminant discharge into streams. By understanding and quantifying subsurface transport processes, realistic predictions of future contaminant discharge, along with an evaluation of the effectiveness of remedial action alternatives, will be possible. The objectives of this study are (1) to characterize the subsurface movement of contaminants (primarily ³H) with an emphasis on the effects of matrix diffusion; (2) to determine the relative strength of primary vs secondary sources; and (3) to establish a methodology capable of determining whether the ³H discharge from SWSA 5 to streams is increasing or decreasing.
An Improved Neutron Transport Algorithm for Space Radiation
NASA Technical Reports Server (NTRS)
Heinbockel, John H.; Clowdsley, Martha S.; Wilson, John W.
2000-01-01
A low-energy neutron transport algorithm for use in space radiation protection is developed. The algorithm is based upon a multigroup analysis of the straight-ahead Boltzmann equation by using a mean value theorem for integrals. This analysis is accomplished by solving a realistic but simplified neutron transport test problem. The test problem is analyzed by using numerical and analytical procedures to obtain an accurate solution within specified error bounds. Results from the test problem are then used for determining mean values associated with rescattering terms that are associated with a multigroup solution of the straight-ahead Boltzmann equation. The algorithm is then coupled to the Langley HZETRN code through the evaporation source term. Evaluation of the neutron fluence generated by the solar particle event of February 23, 1956, for a water and an aluminum-water shield-target configuration is then compared with LAHET and MCNPX Monte Carlo code calculations for the same shield-target configuration. The algorithm developed showed a great improvement in results over the unmodified HZETRN solution. In addition, a two-directional solution of the evaporation source showed even further improvement of the fluence near the front of the water target where diffusion from the front surface is important.
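A crude forward march of a multigroup straight-ahead balance equation is sketched below to make the rescattering (downscatter) source term concrete; it is not the mean-value multigroup algorithm of the paper, and all cross sections are toy numbers.

```python
import numpy as np

def straight_ahead_march(phi0, sigma_t, sigma_s, source, dx, n_steps):
    """Explicit depth march of a multigroup straight-ahead balance equation
      d(phi_g)/dx = -sigma_t[g]*phi_g + sum_g' sigma_s[g', g]*phi_g' + source[g].
    This is only a forward-Euler sketch of the balance the paper treats with a
    mean-value multigroup analysis, not the HZETRN algorithm itself."""
    phi = np.array(phi0, dtype=float)
    for _ in range(n_steps):
        phi = phi + dx * (-sigma_t * phi + sigma_s.T @ phi + source)
    return phi

# Toy 3-group example (arbitrary units): downscatter only.
sigma_t = np.array([0.10, 0.08, 0.05])
sigma_s = np.array([[0.00, 0.04, 0.01],
                    [0.00, 0.00, 0.03],
                    [0.00, 0.00, 0.00]])     # sigma_s[g', g]: scatter g' -> g
source = np.array([0.0, 0.0, 0.0])
print(straight_ahead_march([1.0, 0.0, 0.0], sigma_t, sigma_s, source, dx=0.1, n_steps=50))
```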
Towards a realistic astrophysical interpretation of the gamma-ray Galactic center excess
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaggero, Daniele; Urbano, Alfredo; Valli, Mauro
2015-12-01
A spherically symmetric gamma-ray emission from the inner region of the Galaxy (at least up to roughly 10° in latitude and longitude) has recently been identified in Fermi-LAT data and initially associated with dark matter particle annihilations. Guided by the evidence for a high gas density in the inner kpc of the Galaxy correlated with a very large supernova rate, and hence with ongoing cosmic-ray acceleration, we investigate instead the possibility of addressing this excess in terms of ordinary cosmic-ray sources and standard steady-state diffusion. We alter the source term, and consistently the correlated gamma-ray emissions, in the context of a template-fitting analysis. We focus on a region of interest (ROI) defined as |l| < 20°, 2° < |b| < 20°, with l and b the Galactic longitude and latitude coordinates. We analyze in detail the overall goodness of fit of our framework, and perform a detailed direct comparison against data, examining profiles in different directions. Remarkably, the test statistic of the fit for our scenario turns out to be as good as the dark matter one in the ROI considered here.
Feigley, Charles E; Do, Thanh H; Khan, Jamil; Lee, Emily; Schnaufer, Nicholas D; Salzberg, Deborah C
2011-05-01
Computational fluid dynamics (CFD) is used increasingly to simulate the distribution of airborne contaminants in enclosed spaces for exposure assessment and control, but the importance of realistic boundary conditions is often not fully appreciated. In a workroom for manufacturing capacitors, full-shift samples for isoamyl acetate (IAA) were collected for 3 days at 16 locations, and velocities were measured at supply grills and at various points near the source. Then, velocity and concentration fields were simulated by 3-dimensional steady-state CFD using 295K tetrahedral cells, the k-ε turbulence model, standard wall function, and convergence criteria of 10(-6) for all scalars. Here, we demonstrate the need to represent boundary conditions accurately, especially emission characteristics at the contaminant source, and to obtain good agreement between observations and CFD results. Emission rates for each day were determined from six concentrations measured in the near field and one upwind using an IAA mass balance. The emission was initially represented as undiluted IAA vapor, but the concentrations estimated using CFD differed greatly from the measured concentrations. A second set of simulations was performed using the same IAA emission rates but a more realistic representation of the source. This yielded good agreement with measured values. Paying particular attention to the region with highest worker exposure potential-within 1.3 m of the source center-the air speed and IAA concentrations estimated by CFD were not significantly different from the measured values (P = 0.92 and P = 0.67, respectively). Thus, careful consideration of source boundary conditions greatly improved agreement with the measured values.
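The emission-rate estimation mentioned above can be pictured as a steady-state mass balance around the source zone; the sketch below uses hypothetical concentrations and airflow, while the actual study balanced six near-field samplers against one upwind sampler with measured ventilation data.

```python
def emission_rate_mg_per_min(conc_near_mg_m3, conc_upwind_mg_m3, airflow_m3_min):
    """Steady-state mass balance around the source zone:
       emission = airflow * (mean near-field concentration - upwind concentration).
    A crude version of the balance used to back out the IAA source strength."""
    near_mean = sum(conc_near_mg_m3) / len(conc_near_mg_m3)
    return airflow_m3_min * (near_mean - conc_upwind_mg_m3)

# Hypothetical survey values: six near-field samples (mg/m3), one upwind sample,
# and 25 m3/min of air sweeping the source zone.
print(emission_rate_mg_per_min([12.0, 15.0, 9.0, 11.0, 14.0, 10.0], 1.5, 25.0))
```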
Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe
2013-01-01
Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detectable from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources ranging from 3 cm(2) to 30 cm(2), whatever were the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered.
NASA Astrophysics Data System (ADS)
Ishijima, K.; Toyoda, S.; Sudo, K.; Yoshikawa, C.; Nanbu, S.; Aoki, S.; Nakazawa, T.; Yoshida, N.
2009-12-01
It is well known that isotopic information is useful for qualitatively understanding cycles and constraining sources of some atmospheric species, but so far no study has modeled N2O isotopomers throughout the atmosphere, from the troposphere to the stratosphere, with realistic surface N2O isotopomer emissions. We have started to develop a model to simulate spatiotemporal variations of the atmospheric N2O isotopomers in both the troposphere and the stratosphere, based on a chemistry-coupled atmospheric general circulation model, in order to obtain a more accurate quantitative understanding of the global N2O cycle. For surface emissions of the isotopomers, a combination of EDGAR-based anthropogenic and soil fluxes and monthly varying GEIA oceanic fluxes is used, with isotopic values of the global total sources estimated from the long-term trend of atmospheric N2O isotopomers derived from firn-air analyses. Isotopic fractionations in chemical reactions are considered for photolysis and photo-oxidation of N2O in the stratosphere. The isotopic fractionation coefficients are taken from studies based on laboratory experiments, but we will also test coefficients determined by theoretical calculations. In terms of the global N2O isotopomer budgets, precise quantification of the sources is quite challenging, because even the spatiotemporal variabilities of N2O sources have never been adequately estimated. Therefore, we have first started validating the simulated isotopomer results in the stratosphere, using isotopomer profiles obtained from balloon observations. N2O concentration profiles are mostly well reproduced, partly because of the realistic reproduction of dynamical processes by nudging with reanalysis meteorological data. However, the concentration in the polar vortex tends to be overestimated, probably due to the relatively coarse wavelength resolution of the photolysis calculation. Such model features also appear in the isotopomer results, which are generally underestimated relative to the balloon observations, although the concentration is well simulated. This tendency has been somewhat improved by incorporating another photolysis scheme with slightly higher wavelength resolution into the model. From another point of view, these facts indicate that N2O isotopomers can be used to validate the stratospheric photochemical calculations in the model, because of the very high sensitivity of the isotopomer ratios to settings such as the wavelength resolution of the photochemical scheme. Therefore, N2O isotopomer modeling appears useful not only for validating the fractionation coefficients and the isotopic characterization of sources, but it also has the potential to serve as an index of the precision of stratospheric photolysis in the model.
Effects of metals within ambient air particulate matter (PM) on human health.
Chen, Lung Chi; Lippmann, Morton
2009-01-01
We review literature providing insights on health-related effects caused by inhalation of ambient air particulate matter (PM) containing metals, emphasizing effects associated with in vivo exposures at or near contemporary atmospheric concentrations. Inhalation of much higher concentrations, and high-level exposures via intratracheal (IT) instillation that inform mechanistic processes, are also reviewed. The most informative studies of effects at realistic exposure levels, in terms of identifying influential individual PM components or source-related mixtures, have been based on (1) human and laboratory animal exposures to concentrated ambient particles (CAPs), and (2) human population studies for which both health-related effects were observed and PM composition data were available for multipollutant regression analyses or source apportionment. Such studies have implicated residual oil fly ash (ROFA) as the most toxic source-related mixture, and Ni and V, which are characteristic tracers of ROFA, as particularly influential components in terms of acute cardiac function changes and excess short-term mortality. There is evidence that other metals within ambient air PM, such as Pb and Zn, also affect human health. Most evidence now available is based on the use of ambient air PM components concentration data, rather than actual exposures, to determine significant associations and/or effects coefficients. Therefore, considerable uncertainties about causality are associated with exposure misclassification and measurement errors. As more PM speciation data and more refined modeling techniques become available, and as more CAPs studies involving PM component analyses are performed, the roles of specific metals and other components within PM will become clearer.
ERIC Educational Resources Information Center
Jackson, Suzanne F.; Kolla, Gillian
2012-01-01
In attempting to use a realistic evaluation approach to explore the role of Community Parents in early parenting programs in Toronto, a novel technique was developed to analyze the links between contexts (C), mechanisms (M) and outcomes (O) directly from experienced practitioner interviews. Rather than coding the interviews into themes in terms of…
Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.
Schimpf, Paul H
2017-09-15
This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.
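A global grid search for the best-fitting equivalent dipole, of the kind implied by the recommendation above, can be sketched as follows; the toy forward model is only a stand-in for a spherical-shell or realistic (BEM-based) gain matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
electrodes = rng.standard_normal((32, 3)) * 0.1 + np.array([0.0, 0.0, 0.09])

def toy_forward(pos):
    """Stand-in gain matrix (32 electrodes x 3 moment components); a real study
    would use a concentric-spheres or realistic head model here."""
    d = electrodes - pos
    r = np.linalg.norm(d, axis=1, keepdims=True)
    return d / r**3

def best_dipole(candidates, eeg):
    """Exhaustive (global) scan: fit the moment by least squares at each
    candidate location and keep the smallest residual, so that local minima
    of the residual hypersurface cannot trap the search."""
    best_err, best_pos = np.inf, None
    for pos in candidates:
        G = toy_forward(pos)
        moment, *_ = np.linalg.lstsq(G, eeg, rcond=None)
        err = np.linalg.norm(eeg - G @ moment)
        if err < best_err:
            best_err, best_pos = err, pos
    return best_pos, best_err

true_pos = np.array([0.02, 0.01, 0.05])
eeg = toy_forward(true_pos) @ np.array([1.0, 0.0, 0.5])
grid = [np.array([x, y, z]) for x in (-0.02, 0.0, 0.02)
        for y in (-0.02, 0.0, 0.01) for z in (0.03, 0.05, 0.07)]
print(best_dipole(grid, eeg))
```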
Role-playing for more realistic technical skills training.
Nikendei, C; Zeuch, A; Dieckmann, P; Roth, C; Schäfer, S; Völkl, M; Schellberg, D; Herzog, W; Jünger, J
2005-03-01
Clinical skills are an important and necessary part of clinical competence. Simulation plays an important role in many fields of medical education. Although role-playing is common in communication training, there are no reports about the use of student role-plays in the training of technical clinical skills. This article describes an educational intervention with analysis of pre- and post-intervention self-selected student survey evaluations. After one term of skills training, a thorough evaluation showed that the skills-lab training did not seem very realistic nor was it very demanding for trainees. To create a more realistic training situation and to enhance students' involvement, case studies and role-plays with defined roles for students (i.e. intern, senior consultant) were introduced into half of the sessions. Results of the evaluation in the second term showed that sessions with role-playing were rated significantly higher than sessions without role-playing.
A moist Boussinesq shallow water equations set for testing atmospheric models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zerroukat, M., E-mail: mohamed.zerroukat@metoffice.gov.uk; Allen, T.
The shallow water equations have long been used as an initial test for numerical methods applied to atmospheric models, with the test suite of Williamson et al. being used extensively for validating new schemes and assessing their accuracy. However, the lack of physics forcing within this simplified framework often requires numerical techniques to be reworked when applied to fully three-dimensional models. In this paper a novel two-dimensional shallow water equations system that retains moist processes is derived. This system is derived from a three-dimensional Boussinesq approximation of the hydrostatic Euler equations where, unlike the classical shallow water set, we allow the density to vary slightly with temperature. This results in extra (or buoyancy) terms for the momentum equations, through which a two-way moist-physics dynamics feedback is achieved. The temperature and moisture variables are advected as separate tracers with sources that interact with the mean flow through a simplified yet realistic bulk moist-thermodynamic phase-change model. This moist shallow water system provides a unique tool to assess the usually complex and highly non-linear dynamics–physics interactions in atmospheric models in a simple yet realistic way. The full non-linear shallow water equations are solved numerically on several case studies and the results suggest quite realistic interaction between the dynamics and physics and in particular the generation of cloud and rain. - Highlights: • Novel shallow water equations which retain moist processes are derived from the three-dimensional hydrostatic Boussinesq equations. • The new shallow water set can be seen as a more general one, where the classical equations are a special case of these equations. • This moist shallow water system naturally allows a feedback mechanism from the moist physics increments to the momentum via buoyancy. • Like full models, temperature and moisture are advected as tracers that interact through a simplified yet realistic phase-change model. • This model is a unique tool to test numerical methods for atmospheric models, and physics–dynamics coupling, in a very realistic and simple way.
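For orientation, a Ripa-type thermally varying shallow water system of the same general flavor is written below; the exact buoyancy terms and moist source functions of the paper's Boussinesq-derived set may differ in detail.

```latex
% Illustrative Ripa-type thermal shallow water system with moist tracers
% (not the paper's exact equation set):
\begin{align}
  \frac{D\mathbf{u}}{Dt} + f\,\hat{\mathbf{z}}\times\mathbf{u}
      &= -\,b\,\nabla h \;-\; \tfrac{1}{2}\,h\,\nabla b, \\
  \frac{\partial h}{\partial t} + \nabla\cdot(h\,\mathbf{u}) &= 0, \\
  \frac{D b}{Dt} &= S_b(\theta, q), \qquad \frac{D q}{Dt} = S_q(\theta, q),
\end{align}
% where b is a temperature-dependent buoyancy, q a moisture tracer, and
% S_b, S_q simplified bulk phase-change source terms.
```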
Realistic Library Research Methods: Bibliographic Sources Annotated.
ERIC Educational Resources Information Center
Kushon, Susan G.; Wells, Bernice
This guide gives an overview of basic library research methods with emphasis upon developing an understanding of library organization and professional services. Commonly used bibliographic techniques are described for various published and unpublished, print and nonprint materials. Standard reference sources (bibliographies, encyclopedias, annual…
SU-F-T-50: Evaluation of Monte Carlo Simulations Performance for Pediatric Brachytherapy Dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatzipapas, C; Kagadis, G; Papadimitroulas, P
Purpose: Pediatric tumors are generally treated with multi-modal procedures. Brachytherapy can be used with pediatric tumors, especially given that in this patient population low toxicity on normal tissues is critical, as is the suppression of the probability for late malignancies. Our goal is to validate the GATE toolkit on realistic brachytherapy applications, and to evaluate brachytherapy plans on pediatrics for accurate dosimetry on sensitive and critical organs of interest. Methods: The GATE Monte Carlo (MC) toolkit was used. Two High Dose Rate (HDR) 192Ir brachytherapy sources were simulated (Nucletron mHDR-v1 and Varian VS2000), and fully validated using the AAPM and ESTRO protocols. A realistic brachytherapy plan was also simulated using the XCAT anthropomorphic computational model. The simulated data were compared to the clinical dose points. Finally, a 14-year-old girl with vaginal rhabdomyosarcoma was modelled based on clinical procedures for the calculation of the absorbed dose per organ. Results: The MC simulations resulted in accurate dosimetry in terms of dose rate constant (Λ), radial dose gL(r) and anisotropy function F(r,θ) for both sources. The simulations were executed using ∼10^10 primaries, resulting in statistical uncertainties lower than 2%. The differences between the theoretical values and the simulated ones ranged from 0.01% up to 3.3%, with the largest discrepancy (6%) being observed in the dose rate constant calculation. The simulated DVH using an adult female XCAT model was also compared to a clinical one, resulting in differences smaller than 5%. Finally, a realistic pediatric brachytherapy simulation was performed to evaluate the absorbed dose per organ and to calculate DVH with respect to heterogeneities of the human anatomy. Conclusion: GATE is a reliable tool for brachytherapy simulations, both for source modeling and for dosimetry in anthropomorphic voxelized models. Our project aims to evaluate a variety of pediatric brachytherapy schemes using a population of pediatric phantoms for several pathological cases. This study is part of a project that has received funding from the European Union Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 691203. The results published in this study reflect only the authors' view, and the Research Executive Agency (REA) and the European Commission are not responsible for any use that may be made of the information it contains.
Jones, Alexandra
2018-01-01
This essay considers the representation of women’s work and disability in British coalfields literature in the period 1880-1950. Industrial settings are a rich source for literature concerned with bodily health, injury and disability and offer insights into the gendering of the working body whether male or female. Situating this largely realist body of novels, stories and plays in its historical context, this article will focus on intersections between work, class and gender. It shows how the vital, but unpaid, work of women in domestic labour was depicted as an extension of the industrial machine, which had clear consequences in terms of high mortality and morbidity rates amongst women. PMID:29333333
Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai
2005-10-01
This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
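To make the recursive re-weighting step concrete, here is a minimal sketch of a FOCUSS-style re-weighted minimum-norm iteration seeded with a smooth initial estimate; the standardization and source-space shrinking steps that distinguish SSLOFO are omitted, and all variable names are illustrative.

```python
import numpy as np

def reweighted_min_norm(L, b, s0, n_iter=10, lam=1e-2):
    """FOCUSS-style recursion: L is the lead-field matrix (sensors x sources),
    b the measured EEG snapshot, s0 a smooth initial estimate (e.g. sLORETA).
    Each pass re-weights the minimum-norm solution by the previous estimate,
    which progressively focalizes the source distribution."""
    s = s0.astype(float).copy()
    for _ in range(n_iter):
        W = np.diag(np.abs(s))                      # weights from previous iterate
        A = L @ W
        # Tikhonov-regularized minimum-norm step in the weighted source space
        s = W @ (A.T @ np.linalg.solve(A @ A.T + lam * np.eye(L.shape[0]), b))
    return s
```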
NASA Astrophysics Data System (ADS)
Collins, Jarrod A.; Heiselman, Jon S.; Weis, Jared A.; Clements, Logan W.; Simpson, Amber L.; Jarnagin, William R.; Miga, Michael I.
2017-03-01
In image-guided liver surgery (IGLS), sparse representations of the anterior organ surface may be collected intraoperatively to drive image-to-physical space registration. Soft tissue deformation represents a significant source of error for IGLS techniques. This work investigates the impact of surface data quality on current surface based IGLS registration methods. In this work, we characterize the robustness of our IGLS registration methods to noise in organ surface digitization. We study this within a novel human-to-phantom data framework that allows a rapid evaluation of clinically realistic data and noise patterns on a fully characterized hepatic deformation phantom. Additionally, we implement a surface data resampling strategy that is designed to decrease the impact of differences in surface acquisition. For this analysis, n=5 cases of clinical intraoperative data consisting of organ surface and salient feature digitizations from open liver resection were collected and analyzed within our human-to-phantom validation framework. As expected, results indicate that increasing levels of noise in surface acquisition cause registration fidelity to deteriorate. With respect to rigid registration using the raw and resampled data at clinically realistic levels of noise (i.e. a magnitude of 1.5 mm), resampling improved TRE by 21%. In terms of nonrigid registration, registrations using resampled data outperformed the raw data result by 14% at clinically realistic levels and were less susceptible to noise across the range of noise investigated. These results demonstrate the types of analyses our novel human-to-phantom validation framework can provide and indicate the considerable benefits of resampling strategies.
Synthetic earthquake catalogs simulating seismic activity in the Corinth Gulf, Greece, fault system
NASA Astrophysics Data System (ADS)
Console, Rodolfo; Carluccio, Roberto; Papadimitriou, Eleftheria; Karakostas, Vassilis
2015-01-01
The characteristic earthquake hypothesis is the basis of time-dependent modeling of earthquake recurrence on major faults. However, the hypothesis is not strongly supported by observational data. Few fault segments have long historical or paleoseismic records of individually dated ruptures, and when data and parameter uncertainties are allowed for, the form of the recurrence distribution is difficult to establish. This is the case, for instance, of the Corinth Gulf Fault System (CGFS), for which documents about strong earthquakes exist for at least 2000 years, although they can be considered complete for M ≥ 6.0 only for the latest 300 years, during which only a few characteristic earthquakes are reported for individual fault segments. The use of a physics-based earthquake simulator has allowed the production of catalogs lasting 100,000 years and containing more than 500,000 events of magnitudes ≥ 4.0. The main features of our simulation algorithm are (1) an average slip rate released by earthquakes for every single segment in the investigated fault system, (2) heuristic procedures for rupture growth and stop, leading to a self-organized earthquake magnitude distribution, (3) the interaction between earthquake sources, and (4) the effect of minor earthquakes in redistributing stress. The application of our simulation algorithm to the CGFS has shown realistic features in the time, space, and magnitude behavior of the seismicity. These features include long-term periodicity of strong earthquakes, short-term clustering of both strong and smaller events, and a realistic earthquake magnitude distribution departing from the Gutenberg-Richter distribution in the higher-magnitude range.
Effects of field-realistic doses of glyphosate on honeybee appetitive behaviour.
Herbert, Lucila T; Vázquez, Diego E; Arenas, Andrés; Farina, Walter M
2014-10-01
Glyphosate (GLY) is a broad-spectrum herbicide used for weed control. The sub-lethal impact of GLY on non-target organisms such as insect pollinators has not yet been evaluated. Apis mellifera is the main pollinator in agricultural environments and is a well-known model for behavioural research. Honeybees are also accurate biosensors of environmental pollutants and their appetitive behavioural response is a suitable tool with which to test sub-lethal effects of agrochemicals. We studied the effects of field-realistic doses of GLY on honeybees exposed chronically or acutely to the herbicide. We focused on sucrose sensitivity, elemental and non-elemental associative olfactory conditioning of the proboscis extension response (PER), and foraging-related behaviour. We found a reduced sensitivity to sucrose and learning performance for the groups chronically exposed to GLY concentrations within the range of recommended doses. When olfactory PER conditioning was performed with sucrose reward with the same GLY concentrations (acute exposure), elemental learning and short-term memory retention decreased significantly compared with controls. Non-elemental associative learning was also impaired by an acute exposure to GLY traces. Altogether, these results imply that GLY at concentrations found in agro-ecosystems as a result of standard spraying can reduce sensitivity to nectar reward and impair associative learning in honeybees. However, no effect on foraging-related behaviour was found. Therefore, we speculate that successful forager bees could become a source of constant inflow of nectar with GLY traces that could then be distributed among nestmates, stored in the hive and have long-term negative consequences on colony performance. © 2014. Published by The Company of Biologists Ltd.
How Big Was It? Getting at Yield
NASA Astrophysics Data System (ADS)
Pasyanos, M.; Walter, W. R.; Ford, S. R.
2013-12-01
One of the most coveted pieces of information in the wake of a nuclear test is the explosive yield. Determining the yield from remote observations, however, is not necessarily a trivial thing. For instance, recorded observations of seismic amplitudes, used to estimate the yield, are significantly modified by the intervening media, which varies widely, and needs to be properly accounted for. Even after correcting for propagation effects such as geometrical spreading, attenuation, and station site terms, getting from the resulting source term to a yield depends on the specifics of the explosion source model, including material properties, and depth. Some formulas are based on assumptions of the explosion having a standard depth-of-burial and observed amplitudes can vary if the actual test is either significantly overburied or underburied. We will consider the complications and challenges of making these determinations using a number of standard, more traditional methods and a more recent method that we have developed using regional waveform envelopes. We will do this comparison for recent declared nuclear tests from the DPRK. We will also compare the methods using older explosions at the Nevada Test Site with announced yields, material and depths, so that actual performance can be measured. In all cases, we also strive to quantify realistic uncertainties on the yield estimation.
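As a rough illustration of the final step, magnitude-yield relations of the form mb = a + b·log10(W) are commonly used once propagation and site effects have been removed; the sketch below inverts such a relation, with the constants treated as placeholders since they depend on emplacement medium, depth of burial and regional calibration.

```python
import numpy as np

def yield_from_mb(mb, a=4.45, b=0.75):
    """Invert an assumed magnitude-yield relation mb = a + b*log10(W) for the
    yield W in kilotons. The default constants are placeholders for a
    well-coupled hard-rock explosion, not the calibration used in this study."""
    return 10.0 ** ((np.asarray(mb) - a) / b)

# e.g. yield_from_mb(4.9) -> roughly 4 kt under these assumed constants
```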
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, Dave; Brunett, Acacia J.; Bucknor, Matthew
GE Hitachi Nuclear Energy (GEH) and Argonne National Laboratory are currently engaged in a joint effort to modernize and develop probabilistic risk assessment (PRA) techniques for advanced non-light water reactors. At a high level, the primary outcome of this project will be the development of next-generation PRA methodologies that will enable risk-informed prioritization of safety- and reliability-focused research and development, while also identifying gaps that may be resolved through additional research. A subset of this effort is the development of PRA methodologies to conduct a mechanistic source term (MST) analysis for event sequences that could result in the release of radionuclides. The MST analysis seeks to realistically model and assess the transport, retention, and release of radionuclides from the reactor to the environment. The MST methods developed during this project seek to satisfy the requirements of the Mechanistic Source Term element of the ASME/ANS Non-LWR PRA standard. The MST methodology consists of separate analysis approaches for risk-significant and non-risk-significant event sequences that may result in the release of radionuclides from the reactor. For risk-significant event sequences, the methodology focuses on a detailed assessment, using mechanistic models, of radionuclide release from the fuel, transport through and release from the primary system, transport in the containment, and finally release to the environment. The analysis approach for non-risk-significant event sequences examines the possibility of large radionuclide releases due to events such as re-criticality or the complete loss of radionuclide barriers. This paper provides details on the MST methodology, including the interface between the MST analysis and other elements of the PRA, and provides a simplified example MST calculation for a sodium fast reactor.
J&K Fitness Supply Company: Auditing Inventory
ERIC Educational Resources Information Center
Clikeman, Paul M.
2012-01-01
This case provides auditing students with an opportunity to perform substantive tests of inventory using realistic-looking source documents. The learning objectives are to help students understand: (1) the procedures auditors perform in order to test inventory; (2) the source documents used in auditing inventory; and (3) the types of misstatements…
Historical Literature and Democratic Education. Occasional Paper.
ERIC Educational Resources Information Center
Scott, John Anthony
This document discusses the movement to bring original historical sources into the classroom. Because students of history need access to sources of information that provide direct or primary evidence about reality, teachers must show that realistic alternatives to the traditional history text exist. In the first section of this paper, efforts to…
Sulfates as chromophores for multiwavelength photoacoustic imaging phantoms
NASA Astrophysics Data System (ADS)
Fonseca, Martina; An, Lu; Beard, Paul; Cox, Ben
2017-12-01
As multiwavelength photoacoustic imaging becomes increasingly widely used to obtain quantitative estimates, the need for validation studies conducted on well-characterized experimental phantoms becomes ever more pressing. One challenge that such studies face is the design of stable, well-characterized phantoms and absorbers with properties in a physiologically realistic range. This paper performs a full experimental characterization of aqueous solutions of copper and nickel sulfate, whose properties make them close to ideal as chromophores in multiwavelength photoacoustic imaging phantoms. Their absorption varies linearly with concentration, and they mix linearly. The concentrations needed to yield absorption values within the physiological range are below the saturation limit. The shape of their absorption spectra makes them useful analogs for oxy- and deoxyhemoglobin. They display long-term photostability (no indication of bleaching) as well as resistance to transient effects (no saturable absorption phenomena), and are therefore suitable for exposure to typical pulsed photoacoustic light sources, even when exposed to the high number of pulses required in scanning photoacoustic imaging systems. In addition, solutions with tissue-realistic, predictable, and stable scattering can be prepared by mixing sulfates and Intralipid, as long as an appropriate emulsifier is used. Finally, the Grüneisen parameter of the sulfates was found to be larger than that of water and increased linearly with concentration.
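Because the absorption of these solutions varies linearly with concentration and mixes linearly, the absorption coefficient of a phantom mixture can be budgeted with a simple Beer-Lambert sum; the sketch below assumes molar extinction values taken from a measured spectrum at the wavelength of interest (the numbers shown are placeholders).

```python
import numpy as np

def mixture_mu_a(conc_mM, eps_per_mM_cm):
    """Absorption coefficient (cm^-1, base e) of a sulfate mixture from the
    concentrations of its components, assuming linear (Beer-Lambert) mixing.
    conc_mM and eps_per_mM_cm are dicts keyed by chromophore name; extinction
    values must come from measured spectra at the chosen wavelength."""
    return np.log(10) * sum(eps_per_mM_cm[k] * conc_mM[k] for k in conc_mM)

# Placeholder example: copper and nickel sulfate at one wavelength
mu_a = mixture_mu_a({"CuSO4": 50.0, "NiSO4": 200.0},
                    {"CuSO4": 0.01, "NiSO4": 0.002})  # assumed extinction values
```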
NASA Technical Reports Server (NTRS)
Gandhi, P.; Annuar, A.; Lansbury, G. B.; Stern, D.; Alexander, D. M.; Bauer, F. E.; Bianchi, S.; Boggs, S. E.; Boorman, P. G.; Brandt, W. N.;
2017-01-01
We present NuSTAR X-ray observations of the active galactic nucleus (AGN) in NGC 7674. The source shows a flat X-ray spectrum, suggesting that it is obscured by Compton-thick gas columns. Based upon long-term flux dimming, previous work suggested the alternate possibility that the source is a recently switched-off AGN with the observed X-rays being the lagged echo from the torus. Our high-quality data show the source to be reflection-dominated in hard X-rays, but with a relatively weak neutral Fe K(alpha) emission line (equivalent width [EW] of approximately 0.4 keV) and a strong Fe XXVI ionized line (EW approximately 0.2 keV). We construct an updated long-term X-ray light curve of NGC 7674 and find that the observed 2-10 keV flux has remained constant for the past approximately 20 yr, following a high-flux state probed by Ginga. Light travel time arguments constrain the minimum radius of the reflector to be approximately 3.2 pc under the switched-off AGN scenario, approximately 30 times larger than the expected dust sublimation radius, rendering this possibility unlikely. A patchy Compton-thick AGN (CTAGN) solution is plausible, requiring a minimum line-of-sight column density (N(sub H)) of 3 x 10(exp 24) cm(exp -2) at present, and yields an intrinsic 2-10 keV luminosity of (3-5) x 10(exp 43) erg s(exp -1). Realistic uncertainties span the range of approximately (1-13) x 10(exp 43) erg s(exp -1). The source has one of the weakest fluorescence lines amongst bona fide CTAGN, and is potentially a local analogue of bolometrically luminous systems showing complex neutral and ionized Fe emission. It exemplifies the difficulty of identification and proper characterization of distant CTAGN based on the strength of the neutral Fe K(alpha) line.
NASA Astrophysics Data System (ADS)
Gandhi, P.; Annuar, A.; Lansbury, G. B.; Stern, D.; Alexander, D. M.; Bauer, F. E.; Bianchi, S.; Boggs, S. E.; Boorman, P. G.; Brandt, W. N.; Brightman, M.; Christensen, F. E.; Comastri, A.; Craig, W. W.; Del Moro, A.; Elvis, M.; Guainazzi, M.; Hailey, C. J.; Harrison, F. A.; Koss, M.; Lamperti, I.; Malaguti, G.; Masini, A.; Matt, G.; Puccetti, S.; Ricci, C.; Rivers, E.; Walton, D. J.; Zhang, W. W.
2017-06-01
We present NuSTAR X-ray observations of the active galactic nucleus (AGN) in NGC 7674. The source shows a flat X-ray spectrum, suggesting that it is obscured by Compton-thick gas columns. Based upon long-term flux dimming, previous work suggested the alternate possibility that the source is a recently switched-off AGN with the observed X-rays being the lagged echo from the torus. Our high-quality data show the source to be reflection-dominated in hard X-rays, but with a relatively weak neutral Fe Kα emission line (equivalent width [EW] of ≈ 0.4 keV) and a strong Fe XXVI ionized line (EW ≈ 0.2 keV). We construct an updated long-term X-ray light curve of NGC 7674 and find that the observed 2-10 keV flux has remained constant for the past ≈ 20 yr, following a high-flux state probed by Ginga. Light travel time arguments constrain the minimum radius of the reflector to be ∼ 3.2 pc under the switched-off AGN scenario, ≈ 30 times larger than the expected dust sublimation radius, rendering this possibility unlikely. A patchy Compton-thick AGN (CTAGN) solution is plausible, requiring a minimum line-of-sight column density (NH) of 3 × 10^24 cm^-2 at present, and yields an intrinsic 2-10 keV luminosity of (3-5) × 10^43 erg s^-1. Realistic uncertainties span the range of ≈ (1-13) × 10^43 erg s^-1. The source has one of the weakest fluorescence lines amongst bona fide CTAGN, and is potentially a local analogue of bolometrically luminous systems showing complex neutral and ionized Fe emission. It exemplifies the difficulty of identification and proper characterization of distant CTAGN based on the strength of the neutral Fe Kα line.
NASA Astrophysics Data System (ADS)
Sailhac, P.; Marquis, G.; Darnet, M.; Szalai, S.
2003-04-01
Surface self-potential (SP) measurements are useful to characterize underground fluid flow or chemical reactions (such as redox) and can be used in addition to NMR and electrical prospecting in hydrological investigations. Assuming that the SP anomalies have an electrokinetic origin, the source of SP data is the divergence of underground fluid flow; one important problem with surface SP data is then its interpretation in terms of fluid flow geometry. Some integral transform techniques have been shown to be powerful for SP interpretation (e.g. Fournier 1989; Patella 1997; Sailhac & Marquis 2001). All these techniques are based upon Green's functions to characterize underground water flow, but they assume a constant electrical conductivity in the subsurface. This unrealistic approximation results in the appearance of non-electrokinetic sources at strong lateral electrical conductivity contrasts. We present here new Green's functions suitable for media of heterogeneous electrical conductivity. This new approach allows the joint interpretation of electrical resistivity tomography and SP measurements to detect electrokinetic sources caused by fluid flow. Tests on synthetic examples show that it gives more realistic results than when a constant electrical conductivity is assumed.
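For reference, in the simplest constant-conductivity case the Green's function reduces to that of a point current source buried in a homogeneous half-space; the expression below is the textbook image-source form (notation assumed here) that the heterogeneous-conductivity Green's functions generalize.

```latex
% Point current source of strength I at r_s in a half-space of conductivity sigma;
% the mirror-image term accounts for the insulating ground surface.
V(\mathbf{r}) = \frac{I}{4\pi\sigma}\left(\frac{1}{|\mathbf{r}-\mathbf{r}_s|}
              + \frac{1}{|\mathbf{r}-\mathbf{r}_s'|}\right),
\qquad \mathbf{r}_s' = \text{image of } \mathbf{r}_s \text{ in the surface}.
```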
NASA Astrophysics Data System (ADS)
Tu, W.; Cunningham, G.; Reeves, G. D.; Chen, Y.; Henderson, M. G.; Blake, J. B.; Baker, D. N.; Spence, H.
2013-12-01
During the October 8-9 2012 storm, the MeV electron fluxes in the heart of the outer radiation belt are first wiped out and then exhibit a three-orders-of-magnitude increase on the timescale of hours, as observed by the MagEIS and REPT instruments aboard the Van Allen Probes. There is strong observational evidence that the remarkable enhancement is due to local acceleration by chorus waves, as shown in the recent Science paper by Reeves et al. [1]. However, the importance of the dynamic electron source population transported in from the plasma sheet to the observed remarkable enhancement has not been studied. We illustrate the importance of the source population with our simulation of the event using the DREAM 3D diffusion model. Three new modifications have been implemented in the model: 1) incorporating a realistic and time-dependent low-energy boundary condition at 100 keV obtained from the MagEIS data; 2) utilizing event-specific chorus wave distributions derived from the low-energy electron precipitation observed by POES and validated against the in situ wave data from EMFISIS; 3) using an 'open' boundary condition at L*=11 and implementing electron lifetimes on the order of the drift period outside the solar-wind driven last closed drift shell. The model quantitatively reproduces the MeV electron dynamics during this event, including the fast dropout at the start of Oct. 8th, low electron flux during the first Dst dip, and the remarkable enhancement peaked at L*=4.2 during the second Dst dip. By comparing the model results with a realistic source population against those with a constant low-energy boundary (see figure), we find that the realistic electron source population is critical to reproduce the observed fast and significant increase of MeV electrons. [1] Reeves, G. D., et al. (2013), Electron Acceleration in the Heart of the Van Allen Radiation Belts, Science, DOI:10.1126/science.1237743. Figure caption: Comparison between data and model results during the October 2012 storm for electrons at μ = 3168 MeV/G and K = 0.1 G^1/2 Re. The top panel shows the electron phase space density measured by the two Van Allen Probes; the middle panel shows results from the DREAM 3D diffusion model with a realistic electron source population derived from MagEIS data; and the bottom panel shows the model results with a constant source population.
Evaluations on the potential productivity of winter wheat based on agro-ecological zone in the world
NASA Astrophysics Data System (ADS)
Wang, H.; Li, Q.; Du, X.; Zhao, L.; Lu, Y.; Li, D.; Liu, J.
2015-04-01
Wheat is the most widely grown crop globally and an essential source of calories in human diets. Maintaining and increasing global wheat production is therefore strongly linked to food security. In this paper, an evaluation model of winter wheat potential productivity is proposed based on agro-ecological zones and on 30 years of historical winter wheat yield data (1983-2011) obtained from FAO, and the potential production of winter wheat around the world is investigated. The results showed that the realistic potential productivity of winter wheat was highest in Western Europe, at more than 7500 kg/hm2. The realistic potential productivity in the North China Plain was also high, at about 6000 kg/hm2. However, the realistic potential productivity of winter wheat in the United States, which is the main winter wheat producing country, was not high, at only about 3000 kg/hm2. Outside these main winter wheat producing areas, the realistic potential productivity was very low, mostly less than 1500 kg/hm2, as in the southwest region of Russia. The gaps between potential productivity and realistic productivity of winter wheat were largest in Kazakhstan and India, where they exceeded 40% of the realistic productivity. In Russia, the gap between potential productivity and realistic productivity was lowest, at only 10% of the realistic productivity.
Simulated response and effects to oil exposure in an estuarine fish species
Experimental toxicity data alone lack ecological relevance to assess more realistic situations, such as variable exposure to a contaminant and long-term impact. Evaluating the implications of sublethal effects or behavioral response to exposure requires long-term, population-level...
Open Source Projects in Software Engineering Education: A Mapping Study
ERIC Educational Resources Information Center
Nascimento, Debora M. C.; Almeida Bittencourt, Roberto; Chavez, Christina
2015-01-01
Context: It is common practice in academia to have students work with "toy" projects in software engineering (SE) courses. One way to make such courses more realistic and reduce the gap between academic courses and industry needs is getting students involved in open source projects (OSP) with faculty supervision. Objective: This study…
PHOTOCHEMICAL SIMULATIONS OF POINT SOURCE EMISSIONS WITH THE MODELS-3 CMAQ PLUME-IN-GRID APPROACH
A plume-in-grid (PinG) approach has been designed to provide a realistic treatment for the simulation of the dynamic and chemical processes impacting pollutant species in major point source plumes during a subgrid-scale phase within an Eulerian grid modeling framework. The PinG sci...
Waveform inversion of acoustic waves for explosion yield estimation
Kim, K.; Rodgers, A. J.
2016-07-08
We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, where the acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and therefore, their accuracy decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosion yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at far distance provided proper meteorological specifications.
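The core linear step of such an inversion can be sketched as a damped least-squares deconvolution: the recorded pressure is modelled as the convolution of a propagation Green's function with an unknown source time function. The toy version below assumes both series share one sample rate and uses simple Tikhonov damping; the published method instead relies on full 3-D finite-difference Green's functions.

```python
import numpy as np
from scipy.linalg import toeplitz

def invert_source_time_function(green, data, lam=1e-3):
    """Recover a source time function s(t) from recorded data d(t) given a
    Green's function g(t), assuming d = g * s (causal convolution) plus noise.
    A convolution matrix G is built and a damped least-squares solve gives s.
    Requires len(green) <= len(data)."""
    col = np.r_[green, np.zeros(len(data) - len(green))]
    G = toeplitz(col, np.zeros(len(data)))           # lower-triangular convolution matrix
    s = np.linalg.solve(G.T @ G + lam * np.eye(G.shape[1]), G.T @ data)
    return s
```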
NASA Astrophysics Data System (ADS)
Green, D. N.; Neuberg, J.; Cayol, V.
2006-05-01
Surface deformations recorded in close proximity to the active lava dome at Soufrière Hills volcano, Montserrat, can be used to infer stresses within the uppermost 1000 m of the conduit system. Most deformation source models consider only isotropic pressurisation of the conduit. We show that tilt recorded during rapid magma extrusion in 1997 could have also been generated by shear stresses sustained along the conduit wall; these stresses are a consequence of pressure gradients that develop along the conduit. Numerical modelling, incorporating realistic topography, can reproduce both the morphology and half the amplitude of the measured deformation field using a realistic shear stress amplitude, equivalent to a pressure gradient of 3.5 × 104 Pa m-1 along a 1000 m long conduit with a 15 m radius. This shear stress model has advantages over the isotropic pressure models because it does not require either physically unattainable overpressures or source radii larger than 200 m to explain the same deformation.
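As a consistency check on the numbers quoted above, a quasi-static force balance on a cylindrical conduit relates the wall shear stress to the along-conduit pressure gradient (this is the standard pipe-flow relation, not a calculation taken from the paper):

```latex
\tau_{\mathrm{wall}} = \frac{a}{2}\,\frac{dP}{dz}
  = \frac{15\ \mathrm{m}}{2}\times 3.5\times10^{4}\ \mathrm{Pa\,m^{-1}}
  \approx 2.6\times10^{5}\ \mathrm{Pa}\ (\approx 0.26\ \mathrm{MPa}).
```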
NASA Astrophysics Data System (ADS)
Landazuri, Andrea C.
This dissertation focuses on aerosol transport modeling in occupational environments and mining sites in Arizona using computational fluid dynamics (CFD). The impacts of human exposure in both environments are explored with emphasis on turbulence, wind speed, wind direction and particle sizes. Final emissions simulations involved digitizing the available elevation contour plots of one of the mining sites to account for realistic topographical features. The digital elevation map (DEM) of one of the sites was imported into COMSOL Multiphysics for subsequent turbulence and particle simulations. Simulation results that include realistic topography show considerable deviations of wind direction. Inter-element correlation results for metal and metalloid size-resolved concentration data from a Micro-Orifice Uniform Deposit Impactor (MOUDI), under given wind speeds and directions, provided guidance on groups of metals that coexist throughout mining activities. The pairs Fe-Mg, Cr-Fe, Al-Sc, Sc-Fe, and Mg-Al are strongly correlated for unrestricted wind directions and speeds, suggesting that the source may be of soil origin (e.g. ore and tailings); groups of elements where Cu is present, in the coarse fraction range, may come from mechanical mining activities and the saltation phenomenon. In addition, MOUDI data under low wind speeds (<2 m/s) and at night showed strong correlations for 1 μm particles between the groups Sc-Be-Mg, Cr-Al, Cu-Mn, Cd-Pb-Be, Cd-Cr, Cu-Pb, Pb-Cd, and As-Cd-Pb. The As-Cd-Pb group correlates strongly in almost all particle size ranges. When restricted low wind speeds were imposed, more groups of elements became evident, which may be explained by the fact that at lower speeds particles are more likely to settle. Linking these results with the CFD simulations and the Pb-isotope results, it is concluded that the elements found in association with Pb in the fine fraction come from the ore that is subsequently processed at the smelter site, whereas the elements associated with Pb in the coarse fraction are of different origin. CFD simulation results not only provide realistic and quantifiable information on potential deleterious effects, but also demonstrate that CFD is an important contribution to current dispersion modeling studies; Computational Fluid Dynamics can therefore be used as a source apportionment tool to identify areas that have an effect on specific sampling points and susceptible regions under certain meteorological conditions, and these conclusions can be supported with inter-element correlation matrices and lead isotope analysis, especially since there is limited access to the mining sites. Additional results showed that grid adaptation is a powerful tool: it refines the specific regions that require high levels of detail and therefore better resolves the flow, yields a higher number of locations with monotonic convergence than the manual grids, and requires the least computational effort. CFD simulations were approached using the k-epsilon model, with the aid of computer-aided engineering software (ANSYS and COMSOL Multiphysics). The success of aerosol transport simulations depends on a good simulation of the turbulent flow, so considerable attention was placed on investigating and choosing the best models in terms of convergence, independence and computational effort.
This dissertation also includes preliminary studies of transient discrete phase, eulerian and species transport modeling, importance of saltation of particles, information on CFD methods, and strategies for future directions that should be taken.
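The inter-element correlation analysis mentioned above can be reproduced with a few lines once the size-resolved concentrations are tabulated; the file name and column layout below are hypothetical.

```python
import pandas as pd

# Rows: MOUDI samples for one size fraction; columns: element concentrations.
# The CSV name and element list are placeholders for illustration.
df = pd.read_csv("moudi_1um_fraction.csv")        # e.g. columns Fe, Mg, Cr, Al, Sc, Cu, Pb, Cd, As
corr = df.corr(method="pearson")                  # inter-element correlation matrix
strong = corr.where((corr.abs() > 0.8) & (corr.abs() < 1.0))
print(strong.round(2).dropna(how="all").dropna(axis=1, how="all"))
```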
Huang, Qi; Yang, Dapeng; Jiang, Li; Zhang, Huajie; Liu, Hong; Kotani, Kiyoshi
2017-01-01
A variety of interfering factors degrade the long-term performance of pattern recognition-based myoelectric control methods. This paper proposes an adaptive learning method with low computational cost to mitigate this effect in unsupervised adaptive learning scenarios. We present a particle adaptive classifier (PAC), built from a particle adaptive learning strategy and a universal incremental least-squares support vector classifier (LS-SVC). We compared PAC performance with an incremental support vector classifier (ISVC) and a non-adapting SVC (NSVC) in a long-term pattern recognition task in both unsupervised and supervised adaptive learning scenarios. Retraining time cost and recognition accuracy were compared by validating the classification performance on both simulated and realistic long-term EMG data. The classification results on realistic long-term EMG data showed that the PAC significantly decreased the performance degradation in unsupervised adaptive learning scenarios compared with NSVC (9.03% ± 2.23%, p < 0.05) and ISVC (13.38% ± 2.62%, p = 0.001), and reduced the retraining time cost compared with ISVC (2 ms per updating cycle vs. 50 ms per updating cycle). PMID:28608824
Collective behaviour in vertebrates: a sensory perspective
Collignon, Bertrand; Fernández-Juricic, Esteban
2016-01-01
Collective behaviour models can predict behaviours of schools, flocks, and herds. However, in many cases, these models make biologically unrealistic assumptions in terms of the sensory capabilities of the organism, which are applied across different species. We explored how sensitive collective behaviour models are to these sensory assumptions. Specifically, we used parameters reflecting the visual coverage and visual acuity that determine the spatial range over which an individual can detect and interact with conspecifics. Using metric and topological collective behaviour models, we compared the classic sensory parameters, typically used to model birds and fish, with a set of realistic sensory parameters obtained through physiological measurements. Compared with the classic sensory assumptions, the realistic assumptions increased perceptual ranges, which led to fewer groups and larger group sizes in all species, and higher polarity values and slightly shorter neighbour distances in the fish species. Overall, classic visual sensory assumptions are not representative of many species showing collective behaviour and constrain unrealistically their perceptual ranges. More importantly, caution must be exercised when empirically testing the predictions of these models in terms of choosing the model species, making realistic predictions, and interpreting the results. PMID:28018616
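A minimal sketch of the kind of sensory neighbour rule being compared is shown below: a focal individual interacts only with conspecifics that fall inside its perceptual range and visual coverage. Both parameters would come from physiological measurements; the default values and function names here are illustrative.

```python
import numpy as np

def visible_neighbours(positions, headings, focal, r_max=5.0, fov_deg=300.0):
    """Return indices of individuals that the focal animal can interact with,
    given a maximum perception distance r_max (set by visual acuity) and a
    visual coverage fov_deg (total field of view, in degrees)."""
    offsets = positions - positions[focal]
    dist = np.linalg.norm(offsets, axis=1)
    heading = headings[focal] / np.linalg.norm(headings[focal])
    with np.errstate(invalid="ignore", divide="ignore"):
        cos_angle = (offsets @ heading) / dist        # angle to each conspecific
    in_fov = cos_angle >= np.cos(np.deg2rad(fov_deg / 2.0))
    return np.where((dist > 0) & (dist <= r_max) & in_fov)[0]
```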
Modeling of Selenium for the San Diego Creek Watershed and Newport Bay, California
Presser, Theresa S.; Luoma, Samuel N.
2009-01-01
The San Diego Creek watershed and Newport Bay in southern California are contaminated with selenium (Se) as a result of groundwater associated with urban development overlying a historical wetland, the Swamp of the Frogs. The primary Se source is drainage from surrounding seleniferous marine sedimentary formations. An ecosystem-scale model was employed as a tool to assist development of a site-specific Se objective for the region. The model visualizes outcomes of different exposure scenarios in terms of bioaccumulation in predators using partitioning coefficients, trophic transfer factors, and site-specific data for food-web inhabitants and particulate phases. Predicted Se concentrations agreed well with field observations, validating the use of the model as realistic tool for testing exposure scenarios. Using the fish tissue and bird egg guidelines suggested by regulatory agencies, allowable water concentrations were determined for different conditions and locations in the watershed and the bay. The model thus facilitated development of a site-specific Se objective that was locally relevant and provided a basis for step-by-step implementation of source control.
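The exposure chain that such an ecosystem-scale model evaluates can be sketched as a chain of multiplications by a partitioning coefficient and trophic transfer factors; the parameter values below are hypothetical placeholders, since the actual model uses site-specific data for each particulate phase and food-web member.

```python
def predator_se_concentration(c_water_ug_per_L, kd_L_per_kg, ttf_invertebrate, ttf_predator):
    """Dissolved Se -> particulate Se (via Kd) -> invertebrate prey -> predator
    tissue, each step a simple multiplication. Units: water in ug/L, Kd in L/kg,
    tissue concentrations in ug/g dry weight."""
    c_particulate = kd_L_per_kg * c_water_ug_per_L / 1000.0   # ug/g
    c_invertebrate = ttf_invertebrate * c_particulate
    return ttf_predator * c_invertebrate

# Placeholder example: predator_se_concentration(1.0, 1000.0, 2.8, 1.1) -> ~3.1 ug/g
```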
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Šindelářová, Kateřina; Hýža, Miroslav; Stohl, Andreas
2017-10-01
In the fall of 2011, iodine-131 (131I) was detected at several radionuclide monitoring stations in central Europe. After investigation, the International Atomic Energy Agency (IAEA) was informed by Hungarian authorities that 131I was released from the Institute of Isotopes Ltd. in Budapest, Hungary. It was reported that a total activity of 342 GBq of 131I was emitted between 8 September and 16 November 2011. In this study, we use the ambient concentration measurements of 131I to determine the location of the release as well as its magnitude and temporal variation. As the location of the release and an estimate of the source strength became eventually known, this accident represents a realistic test case for inversion models. For our source reconstruction, we use no prior knowledge. Instead, we estimate the source location and emission variation using only the available 131I measurements. Subsequently, we use the partial information about the source term available from the Hungarian authorities for validation of our results. For the source determination, we first perform backward runs of atmospheric transport models and obtain source-receptor sensitivity (SRS) matrices for each grid cell of our study domain. We use two dispersion models, FLEXPART and Hysplit, driven with meteorological analysis data from the global forecast system (GFS) and from European Centre for Medium-range Weather Forecasts (ECMWF) weather forecast models. Second, we use a recently developed inverse method, least-squares with adaptive prior covariance (LS-APC), to determine the 131I emissions and their temporal variation from the measurements and computed SRS matrices. For each grid cell of our simulation domain, we evaluate the probability that the release was generated in that cell using Bayesian model selection. The model selection procedure also provides information about the most suitable dispersion model for the source term reconstruction. Third, we select the most probable location of the release with its associated source term and perform a forward model simulation to study the consequences of the iodine release. Results of these procedures are compared with the known release location and reported information about its time variation. We find that our algorithm could successfully locate the actual release site. The estimated release period is also in agreement with the values reported by IAEA and the reported total released activity of 342 GBq is within the 99 % confidence interval of the posterior distribution of our most likely model.
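At its core, the reconstruction solves a linear system y ≈ Mx, where M is the source-receptor sensitivity matrix for a candidate grid cell and x the emission rates in each release interval. The sketch below uses a plain non-negative least-squares solve as a greatly simplified stand-in for the LS-APC estimator, which additionally infers an adaptive prior covariance and feeds the Bayesian model selection over candidate cells.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_emissions(srs_matrix, observations):
    """srs_matrix: (n_measurements x n_release_intervals) source-receptor
    sensitivities for one candidate source cell; observations: measured 131I
    concentrations. Returns a non-negative emission time series and the
    residual norm, which can be compared across candidate cells."""
    emissions, resid = nnls(srs_matrix, observations)
    return emissions, resid
```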
On the mechanical theory for biological pattern formation
NASA Astrophysics Data System (ADS)
Bentil, D. E.; Murray, J. D.
1993-02-01
We investigate the pattern-forming potential of the mechanical models in embryology proposed by Oster, Murray and their coworkers. We show that the presence of source terms in the tissue extracellular matrix and cell density equations gives rise to spatio-temporal oscillations. An extension of one such model to include 'biologically realistic' long-range effects induces the formation of stationary spatial patterns. Previous attempts to solve the full system were in one dimension only. We obtain solutions in one dimension and extend our simulations to two dimensions. We show that a single mechanical model alone is capable of generating complex but regular spatial patterns, rather than requiring model interaction as suggested by Nagorcka et al. and Shaw and Murray. We discuss some biological applications of the models, among which are wound healing and the formation of dermatoglyphic (fingerprint) patterns.
NASA Astrophysics Data System (ADS)
Mende, Denis; Böttger, Diana; Löwer, Lothar; Becker, Holger; Akbulut, Alev; Stock, Sebastian
2018-02-01
The European power grid infrastructure faces various challenges due to the expansion of renewable energy sources (RES). To conduct investigations on interactions between power generation and the power grid, models for the power market as well as for the power grid are necessary. This paper describes the basic functionalities and working principles of both types of models as well as steps to couple power market results and the power grid model. The combination of these models is beneficial in terms of gaining realistic power flow scenarios in the grid model and of being able to pass back results of the power flow and restrictions to the market model. Focus is laid on the power grid model and possible application examples like algorithms in grid analysis, operation and dynamic equipment modelling.
Tsatsakis, Aristidis M; Docea, Anca Oana; Tsitsimpikou, Christina
2016-10-01
The general population experiences uncontrolled exposure to multiple chemicals from many different sources at doses around or well below regulatory limits. Traditional chronic toxicity evaluations of single chemicals may therefore fail to identify all the risks adequately. To address this, an experimental methodology is proposed that aims to provide answers to multiple questions at once: a long-term toxicity study of non-commercial chemical mixtures, consisting of common everyday-life chemicals (pesticides, food additives, lifestyle product components) at low and realistic dose levels around the regulatory limits, with the simultaneous investigation of several key endpoints, such as genotoxicity, endocrine disruption, target organ toxicity including the heart, and systemic mechanistic pathways such as oxidative stress. Copyright © 2016 Elsevier Ltd. All rights reserved.
Characterization of electrophysiological propagation by multichannel sensors
Bradshaw, L. Alan; Kim, Juliana H.; Somarajan, Suseela; Richards, William O.; Cheng, Leo K.
2016-01-01
Objective The propagation of electrophysiological activity measured by multichannel devices could have significant clinical implications. Gastric slow waves normally propagate along longitudinal paths that are evident in recordings of serosal potentials and transcutaneous magnetic fields. We employed a realistic model of gastric slow wave activity to simulate the transabdominal magnetogastrogram (MGG) recorded in a multichannel biomagnetometer and to determine characteristics of electrophysiological propagation from MGG measurements. Methods Using MGG simulations of slow wave sources in a realistic abdomen (both superficial and deep sources) and in a horizontally-layered volume conductor, we compared two analytic methods (Second Order Blind Identification, SOBI and Surface Current Density, SCD) that allow quantitative characterization of slow wave propagation. We also evaluated the performance of the methods with simulated experimental noise. The methods were also validated in an experimental animal model. Results Mean square errors in position estimates were within 2 cm of the correct position, and average propagation velocities within 2 mm/s of the actual velocities. SOBI propagation analysis outperformed the SCD method for dipoles in the superficial and horizontal layer models with and without additive noise. The SCD method gave better estimates for deep sources, but did not handle additive noise as well as SOBI. Conclusion SOBI-MGG and SCD-MGG were used to quantify slow wave propagation in a realistic abdomen model of gastric electrical activity. Significance These methods could be generalized to any propagating electrophysiological activity detected by multichannel sensor arrays. PMID:26595907
Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A.; Zhang, Wenbo
2016-01-01
Objective Combined source imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a non-invasive fashion. Source imaging techniques have been used successfully to either determine the source of activity or to extract source time-courses for Granger causality analysis, previously. In this work, we utilize source imaging algorithms to both find the network nodes (regions of interest) and then extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses and apply Granger analysis on the extracted series to study brain networks under realistic conditions. Methods Source imaging methods are used to identify network nodes and extract time-courses and then Granger causality analysis is applied to delineate the directional functional connectivity of underlying brain networks. Computer simulations studies where the underlying network (nodes and connectivity pattern) is known were performed; additionally, this approach has been evaluated in partial epilepsy patients to study epilepsy networks from inter-ictal and ictal signals recorded by EEG and/or MEG. Results Localization errors of network nodes are less than 5 mm and normalized connectivity errors of ~20% in estimating underlying brain networks in simulation studies. Additionally, two focal epilepsy patients were studied and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Conclusion Our study indicates that combined source imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network nodes location and internodal connectivity). Significance The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions. PMID:27740473
Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A; Zhang, Wenbo; He, Bin
2016-12-01
Combined source-imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a noninvasive fashion. Source-imaging techniques have been used successfully to either determine the source of activity or to extract source time-courses for Granger causality analysis, previously. In this work, we utilize source-imaging algorithms to both find the network nodes [regions of interest (ROI)] and then extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses, and apply Granger analysis on the extracted series to study brain networks under realistic conditions. Source-imaging methods are used to identify network nodes and extract time-courses and then Granger causality analysis is applied to delineate the directional functional connectivity of underlying brain networks. Computer simulations studies where the underlying network (nodes and connectivity pattern) is known were performed; additionally, this approach has been evaluated in partial epilepsy patients to study epilepsy networks from interictal and ictal signals recorded by EEG and/or Magnetoencephalography (MEG). Localization errors of network nodes are less than 5 mm and normalized connectivity errors of ∼20% in estimating underlying brain networks in simulation studies. Additionally, two focal epilepsy patients were studied and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Our study indicates that combined source-imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network nodes location and internodal connectivity). The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions.
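Once time-courses have been extracted for each network node, the directional step can be as simple as pairwise Granger tests; the sketch below uses statsmodels on synthetic placeholder series standing in for source-imaging output (in practice multivariate models and surrogate statistics would be used).

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
roi_a = rng.standard_normal(500)
roi_b = np.roll(roi_a, 3) + 0.5 * rng.standard_normal(500)   # B lags A by 3 samples

# grangercausalitytests checks whether the second column Granger-causes the first,
# so this arrangement tests the directed influence A -> B.
data = np.column_stack([roi_b, roi_a])
results = grangercausalitytests(data, maxlag=5)
```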
NASA Astrophysics Data System (ADS)
Krol, M.; Kokkinaki, A.; Sleep, B.
2014-12-01
The persistence of dense-non-aqueous-phase liquids (DNAPLs) in the subsurface has led practitioners and regulatory agencies to turn towards low-maintenance, low-cost remediation methods. Biological degradation has been suggested as a possible solution, based on the well-proven ability of certain microbial species to break down dissolved chlorinated ethenes under favorable conditions. However, the biodegradation of pure phase chlorinated ethenes is subject to additional constraints: the continuous release of electron acceptor at a rate governed by mass transfer kinetics, and the temporal and spatial heterogeneity of DNAPL source zones which leads to spatially and temporally variable availability of the reactants for reductive dechlorination. In this work, we investigate the relationship between various DNAPL source zone characteristics and reaction kinetics using COMPSIM, a multiphase groundwater model that considers non-equilibrium mass transfer and Monod-type kinetics for reductive dechlorination. Numerical simulations are performed for simple, homogeneous trichloroethene DNAPL source zones to demonstrate the effect of single source zone characteristics, as well as for larger, more realistic heterogeneous source zones. It is shown that source zone size, and mass transfer kinetics may have a decisive effect on the predicted bio-enhancement. Finally, we evaluate the performance of DNAPL bioremediation for realistic, thermodynamically constrained, concentrations of electron donor. Our results indicate that the latter may be the most important limitation for the success of DNAPL bioremediation, leading to reduced bio-enhancement and, in many cases, comparable performance with water flooding.
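The Monod-type rate law referred to above couples the dissolved chlorinated ethene (released from the DNAPL by mass transfer) with the electron donor; a minimal dual-substrate form is sketched below, with all parameter values illustrative rather than the COMPSIM calibration.

```python
def dechlorination_rate(s_acceptor, s_donor, biomass, k_max, K_s, K_d):
    """Dual-Monod rate of reductive dechlorination: s_acceptor is the dissolved
    chlorinated ethene concentration, s_donor the electron donor concentration,
    biomass the dechlorinator population. The rate saturates in both substrates."""
    return k_max * biomass * (s_acceptor / (K_s + s_acceptor)) * (s_donor / (K_d + s_donor))
```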
Test methods for environment-assisted cracking
NASA Astrophysics Data System (ADS)
Turnbull, A.
1992-03-01
The test methods for assessing environment-assisted cracking of metals in aqueous solution are described. Their advantages and disadvantages are examined and the interrelationship between results from different test methods is discussed. Differences in susceptibility to cracking occasionally observed between the various mechanical test methods often arise from variations in environmental parameters between test conditions and from the lack of adequate specification, monitoring, and control of environmental variables. Time is also a significant factor when comparing results from short-term tests with long-exposure tests. In addition to these factors, intrinsic differences in important mechanical variables, such as strain rate, associated with the various mechanical test methods can change the apparent sensitivity of the material to stress corrosion cracking. The increasing economic pressure for more accelerated testing is in conflict with the characteristic time dependence of corrosion processes. Unreliable results may be inevitable in some cases, but improved understanding of mechanisms and the development of mechanistically based models of environment-assisted cracking that incorporate the key mechanical, material, and environmental variables can provide the framework for a more realistic interpretation of short-term data.
Fehr, M
2014-09-01
Business opportunities in the household waste sector in emerging economies still evolve around the activities of bulk collection and tipping with an open material balance. This research, conducted in Brazil, pursued the objective of shifting opportunities from tipping to reverse logistics in order to close the balance. To do this, it illustrated how specific knowledge of sorted waste composition and reverse logistics operations can be used to determine realistic temporal and quantitative landfill diversion targets in an emerging economy context. Experimentation constructed and confirmed the recycling trilogy that consists of source separation, collection infrastructure and reverse logistics. The study on source separation demonstrated the vital difference between raw and sorted waste compositions. Raw waste contained 70% biodegradable and 30% inert matter. Source separation produced 47% biodegradable, 20% inert and 33% mixed material. The study on collection infrastructure developed the necessary receiving facilities. The study on reverse logistics identified private operators capable of collecting and processing all separated inert items. Recycling activities for biodegradable material were scarce and erratic. Only farmers would take the material as animal feed. No composting initiatives existed. The management challenge was identified as stimulating these activities in order to complete the trilogy and divert the 47% source-separated biodegradable discards from the landfills. © The Author(s) 2014.
NASA Astrophysics Data System (ADS)
Poupardin, A.; Heinrich, P.; Hébert, H.; Schindelé, F.; Jamelot, A.; Reymond, D.; Sugioka, H.
2018-05-01
This paper evaluates the importance of frequency dispersion in the propagation of recent trans-Pacific tsunamis. Frequency dispersion induces a time delay for the most energetic waves, which increases for long propagation distances and short source dimensions. To calculate this time delay, propagation of tsunamis is simulated and analyzed from spectrograms of time-series at specific gauges in the Pacific Ocean. One- and two-dimensional simulations are performed by solving either shallow water or Boussinesq equations and by considering realistic seismic sources. One-dimensional sensitivity tests are first performed in a constant-depth channel to study the influence of the source width. Two-dimensional tests are then performed in a simulated Pacific Ocean with a 4000-m constant depth and by considering tectonic sources of 2010 and 2015 Chilean earthquakes. For these sources, both the azimuth and the distance play a major role in the frequency dispersion of tsunamis. Finally, simulations are performed considering the real bathymetry of the Pacific Ocean. Multiple reflections, refractions as well as shoaling of waves result in much more complex time series for which the effects of the frequency dispersion are hardly discernible. The main point of this study is to evaluate frequency dispersion in terms of traveltime delays by calculating spectrograms for a time window of 6 hours after the arrival of the first wave. Results of the spectral analysis show that the wave packets recorded by pressure and tide sensors in the Pacific Ocean seem to be better reproduced by the Boussinesq model than the shallow water model and approximately follow the theoretical dispersion relationship linking wave arrival times and frequencies. Additionally, a traveltime delay is determined above which effects of frequency dispersion are considered to be significant in terms of maximum surface elevations.
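For readers who want to reproduce the order of magnitude of the dispersive delay discussed above, a minimal sketch follows: it solves the linear dispersion relation ω² = g k tanh(kh) for each wave period, forms the group velocity, and compares the arrival time with the non-dispersive shallow-water speed √(gh). The 4000 m depth matches the idealized two-dimensional tests; the 10,000 km path length and the chosen periods are illustrative assumptions, and no real bathymetry or source geometry is included.

```python
# Sketch: travel-time delay of a dispersive wave component relative to the
# non-dispersive shallow-water arrival, using the linear dispersion relation
# omega^2 = g*k*tanh(k*h). Depth, distance and periods are illustrative.
import numpy as np
from scipy.optimize import brentq

g, h, distance = 9.81, 4000.0, 10_000e3   # m/s^2, m, m (10,000 km path)

def group_velocity(T):
    """Group velocity for wave period T (s) at depth h."""
    omega = 2 * np.pi / T
    # solve omega^2 = g*k*tanh(k*h) for the wavenumber k
    k = brentq(lambda k: g * k * np.tanh(k * h) - omega**2, 1e-7, 1.0)
    n = 0.5 * (1 + 2 * k * h / np.sinh(2 * k * h))
    return n * omega / k

c0 = np.sqrt(g * h)                        # non-dispersive long-wave speed
for T in (1200.0, 600.0, 300.0):           # wave periods in seconds
    delay = distance / group_velocity(T) - distance / c0
    print(f"T = {T/60:4.0f} min : dispersive delay ~ {delay/60:5.1f} min")
```

Shorter-period components lag further behind the long-wave front, which is the travel-time delay the spectrogram analysis quantifies.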
Bumblebee learning and memory is impaired by chronic exposure to a neonicotinoid pesticide
Stanley, Dara A.; Smith, Karen E.; Raine, Nigel E.
2015-01-01
Bumblebees are exposed to pesticides applied for crop protection while foraging on treated plants, with increasing evidence suggesting that this sublethal exposure has implications for pollinator declines. The challenges of navigating and learning to manipulate many different flowers underline the critical role learning plays for the foraging success and survival of bees. We assessed the impacts of both acute and chronic exposure to field-realistic levels of a widely applied neonicotinoid insecticide, thiamethoxam, on bumblebee odour learning and memory. Although bees exposed to acute doses showed conditioned responses less frequently than controls, we found no difference in the number of individuals able to learn at field-realistic exposure levels. However, following chronic pesticide exposure, bees exposed to field-realistic levels learnt more slowly and their short-term memory was significantly impaired following exposure to 2.4 ppb pesticide. These results indicate that field-realistic pesticide exposure can have appreciable impacts on learning and memory, with potential implications for essential individual behaviour and colony fitness. PMID:26568480
NASA Astrophysics Data System (ADS)
Griffiths, J.; Riley, M. J. W.; Borman, A.; Dowding, C.; Kirk, A.; Bickerton, R.
2015-03-01
Laser-induced spark ignition offers the potential for greater reliability and consistency in ignition of lean air/fuel mixtures. This increased reliability is essential for the application of gas turbines as primary or secondary reserve energy sources in smart grid systems, enabling the integration of renewable energy sources whose output is prone to fluctuation over time. This work details a study into the effect of flow velocity and temperature on minimum ignition energies in laser-induced spark ignition in an atmospheric combustion test rig, representative of a sub-15 MW industrial gas turbine (Siemens Industrial Turbomachinery Ltd., Lincoln, UK). Determination of minimum ignition energies required for a range of temperatures and flow velocities is essential for establishing an operating window in which laser-induced spark ignition can operate under realistic, engine-like start conditions. Ignition of a natural gas and air mixture at atmospheric pressure was conducted using a laser ignition system utilizing a Q-switched Nd:YAG laser source operating at 532 nm wavelength and 4 ns pulse length. Analysis of the influence of flow velocity and temperature on ignition characteristics is presented in terms of required photon flux density, a useful parameter to consider during the development of laser ignition systems.
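As a rough illustration of the photon flux density parameter mentioned above, the sketch below converts a pulse energy into photons per pulse (N = Eλ/hc) and divides by an assumed focal spot area and the 4 ns pulse length. The pulse energy and spot radius are illustrative assumptions, not values reported by the study, and the study's own definition of photon flux density may differ in detail.

```python
# Back-of-envelope sketch: photon flux density of a Q-switched laser pulse.
# Pulse energy and focal spot size are illustrative assumptions; the wavelength
# (532 nm) and pulse length (4 ns) come from the abstract.
import math

h, c = 6.626e-34, 2.998e8          # J*s, m/s
wavelength = 532e-9                # m
pulse_energy = 20e-3               # J (assumed)
pulse_length = 4e-9                # s
spot_radius = 50e-6                # m (assumed focal spot radius)

photons_per_pulse = pulse_energy * wavelength / (h * c)
spot_area = math.pi * spot_radius**2
flux_density = photons_per_pulse / (spot_area * pulse_length)   # photons m^-2 s^-1
print(f"{photons_per_pulse:.2e} photons/pulse, "
      f"flux density {flux_density:.2e} photons m^-2 s^-1")
```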
On Evaluating a Project: Some Practical Suggestions. NCME Measurement in Education, Vol. 6, No. 1.
ERIC Educational Resources Information Center
Wick, John W.
Prime indicators for realistic short-term/long-term project goals are budgets and timetables. Concrete, identifiable objects are useful in separating eloquent rhetoric from actual promises. Similarly, an external evaluator should be able to separate proposals with intentional misrepresentation of funding and goals from those which need further…
The Direct Lighting Computation in Global Illumination Methods
NASA Astrophysics Data System (ADS)
Wang, Changyaw Allen
1994-01-01
Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem into an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection; on Monte Carlo sampling methods; and on light source simplification. Results include a new sample generation method, a framework for the prediction of the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment, which for the first time makes ray tracing feasible for highly complex environments.
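The direct lighting term described above is commonly estimated by Monte Carlo sampling of points on the emitting sources. The sketch below is a minimal area-sampling estimator for a single square light above a Lambertian floor point, with no occlusion test; the geometry, radiance and albedo are illustrative assumptions, and this is not the thesis's sample generation method.

```python
# Minimal sketch: Monte Carlo estimate of direct lighting at a point from a
# square area light, uniformly sampling the light surface (Lambertian surface,
# no occlusion test). Geometry and radiance values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
L_e = 5.0                          # emitted radiance of the light (assumed)
light_area = 1.0                   # 1 m x 1 m square light at height 2 m
x = np.array([0.0, 0.0, 0.0])      # shading point on the floor
n_x = np.array([0.0, 0.0, 1.0])    # surface normal at x
n_l = np.array([0.0, 0.0, -1.0])   # light normal (facing down)
albedo = 0.7

def estimate(n_samples):
    # sample points uniformly on the light; pdf = 1 / light_area
    pts = np.column_stack([rng.uniform(-0.5, 0.5, n_samples),
                           rng.uniform(-0.5, 0.5, n_samples),
                           np.full(n_samples, 2.0)])
    d = pts - x
    dist = np.linalg.norm(d, axis=1)
    w = d / dist[:, None]                       # unit directions to the light samples
    cos_x = np.clip(w @ n_x, 0, None)           # cosine at the surface
    cos_l = np.clip(-(w @ n_l), 0, None)        # cosine at the light
    # estimator: (albedo/pi) * L_e * cos_x * cos_l / dist^2, divided by pdf
    return np.mean(albedo / np.pi * L_e * cos_x * cos_l / dist**2) * light_area

for n in (16, 256, 4096):
    print(n, estimate(n))            # variance shrinks as the sample count grows
```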
Heers, Marcel; Chowdhury, Rasheda A; Hedrich, Tanguy; Dubeau, François; Hall, Jeffery A; Lina, Jean-Marc; Grova, Christophe; Kobayashi, Eliane
2016-01-01
Distributed inverse solutions aim to realistically reconstruct the origin of interictal epileptic discharges (IEDs) from noninvasively recorded electroencephalography (EEG) and magnetoencephalography (MEG) signals. Our aim was to compare the performance of different distributed inverse solutions in localizing IEDs: coherent maximum entropy on the mean (cMEM), hierarchical Bayesian implementations of independent identically distributed sources (IID, minimum norm prior) and spatially coherent sources (COH, spatial smoothness prior). Source maxima (i.e., the vertex with the maximum source amplitude) of IEDs in 14 EEG and 19 MEG studies from 15 patients with focal epilepsy were analyzed. We visually compared their concordance with intracranial EEG (iEEG) based on 17 cortical regions of interest and their spatial dispersion around source maxima. Magnetic source imaging (MSI) maxima from cMEM were most often confirmed by iEEG (cMEM: 14/19, COH: 9/19, IID: 8/19 studies). COH electric source imaging (ESI) maxima co-localized best with iEEG (cMEM: 8/14, COH: 11/14, IID: 10/14 studies). In addition, cMEM was less spatially spread than COH and IID for ESI and MSI (p < 0.001 Bonferroni-corrected post hoc t test). Highest positive predictive values for cortical regions with IEDs in iEEG could be obtained with cMEM for MSI and with COH for ESI. Additional realistic EEG/MEG simulations confirmed our findings. Accurate spatially extended sources, as found in cMEM (ESI and MSI) and COH (ESI) are desirable for source imaging of IEDs because this might influence surgical decision. Our simulations suggest that COH and IID overestimate the spatial extent of the generators compared to cMEM.
Radio weak lensing shear measurement in the visibility domain - II. Source extraction
NASA Astrophysics Data System (ADS)
Rivi, M.; Miller, L.
2018-05-01
This paper extends the method introduced in Rivi et al. (2016b) to measure galaxy ellipticities in the visibility domain for radio weak lensing surveys. In that paper, we focused on the development and testing of the method for the simple case of individual galaxies located at the phase centre, and proposed to extend it to the realistic case of many sources in the field of view by isolating the visibilities of each source with a faceting technique. In this second paper, we present a detailed algorithm for source extraction in the visibility domain and show its effectiveness as a function of the source number density by running simulations of SKA1-MID observations in the band 950-1150 MHz and comparing original and measured values of galaxies' ellipticities. Shear measurements from a realistic population of 10^4 galaxies randomly located in a field of view of 1 deg^2 (i.e. the source density expected for the current radio weak lensing survey proposal with SKA1) are also performed. At SNR ≥ 10, the multiplicative bias is only a factor 1.5 worse than that found when analysing individual sources, and is still comparable to the bias values reported for similar measurement methods at optical wavelengths. The additive bias is unchanged from the case of individual sources, but it is significantly larger than is typically found in optical surveys. This bias depends on the shape of the uv coverage and we suggest that a uv-plane weighting scheme to produce a more isotropic shape could reduce and control additive bias.
Maidment, Ian; Booth, Andrew; Mullan, Judy; McKeown, Jane; Bailey, Sylvia; Wong, Geoffrey
2017-07-03
Medication-related adverse events have been estimated to be responsible for 5700 deaths and cost the UK £750 million annually. This burden falls disproportionately on older people. Outcomes from interventions to optimise medication management are caused by multiple context-sensitive mechanisms. The MEdication Management in Older people: REalist Approaches BAsed on Literature and Evaluation (MEMORABLE) project uses realist synthesis to understand how, why, for whom and in what context interventions, to improve medication management in older people on complex medication regimes residing in the community, work. This realist synthesis uses secondary data and primary data from interviews to develop the programme theory. A realist logic of analysis will synthesise data both within and across the two data sources to inform the design of a complex intervention(s) to help improve medication management in older people. 1. Literature review The review (using realist synthesis) contains five stages to develop an initial programme theory to understand why processes are more or less successful and under which situations: focussing of the research question; developing the initial programme theory; developing the search strategy; selection and appraisal based on relevance and rigour; and data analysis/synthesis to develop and refine the programme theory and context, intervention and mechanism configurations. 2. Realist interviews Realist interviews will explore and refine our understanding of the programme theory developed from the realist synthesis. Up to 30 older people and their informal carers (15 older people with multi-morbidity, 10 informal carers and 5 older people with dementia), and 20 care staff will be interviewed. 3. Developing framework for the intervention(s) Data from the realist synthesis and interviews will be used to refine the programme theory for the intervention(s) to identify: the mechanisms that need to be 'triggered', and the contexts related to these mechanisms. Intervention strategies that change the contexts so the mechanisms are triggered to produce desired outcomes will be developed. Feedback on these strategies will be obtained. This realist synthesis aims to develop a framework (underpinned by our programme theory) for a novel multi-disciplinary, multi-agency intervention(s), to improve medication management in community-dwelling older people on complex medication regimens. PROSPERO CRD42016043506.
NEDE: an open-source scripting suite for developing experiments in 3D virtual environments.
Jangraw, David C; Johri, Ansh; Gribetz, Meron; Sajda, Paul
2014-09-30
As neuroscientists endeavor to understand the brain's response to ecologically valid scenarios, many are leaving behind hyper-controlled paradigms in favor of more realistic ones. This movement has made the use of 3D rendering software an increasingly compelling option. However, mastering such software and scripting rigorous experiments requires a daunting amount of time and effort. To reduce these startup costs and make virtual environment studies more accessible to researchers, we demonstrate a naturalistic experimental design environment (NEDE) that allows experimenters to present realistic virtual stimuli while still providing tight control over the subject's experience. NEDE is a suite of open-source scripts built on the widely used Unity3D game development software, giving experimenters access to powerful rendering tools while interfacing with eye tracking and EEG, randomizing stimuli, and providing custom task prompts. Researchers using NEDE can present a dynamic 3D virtual environment in which randomized stimulus objects can be placed, allowing subjects to explore in search of these objects. NEDE interfaces with a research-grade eye tracker in real-time to maintain precise timing records and sync with EEG or other recording modalities. Python offers an alternative for experienced programmers who feel comfortable mastering and integrating the various toolboxes available. NEDE combines many of these capabilities with an easy-to-use interface and, through Unity's extensive user base, a much more substantial body of assets and tutorials. Our flexible, open-source experimental design system lowers the barrier to entry for neuroscientists interested in developing experiments in realistic virtual environments. Copyright © 2014 Elsevier B.V. All rights reserved.
Harte, John; Saleska, Scott R; Levy, Charlotte
2015-06-01
Ecosystem responses to climate change can exert positive or negative feedbacks on climate, mediated in part by slow-moving factors such as shifts in vegetation community composition. Long-term experimental manipulations can be used to examine such ecosystem responses, but they also present another opportunity: inferring the extent to which contemporary climate change is responsible for slow changes in ecosystems under ambient conditions. Here, using 23 years of data, we document a shift from nonwoody to woody vegetation and a loss of soil carbon in ambient plots and show that these changes track previously shown similar but faster changes under experimental warming. This allows us to infer that climate change is the cause of the observed shifts in ambient vegetation and soil carbon and that the vegetation responses mediate the observed changes in soil carbon. Our findings demonstrate the realism of an experimental manipulation, allow attribution of a climate cause to observed ambient ecosystem changes, and demonstrate how a combination of long-term study of ambient and experimental responses to warming can identify mechanistic drivers needed for realistic predictions of the conditions under which ecosystems are likely to become carbon sources or sinks over varying timescales. © 2014 John Wiley & Sons Ltd.
Searches for millisecond pulsations in low-mass X-ray binaries
NASA Technical Reports Server (NTRS)
Wood, K. S.; Hertz, P.; Norris, J. P.; Vaughan, B. A.; Michelson, P. F.; Mitsuda, K.; Lewin, W. H. G.; Van Paradijs, J.; Penninx, W.; Van Der Klis, M.
1991-01-01
High-sensitivity search techniques for millisecond periods are presented and applied to data from the Japanese satellite Ginga and HEAO 1. The search is optimized for pulsed signals whose period, drift rate, and amplitude conform with what is expected for low-mass X-ray binary (LMXB) sources. Consideration is given to how the current understanding of LMXBs guides the search strategy and sets these parameter limits. An optimized one-parameter coherence recovery technique (CRT) developed for recovery of phase coherence is presented. This technique provides a large increase in sensitivity over the method of incoherent summation of Fourier power spectra. The range of spin periods expected from LMXB phenomenology is discussed, the necessary constraints on the application of CRT are described in terms of integration time and orbital parameters, and the residual power unrecovered by the quadratic approximation for realistic cases is estimated.
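To illustrate the idea behind coherent recovery of a slowly drifting pulsation, the toy sketch below multiplies the time series by a trial quadratic phase correction before the Fourier transform and keeps the drift rate that maximizes the peak power; this is the kind of gain over incoherent summation the abstract alludes to. It is a one-parameter grid search with made-up signal and noise parameters, not the optimized CRT of the paper, and it ignores the orbital-parameter constraints discussed there.

```python
# Toy sketch of coherent recovery of a drifting pulsation: search a grid of
# frequency-drift rates, remove the corresponding quadratic phase for each
# trial, and keep the trial that maximises the Fourier peak power.
import numpy as np

fs, t_obs = 2048.0, 64.0                      # Hz, s (illustrative)
t = np.arange(0.0, t_obs, 1.0 / fs)
f0, fdot = 400.0, 0.02                        # true frequency (Hz) and drift (Hz/s)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * (f0 * t + 0.5 * fdot * t**2)) + 5.0 * rng.standard_normal(t.size)

best_power, best_fdot = 0.0, None
for trial_fdot in np.linspace(-0.05, 0.05, 101):
    corrected = x * np.exp(-1j * np.pi * trial_fdot * t**2)   # undo quadratic phase
    power = np.abs(np.fft.fft(corrected)[: t.size // 2])**2   # positive frequencies only
    if power.max() > best_power:
        best_power, best_fdot = power.max(), trial_fdot
print(f"recovered drift rate ~ {best_fdot:.3f} Hz/s (true {fdot} Hz/s)")
```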
Shielding and activation calculations around the reactor core for the MYRRHA ADS design
NASA Astrophysics Data System (ADS)
Ferrari, Anna; Mueller, Stefan; Konheiser, J.; Castelliti, D.; Sarotto, M.; Stankovskiy, A.
2017-09-01
In the framework of the FP7 European project MAXSIMA, an extensive simulation study has been done to assess the main shielding problems in view of the construction of the MYRRHA accelerator-driven system at SCK·CEN in Mol (Belgium). An innovative method based on the combined use of the two state-of-the-art Monte Carlo codes MCNPX and FLUKA has been used, with the goal of characterizing complex, realistic neutron fields around the core barrel, to be used as source terms in detailed analyses of the radiation fields due to the system in operation, and of the coupled residual radiation. The main results of the shielding analysis are presented, as well as the construction of an activation database of all the key structural materials. The results demonstrate a powerful way to analyse shielding and activation problems, with direct and clear implications for the design solutions.
Pseudo-polar drive patterns for brain electrical impedance tomography.
Shi, Xuetao; Dong, Xiuzhen; Shuai, Wanjun; You, Fusheng; Fu, Feng; Liu, Ruigang
2006-11-01
Brain electrical impedance tomography (EIT) is a difficult task as brain tissues are enclosed by the skull of high resistance and cerebrospinal fluid (CSF) of low resistance, which makes internal resistivity information more difficult to extract. In order to seek a single source drive pattern that is more suitable for brain EIT, we built a more realistic experimental setting that simulates a head with the resistivity of the scalp, skull, CSF and brain, and compared the performance of adjacent, cross, polar and pseudo-polar drive patterns in terms of the boundary voltage dynamic range, independent measurement number, total boundary voltage changes and anti-noise performance based on it. The results demonstrate that the pseudo-polar drive pattern is optimal in all the aspects except for the dynamic range. The polar and cross drive patterns come next, and the adjacent drive pattern is the worst. Therefore, the pseudo-polar drive pattern should be chosen for brain EIT.
Deflection Measurements of a Thermally Simulated Nuclear Core Using a High-Resolution CCD-Camera
NASA Technical Reports Server (NTRS)
Stanojev, B. J.; Houts, M.
2004-01-01
Space fission systems under consideration for near-term missions all use compact, fast-spectrum reactor cores. Reactor dimensional change with increasing temperature, which affects neutron leakage, is the dominant source of reactivity feedback in these systems. Accurately measuring core dimensional changes during realistic non-nuclear testing is therefore necessary in predicting the system nuclear equivalent behavior. This paper discusses one key technique being evaluated for measuring such changes. The proposed technique is to use a Charge-Coupled Device (CCD) sensor to obtain deformation readings of an electrically heated prototypic reactor core geometry. This paper introduces a technique by which a single high spatial resolution CCD camera is used to measure core deformation in Real-Time (RT). Initial system checkout results are presented along with a discussion on how additional cameras could be used to achieve a three-dimensional deformation profile of the core during testing.
A Class of Exact Solutions of the Boussinesq Equation for Horizontal and Sloping Aquifers
NASA Astrophysics Data System (ADS)
Bartlett, M. S.; Porporato, A.
2018-02-01
The nonlinear equation of Boussinesq (1877) is a foundational approach for studying groundwater flow through an unconfined aquifer, but solving the full nonlinear version of the Boussinesq equation remains a challenge. Here, we present an exact solution to the full nonlinear Boussinesq equation that not only applies to sloping aquifers but also accounts for source and sink terms such as bedrock seepage, an often significant flux in headwater catchments. This new solution captures the hysteretic relationship (a loop rating curve) between the groundwater flow rate and the water table height, which may be used to provide a more realistic representation of streamflow and groundwater dynamics in hillslopes. In addition, the solution provides an expression where the flow recession varies based on hillslope parameters such as bedrock slope, bedrock seepage, aquifer recharge, plant transpiration, and other factors that vary across landscape types.
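For orientation, one commonly used form of the Boussinesq equation for a sloping aquifer with a lumped source/sink term is reproduced below; the notation is generic, and the exact form and boundary conditions solved in the paper may differ.

```latex
% h: water-table height (measured normal to the bed), f: drainable porosity,
% k: hydraulic conductivity, theta: bedrock slope angle,
% R(x,t): lumped source/sink term (recharge, bedrock seepage, transpiration).
f\,\frac{\partial h}{\partial t}
  = k\cos\theta\,\frac{\partial}{\partial x}\!\left(h\,\frac{\partial h}{\partial x}\right)
  + k\sin\theta\,\frac{\partial h}{\partial x}
  + R(x,t)
```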
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, H.J.
1998-06-18
UO{sub 3} powder is stored at the T-hopper storage area associated with the 2714-U building in the 200 west area. The T-hopper containers and 13 drums containing this material are used to store the powder on pads immediately north of the building. An interim safety basis document (WHC, 1996) was issued in 1996 for the UO{sub 3} powder storage area. In this document, the isotope {sup 99}Tc was not included in the source term used to calculate the radiological consequences of a postulated release of the powder. A calculations note (HNF, 1998) was issued to remedy that deficiency. The present document is a revision to that document to reflect updated data concerning the solubility of UO{sub 3} in simulated lung fluid and to utilize more realistic powder release fractions.
A Physics-Based Vibrotactile Feedback Library for Collision Events.
Park, Gunhyuk; Choi, Seungmoon
2017-01-01
We present PhysVib: a software solution on the mobile platform extending an open-source physics engine in a multi-rate rendering architecture for automatic vibrotactile feedback upon collision events. PhysVib runs concurrently with a physics engine at a low update rate and generates vibrotactile feedback commands at a high update rate based on the simulation results of the physics engine using an exponentially-decaying sinusoidal model. We demonstrate through a user study that this vibration model is more appropriate for our purpose in terms of perceptual quality than more complex models based on sound synthesis. We also evaluated the perceptual performance of PhysVib by comparing eight vibrotactile rendering methods. Experimental results suggested that PhysVib enables more realistic vibrotactile feedback than the other methods in terms of perceived similarity to the visual events. PhysVib is an effective solution for providing physically plausible vibrotactile responses while reducing application development time to a great extent.
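The exponentially-decaying sinusoidal model mentioned above can be written as a(t) = A·exp(-t/τ)·sin(2πft). The sketch below generates such a waveform for a collision event; the mapping from impact velocity to amplitude, the carrier frequency and the decay constant are illustrative assumptions, not PhysVib's calibrated values.

```python
# Sketch of an exponentially-decaying sinusoidal vibration command for a
# collision event: a(t) = A * exp(-t/tau) * sin(2*pi*f*t). The mapping from
# impact velocity to A, f and tau is an illustrative assumption.
import numpy as np

def collision_waveform(impact_velocity, fs=8000, duration=0.1):
    t = np.arange(0.0, duration, 1.0 / fs)
    A = min(1.0, 0.2 * impact_velocity)   # amplitude grows with impact speed (assumed)
    f = 250.0                             # Hz, perceptually salient carrier (assumed)
    tau = 0.02                            # s, decay constant (assumed)
    return A * np.exp(-t / tau) * np.sin(2 * np.pi * f * t)

wave = collision_waveform(impact_velocity=3.0)   # impact speed in m/s
print(wave.shape, float(abs(wave).max()))
```

In a multi-rate architecture such a waveform would be synthesized at the high (actuator) rate each time the low-rate physics step reports a contact.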
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, David; Bucknor, Matthew; Jerden, James
A mechanistic source term (MST) calculation attempts to realistically assess the transport and release of radionuclides from a reactor system to the environment during a specific accident sequence. The U.S. Nuclear Regulatory Commission (NRC) has repeatedly stated its expectation that advanced reactor vendors will utilize an MST during the U.S. reactor licensing process. As part of a project to examine possible impediments to sodium fast reactor (SFR) licensing in the U.S., an analysis was conducted regarding the current capabilities to perform an MST for a metal fuel SFR. The purpose of the project was to identify and prioritize any gaps in current computational tools, and the associated database, for the accurate assessment of an MST. The results of the study demonstrate that an SFR MST is possible with current tools and data, but several gaps exist that may lead to possibly unacceptable levels of uncertainty, depending on the goals of the MST analysis.
NASA Technical Reports Server (NTRS)
Shaw, Eric J.
2001-01-01
This paper will report on the activities of the IAA Launcher Systems Economics Working Group in preparations for its Launcher Systems Development Cost Behavior Study. The Study goals include: improve launcher system and other space system parametric cost analysis accuracy; improve launcher system and other space system cost analysis credibility; and provide launcher system and technology development program managers and other decisionmakers with useful information on development cost impacts of their decisions. The Working Group plans to explore at least the following five areas in the Study: define and explain development cost behavior terms and concepts for use in the Study; identify and quantify sources of development cost and cost estimating uncertainty; identify and quantify significant influences on development cost behavior; identify common barriers to development cost understanding and reduction; and recommend practical, realistic strategies to accomplish reductions in launcher system development cost.
Singh, Gurdeep
2006-11-01
The comparison with National Ambient Air Quality Standards does not always depict a true picture of the Air Quality Status of a study area. As an alternative, an index that measures depreciation in Air Quality in more realistic terms has been proposed and applied to the ambient air monitoring data collected from some areas of Korba Coalfields in India. Results have been discussed in detail to illustrate the application of the proposed index and its utility in producing a more realistic air quality assessment.
Realistic molecular model of kerogen's nanostructure
NASA Astrophysics Data System (ADS)
Bousige, Colin; Ghimbeu, Camélia Matei; Vix-Guterl, Cathie; Pomerantz, Andrew E.; Suleimenova, Assiya; Vaughan, Gavin; Garbarino, Gaston; Feygenson, Mikhail; Wildgruber, Christoph; Ulm, Franz-Josef; Pellenq, Roland J.-M.; Coasne, Benoit
2016-05-01
Despite kerogen's importance as the organic backbone for hydrocarbon production from source rocks such as gas shale, the interplay between kerogen's chemistry, morphology and mechanics remains unexplored. As the environmental impact of shale gas rises, identifying functional relations between its geochemical, transport, elastic and fracture properties from realistic molecular models of kerogens becomes all the more important. Here, by using a hybrid experimental-simulation method, we propose a panel of realistic molecular models of mature and immature kerogens that provide a detailed picture of kerogen's nanostructure without considering the presence of clays and other minerals in shales. We probe the models' strengths and limitations, and show that they predict essential features amenable to experimental validation, including pore distribution, vibrational density of states and stiffness. We also show that kerogen's maturation, which manifests itself as an increase in the sp2/sp3 hybridization ratio, entails a crossover from plastic-to-brittle rupture mechanisms.
Realistic molecular model of kerogen's nanostructure.
Bousige, Colin; Ghimbeu, Camélia Matei; Vix-Guterl, Cathie; Pomerantz, Andrew E; Suleimenova, Assiya; Vaughan, Gavin; Garbarino, Gaston; Feygenson, Mikhail; Wildgruber, Christoph; Ulm, Franz-Josef; Pellenq, Roland J-M; Coasne, Benoit
2016-05-01
Despite kerogen's importance as the organic backbone for hydrocarbon production from source rocks such as gas shale, the interplay between kerogen's chemistry, morphology and mechanics remains unexplored. As the environmental impact of shale gas rises, identifying functional relations between its geochemical, transport, elastic and fracture properties from realistic molecular models of kerogens becomes all the more important. Here, by using a hybrid experimental-simulation method, we propose a panel of realistic molecular models of mature and immature kerogens that provide a detailed picture of kerogen's nanostructure without considering the presence of clays and other minerals in shales. We probe the models' strengths and limitations, and show that they predict essential features amenable to experimental validation, including pore distribution, vibrational density of states and stiffness. We also show that kerogen's maturation, which manifests itself as an increase in the sp(2)/sp(3) hybridization ratio, entails a crossover from plastic-to-brittle rupture mechanisms.
NASA Astrophysics Data System (ADS)
Ángel López Comino, José; Kriegerowski, Marius; Cesca, Simone; Dahm, Torsten; Mirek, Janusz; Lasocki, Stanislaw
2016-04-01
Hydraulic fracturing is considered among the human operations that could induce or trigger seismicity or microseismic activity. The influence of hydraulic fracturing operations is typically expected in terms of weak-magnitude events. However, the sensitivity of the rock mass to trigger seismicity varies significantly for different sites and cannot be easily predicted prior to operations. In order to assess the sensitivity of microseismicity to hydraulic fracturing operations, we perform seismic monitoring at a shale gas exploration/exploitation site in the central-western part of the Peribaltic syneclise in Pomerania (Poland). The monitoring will be continued before, during and after the termination of hydraulic fracturing operations. The fracking operations are planned in April 2016 at a depth of 4000 m. A specific network setup has been installed since summer 2015, including a distributed network of broadband stations and three small-scale arrays. The network covers a region of 60 km2. The aperture of the small-scale arrays is between 450 and 950 m. So far no fracturing operations have been performed, but seismic data can already be used to assess the seismic noise and background microseismicity, and to investigate and assess the detection performance of our monitoring setup. Here we adopt a recently developed tool to generate a synthetic catalogue and waveform dataset, which realistically account for the expected microseismicity. Synthetic waveforms are generated for a local crustal model, considering a realistic distribution of hypocenters, magnitudes, moment tensors, and source durations. Noise-free synthetic seismograms are superposed on real noise traces, to reproduce true monitoring conditions at the different station locations. We estimate the detection probability for different magnitudes, source-receiver distances, and noise conditions. This information is used to estimate the magnitude of completeness at the depth of the hydraulic fracturing horizontal wells. Our technique is useful to evaluate the efficiency of the seismic network and validate detection and location algorithms, taking into account the signal to noise ratio. The same dataset may be used at a later time to assess the performance of other seismological analyses, such as hypocentral location, magnitude estimation and source parameter inversion. This work is funded by the EU H2020 SHEER project.
A General Formulation of the Source Confusion Statistics and Application to Infrared Galaxy Surveys
NASA Astrophysics Data System (ADS)
Takeuchi, Tsutomu T.; Ishii, Takako T.
2004-03-01
Source confusion has been a long-standing problem in astronomical history. In previous formulations of the confusion problem, sources are assumed to be distributed homogeneously on the sky. This fundamental assumption is, however, not realistic in many applications. In this work, by making use of point field theory, we derive general analytic formulae for confusion problems with arbitrary distribution and correlation functions. As a typical example, we apply these new formulae to the source confusion of infrared galaxies. We first calculate the confusion statistics for power-law galaxy number counts as a test case. When the slope of differential number counts, γ, is steep, the confusion limits become much brighter and the probability distribution function (PDF) of the fluctuation field is strongly distorted. Then we estimate the PDF and confusion limits based on the realistic number count model for infrared galaxies. The gradual flattening of the slope of the source counts makes the clustering effect rather mild. Clustering effects result in an increase of the limiting flux density by ~10%. In this case, the peak probability of the PDF decreases by up to ~15% and its tail becomes heavier. Although the effects are relatively small, they will be strong enough to affect the estimation of galaxy evolution from number count or fluctuation statistics. We also comment on future submillimeter observations.
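For the homogeneous power-law test case mentioned above, the classical (Condon-style) confusion estimate for a Gaussian beam can be obtained in closed form by solving σ_c² = Ω_e k (qσ_c)^(3-γ)/(3-γ) self-consistently. The sketch below does this with illustrative values for the count normalisation, beam size and cutoff parameter; it omits the clustering corrections that are the main contribution of the paper.

```python
# Classical confusion-noise estimate for homogeneous power-law counts
# dN/dS = k * S**(-gamma) and a Gaussian beam, ignoring clustering.
# All numerical values are illustrative assumptions.
import numpy as np

gamma = 2.5          # slope of the differential counts (assumed)
k = 1.0e3            # counts normalisation, sources / (sr * unit flux) (assumed)
fwhm = np.radians(1.0 / 60.0)                 # 1 arcmin beam FWHM
omega_b = np.pi * fwhm**2 / (4 * np.log(2))   # Gaussian beam solid angle
omega_e = omega_b / (gamma - 1)               # effective beam for power-law counts
q = 5.0                                       # cutoff S_cut = q * sigma_c (assumed)

# self-consistent solution of sigma_c^2 = omega_e*k*(q*sigma_c)**(3-gamma)/(3-gamma)
sigma_c = (omega_e * k * q**(3 - gamma) / (3 - gamma))**(1.0 / (gamma - 1))
print(f"confusion noise sigma_c ~ {sigma_c:.3g} (flux units), "
      f"confusion limit ~ {q * sigma_c:.3g}")
```

Steeper γ raises σ_c sharply, which is the brightening of the confusion limit the abstract describes for steep counts.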
Toxicity effects of an environmentally realistic herbicide mixture on the seagrass Zostera noltei.
Diepens, Noël J; Buffan-Dubau, Evelyne; Budzinski, Hélène; Kallerhoff, Jean; Merlina, Georges; Silvestre, Jérome; Auby, Isabelle; Nathalie Tapie; Elger, Arnaud
2017-03-01
Worldwide seagrass declines have been observed due to multiple stressors. One of them is the mixture of pesticides used in intensive agriculture and boat antifouling paints in coastal areas. Effects of mixture toxicity are complex and poorly understood. However, consideration of mixture toxicity is more realistic and ecologically relevant for environmental risk assessment (ERA). The first aim of this study was to determine short-term effects of realistic herbicide mixture exposure on physiological endpoints of Zostera noltei. The second aim was to assess the environmental risks of this mixture, by comparing the results to previously published data. Z. noltei was exposed to a mixture of four herbicides: atrazine, diuron, irgarol and S-metolachlor, simulating the composition of a typical cocktail of contaminants in the Arcachon bay (Atlantic coast, France). Three stress biomarkers were measured: enzymatic activity of glutathione reductase, effective quantum yield (EQY) and photosynthetic pigment composition after 6, 24 and 96 h. Short-term exposure to realistic herbicide mixtures affected EQY, with almost 100% inhibition for the two highest concentrations, and photosynthetic pigments. Effect on pigment composition was detected after 6 h with a no observed effect concentration (NOEC) of 1 μg/L total mixture concentration. The lowest EQY effect concentration at 10% (EC10) (2 μg/L) and pigment composition NOEC with an assessment factor of 10 were above the maximal field concentrations along the French Atlantic coast, suggesting that there are no potential short-term adverse effects of this particular mixture on Z. noltei. However, chronic effects on photosynthesis may lead to reduced energy reserves, which could thus lead to effects at the whole-plant and population level. Understanding the consequences of chemical mixtures could help to improve ERA and enhance management strategies to prevent further declines of seagrass meadows worldwide. Copyright © 2016. Published by Elsevier Ltd.
34 CFR 606.22 - What are the selection criteria for development grants?
Code of Federal Regulations, 2010 CFR
2010-07-01
...) Realistic and defined in terms of measurable results; and (2) Directly related to the problems to be solved...) The strengths, weaknesses, and significant problems of the institution's academic programs...
Jet Noise Source Localization Using Linear Phased Array
NASA Technical Reports Server (NTRS)
Agboola, Ferni A.; Bridges, James
2004-01-01
A study was conducted to further clarify the interpretation and application of linear phased array microphone results for localizing aeroacoustic sources in an aircraft exhaust jet. Two model engine nozzles were tested at varying power cycles with the array set up parallel to the jet axis. The array position was varied as well to determine the best location for the array. The results showed that it is possible to resolve jet noise sources with separation of bypass and other components. The results also showed that a focused near-field image provides more realistic noise source localization at low to mid frequencies.
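A generic frequency-domain delay-and-sum (conventional) beamformer gives the flavour of how a linear array localizes sources along a jet axis: steer to each candidate position and evaluate w^H C w on the cross-spectral matrix. The sketch below synthesizes the cross-spectral matrix for one assumed point source; the geometry, analysis frequency and free-field monopole steering vectors are illustrative assumptions, not the processing used in the study.

```python
# Generic frequency-domain delay-and-sum beamformer for a linear microphone
# array scanning candidate source positions along a jet axis (free-field,
# monopole steering vectors). All values are illustrative.
import numpy as np

c = 343.0                                   # m/s, speed of sound
f = 4000.0                                  # Hz, analysis frequency
mics = np.column_stack([np.linspace(-0.5, 0.5, 16), np.full(16, 1.0)])  # (x, y) in m
scan_x = np.linspace(-1.0, 2.0, 121)        # candidate source positions on the axis y = 0

def steering_vector(src):
    r = np.linalg.norm(mics - src, axis=1)
    return np.exp(-2j * np.pi * f * r / c) / r   # spherical-wave propagation model

# synthesize a cross-spectral matrix for one assumed point source at x = 0.8 m
v_true = steering_vector(np.array([0.8, 0.0]))
csm = np.outer(v_true, v_true.conj())

power = []
for x in scan_x:
    w = steering_vector(np.array([x, 0.0]))
    w = w / np.linalg.norm(w)
    power.append(np.real(w.conj() @ csm @ w))   # conventional beamformer output
print(f"beamforming peak at x ~ {scan_x[int(np.argmax(power))]:.2f} m")
```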
Adams, Alayne; Sedalia, Saroj; McNab, Shanon; Sarker, Malabika
2016-03-01
Realist evaluation furnishes valuable insight to public health practitioners and policy makers about how and why interventions work or don't work. Moving beyond binary measures of success or failure, it provides a systematic approach to understanding what goes on in the 'Black Box' and how implementation decisions in real life contexts can affect intervention effectiveness. This paper reflects on an experience in applying the tenets of realist evaluation to identify optimal implementation strategies for scale-up of Maternal and Newborn Health (MNH) programmes in rural Bangladesh. Supported by UNICEF, the three MNH programmes under consideration employed different implementation models to deliver similar services and meet similar MNH goals. Programme targets included adoption of recommended antenatal, post-natal and essential newborn care practices; health systems strengthening through improved referral, accountability and administrative systems, and increased community knowledge. Drawing on focused examples from this research, seven steps for operationalizing the realist evaluation approach are offered, while emphasizing the need to iterate and innovate in terms of methods and analysis strategies. The paper concludes by reflecting on lessons learned in applying realist evaluation, and the unique insights it yields regarding implementation strategies for successful MNH programming. © The Author 2015. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine.
Andersson, M Gunnar; Tomuzia, Katharina; Löfström, Charlotta; Appel, Bernd; Bano, Luca; Keremidis, Haralampos; Knutsson, Rickard; Leijon, Mikael; Lövgren, Susanna Ekströmer; De Medici, Dario; Menrath, Andrea; van Rotterdam, Bart J; Wisselink, Henk J; Barker, Gary C
2013-09-01
Preparedness for bioterrorism is based on communication between people in organizations who are educated and trained in several disciplines, including law enforcement, health, and science. Various backgrounds, cultures, and vocabularies generate difficulties in understanding and interpreting terms and concepts, which may impair communication. This is especially true in emergency situations, in which the need for clarity and consistency is vital. The EU project AniBioThreat initiated methods and made a rough estimate of the terms and concepts that are crucial for an incident, and a pilot database with key terms and definitions has been constructed. Analysis of collected terms and sources has shown that many of the participating organizations use various international standards in their area of expertise. The same term often represents different concepts in the standards from different sectors, or, alternatively, different terms were used to represent the same or similar concepts. The use of conflicting terminology can be problematic for decision makers and communicators in planning and prevention or when handling an incident. Since the CBRN area has roots in multiple disciplines, each with its own evolving terminology, it may not be realistic to achieve unequivocal communication through a standardized vocabulary and joint definitions for words from common language. We suggest that a communication strategy should include awareness of alternative definitions and ontologies and the ability to talk and write without relying on the implicit knowledge underlying specialized jargon. Consequently, cross-disciplinary communication skills should be part of training of personnel in the CBRN field. In addition, a searchable repository of terms and definitions from relevant organizations and authorities would be a valuable addition to existing glossaries for improving awareness concerning bioterrorism prevention planning.
Tomuzia, Katharina; Löfström, Charlotta; Appel, Bernd; Bano, Luca; Keremidis, Haralampos; Knutsson, Rickard; Leijon, Mikael; Lövgren, Susanna Ekströmer; De Medici, Dario; Menrath, Andrea; van Rotterdam, Bart J.; Wisselink, Henk J.; Barker, Gary C.
2013-01-01
Preparedness for bioterrorism is based on communication between people in organizations who are educated and trained in several disciplines, including law enforcement, health, and science. Various backgrounds, cultures, and vocabularies generate difficulties in understanding and interpreting terms and concepts, which may impair communication. This is especially true in emergency situations, in which the need for clarity and consistency is vital. The EU project AniBioThreat initiated methods and made a rough estimate of the terms and concepts that are crucial for an incident, and a pilot database with key terms and definitions has been constructed. Analysis of collected terms and sources has shown that many of the participating organizations use various international standards in their area of expertise. The same term often represents different concepts in the standards from different sectors, or, alternatively, different terms were used to represent the same or similar concepts. The use of conflicting terminology can be problematic for decision makers and communicators in planning and prevention or when handling an incident. Since the CBRN area has roots in multiple disciplines, each with its own evolving terminology, it may not be realistic to achieve unequivocal communication through a standardized vocabulary and joint definitions for words from common language. We suggest that a communication strategy should include awareness of alternative definitions and ontologies and the ability to talk and write without relying on the implicit knowledge underlying specialized jargon. Consequently, cross-disciplinary communication skills should be part of training of personnel in the CBRN field. In addition, a searchable repository of terms and definitions from relevant organizations and authorities would be a valuable addition to existing glossaries for improving awareness concerning bioterrorism prevention planning. PMID:23971818
An Optically Implemented Kalman Filter Algorithm.
1983-12-01
… are completely specified for the correlation stage to perform the required correlation in real time, and the filter stage to perform the linear … performance analyses indicated an enhanced ability of the nonadaptive filter to track a realistic distant point source target with an error standard …
NASA Astrophysics Data System (ADS)
Italiano, Antonio; Amato, Ernesto; Auditore, Lucrezia; Baldari, Sergio
2018-05-01
The accurate evaluation of the radiation burden associated with absorbed doses to the skin of the extremities during the manipulation of radioactive sources is a critical issue in operational radiological protection, deserving the most accurate calculation approaches available. Monte Carlo simulation of the radiation transport and interaction is the gold standard for the calculation of dose distributions in complex geometries and in the presence of extended spectra of multi-radiation sources. We propose the use of Monte Carlo simulations in GAMOS in order to accurately estimate the dose to the extremities during manipulation of radioactive sources. We report the results of these simulations for 90Y, 131I, 18F and 111In nuclides in water solutions enclosed in glass or plastic receptacles, such as vials or syringes. Skin equivalent doses at 70 μm of depth and dose-depth profiles are reported for different configurations, highlighting the importance of adopting a realistic geometrical configuration in order to obtain accurate dosimetric estimates. Due to the ease of implementation of GAMOS simulations, case-specific geometries and nuclides can be adopted and results can be obtained in less than about ten minutes of computation time with a common workstation.
Realistic simplified gaugino-higgsino models in the MSSM
NASA Astrophysics Data System (ADS)
Fuks, Benjamin; Klasen, Michael; Schmiemann, Saskia; Sunder, Marthijn
2018-03-01
We present simplified MSSM models for light neutralinos and charginos with realistic mass spectra and realistic gaugino-higgsino mixing, that can be used in experimental searches at the LHC. The formerly used naive approach of defining mass spectra and mixing matrix elements manually and independently of each other does not yield genuine MSSM benchmarks. We suggest the use of less simplified, but realistic MSSM models, whose mass spectra and mixing matrix elements are the result of a proper matrix diagonalisation. We propose a novel strategy targeting the design of such benchmark scenarios, accounting for user-defined constraints in terms of masses and particle mixing. We apply it to the higgsino case and implement a scan in the four relevant underlying parameters {μ , tan β , M1, M2} for a given set of light neutralino and chargino masses. We define a measure for the quality of the obtained benchmarks, that also includes criteria to assess the higgsino content of the resulting charginos and neutralinos. We finally discuss the distribution of the resulting models in the MSSM parameter space as well as their implications for supersymmetric dark matter phenomenology.
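The "proper matrix diagonalisation" the authors advocate amounts, at tree level, to diagonalising the 4x4 neutralino mass matrix in the (bino, wino, down-type higgsino, up-type higgsino) basis for given {M1, M2, μ, tanβ}. The sketch below does this numerically and reads off the higgsino content of the lightest state; the parameter values are illustrative, and a full scan with user-defined mass and mixing constraints, as in the paper, would wrap this in an optimisation loop.

```python
# Sketch: tree-level MSSM neutralino mass matrix in the (bino, wino, higgsino_d,
# higgsino_u) basis, diagonalised numerically. Parameter values are illustrative;
# signs of eigenvalues are dropped here, whereas a full treatment uses a
# Takagi factorisation to keep the phases of the mixing matrix.
import numpy as np

mZ, sw2 = 91.19, 0.2312                       # GeV, sin^2(theta_W)
sw, cw = np.sqrt(sw2), np.sqrt(1 - sw2)

def neutralino_spectrum(M1, M2, mu, tan_beta):
    beta = np.arctan(tan_beta)
    sb, cb = np.sin(beta), np.cos(beta)
    M = np.array([
        [M1,            0.0,           -mZ * cb * sw,  mZ * sb * sw],
        [0.0,           M2,             mZ * cb * cw, -mZ * sb * cw],
        [-mZ * cb * sw, mZ * cb * cw,   0.0,          -mu          ],
        [ mZ * sb * sw, -mZ * sb * cw, -mu,            0.0         ],
    ])
    vals, vecs = np.linalg.eigh(M)            # real symmetric matrix
    order = np.argsort(np.abs(vals))          # sort by physical mass
    return np.abs(vals[order]), vecs[:, order]

masses, mixing = neutralino_spectrum(M1=1000.0, M2=1000.0, mu=150.0, tan_beta=10.0)
higgsino_frac = float(np.sum(mixing[2:, 0]**2))   # higgsino content of chi_1^0
print(masses.round(1), f"higgsino fraction of the lightest state: {higgsino_frac:.2f}")
```

A benchmark-quality measure, as proposed in the paper, would also score how close the resulting masses and higgsino fractions are to the user-defined targets.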
Munar, Wolfgang; Wahid, Syed S; Curry, Leslie
2018-01-03
Background. Improving performance of primary care systems in low- and middle-income countries (LMICs) may be a necessary condition for achievement of universal health coverage in the age of Sustainable Development Goals. The Salud Mesoamerica Initiative (SMI) is a large-scale, multi-country program that uses supply-side financial incentives directed at the central level of governments, and continuous external evaluation of public health sector performance, to induce improvements in primary care performance in eight LMICs. This study protocol seeks to explain whether and how these interventions generate program effects in El Salvador and Honduras. Methods. This study presents the protocol for a study that uses a realist evaluation approach to develop a preliminary program theory that hypothesizes the interactions between context, interventions and the mechanisms that trigger outcomes. The program theory was completed through a scoping review of relevant empirical, peer-reviewed and grey literature; a sense-making workshop with program stakeholders; and content analysis of key SMI documents. The study will use a multiple case-study design with embedded units with contrasting cases. We define as a case the two primary care systems of Honduras and El Salvador, each with different context characteristics. Data will be collected through in-depth interviews with program actors and stakeholders, documentary review, and non-participatory observation. Data analysis will use inductive and deductive approaches to identify causal patterns organized as 'context, mechanism, outcome' configurations. The findings will be triangulated with existing secondary, qualitative and quantitative data sources, and contrasted against relevant theoretical literature. The study will end with a refined program theory. Findings will be published following the guidelines generated by the Realist and Meta-narrative Evidence Syntheses study (RAMESES II). This study will be performed contemporaneously with SMI's mid-term stage of implementation. Of the methods described, the preliminary program theory has been completed. Data collection, analysis and synthesis remain to be completed.
Lancelot, Christiane; Thieu, Vincent; Polard, Audrey; Garnier, Josette; Billen, Gilles; Hecq, Walter; Gypens, Nathalie
2011-05-01
Nutrient reduction measures have already been taken by wealthier countries to decrease nutrient loads to coastal waters, in most cases, however, prior to having properly assessed their ecological effectiveness and their economic costs. In this paper we describe an original integrated impact assessment methodology to estimate the direct cost and the ecological performance of realistic nutrient reduction options to be applied in the Southern North Sea watershed to decrease eutrophication, visible as Phaeocystis blooms and foam deposits on the beaches. The mathematical tool couples the idealized biogeochemical GIS-based model of the river system (SENEQUE-RIVERSTRAHLER) implemented in the Eastern Channel/Southern North Sea watershed to the biogeochemical MIRO model describing Phaeocystis blooms in the marine domain. Model simulations explore how nutrient reduction options regarding diffuse and/or point sources in the watershed would affect the Phaeocystis colony spreading in the coastal area. The reference and prospective simulations are performed for the year 2000 characterized by mean meteorological conditions, and nutrient reduction scenarios include and compare upgrading of wastewater treatment plants and changes in agricultural practices including an idealized shift towards organic farming. A direct cost assessment is performed for each realistic nutrient reduction scenario. Further, the reduction obtained for Phaeocystis blooms is assessed by comparison with ecological indicators (bloom magnitude and duration) and the cost for reducing foam events on the beaches is estimated. Uncertainty brought by the added effect of meteorological conditions (rainfall) on coastal eutrophication is discussed. It is concluded that the reduction obtained by implementing realistic environmental measures in the short term is costly and insufficient to restore well-balanced nutrient conditions in the coastal area, while the replacement of conventional agriculture by organic farming might be an option to consider in the near future. Copyright © 2011 Elsevier B.V. All rights reserved.
Munar, Wolfgang; Wahid, Syed S.; Curry, Leslie
2018-01-01
Background. Improving performance of primary care systems in low- and middle-income countries (LMICs) may be a necessary condition for achievement of universal health coverage in the age of Sustainable Development Goals. The Salud Mesoamerica Initiative (SMI) is a large-scale, multi-country program that uses supply-side financial incentives directed at the central level of governments, and continuous external evaluation of public health sector performance, to induce improvements in primary care performance in eight LMICs. This study protocol seeks to explain whether and how these interventions generate program effects in El Salvador and Honduras. Methods. This study presents the protocol for a study that uses a realist evaluation approach to develop a preliminary program theory that hypothesizes the interactions between context, interventions and the mechanisms that trigger outcomes. The program theory was completed through a scoping review of relevant empirical, peer-reviewed and grey literature; a sense-making workshop with program stakeholders; and content analysis of key SMI documents. The study will use a multiple case-study design with embedded units with contrasting cases. We define as a case the two primary care systems of Honduras and El Salvador, each with different context characteristics. Data will be collected through in-depth interviews with program actors and stakeholders, documentary review, and non-participatory observation. Data analysis will use inductive and deductive approaches to identify causal patterns organized as ‘context, mechanism, outcome’ configurations. The findings will be triangulated with existing secondary, qualitative and quantitative data sources, and contrasted against relevant theoretical literature. The study will end with a refined program theory. Findings will be published following the guidelines generated by the Realist and Meta-narrative Evidence Syntheses study (RAMESES II). This study will be performed contemporaneously with SMI’s mid-term stage of implementation. Of the methods described, the preliminary program theory has been completed. Data collection, analysis and synthesis remain to be completed. PMID:29431181
Jagosh, Justin; Bush, Paula L; Salsberg, Jon; Macaulay, Ann C; Greenhalgh, Trish; Wong, Geoff; Cargo, Margaret; Green, Lawrence W; Herbert, Carol P; Pluye, Pierre
2015-07-30
Community-Based Participatory Research (CBPR) is an approach in which researchers and community stakeholders form equitable partnerships to tackle issues related to community health improvement and knowledge production. Our 2012 realist review of CBPR outcomes reported long-term effects that were touched upon but not fully explained in the retained literature. To further explore such effects, interviews were conducted with academic and community partners of partnerships retained in the review. Realist methodology was used to increase the understanding of what supports partnership synergy in successful long-term CBPR partnerships, and to further document how equitable partnerships can result in numerous benefits including the sustainability of relationships, research and solutions. Building on our previous realist review of CBPR, we contacted the authors of longitudinal studies of academic-community partnerships retained in the review. Twenty-four participants (community members and researchers) from 11 partnerships were interviewed. Realist logic of analysis was used, involving middle-range theory, context-mechanism-outcome configuration (CMOcs) and the concept of the 'ripple effect'. The analysis supports the central importance of developing and strengthening partnership synergy through trust. The ripple effect concept in conjunction with CMOcs showed that a sense of trust amongst CBPR members was a prominent mechanism leading to partnership sustainability. This in turn resulted in population-level outcomes including: (a) sustaining collaborative efforts toward health improvement; (b) generating spin-off projects; and (c) achieving systemic transformations. These results add to other studies on improving the science of CBPR in partnerships with a high level of power-sharing and co-governance. Our results suggest sustaining CBPR and achieving unanticipated benefits likely depend on trust-related mechanisms and a continuing commitment to power-sharing. These findings have implications for building successful CBPR partnerships to address challenging public health problems and the complex assessment of outcomes.
Building a drug ontology based on RxNorm and other sources
2013-01-01
Background We built the Drug Ontology (DrOn) because we required correct and consistent drug information in a format for use in semantic web applications, and no existing resource met this requirement or could be altered to meet it. One of the obstacles we faced when creating DrOn was the difficulty in reusing drug information from existing sources. The primary external source we have used at this stage in DrOn’s development is RxNorm, a standard drug terminology curated by the National Library of Medicine (NLM). To build DrOn, we (1) mined data from historical releases of RxNorm and (2) mapped many RxNorm entities to Chemical Entities of Biological Interest (ChEBI) classes, pulling relevant information from ChEBI while doing so. Results We built DrOn in a modular fashion to facilitate simpler extension and development of the ontology and to allow reasoning and construction to scale. Classes derived from each source are serialized in separate modules. For example, the classes in DrOn that are programmatically derived from RxNorm are stored in a separate module and subsumed by classes in a manually-curated, realist, upper-level module of DrOn with terms such as 'clinical drug role’, 'tablet’, 'capsule’, etc. Conclusions DrOn is a modular, extensible ontology of drug products, their ingredients, and their biological activity that avoids many of the fundamental flaws found in other, similar artifacts and meets the requirements of our comparative-effectiveness research use-case. PMID:24345026
Building a drug ontology based on RxNorm and other sources.
Hanna, Josh; Joseph, Eric; Brochhausen, Mathias; Hogan, William R
2013-12-18
We built the Drug Ontology (DrOn) because we required correct and consistent drug information in a format for use in semantic web applications, and no existing resource met this requirement or could be altered to meet it. One of the obstacles we faced when creating DrOn was the difficulty in reusing drug information from existing sources. The primary external source we have used at this stage in DrOn's development is RxNorm, a standard drug terminology curated by the National Library of Medicine (NLM). To build DrOn, we (1) mined data from historical releases of RxNorm and (2) mapped many RxNorm entities to Chemical Entities of Biological Interest (ChEBI) classes, pulling relevant information from ChEBI while doing so. We built DrOn in a modular fashion to facilitate simpler extension and development of the ontology and to allow reasoning and construction to scale. Classes derived from each source are serialized in separate modules. For example, the classes in DrOn that are programmatically derived from RxNorm are stored in a separate module and subsumed by classes in a manually-curated, realist, upper-level module of DrOn with terms such as 'clinical drug role', 'tablet', 'capsule', etc. DrOn is a modular, extensible ontology of drug products, their ingredients, and their biological activity that avoids many of the fundamental flaws found in other, similar artifacts and meets the requirements of our comparative-effectiveness research use-case.
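As a rough illustration of the modular layout described in the two records above, the following Python sketch (using rdflib) builds a tiny manually curated upper module containing 'clinical drug role' and 'tablet' classes and a separate, programmatically derived module whose classes are subsumed by the upper module, then serializes each module to its own file. All IRIs, identifiers, and labels are invented placeholders; this is not the actual DrOn build pipeline.

```python
from rdflib import Graph, Namespace, RDF, RDFS, OWL, Literal

# Hypothetical namespaces; the real DrOn IRIs differ.
UPPER = Namespace("http://example.org/dron-upper#")
RX = Namespace("http://example.org/dron-rxnorm#")

# Manually curated upper-level module.
upper = Graph()
upper.add((UPPER.ClinicalDrugRole, RDF.type, OWL.Class))
upper.add((UPPER.ClinicalDrugRole, RDFS.label, Literal("clinical drug role")))
upper.add((UPPER.Tablet, RDF.type, OWL.Class))
upper.add((UPPER.Tablet, RDFS.label, Literal("tablet")))

# Module programmatically derived from an external terminology release:
# each derived class is subsumed by a class in the upper module.
derived = Graph()
for identifier, label in [("XXXXXX", "example drug 500 MG oral tablet")]:  # placeholder entry
    cls = RX["ID_" + identifier]
    derived.add((cls, RDF.type, OWL.Class))
    derived.add((cls, RDFS.label, Literal(label)))
    derived.add((cls, RDFS.subClassOf, UPPER.Tablet))

# Serialize each source-specific module separately, as in a modular build.
upper.serialize(destination="dron-upper.owl", format="xml")
derived.serialize(destination="dron-rxnorm.owl", format="xml")
print(len(upper), "triples in upper module;", len(derived), "in derived module")
```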
The rotation-powered nature of some soft gamma-ray repeaters and anomalous X-ray pulsars
NASA Astrophysics Data System (ADS)
Coelho, Jaziel G.; Cáceres, D. L.; de Lima, R. C. R.; Malheiro, M.; Rueda, J. A.; Ruffini, R.
2017-03-01
Context. Soft gamma-ray repeaters (SGRs) and anomalous X-ray pulsars (AXPs) are slowly rotating isolated pulsars whose energy reservoir is still a matter of debate. Adopting neutron star (NS) fiducial parameters, mass M = 1.4 M⊙, radius R = 10 km, and moment of inertia I = 10^45 g cm^2, the rotational energy loss, Ėrot, is lower than the observed luminosity (dominated by the X-rays) LX for many of the sources. Aims: We investigate the possibility that some members of this family could be canonical rotation-powered pulsars using realistic NS structure parameters instead of fiducial values. Methods: We compute the NS mass, radius, moment of inertia and angular momentum from numerical integration of the axisymmetric general relativistic equations of equilibrium. We then compute the entire range of allowed values of the rotational energy loss, Ėrot, for the observed values of rotation period P and spin-down rate Ṗ. We also estimate the surface magnetic field using a general relativistic model of a rotating magnetic dipole. Results: We show that realistic NS parameters lower the estimated values of the magnetic field and radiation efficiency, LX/Ėrot, with respect to estimates based on fiducial NS parameters. We show that nine SGRs/AXPs can be described as canonical pulsars driven by the NS rotational energy, for LX computed in the soft (2-10 keV) X-ray band. We compute the range of NS masses for which LX/Ėrot < 1. We discuss the observed hard X-ray emission in three sources of the group of nine potentially rotation-powered NSs. This additional hard X-ray component dominates over the soft one, leading to LX/Ėrot > 1 in two of them. Conclusions: We show that 9 SGRs/AXPs can be rotation-powered NSs if we analyze their X-ray luminosity in the soft 2-10 keV band. Interestingly, four of them show radio emission and six have been associated with supernova remnants (including Swift J1834.9-0846, the first SGR observed with a surrounding wind nebula). These observations give additional support to our interpretation of these sources as ordinary pulsars. Including the hard X-ray emission observed in three sources of the group of potential rotation-powered NSs, the number of sources with LX/Ėrot < 1 becomes seven. It remains open to verify (1) the accuracy of the estimated distances and (2) the possible contribution of the associated supernova remnants to the hard X-ray emission.
NASA Astrophysics Data System (ADS)
Ángel López Comino, José; Cesca, Simone; Kriegerowski, Marius; Heimann, Sebastian; Dahm, Torsten; Mirek, Janusz; Lasocky, Stanislaw
2017-04-01
A prior assessment of the monitoring performance of a dedicated seismic network is always useful to determine its capability of detecting, locating and characterizing the target seismicity. This work focuses on a hydrofracking experiment in Poland, which is monitored in the framework of the SHEER (SHale gas Exploration and Exploitation induced Risks) EU project. The seismic installation is located near Wysin (Poland), in the central-western part of the Peribaltic synclise at Pomerania. The network setup includes a distributed network of six broadband stations, three shallow borehole stations and three small-scale arrays. We assess the monitoring performance prior to operations, using synthetic seismograms. Realistic full waveforms are generated and combined with real noise recorded before the fracking operations, to produce either event-based or continuous synthetic waveforms. Background seismicity is modelled by double couple (DC) focal mechanisms. Non-DC sources resemble induced tensile fractures opening in the direction of the minimal compressive stress and closing in the same direction after the injection. Microseismic sources are combined with a realistic crustal model and distributions of hypocenters, magnitudes and source durations. The network detection performance is then assessed in terms of the Magnitude of Completeness (Mc) through two different techniques: i) using an amplitude-threshold approach, taking into account a station-dependent noise level and different values of signal-to-noise ratio (SNR), and ii) through the application of an automatic detection algorithm to the continuous synthetic dataset. In the first case, we compare the maximal amplitude of noise-free synthetic waveforms with the different noise levels. Imposing simultaneous detection at, e.g., 4 stations for a robust detection, the Mc is assessed and can be adjusted by empirical relationships for different SNR values. We find that different source mechanisms have different detection thresholds. The background seismicity (DC sources) is better detectable than induced earthquakes (tensile crack mechanisms). Assuming an SNR of 2, we estimate an Mc of 0.55 around the fracking wells, with an increase of 0.05 during day hours. The Mc can be decreased to 0.45 around the fracking region by taking advantage of the array installations. The second approach applies a full-waveform detection and location algorithm based on the stacking of smooth characteristic functions and the identification of high coherence in the signals recorded at different stations. In this case the detection capability can be increased, at the cost of also increasing false detections, with an acceptable compromise found for an Mc of 0.1.
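The amplitude-threshold branch of the analysis above lends itself to a compact numerical illustration. The Python sketch below estimates a magnitude of completeness by requiring that the predicted peak amplitude exceed an SNR multiple of the station noise at a minimum number of stations simultaneously; the amplitude-distance scaling, station parameters, and thresholds are hypothetical placeholders, not values from the SHEER Wysin network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical station noise levels (peak amplitude, arbitrary units) and
# epicentral distances in km for a small local network.
station_noise = np.array([2.0, 1.5, 3.0, 2.5, 1.8, 2.2])
station_dist_km = np.array([1.0, 2.0, 3.5, 4.0, 5.5, 7.0])

SNR_REQUIRED = 2.0        # signal must exceed SNR * noise
MIN_STATIONS = 4          # simultaneous detections required for a robust event
DETECTION_FRACTION = 0.9  # fraction of events that must be detected at a given M

def peak_amplitude(magnitude, dist_km):
    """Placeholder local amplitude-distance scaling, log10(A) = M - log10(r) - k*r.
    This is an assumed toy relation, not the one used in the study."""
    k = 0.05
    return 10.0 ** (magnitude - np.log10(dist_km) - k * dist_km)

def detected(magnitude):
    """An event is declared detected if >= MIN_STATIONS exceed the SNR threshold."""
    amps = peak_amplitude(magnitude, station_dist_km)
    # 20% lognormal scatter mimics source/path variability
    amps *= rng.lognormal(mean=0.0, sigma=0.2, size=amps.size)
    return np.sum(amps > SNR_REQUIRED * station_noise) >= MIN_STATIONS

def magnitude_of_completeness(mags=np.arange(-1.0, 2.01, 0.05), n_trials=500):
    for m in mags:
        frac = np.mean([detected(m) for _ in range(n_trials)])
        if frac >= DETECTION_FRACTION:
            return m
    return np.nan

print("Estimated Mc:", magnitude_of_completeness())
```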
Development of Turbulent Biological Closure Parameterizations
2011-09-30
LONG-TERM GOAL: The long-term goals of this project are: (1) to develop a theoretical framework to quantify turbulence-induced NPZ interactions; and (2) to apply the theory to develop parameterizations for use in realistic coupled physical-biological environmental numerical models. OBJECTIVES: Connect the Goodman and Robinson (2008) statistically based pdf theory to Advection Diffusion Reaction (ADR) modeling of NPZ interaction.
ERIC Educational Resources Information Center
California State Univ., Sacramento. Academic Senate.
This guide communicates the increasing need in California for young people to develop skills in languages other than English, and directs attention to assessing these skills in terms of language use competency in realistic situations rather than in terms of courses taken or units earned. The guide recognizes successive stages of competency and…
Reproducing scalar mixing of turbulent jets in a 3D periodic box
NASA Astrophysics Data System (ADS)
Rah, K. Jeff; Blanquart, Guillaume
2017-11-01
A triply periodic DNS is a convenient framework to analyze the turbulent mixing process, since it can produce statistically stationary turbulence. In addition, the periodic boundary condition makes it easy to compute the spatial spectra of scalars. However, it is difficult to create a realistic turbulent flow with such a geometry. In the current investigation, we aim to develop a method to simulate a realistic turbulent mixing process inside a 3D periodic box. The target real flow is an axisymmetric jet with passive scalars on its centerline. The velocity and scalar information of turbulent jets on the centerline is applied to the momentum equation and scalar transport equation in physical space. The result is the combination of a mean-gradient term and a linear forcing term in the scalar equation. These new forcing terms are derived to replicate the scalar mixing properties of jets in a triply periodic DNS. The present analysis differs from other forcing schemes, whose derivations did not involve any use of the velocity or scalar information of a real turbulent flow. A set of DNS has been performed with the new forcing terms, and various turbulent parameters and spectral relations are compared against experiments.
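To make the role of the two new terms concrete, the following sketch advances a passive scalar on a small triply periodic grid with an explicit step that includes a mean-gradient production term and a linear forcing term; the velocity field, coefficients, and time step are placeholders and do not reproduce the forcing coefficients derived in the study.

```python
import numpy as np

# Minimal explicit update of a passive scalar on a triply periodic grid,
# illustrating where a mean-gradient term and a linear forcing term enter
# the scalar equation. Velocity field and coefficients are placeholders.
N, L = 32, 2.0 * np.pi
dx = L / N
dt, D = 1e-3, 1e-2          # time step and scalar diffusivity (assumed)
G, A = 1.0, 0.5             # mean scalar gradient and linear forcing coefficient (assumed)

rng = np.random.default_rng(1)
phi = rng.standard_normal((N, N, N)) * 1e-3       # scalar fluctuation field
u = rng.standard_normal((3, N, N, N)) * 0.1       # frozen toy velocity field (not a real DNS)

def ddx(f, axis):
    """Second-order central difference with periodic boundaries."""
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2.0 * dx)

def laplacian(f):
    out = np.zeros_like(f)
    for ax in range(3):
        out += (np.roll(f, -1, ax) - 2.0 * f + np.roll(f, 1, ax)) / dx**2
    return out

for _ in range(10):
    advection = sum(u[ax] * ddx(phi, ax) for ax in range(3))
    mean_gradient_source = -u[2] * G          # production by the imposed mean gradient
    linear_forcing = A * phi                  # linear forcing of scalar fluctuations
    phi = phi + dt * (-advection + D * laplacian(phi)
                      + mean_gradient_source + linear_forcing)

print("scalar variance after 10 steps:", float(np.var(phi)))
```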
Neural correlates of monocular and binocular depth cues based on natural images: a LORETA analysis.
Fischmeister, Florian Ph S; Bauer, Herbert
2006-10-01
Functional imaging studies investigating perception of depth rely solely on one type of depth cue based on non-natural stimulus material. To overcome these limitations and to provide a more realistic and complete set of depth cues natural stereoscopic images were used in this study. Using slow cortical potentials and source localization we aimed to identify the neural correlates of monocular and binocular depth cues. This study confirms and extends functional imaging studies, showing that natural images provide a good, reliable, and more realistic alternative to artificial stimuli, and demonstrates the possibility to separate the processing of different depth cues.
Realistic Simulations of Coronagraphic Observations with WFIRST
NASA Astrophysics Data System (ADS)
Rizzo, Maxime; Zimmerman, Neil; Roberge, Aki; Lincowski, Andrew; Arney, Giada; Stark, Chris; Jansen, Tiffany; Turnbull, Margaret; WFIRST Science Investigation Team (Turnbull)
2018-01-01
We present a framework to simulate observing scenarios with the WFIRST Coronagraphic Instrument (CGI). The Coronagraph and Rapid Imaging Spectrograph in Python (crispy) is an open-source package that can be used to create CGI data products for analysis and development of post-processing routines. The software convolves time-varying coronagraphic PSFs with realistic astrophysical scenes which contain a planetary architecture, a consistent dust structure, and a background field composed of stars and galaxies. The focal plane can be read out by a WFIRST electron-multiplying CCD model directly, or passed through a WFIRST integral field spectrograph model first. Several elementary post-processing routines are provided as part of the package.
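The central operation described above, convolving a PSF with an astrophysical scene and reading the result out through a noisy detector, can be illustrated generically; the sketch below is not crispy's API, and the PSF, scene, and detector model are synthetic placeholders.

```python
import numpy as np
from scipy.signal import fftconvolve

# Generic illustration of convolving an off-axis PSF with an astrophysical
# scene, the core operation of a coronagraphic scene simulator.
ny, nx = 256, 256
yy, xx = np.mgrid[:ny, :nx]

# Toy off-axis PSF: a narrow Gaussian core (a real coronagraphic PSF is
# field- and time-dependent and would be supplied per wavelength).
sigma = 2.0
psf = np.exp(-((xx - nx // 2) ** 2 + (yy - ny // 2) ** 2) / (2 * sigma**2))
psf /= psf.sum()

# Toy scene: a faint planet plus a smooth dust component (arbitrary flux units).
scene = np.zeros((ny, nx))
scene[140, 160] = 1e3                      # unresolved planet
r = np.hypot(xx - nx // 2, yy - ny // 2)
scene += 5.0 * np.exp(-r / 60.0)           # smooth dust structure

focal_plane = fftconvolve(scene, psf, mode="same")

# Simple detector stage: Poisson photon noise for a given exposure.
rng = np.random.default_rng(2)
exposure_s = 100.0
image = rng.poisson(np.clip(focal_plane * exposure_s, 0, None)).astype(float)
print("peak counts:", image.max())
```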
Toward Millimagnitude Photometric Calibration (Abstract)
NASA Astrophysics Data System (ADS)
Dose, E.
2014-12-01
(Abstract only) Asteroid rotation, exoplanet transits, and similar measurements will increasingly call for photometric precisions better than about 10 millimagnitudes, often between nights and ideally between distant observers. The present work applies detailed spectral simulations to test popular photometric calibration practices, and to test new extensions of these practices. Using 107 synthetic spectra of stars of diverse colors, detailed atmospheric transmission spectra computed by solar-energy software, realistic spectra of popular astronomy gear, and the option of three sources of noise added at realistic millimagnitude levels, we find that certain adjustments to current calibration practices can help remove small systematic errors, especially for imperfect filters, high airmasses, and possibly passing thin cirrus clouds.
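The specific adjustments tested in the abstract are not detailed there; as a baseline for comparison, the sketch below performs a conventional least-squares photometric calibration with a zero point, a first-order extinction term, and a linear color term on synthetic data with millimagnitude-level noise. All coefficients and star properties are invented.

```python
import numpy as np

# Conventional photometric calibration by least squares: instrumental magnitude
# is modelled as catalog magnitude plus zero point, extinction (k * airmass) and
# a linear color term. All data below are synthetic placeholders.
rng = np.random.default_rng(3)
n = 107
catalog_mag = rng.uniform(9.0, 14.0, n)
color = rng.uniform(-0.2, 1.5, n)            # e.g. B-V
airmass = rng.uniform(1.0, 2.2, n)

ZP_TRUE, K_TRUE, T_TRUE = 21.5, 0.18, -0.04  # assumed "truth" for the simulation
inst_mag = (catalog_mag - ZP_TRUE + K_TRUE * airmass + T_TRUE * color
            + rng.normal(0.0, 0.005, n))     # ~5 mmag measurement noise

# Solve inst_mag - catalog_mag = -ZP + k * X + T * color for (ZP, k, T).
A = np.column_stack([-np.ones(n), airmass, color])
coeffs, *_ = np.linalg.lstsq(A, inst_mag - catalog_mag, rcond=None)
zp, k, t = coeffs
residuals = (inst_mag - catalog_mag) - A @ coeffs
print(f"ZP={zp:.3f}  k={k:.3f}  color term={t:.3f}  "
      f"rms residual={1000*residuals.std():.1f} mmag")
```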
Inter-Individual Variability in High-Throughput Risk Prioritization of Environmental Chemicals (Sot)
We incorporate realistic human variability into an open-source high-throughput (HT) toxicokinetics (TK) modeling framework for use in a next-generation risk prioritization approach. Risk prioritization involves rapid triage of thousands of environmental chemicals, most of which have...
IndeCut evaluates performance of network motif discovery algorithms.
Ansariola, Mitra; Megraw, Molly; Koslicki, David
2018-05-01
Genomic networks represent a complex map of molecular interactions which are descriptive of the biological processes occurring in living cells. Identifying the small over-represented circuitry patterns in these networks helps generate hypotheses about the functional basis of such complex processes. Network motif discovery is a systematic way of achieving this goal. However, a reliable network motif discovery outcome requires generating random background networks which are the result of a uniform and independent graph sampling method. To date, there has been no method to numerically evaluate whether any network motif discovery algorithm performs as intended on realistically sized datasets; thus it was not possible to assess the validity of resulting network motifs. In this work, we present IndeCut, the first method to date that characterizes network motif finding algorithm performance in terms of uniform sampling on realistically sized networks. We demonstrate that it is critical to use IndeCut prior to running any network motif finder for two reasons. First, IndeCut indicates the number of samples needed for a tool to produce an outcome that is both reproducible and accurate. Second, IndeCut allows users to choose the tool that generates samples in the most independent fashion for their network of interest among many available options. The open source software package is available at https://github.com/megrawlab/IndeCut. megrawm@science.oregonstate.edu or david.koslicki@math.oregonstate.edu. Supplementary data are available at Bioinformatics online.
Benefits and Limitations of Real Options Analysis for the Practice of River Flood Risk Management
NASA Astrophysics Data System (ADS)
Kind, Jarl M.; Baayen, Jorn H.; Botzen, W. J. Wouter
2018-04-01
Decisions on long-lived flood risk management (FRM) investments are complex because the future is uncertain. Flexibility and robustness can be used to deal with future uncertainty. Real options analysis (ROA) provides a welfare-economics framework to design and evaluate robust and flexible FRM strategies under risk or uncertainty. Although its potential benefits are large, ROA is hardly used in today's FRM practice. In this paper, we investigate benefits and limitations of ROA by applying it to a realistic FRM case study for an entire river branch. We illustrate how ROA identifies optimal short-term investments and values future options. We develop robust dike investment strategies and value the flexibility offered by additional room-for-the-river measures. We benchmark the results of ROA against those of a standard cost-benefit analysis and show ROA's potential policy implications. The ROA for a realistic case requires a high level of geographical detail, a large ensemble of scenarios, and the inclusion of stakeholders' preferences. We found several limitations of applying the ROA. It is complex. In particular, relevant sources of uncertainty need to be recognized, quantified, integrated, and discretized in scenarios, requiring subjective choices and expert judgment. Decision trees have to be generated and stakeholders' preferences have to be translated into decision rules. On the basis of this study, we give general recommendations to use high discharge scenarios for the design of measures with high fixed costs and few alternatives. Lower scenarios may be used when alternatives offer future flexibility.
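A stripped-down example of the kind of trade-off ROA formalizes is sketched below: a robust large investment now is compared with a cheaper flexible strategy that defers expansion until the discharge scenario is revealed. All costs, probabilities, and the discount rate are invented, and the real case study involves full decision trees, many scenarios, and stakeholder-derived decision rules.

```python
# Toy two-stage real-options comparison; all costs, damages, probabilities and
# the discount rate are invented for illustration only.
DISCOUNT = 0.04
P_HIGH = 0.4                       # probability that the high-discharge scenario unfolds
T_DECISION = 20                    # year at which the scenario is assumed to be known
df = 1.0 / (1.0 + DISCOUNT) ** T_DECISION

COST_LARGE_NOW = 500.0             # robust dike heightening now
COST_SMALL_NOW = 250.0             # smaller heightening now
COST_EXPAND_LATER = 350.0          # expansion cost at T_DECISION (undiscounted)
DAMAGE_IF_UNDERPROTECTED = 900.0   # present value of damages beyond T_DECISION

# Robust: protected in both scenarios, no later action needed.
cost_robust = COST_LARGE_NOW
# Inflexible small: cheaper now but exposed if the high scenario occurs.
cost_small_only = COST_SMALL_NOW + P_HIGH * DAMAGE_IF_UNDERPROTECTED
# Flexible: small now, expand later only in the high scenario (keeps the option open).
cost_flexible = COST_SMALL_NOW + P_HIGH * df * COST_EXPAND_LATER

print(f"robust now:        {cost_robust:.0f}")
print(f"small, inflexible: {cost_small_only:.0f}")
print(f"small + option:    {cost_flexible:.0f}  "
      f"(option value vs robust: {cost_robust - cost_flexible:.0f})")
```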
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rau, U.; Bhatnagar, S.; Owen, F. N., E-mail: rurvashi@nrao.edu
Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1–2 GHz)) and 46-pointing mosaic (D-array, C-Band (4–8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures.
NASA Astrophysics Data System (ADS)
Tsyganenko, Nikolai
2013-04-01
A new advanced model of the dynamical geomagnetosphere is presented, based on a large set of data from Geotail, Cluster, Polar, and Themis missions, taken during 138 storm events with SYM-H from -40 to -487nT over the period from 1996 through 2012 in the range of geocentric distances from ~3Re to ~60Re. The model magnetic field is confined within a realistic magnetopause, based on Lin et al. [JGRA, v.115, A04207, 2010] empirical boundary, driven by the dipole tilt angle, solar wind pressure, and IMF Bz. The magnetic field is modeled as a flexible combination of several modules, representing contributions from principal magnetospheric current systems such as the symmetric and partial ring currents (SRC/PRC), Region 1 and 2 field-aligned currents (FAC), and the equatorial tail current sheet (TCS). In the inner magnetosphere the model field is dominated by contributions from the SRC and PRC, derived from realistic particle pressure models and represented by four modules, providing variable degree of dawn-dusk and noon-midnight asymmetry. The TCS field is comprised of several independent modules, ensuring sufficient flexibility of the model field and correct asymptotic values in the distant tail. The Region 2 FAC is an inherent part of the PRC, derived from the continuity of the azimuthal current. The Region 1 FAC is modulated by the diurnal and seasonal variations of the dipole tilt angle, in agreement with earlier statistical studies [Ohtani et al., JGRA, v.110, A09230, 2005]. Following the approach introduced in our earlier TS05 model [Tsyganenko and Sitnov, JGRA, v.110, A03208, 2005], contributions from all individual field sources are parameterized by the external driving functions, derived from the solar wind/IMF OMNI database as solutions of dynamic equations with source and loss terms in the right-hand side. Global magnetic configurations and their evolution during magnetospheric storms are analyzed and discussed in context of the model results.
NASA Astrophysics Data System (ADS)
Wollherr, Stephanie; Gabriel, Alice-Agnes; Uphoff, Carsten
2018-05-01
The dynamics and potential size of earthquakes depend crucially on rupture transfers between adjacent fault segments. To accurately describe earthquake source dynamics, numerical models can account for realistic fault geometries and rheologies such as nonlinear inelastic processes off the slip interface. We present implementation, verification, and application of off-fault Drucker-Prager plasticity in the open source software SeisSol (www.seissol.org). SeisSol is based on an arbitrary high-order derivative modal Discontinuous Galerkin (ADER-DG) method using unstructured, tetrahedral meshes specifically suited for complex geometries. Two implementation approaches are detailed, modelling plastic failure either employing sub-elemental quadrature points or switching to nodal basis coefficients. At fine fault discretizations the nodal basis approach is up to 6 times more efficient in terms of computational costs while yielding comparable accuracy. Both methods are verified in community benchmark problems and by three dimensional numerical h- and p-refinement studies with heterogeneous initial stresses. We observe no spectral convergence for on-fault quantities with respect to a given reference solution, but rather discuss a limitation to low-order convergence for heterogeneous 3D dynamic rupture problems. For simulations including plasticity, a high fault resolution may be less crucial than commonly assumed, due to the regularization of peak slip rate and an increase of the minimum cohesive zone width. In large-scale dynamic rupture simulations based on the 1992 Landers earthquake, we observe high rupture complexity including reverse slip, direct branching, and dynamic triggering. The spatio-temporal distribution of rupture transfers are altered distinctively by plastic energy absorption, correlated with locations of geometrical fault complexity. Computational cost increases by 7% when accounting for off-fault plasticity in the demonstrating application. Our results imply that the combination of fully 3D dynamic modelling, complex fault geometries, and off-fault plastic yielding is important to realistically capture dynamic rupture transfers in natural fault systems.
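For orientation, the sketch below shows the generic yield check and deviatoric stress scaling that a Drucker-Prager off-fault plasticity step typically involves in dynamic-rupture codes; it omits the viscoplastic relaxation and the quadrature-point versus nodal-basis implementation choices discussed above, and the parameter values are assumed rather than taken from the Landers setup.

```python
import numpy as np

def drucker_prager_return(stress, cohesion, friction_angle_rad):
    """Schematic Drucker-Prager yield check with return by scaling the
    deviatoric stress. Compression is negative; viscoplastic relaxation and
    the ADER-DG specifics of SeisSol are omitted."""
    sigma_m = np.trace(stress) / 3.0                    # mean stress
    dev = stress - sigma_m * np.eye(3)                  # deviatoric part
    tau = np.sqrt(0.5 * np.tensordot(dev, dev))         # second-invariant measure
    # Mohr-Coulomb-type yield stress used in the Drucker-Prager form
    yield_stress = max(0.0, cohesion * np.cos(friction_angle_rad)
                       - sigma_m * np.sin(friction_angle_rad))
    if tau > yield_stress:                              # plastic yielding: scale back
        dev *= yield_stress / tau
    return dev + sigma_m * np.eye(3)

# Example: a stress state slightly above yield (units of Pa, values assumed).
stress = np.array([[-60e6, 20e6, 0.0],
                   [ 20e6, -40e6, 0.0],
                   [  0.0,   0.0, -50e6]])
adjusted = drucker_prager_return(stress, cohesion=5e6,
                                 friction_angle_rad=np.radians(30.0))
print(adjusted)
```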
Spontaneous emission in the presence of a realistically sized cylindrical waveguide
NASA Astrophysics Data System (ADS)
Dung, Ho Trung
2016-02-01
Various quantities characterizing the spontaneous emission process of a dipole emitter including the emission rate and the emission pattern can be expressed in terms of the Green tensor of the surrounding environment. By expanding the Green tensor around some analytically known background one as a Born series, and truncating it under appropriate conditions, complicated boundaries can be tackled with ease. However, when the emitter is embedded in the medium, even the calculation of the first-order term in the Born series is problematic because of the presence of a singularity. We show how to eliminate this singularity for a medium of arbitrary size and shape by expanding around the bulk medium rather than vacuum. In the highly symmetric configuration of an emitter located on the axis of a realistically sized cylinder, it is shown that the singularity can be removed by changing the integral variables and then the order of integration. Using both methods, we investigate the spontaneous emission rate of an initially excited two-level dipole emitter, embedded in a realistically sized cylinder, which can be a common optical fiber in the long-length limit and a disk in the short-length limit. The spatial distribution of the emitted light is calculated using the Born-expansion approach, and local-field corrections to the spontaneous emission rate are briefly discussed.
NASA Astrophysics Data System (ADS)
Kosmas, T. S.; Papoulias, D. K.; Tórtola, M.; Valle, J. W. F.
2017-09-01
We investigate the impact of a fourth sterile neutrino at reactor and Spallation Neutron Source neutrino detectors. Specifically, we explore the discovery potential of the TEXONO and COHERENT experiments to subleading sterile neutrino effects through the measurement of the coherent elastic neutrino-nucleus scattering event rate. Our dedicated χ2-sensitivity analysis employs realistic nuclear structure calculations adequate for high purity sub-keV threshold Germanium detectors.
Do absorption and realistic distraction influence performance of component task surgical procedure?
Pluyter, Jon R; Buzink, Sonja N; Rutkowski, Anne-F; Jakimowicz, Jack J
2010-04-01
Surgeons perform complex tasks while exposed to multiple distracting sources that may increase stress in the operating room (e.g., music, conversation, and unadapted use of sophisticated technologies). This study aimed to examine whether such realistic social and technological distracting conditions may influence surgical performance. Twelve medical interns performed a laparoscopic cholecystectomy task with the Xitact LC 3.0 virtual reality simulator under distracting conditions (exposure to music, conversation, and nonoptimal handling of the laparoscope) versus nondistracting conditions (control condition) as part of a 2 x 2 within-subject experimental design. Under distracting conditions, the medical interns showed a significant decline in task performance (overall task score, task errors, and operating time) and significantly increased levels of irritation toward both the assistant handling the laparoscope in a nonoptimal way and the sources of social distraction. Furthermore, individual differences in cognitive style (i.e., cognitive absorption and need for cognition) significantly influenced the levels of irritation experienced by the medical interns. The results suggest careful evaluation of the social and technological sources of distraction in the operation room to reduce irritation for the surgeon and provision of proper preclinical laparoscope navigation training to increase security for the patient.
LEO-to-ground polarization measurements aiming for space QKD using Small Optical TrAnsponder (SOTA).
Carrasco-Casado, Alberto; Kunimori, Hiroo; Takenaka, Hideki; Kubo-Oka, Toshihiro; Akioka, Maki; Fuse, Tetsuharu; Koyama, Yoshisada; Kolev, Dimitar; Munemasa, Yasushi; Toyoshima, Morio
2016-05-30
Quantum communication, and more specifically Quantum Key Distribution (QKD), enables the transmission of information in a theoretically secure way, guaranteed by the laws of quantum physics. Although fiber-based QKD has been readily available for several years, a global quantum communication network will require the development of space links, which remains to be demonstrated. NICT launched a LEO satellite in 2014 carrying a lasercom terminal (SOTA), designed for in-orbit technological demonstrations. In this paper, we present the results of the campaign to measure the polarization characteristics of the SOTA laser sources after propagating from LEO to ground. The most widely used property for encoding information in free-space QKD is the polarization, and especially linear polarization. Therefore, studying its behavior in a realistic link is a fundamental step for proving the feasibility of space quantum communications. The results of the polarization preservation of two highly-polarized lasers are presented here, including the first-time measurement of a linearly-polarized source at λ = 976 nm and a circularly-polarized source at λ = 1549 nm from space using a realistic QKD-like receiver, installed in the Optical Ground Station at the NICT Headquarters in Tokyo, Japan.
Evaluation of realistic layouts for next generation on-scalp MEG: spatial information density maps.
Riaz, Bushra; Pfeiffer, Christoph; Schneiderman, Justin F
2017-08-01
While commercial magnetoencephalography (MEG) systems are the functional neuroimaging state-of-the-art in terms of spatio-temporal resolution, MEG sensors have not changed significantly since the 1990s. Interest in newer sensors that operate at less extreme temperatures, e.g., high critical temperature (high-Tc) SQUIDs, optically-pumped magnetometers, etc., is growing because they enable significant reductions in head-to-sensor standoff (on-scalp MEG). Various metrics quantify the advantages of on-scalp MEG, but a single straightforward one is lacking. Previous works have furthermore been limited to arbitrary and/or unrealistic sensor layouts. We introduce spatial information density (SID) maps for quantitative and qualitative evaluations of sensor arrays. SID-maps present the spatial distribution of information a sensor array extracts from a source space while accounting for relevant source and sensor parameters. We use it in a systematic comparison of three practical on-scalp MEG sensor array layouts (based on high-Tc SQUIDs) and the standard Elekta Neuromag TRIUX magnetometer array. Results strengthen the case for on-scalp and specifically high-Tc SQUID-based MEG while providing a path for the practical design of future MEG systems. SID-maps are furthermore general to arbitrary magnetic sensor technologies and source spaces and can thus be used for design and evaluation of sensor arrays for magnetocardiography, magnetic particle imaging, etc.
Renewable resources in the chemical industry--breaking away from oil?
Nordhoff, Stefan; Höcker, Hans; Gebhardt, Henrike
2007-12-01
Rising prices for fossil-based raw materials suggest that sooner or later renewable raw materials will, in principle, become economically viable. This paper examines this widespread paradigm. Price linkages like those seen for decades particularly in connection with petrochemical raw materials are now increasingly affecting renewable raw materials. The main driving force is the competing utilisation as an energy source because both fossil-based and renewable raw materials are used primarily for heat, electrical power and mobility. As a result, prices are determined by energy utilisation. Simple observations show how prices for renewable carbon sources are becoming linked to the crude oil price. Whether the application calls for sugar, starch, virgin oils or lignocellulose, the price for the raw material rises with the oil price. Consequently, expectations regarding price trends for fossil-based energy sources can also be utilised for the valuation of alternative processes. However, this seriously calls into question the assumption that a rising crude oil price will favour the economic viability of alternative products and processes based on renewable raw materials. Conversely, it follows that these products and processes must demonstrate economic viability today. Especially in connection with new approaches in white biotechnology, it is evident that, under realistic assumptions, particularly in terms of achievable yields and the optimisation potential of the underlying processes, the route to utilisation is economically viable. This makes the paradigm mentioned at the outset at least very questionable.
Realistic Covariance Prediction for the Earth Science Constellation
NASA Technical Reports Server (NTRS)
Duncan, Matthew; Long, Anne
2006-01-01
Routine satellite operations for the Earth Science Constellation (ESC) include collision risk assessment between members of the constellation and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed using Monte Carlo techniques as well as by numerically integrating relative state probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by the NASA/Goddard Space Flight Center's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the Earth Science Constellation satellites: Aqua, Aura and Terra.
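A minimal sketch of the Monte Carlo component of such a collision probability computation is given below: relative positions at closest approach are sampled from the combined position covariance and the fraction falling inside a combined hard-body radius is counted. The state, covariances, and radius are illustrative values, not Aqua/Aura/Terra conjunction data.

```python
import numpy as np

def collision_probability_mc(rel_position, cov1, cov2, hard_body_radius_m,
                             n_samples=200_000, seed=0):
    """Monte Carlo estimate of collision probability at closest approach.
    Positions are relative (object 1 minus object 2); the combined position
    covariance is the sum of the two objects' covariances. Illustrative only."""
    rng = np.random.default_rng(seed)
    combined_cov = cov1 + cov2
    samples = rng.multivariate_normal(rel_position, combined_cov, size=n_samples)
    miss_distance = np.linalg.norm(samples, axis=1)
    return np.mean(miss_distance < hard_body_radius_m)

# Hypothetical conjunction: ~280 m nominal miss distance, covariances in m^2.
rel_pos = np.array([200.0, 150.0, 120.0])
cov_a = np.diag([120.0**2, 80.0**2, 60.0**2])
cov_b = np.diag([150.0**2, 90.0**2, 70.0**2])
p_c = collision_probability_mc(rel_pos, cov_a, cov_b, hard_body_radius_m=20.0)
print(f"Estimated collision probability: {p_c:.2e}")
```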
Charcoal as an alternative energy source. sub-project: briquetting of charcoal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enstad, G.G.
1982-02-02
Charcoal briquettes have been studied both theoretically and experimentally. It appears most realistic to use binders in solution. Binders of this kind have been examined and the briquettes' mechanical properties measured. Most promising are borresperse, gum arabic, dynolex, and wood tar.
Critical Perspectives on Methodology in Pedagogic Research
ERIC Educational Resources Information Center
Kahn, Peter
2015-01-01
The emancipatory dimension to higher education represents one of the sector's most compelling characteristics, but it remains important to develop understanding of the sources of determination that shape practice. Drawing on critical realist perspectives, we explore generative mechanisms by which methodology in pedagogic research affects the…
Coping with Stress in the Special Education Classroom.
ERIC Educational Resources Information Center
Brownell, Mary
1997-01-01
Discusses the stress that special education teachers may feel by role overload and lack of autonomy. Stress relieving strategies are described, including setting realistic expectations, making distinctions between the job and personal life, increasing autonomy, looking for alternative sources of reinforcement, increasing efficacy, and developing…
NASA Astrophysics Data System (ADS)
Hamilton, D. H.
2014-12-01
A recent issue of EOS featured the article "Active Faults and Nuclear Power Plants" (Chapman et al., 2014). Although this article mainly reports on evaluations of fault hazard issues at Japan's Tsuruga NPP, it also includes a section on how the owner of the Diablo Canyon Nuclear Power Plant (DCNPP) in California, Pacific Gas and Electric Company (PG&E), is successfully responding to the evolving needs of seismic hazard assessment for that project. However, a review of the history of such assessment for the DCNPP project reveals a less benign situation, of which there is no hint in the EOS article. This history shows a long-term pattern of collaborative efforts by PG&E and its operations and safety regulator, the US NRC, to maintain the operation of DCNPP using stratagems of non-recognition or non-acknowledgment of hazardous conditions, or of minimizing the postulated effects of such conditions by combinations of discovering new means of estimating ever lower levels of potential vibratory ground motions, and feeding the results into logic trees for a PRA which calculates the hazard down to levels acceptable to the NRC for the plant's continued operation. Such a result, however, can be reached only if the geologic and seismologic reality of a very high level of seismic hazard to the facility is sidestepped, downplayed, or dismissed. The actual pattern of late Quaternary (including contemporary) tectonism beneath and surrounding the DCNPP site, as shown on a realistic portrayal of geologic structures and active seismicity, is clearly at odds with such a conclusion, and with the statement in the EOS article that PG&E's Long Term Seismic Program "…has provided increased confidence that earthquakes occurring in central California are not likely to produce surprising or conflicting data."
NASA Astrophysics Data System (ADS)
Boddice, Daniel; Metje, Nicole; Tuckwell, George
2017-11-01
Geophysical surveying is widely used for the location of subsurface features. Current technology is limited in terms of its resolution (and thus the size of features it can detect) and its penetration depth, and a suitable technique is needed to bridge the gap between shallow near-surface investigation (using techniques such as EM conductivity mapping and GPR, commonly used to map the upper 5 m below ground surface) and large features at greater depths detectable using conventional microgravity (> 5 m below ground surface). This will minimise the risks from unknown features buried in, and unknown conditions of, the ground during civil engineering work. Quantum technology (QT) gravity sensors potentially offer a step change in technology for locating features which lie outside the currently detectable range in terms of size and depth, but that potential is currently unknown as field instruments have not been developed. To overcome this, a novel computer simulation was developed for a large range of different targets of interest. The simulation included realistic modelling of instrumental, environmental and location sources of noise, which limit the accuracy of current microgravity measurements, in order to assess the potential capability of the new QT instruments in realistic situations and determine some of the likely limitations on their implementation. The results of the simulations for near-surface features showed that the new technology is best employed in a gradiometer configuration, as opposed to the traditional single-sensor gravimeter used by current instruments, because common-mode rejection between the sensors suppresses vibrational environmental noise. A significant improvement in detection capability of 1.5-2 times was observed, putting targets such as mineshafts into the detectability zone, which would be a major advantage for subsurface surveying. Thus this research has, for the first time, clearly demonstrated the benefits of QT gravity gradiometer sensors, thereby increasing industry's confidence in this new technology.
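The advantage of a gradiometer configuration can be illustrated with a toy forward model: a buried void (e.g., a mineshaft cavity) approximated as a negative point-mass anomaly, observed by two vertically separated sensors whose difference cancels common-mode vibrational noise. The geometry, density contrast, and baseline below are assumed values, not those used in the published simulation.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz_point_mass(x_m, sensor_height_m, depth_m, delta_mass_kg):
    """Vertical gravity (m/s^2) at horizontal offset x from a point-mass anomaly
    buried at depth_m below ground, with the sensor at sensor_height_m above ground."""
    dz = depth_m + sensor_height_m
    r = np.sqrt(x_m**2 + dz**2)
    return G * delta_mass_kg * dz / r**3

# Hypothetical air-filled shaft modelled as a sphere of radius 2 m at 10 m depth.
radius, depth = 2.0, 10.0
delta_mass = -2000.0 * (4.0 / 3.0) * np.pi * radius**3   # missing rock mass, rho = 2000 kg/m^3

x = np.linspace(-30.0, 30.0, 121)           # survey line, m
baseline = 1.0                              # vertical separation of the two sensors, m

g_lower = gz_point_mass(x, 0.0, depth, delta_mass)
g_upper = gz_point_mass(x, baseline, depth, delta_mass)
gradient_signal = (g_lower - g_upper) / baseline   # measured vertical gradient, s^-2

# Vibrational site noise is common to both sensors and cancels in the difference;
# an (assumed) residual instrumental noise floor limits the gradiometer instead.
print(f"peak |gz| anomaly:       {np.max(np.abs(g_lower)) * 1e8:.2f} uGal")
print(f"peak |gradient| anomaly: {np.max(np.abs(gradient_signal)) * 1e9:.2f} E")
```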
Directive sources in acoustic discrete-time domain simulations based on directivity diagrams.
Escolano, José; López, José J; Pueo, Basilio
2007-06-01
Discrete-time domain methods provide a simple and flexible way to solve initial boundary value problems. With regard to the sources in such methods, only monopoles or dipoles can be considered. However, in many problems such as room acoustics, the radiation of realistic sources is directional-dependent and their directivity patterns have a clear influence on the total sound field. In this letter, a method to synthesize the directivity of sources is proposed, especially in cases where the knowledge is only based on discrete values of the directivity diagram. Some examples have been carried out in order to show the behavior and accuracy of the proposed method.
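One plausible way to synthesize a prescribed directivity from discrete samples of a directivity diagram, sketched below, is to fit a truncated circular-harmonic expansion by least squares; this is an illustrative construction under assumed sample values, not necessarily the method proposed in the letter.

```python
import numpy as np

# Fit a truncated circular-harmonic expansion to discrete directivity samples:
# D(theta) ~ a0 + sum_n [a_n cos(n theta) + b_n sin(n theta)], n = 1..N
theta_samples = np.radians(np.arange(0, 360, 30))          # 12 measured directions
directivity_db = np.array([0, -1, -3, -6, -10, -15, -18,
                           -15, -10, -6, -3, -1])           # assumed cardioid-like pattern
d = 10.0 ** (directivity_db / 20.0)                         # linear amplitude

N = 3                                                       # expansion order
cols = [np.ones_like(theta_samples)]
for n in range(1, N + 1):
    cols += [np.cos(n * theta_samples), np.sin(n * theta_samples)]
A = np.column_stack(cols)
coeffs, *_ = np.linalg.lstsq(A, d, rcond=None)

def synthesized(theta):
    """Evaluate the fitted expansion; its terms map onto co-located multipole weights."""
    out = coeffs[0] * np.ones_like(theta)
    for n in range(1, N + 1):
        out += coeffs[2 * n - 1] * np.cos(n * theta) + coeffs[2 * n] * np.sin(n * theta)
    return out

err = np.max(np.abs(synthesized(theta_samples) - d))
print(f"max fit error at sample points: {err:.3e}")
```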
The Mock LISA Data Challenge Round 3: New and Improved Sources
NASA Technical Reports Server (NTRS)
Baker, John
2008-01-01
The Mock LISA Data Challenges are a program to demonstrate and encourage the development of data-analysis capabilities for LISA. Each round of challenges consists of several data sets containing simulated instrument noise and gravitational waves from sources of undisclosed parameters. Participants are asked to analyze the data sets and report the maximum information they can infer about the source parameters. The challenges are being released in rounds of increasing complexity and realism. Challenge 3, currently in progress, brings new source classes, now including cosmic-string cusps and primordial stochastic backgrounds, and more realistic signal models for supermassive black-hole inspirals and galactic double white dwarf binaries.
Experimental testing of the noise-canceling processor.
Collins, Michael D; Baer, Ralph N; Simpson, Harry J
2011-09-01
Signal-processing techniques for localizing an acoustic source buried in noise are tested in a tank experiment. Noise is generated using a discrete source, a bubble generator, and a sprinkler. The experiment has essential elements of a realistic scenario in matched-field processing, including complex source and noise time series in a waveguide with water, sediment, and multipath propagation. The noise-canceling processor is found to outperform the Bartlett processor and provide the correct source range for signal-to-noise ratios below -10 dB. The multivalued Bartlett processor is found to outperform the Bartlett processor but not the noise-canceling processor. © 2011 Acoustical Society of America
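For context, the conventional Bartlett processor against which the noise-canceling processor is compared can be written in a few lines: the measured array vector is matched against normalized replica vectors over candidate source positions. The replica model and data below are synthetic placeholders rather than the tank-experiment waveguide.

```python
import numpy as np

def bartlett_power(data_vec, replica_vecs):
    """Conventional Bartlett matched-field processor: normalized match between
    the measured array vector and replica vectors for candidate source positions."""
    d = data_vec / np.linalg.norm(data_vec)
    power = []
    for w in replica_vecs:
        w = w / np.linalg.norm(w)
        power.append(np.abs(np.vdot(w, d)) ** 2)
    return np.array(power)

# Synthetic single-frequency example: 16-element array, plane-wave-like replicas
# parameterized by a candidate index (placeholder physics, not a waveguide model).
rng = np.random.default_rng(4)
n_elem, n_candidates, true_idx = 16, 50, 20
phases = np.outer(np.arange(n_elem), np.linspace(0.1, 1.0, n_candidates))
replicas = np.exp(1j * phases).T                      # one replica vector per candidate

snr_linear = 0.3                                      # roughly -5 dB per element
noise = rng.normal(0, 1 / np.sqrt(2 * snr_linear), (n_elem, 2)) @ [1, 1j]
data = replicas[true_idx] + noise
surface = bartlett_power(data, replicas)
print("peak at candidate index:", int(np.argmax(surface)), "(true:", true_idx, ")")
```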
Pseudo-dynamic source characterization accounting for rough-fault effects
NASA Astrophysics Data System (ADS)
Galis, Martin; Thingbaijam, Kiran K. S.; Mai, P. Martin
2016-04-01
Broadband ground-motion simulations, ideally for frequencies up to ~10 Hz or higher, are important for earthquake engineering, for example in seismic hazard analysis for critical facilities. An issue with such simulations is the realistic generation of the radiated wavefield in the desired frequency range. Numerical simulations of dynamic ruptures propagating on rough faults suggest that fault roughness is necessary for realistic high-frequency radiation. However, simulations of dynamic ruptures are too expensive for routine applications. Therefore, simplified synthetic kinematic models are often used. They are usually based on rigorous statistical analysis of rupture models inferred by inversions of seismic and/or geodetic data. However, due to the limited resolution of the inversions, these models are valid only in the low-frequency range. In addition to slip, parameters such as rupture-onset time, rise time and source time functions are needed for a complete spatiotemporal characterization of the earthquake rupture. But these parameters are poorly resolved in source inversions. To obtain a physically consistent quantification of these parameters, we simulate and analyze spontaneous dynamic ruptures on rough faults. First, by analyzing the impact of fault roughness on the rupture and seismic radiation, we develop equivalent planar-fault kinematic analogues of the dynamic ruptures. Next, we investigate the spatial interdependencies between the source parameters to allow consistent modeling that emulates the observed behavior of dynamic ruptures, capturing the rough-fault effects. Based on these analyses, we formulate a framework for a pseudo-dynamic source model, physically consistent with the dynamic ruptures on rough faults.
NASA Technical Reports Server (NTRS)
Reale, Oreste; Achuthavarier, Deepthi; Fuentes, Marangelly; Putman, William M.; Partyka, Gary
2017-01-01
The National Aeronautics and Space Administration (NASA) Nature Run (NR), released for use in Observing System Simulation Experiments (OSSEs), is a 2-year long global non-hydrostatic free-running simulation at a horizontal resolution of 7 km, forced by observed sea-surface temperatures (SSTs) and sea ice, and inclusive of interactive aerosols and trace gases. This article evaluates the NR with respect to tropical cyclone (TC) activity. It is emphasized that to serve as a NR, a long-term simulation must be able to produce realistic TCs, which arise out of realistic large-scale forcings. The presence in the NR of the realistic, relevant dynamical features over the African Monsoon region and the tropical Atlantic is confirmed, along with realistic African Easterly Wave activity. The NR Atlantic TC seasons, produced with 2005 and 2006 SSTs, show interannual variability consistent with observations, with much stronger activity in 2005. An investigation of TC activity over all the other basins (eastern and western North Pacific, North and South Indian Ocean, and Australian region), together with relevant elements of the atmospheric circulation, such as, for example, the Somali Jet and westerly bursts, reveals that the model captures the fundamental aspects of TC seasons in every basin, producing realistic number of TCs with realistic tracks, life spans and structures. This confirms that the NASA NR is a very suitable tool for OSSEs targeting TCs and represents an improvement with respect to previous long simulations that have served the global atmospheric OSSE community.
Reale, Oreste; Achuthavarier, Deepthi; Fuentes, Marangelly; Putman, William M.; Partyka, Gary
2018-01-01
The National Aeronautics and Space Administration (NASA) Nature Run (NR), released for use in Observing System Simulation Experiments (OSSEs), is a 2-year long global non-hydrostatic free-running simulation at a horizontal resolution of 7 km, forced by observed sea-surface temperatures (SSTs) and sea ice, and inclusive of interactive aerosols and trace gases. This article evaluates the NR with respect to tropical cyclone (TC) activity. It is emphasized that to serve as a NR, a long-term simulation must be able to produce realistic TCs, which arise out of realistic large-scale forcings. The presence in the NR of the realistic, relevant dynamical features over the African Monsoon region and the tropical Atlantic is confirmed, along with realistic African Easterly Wave activity. The NR Atlantic TC seasons, produced with 2005 and 2006 SSTs, show interannual variability consistent with observations, with much stronger activity in 2005. An investigation of TC activity over all the other basins (eastern and western North Pacific, North and South Indian Ocean, and Australian region), together with relevant elements of the atmospheric circulation, such as, for example, the Somali Jet and westerly bursts, reveals that the model captures the fundamental aspects of TC seasons in every basin, producing realistic number of TCs with realistic tracks, life spans and structures. This confirms that the NASA NR is a very suitable tool for OSSEs targeting TCs and represents an improvement with respect to previous long simulations that have served the global atmospheric OSSE community. PMID:29674806
Reale, Oreste; Achuthavarier, Deepthi; Fuentes, Marangelly; Putman, William M; Partyka, Gary
2017-01-01
The National Aeronautics and Space Administration (NASA) Nature Run (NR), released for use in Observing System Simulation Experiments (OSSEs), is a 2-year long global non-hydrostatic free-running simulation at a horizontal resolution of 7 km, forced by observed sea-surface temperatures (SSTs) and sea ice, and inclusive of interactive aerosols and trace gases. This article evaluates the NR with respect to tropical cyclone (TC) activity. It is emphasized that to serve as a NR, a long-term simulation must be able to produce realistic TCs, which arise out of realistic large-scale forcings. The presence in the NR of the realistic, relevant dynamical features over the African Monsoon region and the tropical Atlantic is confirmed, along with realistic African Easterly Wave activity. The NR Atlantic TC seasons, produced with 2005 and 2006 SSTs, show interannual variability consistent with observations, with much stronger activity in 2005. An investigation of TC activity over all the other basins (eastern and western North Pacific, North and South Indian Ocean, and Australian region), together with relevant elements of the atmospheric circulation, such as, for example, the Somali Jet and westerly bursts, reveals that the model captures the fundamental aspects of TC seasons in every basin, producing realistic number of TCs with realistic tracks, life spans and structures. This confirms that the NASA NR is a very suitable tool for OSSEs targeting TCs and represents an improvement with respect to previous long simulations that have served the global atmospheric OSSE community.
A realistic view of the people in air traffic control.
DOT National Transportation Integrated Search
1974-12-01
An overview of research findings on air traffic controllers is presented. Results of personality, aptitude, motivation, interest, and attitude studies are considered in terms of the general pattern of characteristics found to be associated with succe...
1999-09-30
history. OBJECTIVES 1) Is the variability in a river's sediment load, observed over the last 100 years or less, adequate to provide a proxy for longer-term...experiments, small basins are able to capture in terms of textural proxies, both the natural variability associated with precipitation and temperature...as well as realistic scenarios of abrupt climate change. Open ocean basins, like the Eel River, are less likely to record the proxy record of ambient
Flocking algorithm for autonomous flying robots.
Virágh, Csaba; Vásárhelyi, Gábor; Tarcai, Norbert; Szörényi, Tamás; Somorjai, Gergő; Nepusz, Tamás; Vicsek, Tamás
2014-06-01
Animal swarms displaying a variety of typical flocking patterns would not exist without the underlying safe, optimal and stable dynamics of the individuals. The emergence of these universal patterns can be efficiently reconstructed with agent-based models. If we want to reproduce these patterns with artificial systems, such as autonomous aerial robots, agent-based models can also be used in their control algorithms. However, finding the proper algorithms and thus understanding the essential characteristics of the emergent collective behaviour requires thorough and realistic modeling of the robot and also the environment. In this paper, we first present an abstract mathematical model of an autonomous flying robot. The model takes into account several realistic features, such as time delay and locality of communication, inaccuracy of the on-board sensors and inertial effects. We present two decentralized control algorithms. One is based on a simple self-propelled flocking model of animal collective motion, the other is a collective target tracking algorithm. Both algorithms contain a viscous friction-like term, which aligns the velocities of neighbouring agents parallel to each other. We show that this term can be essential for reducing the inherent instabilities of such a noisy and delayed realistic system. We discuss simulation results on the stability of the control algorithms, and perform real experiments to show the applicability of the algorithms on a group of autonomous quadcopters. In our case, bio-inspiration works in two ways. On the one hand, the whole idea of trying to build and control a swarm of robots comes from the observation that birds tend to flock to optimize their behaviour as a group. On the other hand, by using a realistic simulation framework and studying the group behaviour of autonomous robots we can learn about the major factors influencing the flight of bird flocks.
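A minimal sketch of where the viscous friction-like alignment term enters a self-propelled flocking update is given below, alongside short-range repulsion and a speed-regulating self-propulsion term; the gains, ranges, and the omission of communication delay and sensor noise are simplifying assumptions, not the tuned values used for the quadcopter experiments.

```python
import numpy as np

# Minimal self-propelled flocking update with a viscous friction-like alignment
# term that pulls neighbouring velocities parallel to each other.
N, DT, STEPS = 20, 0.05, 400
V0 = 4.0            # preferred speed, m/s
R_COMM = 10.0       # communication/interaction range, m
C_FRICT = 0.5       # viscous friction (alignment) gain
C_REP = 2.0         # short-range repulsion gain
R_REP = 3.0         # repulsion radius, m

rng = np.random.default_rng(5)
pos = rng.uniform(0.0, 20.0, (N, 2))
vel = rng.normal(0.0, 1.0, (N, 2))

for _ in range(STEPS):
    acc = np.zeros_like(vel)
    for i in range(N):
        diff = pos - pos[i]
        dist = np.linalg.norm(diff, axis=1)
        neigh = (dist > 0) & (dist < R_COMM)
        # viscous friction-like term: relax towards neighbours' velocities
        if np.any(neigh):
            acc[i] += C_FRICT * np.mean(vel[neigh] - vel[i], axis=0)
        # short-range repulsion to avoid collisions
        close = (dist > 0) & (dist < R_REP)
        if np.any(close):
            acc[i] -= C_REP * np.sum(diff[close] / dist[close, None]**2, axis=0)
        # self-propulsion towards the preferred speed
        speed = np.linalg.norm(vel[i])
        if speed > 0:
            acc[i] += (V0 - speed) * vel[i] / speed
    vel += DT * acc
    pos += DT * vel

mean_v = np.linalg.norm(vel.mean(axis=0))
mean_speed = np.mean(np.linalg.norm(vel, axis=1))
print(f"polarization (|<v>| / <|v|>): {mean_v / mean_speed:.2f}")
```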
Towards a systematic construction of realistic D-brane models on a del Pezzo singularity
NASA Astrophysics Data System (ADS)
Dolan, Matthew J.; Krippendorf, Sven; Quevedo, Fernando
2011-10-01
A systematic approach is followed in order to identify realistic D-brane models at toric del Pezzo singularities. Requiring quark and lepton spectrum and Yukawas from D3 branes and massless hypercharge, we are led to Pati-Salam extensions of the Standard Model. Hierarchies of masses, flavour mixings and control of couplings select higher order del Pezzo singularities; minimising the Higgs sector prefers toric del Pezzos, with dP3 providing the most successful compromise. Then a supersymmetric local string model is presented with the following properties at low energies: (i) the MSSM spectrum plus a local B - L gauge field or additional Higgs fields depending on the breaking pattern, (ii) a realistic hierarchy of quark and lepton masses and (iii) realistic flavour mixing between quark and lepton families with computable CKM and PMNS matrices, and CP violation consistent with observations. In this construction, kinetic terms are diagonal and under calculational control, suppressing standard FCNC contributions. Proton decay operators of dimension 4, 5, 6 are suppressed, and gauge couplings can unify depending on the breaking scales from string scales at energies in the range 10^12-10^16 GeV, consistent with TeV soft-masses from moduli mediated supersymmetry breaking. The GUT scale model corresponds to D3 branes at dP3 with two copies of the Pati-Salam gauge symmetry SU(4) × SU(2)R × SU(2)L. D-brane instantons generate a non-vanishing μ-term. Right handed sneutrinos can break the B - L symmetry and induce a see-saw mechanism of neutrino masses and R-parity violating operators with observable low-energy implications.
NASA Technical Reports Server (NTRS)
Diak, George R.; Smith, William L.
1993-01-01
The goals of this research endeavor have been to develop a flexible and relatively complete framework for the investigation of current and future satellite data sources in numerical meteorology. In order to realistically model how satellite information might be used for these purposes, it is necessary that Observing System Simulation Experiments (OSSEs) be as complete as possible. It is therefore desirable that these experiments simulate in entirety the sequence of steps involved in bringing satellite information from the radiance level through product retrieval to a realistic analysis and forecast sequence. In this project we have worked to make this sequence realistic by synthesizing raw satellite data from surrogate atmospheres, deriving satellite products from these data and subsequently producing analyses and forecasts using the retrieved products. The accomplishments made in 1991 are presented. The emphasis was on examining atmospheric soundings and microphysical products which we expect to produce with the launch of the Advanced Microwave Sounding Unit (AMSU), slated for flight in mid 1994.
Realistic Affective Forecasting: The Role of Personality
Hoerger, Michael; Chapman, Ben; Duberstein, Paul
2016-01-01
Affective forecasting often drives decision making. Although affective forecasting research has often focused on identifying sources of error at the event level, the present investigation draws upon the ‘realistic paradigm’ in seeking to identify factors that similarly influence predicted and actual emotions, explaining their concordance across individuals. We hypothesized that the personality traits neuroticism and extraversion would account for variation in both predicted and actual emotional reactions to a wide array of stimuli and events (football games, an election, Valentine’s Day, birthdays, happy/sad film clips, and an intrusive interview). As hypothesized, individuals who were more introverted and neurotic anticipated, correctly, that they would experience relatively more unpleasant emotional reactions, and those who were more extraverted and less neurotic anticipated, correctly, that they would experience relatively more pleasant emotional reactions. Personality explained 30% of the concordance between predicted and actual emotional reactions. Findings suggest three purported personality processes implicated in affective forecasting, highlight the importance of individual-differences research in this domain, and call for more research on realistic affective forecasts. PMID:26212463
Realistic affective forecasting: The role of personality.
Hoerger, Michael; Chapman, Ben; Duberstein, Paul
2016-11-01
Affective forecasting often drives decision-making. Although affective forecasting research has often focused on identifying sources of error at the event level, the present investigation draws upon the "realistic paradigm" in seeking to identify factors that similarly influence predicted and actual emotions, explaining their concordance across individuals. We hypothesised that the personality traits neuroticism and extraversion would account for variation in both predicted and actual emotional reactions to a wide array of stimuli and events (football games, an election, Valentine's Day, birthdays, happy/sad film clips, and an intrusive interview). As hypothesised, individuals who were more introverted and neurotic anticipated, correctly, that they would experience relatively more unpleasant emotional reactions, and those who were more extraverted and less neurotic anticipated, correctly, that they would experience relatively more pleasant emotional reactions. Personality explained 30% of the concordance between predicted and actual emotional reactions. Findings suggest three purported personality processes implicated in affective forecasting, highlight the importance of individual-differences research in this domain, and call for more research on realistic affective forecasts.
EGG: Empirical Galaxy Generator
NASA Astrophysics Data System (ADS)
Schreiber, C.; Elbaz, D.; Pannella, M.; Merlin, E.; Castellano, M.; Fontana, A.; Bourne, N.; Boutsia, K.; Cullen, F.; Dunlop, J.; Ferguson, H. C.; Michałowski, M. J.; Okumura, K.; Santini, P.; Shu, X. W.; Wang, T.; White, C.
2018-04-01
The Empirical Galaxy Generator (EGG) generates fake galaxy catalogs and images with realistic positions, morphologies and fluxes from the far-ultraviolet to the far-infrared. The catalogs are generated by egg-gencat and stored in binary FITS tables (column oriented). Another program, egg-2skymaker, is used to convert the generated catalog into ASCII tables suitable for ingestion by SkyMaker (ascl:1010.066) to produce realistic high resolution images (e.g., Hubble-like), while egg-gennoise and egg-genmap can be used to generate the low resolution images (e.g., Herschel-like). These tools can be used to test source extraction codes, or to evaluate the reliability of any map-based science (stacking, dropout identification, etc.).
Environments for online maritime simulators with cloud computing capabilities
NASA Astrophysics Data System (ADS)
Raicu, Gabriel; Raicu, Alexandra
2016-12-01
This paper presents the cloud computing environments, network principles, and methods for graphical development in realistic naval simulation, naval robotics, and virtual interactions. The aim of this approach is to achieve good simulation quality in large networked environments using open-source solutions designed for educational purposes. Realistic rendering of maritime environments requires near real-time frameworks with enhanced computing capabilities during distance interactions. E-Navigation concepts coupled with the latest achievements in virtual and augmented reality will enhance the overall experience, leading to new developments and innovations. We have to deal with a multiprocessing situation using advanced technologies and distributed applications, involving remote ship scenarios and automation of ship operations.
Choosing the Right Integrator for Your Building Automation Project.
ERIC Educational Resources Information Center
Podgorski, Will
2002-01-01
Examines the prevailing definitions and responsibilities of product, network, and system integrators for building automation systems; offers a novel approach to system integration; and sets realistic expectations for the owner in terms of benefits, outcomes, and overall values. (EV)
NASA Astrophysics Data System (ADS)
Searle, Anthony; Petrachenko, Bill
2016-12-01
The VLBI Global Observing System (VGOS) has been designed to take advantage of advances in data recording speeds and storage capacity, allowing for smaller and faster antennas, wider bandwidths, and shorter observation durations. Here, schedules for a "realistic" VGOS network, frequency sequences, and expanded source lists are presented using a new source-based scheduling algorithm. The VGOS aim for continuous observations presents new operational challenges. As the source-based strategy is independent of the observing network, there are operational advantages which allow for more flexible scheduling of continuous VLBI observations. Using VieVS, simulations of several schedules are presented and compared with previous VGOS studies.
Network Centric Warfare: A Realistic Defense Alternative for Smaller Nations?
2004-06-01
organic information sources. The degree to which force entities are networked will determine the quality of information that is available to various...control processes will determine the extent that information is shared, as well as the nature and quality of the interactions that occur between and...
2014-01-01
procedures were held constant). After the period of quiet rest, the finger pulse oximeter (MedSource International, Mound, MN) was applied to the left...temperature were then recorded with pulse oximeter (Medline Industries, Inc., Mundelein, IL). Following standard guide- lines (Pickering et al., 2005
SIMULATIONS OF AEROSOLS AND PHOTOCHEMICAL SPECIES WITH THE CMAQ PLUME-IN-GRID MODELING SYSTEM
A plume-in-grid (PinG) method has been an integral component of the CMAQ modeling system and has been designed in order to realistically simulate the relevant processes impacting pollutant concentrations in plumes released from major point sources. In particular, considerable di...
Disinfection of water decreases waterborne disease. Disinfection byproducts (DBPs) are formed by the reaction of oxidizing disinfectants with inorganic and organic materials in the source water. The U.S. EPA regulates five haloacetic acid (HAA) DBPs as a mixture. The objective ...
Fuzzification of continuous-value spatial evidence for mineral prospectivity mapping
NASA Astrophysics Data System (ADS)
Yousefi, Mahyar; Carranza, Emmanuel John M.
2015-01-01
Complexities of geological processes portrayed as seemingly certain features in a map (e.g., faults) are natural sources of uncertainty in decision-making for the exploration of mineral deposits. Besides these natural sources of uncertainty, knowledge-driven (e.g., fuzzy logic) mineral prospectivity mapping (MPM) incurs further uncertainty through the subjective judgment of the analyst when there is no reliable, directly measurable value of the evidential scores corresponding to the relative importance of geological features. In this regard, analysts apply expert opinion to assess the relative importance of spatial evidence as meaningful decision support. This paper addresses the fuzzification of continuous spatial data used as proxy evidence, in order to facilitate and support fuzzy MPM in generating exploration target areas for further examination of undiscovered deposits. In addition, this paper proposes to adapt the concept of expected value to further improve fuzzy logic MPM, because the analysis of uncertain variables can be presented in terms of their expected value. The proposed modified expected value approach to MPM is not only a multi-criteria approach; it also treats the uncertainty of geological processes, as depicted by maps or spatial data, more realistically with respect to biased weighting than classified evidential maps do, because fuzzy membership scores are defined continuously and there is, for example, no need to categorize distances from evidential features into proximity classes using arbitrary intervals. The proposed continuous weighting approach, followed by integration of the weighted evidence layers using the modified expected value function, can be used efficiently in either greenfields or brownfields.
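A minimal sketch of the kind of continuous fuzzification described above is given below: a logistic membership function maps continuous evidence values (distance to faults, a geochemical index) onto fuzzy scores, which are then combined with a fuzzy gamma operator. The particular membership function, parameter values, and combination operator are illustrative assumptions, not the exact functions or the modified expected value procedure proposed in the paper.

```python
# Sketch: continuous fuzzification of spatial evidence for prospectivity mapping.
# The logistic membership function and the layer parameters below are illustrative
# assumptions, not the exact functions proposed by Yousefi & Carranza.
import numpy as np

def logistic_membership(x, inflection, slope):
    """Map a continuous evidential value x to a fuzzy score in (0, 1)."""
    return 1.0 / (1.0 + np.exp(slope * (x - inflection)))

# Example evidence values (flattened rasters): distance to faults (m), geochemical index.
dist_to_fault = np.array([50.0, 400.0, 1200.0, 2500.0])
geochem_index = np.array([0.9, 0.6, 0.3, 0.1])

# Closer to faults -> higher favourability, so membership decreases with distance.
mu_fault = logistic_membership(dist_to_fault, inflection=800.0, slope=0.004)
# Higher geochemical index -> higher favourability (negative slope flips the curve).
mu_geochem = logistic_membership(geochem_index, inflection=0.5, slope=-10.0)

# Combine layers with a fuzzy gamma operator (gamma between 0 and 1).
gamma = 0.8
product = mu_fault * mu_geochem
algebraic_sum = 1.0 - (1.0 - mu_fault) * (1.0 - mu_geochem)
prospectivity = product ** (1.0 - gamma) * algebraic_sum ** gamma
print(np.round(prospectivity, 3))
```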
Comparison of Phase-Based 3D Near-Field Source Localization Techniques for UHF RFID.
Parr, Andreas; Miesen, Robert; Vossiek, Martin
2016-06-25
In this paper, we present multiple techniques for phase-based narrowband backscatter tag localization in three-dimensional space with planar antenna arrays or synthetic apertures. Beamformer and MUSIC localization algorithms, known from near-field source localization and direction-of-arrival estimation, are applied to the 3D backscatter scenario and their performance in terms of localization accuracy is evaluated. We discuss the impact of different transceiver modes known from the literature, which evaluate different send and receive antenna path combinations for a single localization, as in multiple input multiple output (MIMO) systems. Furthermore, we propose a new Singledimensional-MIMO (S-MIMO) transceiver mode, which is especially suited for use with mobile robot systems. Monte-Carlo simulations based on a realistic multipath error model ensure spatial correlation of the simulated signals, and serve to critically appraise the accuracies of the different localization approaches. A synthetic uniform rectangular array created by a robotic arm is used to evaluate selected localization techniques. We use an Ultra High Frequency (UHF) Radiofrequency Identification (RFID) setup to compare measurements with the theory and simulation. The results show how a mean localization accuracy of less than 30 cm can be reached in an indoor environment. Further simulations demonstrate how the distance between aperture and tag affects the localization accuracy and how the size and grid spacing of the rectangular array need to be adapted to improve the localization accuracy down to orders of magnitude in the centimeter range, and to maximize array efficiency in terms of localization accuracy per number of elements.
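The sketch below illustrates the basic phase-based idea on which such localizers rest: a delay-and-sum style grid search over candidate 3D tag positions using round-trip phases at a planar array. It assumes a simple monostatic phase model and does not implement the paper's MUSIC variants, transceiver modes, or the proposed S-MIMO scheme.

```python
# Minimal sketch of phase-based 3D backscatter localization with a planar array.
# Assumes a monostatic round-trip phase model and a simple delay-and-sum search.
import numpy as np

c = 3e8
f = 868e6                      # UHF RFID carrier (Hz)
lam = c / f
k = 2 * np.pi / lam

# 6 x 6 planar array in the z = 0 plane, half-wavelength spacing.
xs = (np.arange(6) - 2.5) * lam / 2
ys = (np.arange(6) - 2.5) * lam / 2
ant = np.array([(x, y, 0.0) for x in xs for y in ys])

tag_true = np.array([0.3, -0.2, 1.5])          # metres
dist = np.linalg.norm(ant - tag_true, axis=1)
measured = np.exp(-1j * 2 * k * dist)          # round-trip phase per antenna

def steering(p):
    d = np.linalg.norm(ant - np.asarray(p), axis=1)
    return np.exp(-1j * 2 * k * d)

# Coarse 3D grid search: pick the point whose steering vector best matches the data.
grid = np.linspace(-0.6, 0.6, 25)
zs = np.linspace(0.5, 2.5, 21)
best, best_val = None, -np.inf
for gx in grid:
    for gy in grid:
        for gz in zs:
            val = np.abs(np.vdot(steering((gx, gy, gz)), measured))
            if val > best_val:
                best, best_val = np.array([gx, gy, gz]), val
print("estimated tag position:", best)
```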
He, Temple; Habib, Salman
2013-09-01
Simple dynamical systems--with a small number of degrees of freedom--can behave in a complex manner due to the presence of chaos. Such systems are most often (idealized) limiting cases of more realistic situations. Isolating a small number of dynamical degrees of freedom in a realistically coupled system generically yields reduced equations with terms that can have a stochastic interpretation. In situations where both noise and chaos can potentially exist, it is not immediately obvious how Lyapunov exponents, key to characterizing chaos, should be properly defined. In this paper, we show how to do this in a class of well-defined noise-driven dynamical systems, derived from an underlying Hamiltonian model.
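As an illustrative recipe (not the authors' formal construction), a finite-time Lyapunov exponent can be estimated for a noise-driven system by evolving twin trajectories that share the same noise realization and periodically renormalizing their separation, Benettin style. The sketch below applies this to a damped double-well (Duffing-type) oscillator with additive noise; the model and parameters are assumptions for demonstration.

```python
# Sketch: finite-time Lyapunov exponent for a noise-driven Duffing-type oscillator,
# using twin trajectories that share the same noise realization and are
# periodically renormalized (Benettin-style).
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps, renorm_every = 1e-3, 100_000, 100
d0 = 1e-8
gamma, noise_amp = 0.2, 0.3

def drift(state):
    x, v = state
    return np.array([v, x - x**3 - gamma * v])   # double-well Duffing drift

x = np.array([0.5, 0.0])
y = x + np.array([d0, 0.0])
log_sum = 0.0

for i in range(n_steps):
    dW = np.sqrt(dt) * rng.normal()
    # Euler-Maruyama step; both trajectories see the *same* noise increment.
    x = x + drift(x) * dt + np.array([0.0, noise_amp * dW])
    y = y + drift(y) * dt + np.array([0.0, noise_amp * dW])
    if (i + 1) % renorm_every == 0:
        d = np.linalg.norm(y - x)
        log_sum += np.log(d / d0)
        y = x + (y - x) * (d0 / d)               # rescale separation back to d0

lyap = log_sum / (n_steps * dt)
print(f"finite-time Lyapunov exponent ~ {lyap:.3f} per unit time")
```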
A continuous family of realistic SUSY SU(5) GUTs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bajc, Borut, E-mail: borut.bajc@ijs.si
2016-06-21
It is shown that the minimal renormalizable supersymmetric SU(5) is still realistic, provided the supersymmetry scale is at least a few tens of TeV or large R-parity violating terms are considered. In the first case the vacuum is metastable, and different consistency constraints can give a bounded allowed region in the tan β − m_susy plane. In the second case the mass-eigenstate electron (down quark) is a linear combination of the original electron (down quark) and Higgsino (heavy colour triplet), and the mass ratio of bino and wino is determined. Both limits lead to light gravitino dark matter.
A Clinically Realistic Large Animal Model of Intra-Articular Fracture
2013-10-01
PRINCIPAL INVESTIGATOR: Jessica E. Goetz, PhD. CONTRACTING ORGANIZATION: The University of Iowa... ...short-term survival study investigating the effects of therapeutic treatment which was initiated during PY3 will be completed. SUBJECT TERMS: post
Unmixing Magnetic Hysteresis Loops
NASA Astrophysics Data System (ADS)
Heslop, D.; Roberts, A. P.
2012-04-01
Magnetic hysteresis loops provide important information in rock and environmental magnetic studies. Natural samples often contain an assemblage of magnetic particles composed of components with different origins. Each component potentially carries important environmental information. Hysteresis loops, however, provide information concerning the bulk magnetic assemblage, which makes it difficult to isolate the specific contributions from different sources. For complex mineral assemblages an unmixing strategy with which to separate hysteresis loops into their component parts is therefore essential. Previous methods to unmix hysteresis data have aimed at separating individual loops into their constituent parts using libraries of type-curves thought to correspond to specific mineral types. We demonstrate an alternative approach, which rather than decomposing a single loop into monomineralic contributions, examines a collection of loops to determine their constituent source materials. These source materials may themselves be mineral mixtures, but they provide a genetically meaningful decomposition of a magnetic assemblage in terms of the processes that controlled its formation. We show how an empirically derived hysteresis mixing space can be created, without resorting to type-curves, based on the co-variation within a collection of measured loops. Physically realistic end-members, which respect the expected behaviour and symmetries of hysteresis loops, can then be extracted from the mixing space. These end-members allow the measured loops to be described as a combination of invariant parts that are assumed to represent the different sources in the mixing model. Particular attention is paid to model selection and estimating the complexity of the mixing model, specifically, how many end-members should be included. We demonstrate application of this approach using lake sediments from Butte Valley, northern California. Our method successfully separates the hysteresis loops into sources with a variety of terrigenous and authigenic origins.
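A toy version of the unmixing idea is sketched below: each measured loop is described as a non-negative combination of end-member curves, here via non-negative least squares. The tanh-shaped end-members and mixing proportions are synthetic stand-ins; the paper instead derives end-members empirically from the co-variation within the measured collection.

```python
# Sketch: describe each measured hysteresis curve as a non-negative mixture of
# end-member curves. End-members here are synthetic tanh shapes for illustration.
import numpy as np
from scipy.optimize import nnls

H = np.linspace(-1.0, 1.0, 201)                      # applied field (arbitrary units)

# Two synthetic end-members: a "soft" and a "hard" magnetic component (upper branch).
em_soft = np.tanh(H / 0.05)
em_hard = 0.6 * np.tanh((H - 0.3) / 0.25)
E = np.column_stack([em_soft, em_hard])              # (n_fields, n_endmembers)

# Synthetic "measured" curves = mixtures plus noise.
true_props = np.array([[0.8, 0.2], [0.3, 0.7], [0.5, 0.5]])
rng = np.random.default_rng(1)
loops = true_props @ E.T + 0.01 * rng.normal(size=(3, H.size))

# Non-negative least squares recovers the mixing proportions of each curve.
for i, m in enumerate(loops):
    coeffs, _ = nnls(E, m)
    print(f"loop {i}: estimated proportions = {np.round(coeffs, 2)}")
```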
Realistic anomaly-mediated supersymmetry breaking
NASA Astrophysics Data System (ADS)
Chacko, Zacharia; Luty, Markus A.; Maksymyk, Ivan; Pontón, Eduardo
2000-03-01
We consider supersymmetry breaking communicated entirely by the superconformal anomaly in supergravity. This scenario is naturally realized if supersymmetry is broken in a hidden sector whose couplings to the observable sector are suppressed by more than powers of the Planck scale, as occurs if supersymmetry is broken in a parallel universe living in extra dimensions. This scenario is extremely predictive: soft supersymmetry breaking couplings are completely determined by anomalous dimensions in the effective theory at the weak scale. Gaugino and scalar masses are naturally of the same order, and flavor-changing neutral currents are automatically suppressed. The most glaring problem with this scenario is that slepton masses are negative in the minimal supersymmetric standard model. We point out that this problem can be simply solved by coupling extra Higgs doublets to the leptons. Lepton flavor-changing neutral currents can be naturally avoided by approximate symmetries. We also describe more speculative solutions involving compositeness near the weak scale. We then turn to electroweak symmetry breaking. Adding an explicit μ term gives a value for Bμ that is too large by a factor of ~ 100. We construct a realistic model in which the μ term arises from the vacuum expectation value of a singlet field, so all weak-scale masses are directly related to m3/2. We show that fully realistic electroweak symmetry breaking can occur in this model with moderate fine-tuning.
EOID Model Validation and Performance Prediction
2002-09-30
Our long-term goal is to accurately predict the capability of the current generation of laser-based underwater imaging sensors to perform Electro-Optic Identification (EOID) against relevant targets in a variety of realistic environmental conditions. The two most prominent technologies in this area
ERIC Educational Resources Information Center
Kinnamon, Keneth
A national telephone survey indicated that audiences rated the television production of "Roots" positively in terms of the following: realistic portrayal of the people and the times; relevance for contemporary race relations; perceived emotional effect; and increased understanding of the psychology of black people. However, a comparison…
1988-06-07
barriers have been abolished, the franchising terms have been revised, a floating rate and increasingly realistic rate of exchange—though in my opinion...also be accorded permits for the establishment of private ice cream stands and ice cream parlors, snack and beer providers, fish stands, bakeries
Using a Magnetic Flux Transport Model to Predict the Solar Cycle
NASA Technical Reports Server (NTRS)
Lyatskaya, S.; Hathaway, D.; Winebarger, A.
2007-01-01
We present the results of an investigation into the use of a magnetic flux transport model to predict the amplitude of future solar cycles. Recently Dikpati, de Toma, & Gilman (2006) showed how their dynamo model could be used to accurately predict the amplitudes of the last eight solar cycles and offered a prediction for the next solar cycle - a large amplitude cycle. Cameron & Schussler (2007) found that they could reproduce this predictive skill with a simple 1-dimensional surface flux transport model - provided they used the same parameters and data as Dikpati, de Toma, & Gilman. However, when they tried incorporating the data in what they argued was a more realistic manner, they found that the predictive skill dropped dramatically. We have written our own code for examining this problem and have incorporated updated and corrected data for the source terms - the emergence of magnetic flux in active regions. We present both the model itself and our results from it - in particular our tests of its effectiveness at predicting solar cycles.
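The sketch below shows the general shape of such a 1-D surface flux transport calculation: the longitude-averaged radial field in latitude evolves under diffusion and a poleward meridional flow, with a crude bipolar active-region source term. The flow profile, diffusivity, cadence, and source recipe are illustrative assumptions only and are not the parameters or data used by the authors or by Cameron & Schussler.

```python
# Sketch: 1-D surface flux transport in latitude (diffusion + poleward meridional
# flow acting on the longitude-averaged radial field, plus a crude source term).
# All parameter values are illustrative assumptions.
import numpy as np

n_lat = 181
lat = np.linspace(-90, 90, n_lat)                # degrees
dlat = np.deg2rad(lat[1] - lat[0])
R_sun = 6.96e8                                   # m
eta = 250e6 / R_sun**2                           # diffusivity, rad^2/s (250 km^2/s)
v0 = 15.0 / R_sun                                # peak meridional flow, rad/s (15 m/s)
dt = 6 * 3600.0                                  # 6-hour time step
B = np.zeros(n_lat)                              # longitude-averaged radial field (G)

def meridional_flow(lat_deg):
    return v0 * np.sin(2 * np.deg2rad(lat_deg))  # poleward in each hemisphere

def step(B):
    dBdx = np.gradient(B, dlat)
    d2Bdx2 = np.gradient(dBdx, dlat)
    advection = -meridional_flow(lat) * dBdx
    return B + dt * (eta * d2Bdx2 + advection)

# Inject a bipolar active region near 15 deg latitude every ~27 days for 2 years.
for day in range(730):
    if day % 27 == 0:
        B += 5.0 * np.exp(-((lat - 16) / 2.0) ** 2) - 5.0 * np.exp(-((lat - 14) / 2.0) ** 2)
    for _ in range(4):                           # four 6-hour steps per day
        B = step(B)

print("polar field (N, S):", round(float(B[-1]), 3), round(float(B[0]), 3))
```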
NASA Astrophysics Data System (ADS)
Roulet, Alexandre; Nimmrichter, Stefan; Arrazola, Juan Miguel; Seah, Stella; Scarani, Valerio
2017-06-01
The triumph of heat engines is their ability to convert the disordered energy of thermal sources into useful mechanical motion. In recent years, much effort has been devoted to generalizing thermodynamic notions to the quantum regime, partly motivated by the promise of surpassing classical heat engines. Here, we instead adopt a bottom-up approach: we propose a realistic autonomous heat engine that can serve as a test bed for quantum effects in the context of thermodynamics. Our model draws inspiration from actual piston engines and is built from closed-system Hamiltonians and weak bath coupling terms. We analytically derive the performance of the engine in the classical regime via a set of nonlinear Langevin equations. In the quantum case, we perform numerical simulations of the master equation. Finally, we perform a dynamic and thermodynamic analysis of the engine's behavior for several parameter regimes in both the classical and quantum case and find that the latter exhibits a consistently lower efficiency due to additional noise.
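A minimal sketch of integrating a nonlinear Langevin equation of the kind mentioned above is given below, using an Euler-Maruyama step for a piston-like degree of freedom damped by and driven from two baths at different temperatures. The potential, couplings, and temperatures are made-up values; this is not the authors' engine Hamiltonian or their derived equations.

```python
# Sketch: Euler-Maruyama integration of a nonlinear Langevin equation for a
# piston-like degree of freedom coupled to two baths at different temperatures.
import numpy as np

rng = np.random.default_rng(3)
dt, n_steps = 1e-3, 100_000
m, k, alpha = 1.0, 1.0, 0.1            # mass, spring constant, quartic nonlinearity
gamma_h, T_h = 0.05, 2.0               # coupling to hot bath and its temperature
gamma_c, T_c = 0.05, 0.5               # coupling to cold bath and its temperature

x, v = 0.0, 0.0
xs = np.empty(n_steps)
for i in range(n_steps):
    force = -k * x - alpha * x**3 - (gamma_h + gamma_c) * v
    # Independent noise increments from each bath (fluctuation-dissipation scaling).
    noise = (np.sqrt(2 * gamma_h * T_h) * rng.normal()
             + np.sqrt(2 * gamma_c * T_c) * rng.normal())
    v += (force / m) * dt + (noise / m) * np.sqrt(dt)
    x += v * dt
    xs[i] = x

# The stationary variance of x reflects an effective temperature between T_c and T_h.
print(f"<x^2> over second half of the run: {xs[n_steps // 2:].var():.3f}")
```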
Invariant models in the inversion of gravity and magnetic fields and their derivatives
NASA Astrophysics Data System (ADS)
Ialongo, Simone; Fedi, Maurizio; Florio, Giovanni
2014-11-01
In potential field inversion problems we usually solve underdetermined systems, and realistic solutions may be obtained by introducing a depth-weighting function in the objective function. The choice of the exponent of such a power law is crucial. It was suggested to determine it from the field decay due to a single source block; alternatively, it has been defined as the structural index of the investigated source distribution. In both cases, when k-order derivatives of the potential field are considered, the depth-weighting exponent has to be increased by k with respect to that of the potential field itself, in order to obtain consistent source model distributions. We show instead that invariant and realistic source-distribution models are obtained using the same depth-weighting exponent for the magnetic field and for its k-order derivatives. A similar behavior also occurs in the gravity case. In practice we found that the depth-weighting exponent is invariant for a given source model and equal to that of the corresponding magnetic field, in the magnetic case, and of the 1st derivative of the gravity field, in the gravity case. In the case of the regularized inverse problem, with depth weighting and general constraints, the mathematical demonstration of such invariance is difficult, because of its non-linearity and of its variable form, due to the different constraints used. However, tests performed on a variety of synthetic cases seem to confirm the invariance of the depth-weighting exponent. A final consideration regards the role of the regularization parameter; we show that the regularization can severely affect the depth to the source, because the estimated depth tends to increase proportionally with the size of the regularization parameter. Hence, some care is needed in handling the combined effect of the regularization parameter and depth weighting.
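To show where a depth-weighting exponent enters a regularized inversion in practice, the sketch below sets up a toy 1-D gravity-style problem with a point-mass kernel and solves a Tikhonov system whose model norm carries a Li & Oldenburg style weight (z + z0)^(-beta/2). The kernel, grid, beta, and damping values are toy choices for illustrating the mechanics, not the configurations tested in the paper.

```python
# Sketch: depth-weighted Tikhonov inversion of a toy 1-D gravity problem, showing
# where the depth-weighting exponent beta enters. All choices are illustrative.
import numpy as np

n_obs, n_cells = 40, 60
x_obs = np.linspace(-2.0, 2.0, n_obs)                 # observation points (km)
z_cells = np.linspace(0.1, 3.0, n_cells)              # cell depths (km)

# Point-mass style kernel: vertical attraction of a unit-density cell at depth z
# below the origin, observed at horizontal offset x (physical constants absorbed).
G = np.array([[z / (x**2 + z**2) ** 1.5 for z in z_cells] for x in x_obs])

# Synthetic source: a single dense cell at ~1.5 km depth, plus noisy data.
m_true = np.zeros(n_cells)
m_true[np.argmin(np.abs(z_cells - 1.5))] = 1.0
rng = np.random.default_rng(7)
d = G @ m_true + 0.01 * rng.normal(size=n_obs)

beta, z0, mu = 2.0, 0.1, 1e-2                         # depth-weighting exponent, offset, damping
w = (z_cells + z0) ** (-beta / 2.0)                   # penalizes shallow cells more heavily
Wm = np.diag(w)

# Solve min ||G m - d||^2 + mu ||Wm m||^2 (normal equations form).
m_est = np.linalg.solve(G.T @ G + mu * (Wm.T @ Wm), G.T @ d)
print("depth of largest recovered amplitude (km):",
      round(float(z_cells[np.argmax(np.abs(m_est))]), 2))
```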
Battaglia, Maurizio; Gottsmann, J.; Carbone, D.; Fernandez, J.
2008-01-01
Time-dependent gravimetric measurements can detect subsurface processes long before magma flow leads to earthquakes or other eruption precursors. The ability of gravity measurements to detect subsurface mass flow is greatly enhanced if gravity measurements are analyzed and modeled with ground-deformation data. Obtaining the maximum information from microgravity studies requires careful evaluation of the layout of network benchmarks, the gravity environmental signal, and the coupling between gravity changes and crustal deformation. When changes in the system under study are fast (hours to weeks), as in hydrothermal systems and restless volcanoes, continuous gravity observations at selected sites can help to capture many details of the dynamics of the intrusive sources. Despite the instrumental effects, mainly caused by atmospheric temperature, results from monitoring at Mt. Etna volcano show that continuous measurements are a powerful tool for monitoring and studying volcanoes. Several analytical and numerical mathematical models can be used to fit gravity and deformation data. Analytical models offer a closed-form description of the volcanic source. In principle, this allows one to readily infer the relative importance of the source parameters. In active volcanic sites such as Long Valley caldera (California, U.S.A.) and Campi Flegrei (Italy), careful use of analytical models and high-quality data sets has produced good results. However, the simplifications that make analytical models tractable might result in misleading volcanological interpretations, particularly when the real crust surrounding the source is far from the homogeneous/isotropic assumption. Using numerical models allows consideration of more realistic descriptions of the sources and of the crust where they are located (e.g., vertical and lateral mechanical discontinuities, complex source geometries, and topography). Applications at Teide volcano (Tenerife) and Campi Flegrei demonstrate the importance of this more realistic description in gravity calculations. © 2008 Society of Exploration Geophysicists. All rights reserved.
Design of portable ultraminiature flow cytometers for medical diagnostics
NASA Astrophysics Data System (ADS)
Leary, James F.
2018-02-01
Portable microfluidic flow/image cytometry devices for measurements in the field (e.g., initial medical diagnostics) require careful design in terms of power requirements and weight to allow for realistic portability. True portability with high-throughput microfluidic systems also requires sampling systems without the need for sheath hydrodynamic focusing, both to avoid the need for sheath fluid and to enable higher volumes of actual sample, rather than sheath/sample combinations. Weight/power requirements dictate use of super-bright LEDs with top-hat excitation beam architectures and very small silicon photodiodes or nanophotonic sensors that can both be powered by small batteries. Signal-to-noise characteristics can be greatly improved by appropriately pulsing the LED excitation sources and sampling and subtracting noise in between excitation pulses. Microfluidic cytometry also requires judicious use of small sample volumes and appropriate statistical sampling by microfluidic cytometry or imaging for adequate statistical significance to permit real-time (typically in less than 15 minutes) initial medical decisions for patients in the field. This is not something conventional cytometry traditionally worries about, but it is very important for the development of small, portable microfluidic devices with small-volume throughputs. It also provides a more reasonable alternative to conventional tubes of blood when sampling geriatric and newborn patients for whom a conventional peripheral blood draw can be problematical. Instead, one or two drops of blood obtained by pin-prick should be able to provide statistically meaningful results for use in making real-time medical decisions without the need for blood fractionation, which is not realistic in the doctor's office or field.
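The pulsed-excitation noise subtraction mentioned above can be illustrated with the sketch below: detector counts are sampled during LED-on windows and in the dark gaps between pulses, and the interleaved background is subtracted cycle by cycle. The count rates and drift model are made-up numbers, not measurements from any actual device.

```python
# Sketch of pulsed-excitation background subtraction: sample during LED-on windows
# and in the dark gaps between pulses, then subtract the interleaved background.
import numpy as np

rng = np.random.default_rng(5)
n_cycles = 1000
signal_counts = 50.0                                           # fluorescence counts per on-window
ambient = 200.0 + 5.0 * np.sin(np.linspace(0, 20, n_cycles))   # slowly drifting background

on_windows = rng.poisson(ambient + signal_counts, n_cycles).astype(float)
off_windows = rng.poisson(ambient, n_cycles).astype(float)     # sampled between pulses

per_cycle = on_windows - off_windows                           # background cancels cycle by cycle
sem = per_cycle.std(ddof=1) / np.sqrt(n_cycles)
print(f"recovered signal: {per_cycle.mean():.1f} +/- {sem:.1f} counts per window")
```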
Design of point-of-care (POC) microfluidic medical diagnostic devices
NASA Astrophysics Data System (ADS)
Leary, James F.
2018-02-01
Inexpensive, portable, hand-held microfluidic flow/image cytometry devices for initial medical diagnostics at the point of initial patient contact by emergency medical personnel in the field require careful design in terms of power/weight requirements to allow for realistic portability as a hand-held, point-of-care medical diagnostics device. True portability also requires small micro-pumps for high-throughput capability. Weight/power requirements dictate use of super-bright LEDs and very small silicon photodiodes or nanophotonic sensors that can be powered by batteries. Signal-to-noise characteristics can be greatly improved by appropriately pulsing the LED excitation sources and sampling and subtracting noise in between excitation pulses. The requirements for basic computing, imaging, GPS and basic telecommunications can be simultaneously met by use of smartphone technologies, which become part of the overall device. Software for a user-interface system, limited real-time computing, real-time imaging, and offline data analysis can be accomplished through multi-platform software development systems that are well-suited to a variety of currently available cellphone technologies which already contain all of these capabilities. Microfluidic cytometry requires judicious use of small sample volumes and appropriate statistical sampling by microfluidic cytometry or imaging for adequate statistical significance to permit real-time (typically < 15 minutes) medical decisions for patients at the physician's office or real-time decision making in the field. One or two drops of blood obtained by pin-prick should be able to provide statistically meaningful results for use in making real-time medical decisions without the need for blood fractionation, which is not realistic in the field.
Diffusive deposition of aerosols in Phebus containment during FPT-2 test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kontautas, A.; Urbonavicius, E.
2012-07-01
At present, lumped-parameter codes are the main tool to investigate the complex response of a Nuclear Power Plant containment in case of an accident. Continuous development and validation of the codes are required to perform realistic investigation of the processes that determine the possible source term of radioactive products to the environment. Validation of the codes is based on the comparison of the calculated results with measurements performed in experimental facilities. The most extensive experimental program to investigate fission product release from the molten fuel, transport through the cooling circuit, and deposition in the containment is performed in the PHEBUS test facility. Test FPT-2 performed in this facility is considered for analysis of processes taking place in the containment. Earlier investigations using the COCOSYS code showed that the code could be successfully used for analysis of thermal-hydraulic processes and deposition of aerosols, but it was also noticed that diffusive deposition on the vertical walls does not fit well with the measured results. A different model for diffusive deposition is implemented in the CPA module of the ASTEC code; therefore the PHEBUS containment model was transferred from COCOSYS to ASTEC-CPA to investigate the influence of the diffusive deposition modelling. Analysis was performed using a PHEBUS containment model of 16 nodes. The calculated thermal-hydraulic parameters are in good agreement with measured results, which gives a basis for realistic simulation of aerosol transport and deposition processes. The investigations showed that the diffusive deposition model influences the aerosol deposition distribution on different surfaces in the test facility. (authors)
Retrofitting LID Practices into Existing Neighborhoods: Is It Worth It?
NASA Astrophysics Data System (ADS)
Wright, Timothy J.; Liu, Yaoze; Carroll, Natalie J.; Ahiablame, Laurent M.; Engel, Bernard A.
2016-04-01
Low-impact development (LID) practices are gaining popularity as an approach to manage stormwater close to the source. LID practices reduce infrastructure requirements and help maintain hydrologic processes similar to predevelopment conditions. Studies have shown LID practices to be effective in reducing runoff and improving water quality. However, little has been done to aid decision makers in selecting the most effective practices for their needs and budgets. The Long-Term Hydrologic Impact Assessment LID model was applied to four neighborhoods in Lafayette, Indiana using readily available data sources to compare LID practices by analyzing runoff volumes, implementation cost, and the approximate period needed to achieve payback on the investment. Depending on the LID practice and adoption level, 10-70% reductions in runoff volumes could be achieved. The cost per cubic meter of runoff reduction was highly variable depending on the LID practice and the land use to which it was applied, ranging from around $3 to almost $600. In some cases the savings from reduced runoff volumes paid back the LID practice cost with interest in less than 3 years, while in other cases it was not possible to generate a payback. Decision makers need this information to establish realistic goals and make informed decisions regarding LID practices before moving into detailed designs, thereby saving time and resources.
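The payback arithmetic described above can be sketched as below: the implementation cost is carried forward with interest and offset each year by the savings from the runoff volume avoided. All numbers in the example are made up for illustration and are not values from the Lafayette study.

```python
# Sketch of the payback arithmetic: years needed for the savings from reduced
# runoff (with the outstanding cost accruing interest) to repay an LID practice.
def payback_years(capital_cost, annual_runoff_reduction_m3, saving_per_m3,
                  interest_rate=0.03, max_years=50):
    """Return the first year in which accumulated savings repay the cost, or None."""
    balance = -capital_cost
    for year in range(1, max_years + 1):
        balance = balance * (1 + interest_rate) + annual_runoff_reduction_m3 * saving_per_m3
        if balance >= 0:
            return year
    return None          # no payback within the horizon (possible for costly practices)

# Two made-up cases: a cheap practice that pays back quickly, and one that never does.
print(payback_years(capital_cost=20_000, annual_runoff_reduction_m3=1_500, saving_per_m3=5.0))
print(payback_years(capital_cost=200_000, annual_runoff_reduction_m3=500, saving_per_m3=3.0))
```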
Optimal Search for an Astrophysical Gravitational-Wave Background
NASA Astrophysics Data System (ADS)
Smith, Rory; Thrane, Eric
2018-04-01
Roughly every 2-10 min, a pair of stellar-mass black holes merge somewhere in the Universe. A small fraction of these mergers are detected as individually resolvable gravitational-wave events by advanced detectors such as LIGO and Virgo. The rest contribute to a stochastic background. We derive the statistically optimal search strategy (producing minimum credible intervals) for a background of unresolved binaries. Our method applies Bayesian parameter estimation to all available data. Using Monte Carlo simulations, we demonstrate that the search is both "safe" and effective: it is not fooled by instrumental artifacts such as glitches and it recovers simulated stochastic signals without bias. Given realistic assumptions, we estimate that the search can detect the binary black hole background with about 1 day of design sensitivity data versus ≈40 months using the traditional cross-correlation search. This framework independently constrains the merger rate and black hole mass distribution, breaking a degeneracy present in the cross-correlation approach. The search provides a unified framework for population studies of compact binaries, which is cast in terms of hyperparameter estimation. We discuss a number of extensions and generalizations, including application to other sources (such as binary neutron stars and continuous-wave sources), simultaneous estimation of a continuous Gaussian background, and applications to pulsar timing.
NASA Technical Reports Server (NTRS)
Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.
2001-01-01
Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to the distributed nature of the electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source. The inverse problem is implemented by minimising the chi-squared per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with size between 5 and 50% of the sphere's radius. Finally, a method is devised to obtain the size of the distributed source during the cardiac cycle.
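A small sketch of the fitting procedure is given below: a single equivalent dipole is fitted to electrode potentials generated by a distributed arc of dipoles by minimizing the chi-squared per degree of freedom. For simplicity the forward model is a current dipole in an infinite homogeneous conductor rather than the bounded spherical medium of the study, and the geometry, conductivity, and noise level are assumed values.

```python
# Sketch: fit a single equivalent point dipole to electrode potentials by
# minimizing chi-squared per degree of freedom. Infinite homogeneous conductor
# forward model; all geometry and noise values are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

sigma = 0.2                                        # conductivity (S/m), assumed
electrodes = np.array([[np.cos(t), np.sin(t), z]   # crude "torso" ring of 24 electrodes
                       for z in (-0.1, 0.0, 0.1)
                       for t in np.linspace(0, 2 * np.pi, 8, endpoint=False)])

def dipole_potential(r_dip, p, r_obs):
    d = r_obs - r_dip
    dist = np.linalg.norm(d, axis=1)
    return (d @ p) / (4 * np.pi * sigma * dist**3)

# "Distributed" source: several small dipoles along an arc, summed at the electrodes.
arc = np.linspace(-0.3, 0.3, 7)
v_meas = sum(dipole_potential(np.array([0.05, a, 0.0]), np.array([0.0, 1e-6, 0.0]), electrodes)
             for a in arc)
noise_sd = 1e-7
v_meas = v_meas + np.random.default_rng(0).normal(0, noise_sd, v_meas.size)

def chi2(params):
    r_dip, p = params[:3], params[3:] * 1e-6       # moment parametrized in microA*m
    resid = (dipole_potential(r_dip, p, electrodes) - v_meas) / noise_sd
    return np.sum(resid**2) / (v_meas.size - 6)    # chi^2 per degree of freedom

x0 = np.array([0.0, 0.0, 0.0, 0.0, 5.0, 0.0])
fit = minimize(chi2, x0, method="Nelder-Mead", options={"maxiter": 20000, "xatol": 1e-9})
print("equivalent dipole location:", np.round(fit.x[:3], 3))
```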
Effective Management of a Volunteer Corps.
ERIC Educational Resources Information Center
Tedrick, Ted; And Others
1984-01-01
Prior to improving volunteer services, a leisure agency should have a realistic expectation of the desired accomplishments, both in individual and organizational terms. A systems approach to volunteer management is discussed that includes the components of planning and coordination; recruitment and orientation; training and placement; and…
EOID System Model Validation, Metrics, and Synthetic Clutter Generation
2003-09-30
Our long-term goal is to accurately predict the capability of the current generation of laser-based underwater imaging sensors to perform Electro-Optic Identification (EOID) against relevant targets in a variety of realistic environmental conditions. The models will predict the impact of
Peritoneal injection is a major route for chemical introduction into fish for toxicological studies. This procedure, however, causes rapid exposure to toxicants at levels which aren't environmentally realistic. Long-term studies to determine effects of estrogenic chemicals on fis...
NASA Technical Reports Server (NTRS)
Baker, Donald J.
1989-01-01
Part of the results of a U.S. Army/NASA-Langley sponsored research program to establish the long-term effects of realistic ground-based exposure on advanced composite materials is presented. Residual strengths and moisture absorption as a function of exposure time and exposure location are reported for four different composite material systems that were exposed for five years on the North American continent.
2014-07-01
useful to them before they applied to OCS, and 86% of candidates saying they would refer someone else to the RJP. Officers...Candidate School," or AccessOCS, used qualitative methods (Oliver, Ardison, Russell, & Babin, 2010) to (a) identify and describe OCS applicants in terms...job previews (RJPs) that would provide OCS applicants with useful information in a single, but comprehensive document to facilitate the accessioning
Exploring the Uncanny Valley to Find the Edge of Play
ERIC Educational Resources Information Center
Eberle, Scott G.
2009-01-01
Play often rewards us with a thrill or a sense of wonder. But, just over the edge of play, uncanny objects like dolls, automata, robots, and realistic animations may become monstrous rather than marvelous. Drawing from diverse sources, literary evidence, psychological and psychoanalytic theory, new insights in neuroscience, marketing literature,…
System Measures Thermal Noise In A Microphone
NASA Technical Reports Server (NTRS)
Zuckerwar, Allan J.; Ngo, Kim Chi T.
1994-01-01
Vacuum provides acoustic isolation from the environment. A system for measuring the thermal noise of a microphone and its preamplifier eliminates some sources of error found in older systems. It includes an isolation vessel and an exterior suspension which, acting together, enable measurement of thermal noise under realistic conditions while providing superior vibrational and acoustical isolation. The system yields more accurate measurements of thermal noise.
Renewable Energy Can Help Reduce Oil Dependency
Arvizu, Dan
2017-12-21
In a speech to the Economic Club of Kansas City on June 23, 2010, NREL Director Dan Arvizu takes a realistic look at how renewable energy can help reduce America's dependence on oil, pointing out that the country gets as much energy from renewable sources now as it does from offshore oil production.
Integration of Geodata in Documenting Castle Ruins
NASA Astrophysics Data System (ADS)
Delis, P.; Wojtkowska, M.; Nerc, P.; Ewiak, I.; Lada, A.
2016-06-01
Textured three-dimensional models are currently one of the standard methods of representing the results of photogrammetric work. A realistic 3D model combines the geometrical relations between the structure's elements with realistic textures of each of its elements. Data used to create 3D models of structures can be derived from many different sources. The most commonly used tools for documentation purposes are the digital camera and, nowadays, terrestrial laser scanning (TLS). Integration of data acquired from different sources allows modelling and visualization of 3D models of historical structures. An additional aspect of data integration is the possibility of filling in missing points, for example in point clouds. The paper shows the possibility of integrating data from terrestrial laser scanning with digital imagery and presents an analysis of the accuracy of the methods. The paper describes results obtained from raw data consisting of a point cloud measured with terrestrial laser scanning acquired from a Leica ScanStation2 and digital imagery taken with a Kodak DCS Pro 14N camera. The studied structure is the ruins of the Ilza castle in Poland.
Modeling Long-Term Fluvial Incision : Shall we Care for the Details of Short-Term Fluvial Dynamics?
NASA Astrophysics Data System (ADS)
Lague, D.; Davy, P.
2008-12-01
Fluvial incision laws used in numerical models of coupled climate, erosion and tectonics systems are mainly based on the family of stream power laws, for which the rate of local erosion E is a power function of the topographic slope S and the local mean discharge Q: E = K Q^m S^n. The exponents m and n are generally taken as (0.35, 0.7) or (0.5, 1), and K is chosen such that the predicted topographic elevation, given the prevailing rates of precipitation and tectonics, stays within realistic values. The resulting topographies are reasonably realistic, and the coupled system dynamics behaves broadly as expected: more precipitation induces increased erosion and localization of the deformation. Yet, if we now focus on smaller-scale fluvial dynamics (the reach scale), recent advances have suggested that discharge variability, channel width dynamics or sediment flux effects may play a significant role in controlling incision rates. These are not factored into the simple stream power law model. In this work, we study how these short-term details propagate into long-term incision dynamics within the framework of surface/tectonics coupled numerical models. To upscale the short-term dynamics to geological timescales, we use a numerical model of a trapezoidal river in which vertical and lateral incision processes are computed from fluid shear stress at a daily timescale, and sediment transport and protection effects are factored in, as well as a variable discharge. We show that the stream power law model might still be a valid model, but that as soon as realistic effects are included, such as a threshold for sediment transport, variable discharge and dynamic width, the resulting exponents m and n can be as high as 2 and 4. This high non-linearity has a profound consequence on the sensitivity of fluvial relief to incision rate. We also show that additional complexity does not systematically translate into more non-linear behaviour. For instance, considering only a dynamical width without discharge variability does not induce a significant difference in the predicted long-term incision law and scaling of relief with incision rate at steady state. We conclude that the simple stream power law models currently in use are false, and that details of short-term fluvial dynamics must make their way into long-term evolution models to avoid oversimplifying the coupled dynamics between erosion, tectonics and climate.
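The effect of a detachment threshold combined with discharge variability on the long-term mean of E = K Q^m S^n can be illustrated with the Monte Carlo sketch below. The discharge distribution, threshold, and constants are assumptions chosen for demonstration and are not the calibrated model of the study.

```python
# Sketch: how a threshold plus discharge variability changes long-term mean
# incision relative to the plain stream power law E = K * Q**m * S**n.
import numpy as np

rng = np.random.default_rng(11)
K, m, n = 1e-4, 0.5, 1.0
slope = 0.02
mean_Q = 10.0                                     # long-term mean discharge

# Heavy-tailed daily discharges with the chosen mean (lognormal for illustration).
sigma = 1.0
Q_daily = rng.lognormal(mean=np.log(mean_Q) - sigma**2 / 2, sigma=sigma, size=100_000)

def incision(Q, threshold=0.0):
    e = K * Q**m * slope**n - threshold
    return np.maximum(e, 0.0)                     # no erosion below the threshold

# Plain stream power law evaluated at the mean discharge:
E_mean_Q = K * mean_Q**m * slope**n
# Long-term average over the variable discharge record, with and without a threshold:
E_no_thresh = incision(Q_daily).mean()
E_thresh = incision(Q_daily, threshold=0.8 * E_mean_Q).mean()

print(f"E at mean Q        : {E_mean_Q:.2e}")
print(f"E, variable Q      : {E_no_thresh:.2e}")
print(f"E, variable Q + thr: {E_thresh:.2e}")
```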
Numerical study of the effect of earth tides on recurring short-term slow slip events
NASA Astrophysics Data System (ADS)
Matsuzawa, T.; Tanaka, Y.; Shibazaki, B.
2017-12-01
Short-term slow slip events (SSEs) in the Nankai region are affected by earth tides (e.g., Nakata et al., 2008; Ide and Tanaka, 2014; Yabe et al., 2015). The effect of tidal stress on the SSEs has also been examined numerically (e.g., Hawthorne and Rubin, 2013). In our previous study (Matsuzawa et al., 2017, JpGU-AGU), we numerically simulated SSEs in the Shikoku region, and reported that tidal stress makes the variance of recurrence intervals of SSEs smaller in relatively isolated SSE regions. However, the reason for such stable recurrence was not clear. In this study, we examine the tidal effect on short-term SSEs based on a flat plate model and a realistic plate model (e.g., Matsuzawa et al., 2013, GRL). We adopt a rate- and state-dependent friction law (RS-law) with cutoff velocities as in our previous studies (Matsuzawa et al., 2013). We assume that the (a-b) value in the RS-law is negative within the short-term SSE region, and positive outside the region. In the flat plate model, the short-term SSE region is a circular patch with a radius of 6 km. In the realistic plate model, the short-term SSE region is based on the actual distribution of low-frequency tremor. Low effective normal stress is assumed at the depth of SSEs. Calculating the stress change by earth tides as in Yabe et al. (2015), we examine the stress perturbation by two different earth tides, with semidiurnal (M2) and fortnightly (Mf) periods. In the flat plate case, the amplitude of SSEs becomes smaller just after the slip over the whole simulated area. Recurring SSEs become clear again within one year in the case with tides (M2 or Mf), while the recurrence becomes clear only after seven years in the case without tides. Interestingly, the effect of the Mf tide is similar to that of the M2 tide, even though the amplitude of the Mf tide (0.01 kPa) is two orders of magnitude smaller than that of the M2 tide. In the realistic plate model of Shikoku, clear recurrence of short-term SSEs is found earlier than in the case without tides, after the occurrence of long-term SSEs. These results suggest that stress perturbation by earth tides makes SSEs more episodic, even in situations where the loading in the surrounding area tends to cause temporarily stable sliding.
NASA Astrophysics Data System (ADS)
Holmstrup, Martin; Damgaard, Christian; Schmidt, Inger K.; Arndal, Marie F.; Beier, Claus; Mikkelsen, Teis N.; Ambus, Per; Larsen, Klaus S.; Pilegaard, Kim; Michelsen, Anders; Andresen, Louise C.; Haugwitz, Merian; Bergmark, Lasse; Priemé, Anders; Zaitsev, Andrey S.; Georgieva, Slavka; Dam, Marie; Vestergård, Mette; Christensen, Søren
2017-01-01
In a dry heathland ecosystem we manipulated temperature (warming), precipitation (drought) and atmospheric concentration of CO2 in a full-factorial experiment in order to investigate changes in below-ground biodiversity as a result of future climate change. We investigated the responses in community diversity of nematodes, enchytraeids, collembolans and oribatid mites at two and eight years of manipulations. We used a structural equation modelling (SEM) approach analyzing the three manipulations, soil moisture and temperature, and seven soil biological and chemical variables. The analysis revealed a persistent and positive effect of elevated CO2 on litter C:N ratio. After two years of treatment, the fungi to bacteria ratio was increased by warming, and the diversities within oribatid mites, collembolans and nematode groups were all affected by elevated CO2 mediated through increased litter C:N ratio. After eight years of treatment, however, the CO2-increased litter C:N ratio did not influence the diversity in any of the four fauna groups. The number of significant correlations between treatments, food source quality, and soil biota diversities was reduced from six to three after two and eight years, respectively. These results suggest a remarkable resilience within the soil biota against global climate change treatments in the long term.
Holmstrup, Martin; Damgaard, Christian; Schmidt, Inger K.; Arndal, Marie F.; Beier, Claus; Mikkelsen, Teis N.; Ambus, Per; Larsen, Klaus S.; Pilegaard, Kim; Michelsen, Anders; Andresen, Louise C.; Haugwitz, Merian; Bergmark, Lasse; Priemé, Anders; Zaitsev, Andrey S.; Georgieva, Slavka; Dam, Marie; Vestergård, Mette; Christensen, Søren
2017-01-01
In a dry heathland ecosystem we manipulated temperature (warming), precipitation (drought) and atmospheric concentration of CO2 in a full-factorial experiment in order to investigate changes in below-ground biodiversity as a result of future climate change. We investigated the responses in community diversity of nematodes, enchytraeids, collembolans and oribatid mites at two and eight years of manipulations. We used a structural equation modelling (SEM) approach analyzing the three manipulations, soil moisture and temperature, and seven soil biological and chemical variables. The analysis revealed a persistent and positive effect of elevated CO2 on litter C:N ratio. After two years of treatment, the fungi to bacteria ratio was increased by warming, and the diversities within oribatid mites, collembolans and nematode groups were all affected by elevated CO2 mediated through increased litter C:N ratio. After eight years of treatment, however, the CO2-increased litter C:N ratio did not influence the diversity in any of the four fauna groups. The number of significant correlations between treatments, food source quality, and soil biota diversities was reduced from six to three after two and eight years, respectively. These results suggest a remarkable resilience within the soil biota against global climate change treatments in the long term. PMID:28120893
Holmstrup, Martin; Damgaard, Christian; Schmidt, Inger K; Arndal, Marie F; Beier, Claus; Mikkelsen, Teis N; Ambus, Per; Larsen, Klaus S; Pilegaard, Kim; Michelsen, Anders; Andresen, Louise C; Haugwitz, Merian; Bergmark, Lasse; Priemé, Anders; Zaitsev, Andrey S; Georgieva, Slavka; Dam, Marie; Vestergård, Mette; Christensen, Søren
2017-01-25
In a dry heathland ecosystem we manipulated temperature (warming), precipitation (drought) and atmospheric concentration of CO2 in a full-factorial experiment in order to investigate changes in below-ground biodiversity as a result of future climate change. We investigated the responses in community diversity of nematodes, enchytraeids, collembolans and oribatid mites at two and eight years of manipulations. We used a structural equation modelling (SEM) approach analyzing the three manipulations, soil moisture and temperature, and seven soil biological and chemical variables. The analysis revealed a persistent and positive effect of elevated CO2 on litter C:N ratio. After two years of treatment, the fungi to bacteria ratio was increased by warming, and the diversities within oribatid mites, collembolans and nematode groups were all affected by elevated CO2 mediated through increased litter C:N ratio. After eight years of treatment, however, the CO2-increased litter C:N ratio did not influence the diversity in any of the four fauna groups. The number of significant correlations between treatments, food source quality, and soil biota diversities was reduced from six to three after two and eight years, respectively. These results suggest a remarkable resilience within the soil biota against global climate change treatments in the long term.
Development of an epiphyte indicator of nutrient enrichment ...
Metrics of epiphyte load on macrophytes were evaluated for use as quantitative biological indicators for nutrient impacts in estuarine waters, based on review and analysis of the literature on epiphytes and macrophytes, primarily seagrasses, but including some brackish and freshwater rooted macrophyte species. An approach is presented that empirically derives threshold epiphyte loads which are likely to cause specified levels of decrease in macrophyte response metrics such as biomass, shoot density, percent cover, production and growth. Data from 36 studies of 10 macrophyte species were pooled to derive relationships between epiphyte load and -25 and -50% seagrass response levels, which are proposed as the primary basis for establishment of critical threshold values. Given multiple sources of variability in the response data, threshold ranges based on the range of values falling between the median and the 75th quantile of observations at a given seagrass response level are proposed rather than single critical point values. Four epiphyte load threshold categories - low, moderate, high, and very high - are proposed. Comparison of the epiphyte loads associated with 25 and 50% reductions in light to macrophytes suggests that the threshold ranges are realistic both in terms of the principal mechanism of impact to macrophytes and in terms of the magnitude of the resultant impacts expressed by the macrophytes. Some variability in response levels was observed among
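The proposed threshold-range construction reduces to simple quantile arithmetic, sketched below: pool the epiphyte loads observed at a given response level and report the range between their median and 75th quantile. The load values in the example are fabricated, not data from the 36 pooled studies.

```python
# Sketch of the threshold-range derivation: pool the epiphyte loads observed at a
# given seagrass response level (here -25%) and report the median-to-75th-quantile range.
import numpy as np

# Made-up epiphyte loads (e.g., g epiphyte per g seagrass) associated with a 25%
# reduction in a seagrass response metric, pooled across hypothetical studies.
loads_at_25pct_response = np.array([0.4, 0.7, 0.9, 1.1, 1.3, 1.6, 2.0, 2.4, 3.1])

lo = np.median(loads_at_25pct_response)
hi = np.quantile(loads_at_25pct_response, 0.75)
print(f"threshold range for the -25% response level: {lo:.2f} to {hi:.2f}")
```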
AEDT: A new concept for ecological dynamics in the ever-changing world.
Chesson, Peter
2017-05-01
The important concept of equilibrium has always been controversial in ecology, but a new, more general concept, an asymptotic environmentally determined trajectory (AEDT), overcomes many concerns with equilibrium by realistically incorporating long-term climate change while retaining much of the predictive power of a stable equilibrium. A population or ecological community is predicted to approach its AEDT, which is a function of time reflecting environmental history and biology. The AEDT invokes familiar questions and predictions but in a more realistic context in which consideration of past environments and a future changing profoundly due to human influence becomes possible. Strong applications are also predicted in population genetics, evolution, earth sciences, and economics.
NASA Astrophysics Data System (ADS)
Ingley, Spencer J.; Rahmani Asl, Mohammad; Wu, Chengde; Cui, Rongfeng; Gadelhak, Mahmoud; Li, Wen; Zhang, Ji; Simpson, Jon; Hash, Chelsea; Butkowski, Trisha; Veen, Thor; Johnson, Jerald B.; Yan, Wei; Rosenthal, Gil G.
2015-12-01
Experimental approaches to studying behaviors based on visual signals are ubiquitous, yet these studies are limited by the difficulty of combining realistic models with the manipulation of signals in isolation. Computer animations are a promising way to break this trade-off. However, animations are often prohibitively expensive and difficult to program, thus limiting their utility in behavioral research. We present anyFish 2.0, a user-friendly platform for creating realistic animated 3D fish. anyFish 2.0 dramatically expands anyFish's utility by allowing users to create animations of members of several groups of fish from model systems in ecology and evolution (e.g., sticklebacks, Poeciliids, and zebrafish). The visual appearance and behaviors of the model can easily be modified. We have added several features that facilitate more rapid creation of realistic behavioral sequences. anyFish 2.0 provides a powerful tool that will be of broad use in animal behavior and evolution and serves as a model for transparency, repeatability, and collaboration.
NASA Astrophysics Data System (ADS)
Gao, Siwen; Rajendran, Mohan Kumar; Fivel, Marc; Ma, Anxin; Shchyglo, Oleg; Hartmaier, Alexander; Steinbach, Ingo
2015-10-01
Three-dimensional discrete dislocation dynamics (DDD) simulations in combination with the phase-field method are performed to investigate the influence of different realistic Ni-base single crystal superalloy microstructures with the same volume fraction of {γ\\prime} precipitates on plastic deformation at room temperature. The phase-field method is used to generate realistic microstructures as the boundary conditions for DDD simulations in which a constant high uniaxial tensile load is applied along different crystallographic directions. In addition, the lattice mismatch between the γ and {γ\\prime} phases is taken into account as a source of internal stresses. Due to the high antiphase boundary energy and the rare formation of superdislocations, precipitate cutting is not observed in the present simulations. Therefore, the plastic deformation is mainly caused by dislocation motion in γ matrix channels. From a comparison of the macroscopic mechanical response and the dislocation evolution for different microstructures in each loading direction, we found that, for a given {γ\\prime} phase volume fraction, the optimal microstructure should possess narrow and homogeneous γ matrix channels.
ERIC Educational Resources Information Center
Fay, Temple H.
2012-01-01
Quadratic friction involves a discontinuous damping term in equations of motion in order that the frictional force always opposes the direction of the motion. Perhaps for this reason this topic is usually omitted from beginning texts in differential equations and physics. However, quadratic damping is more realistic than viscous damping in many…
Dynamics at Intermediate Time Scales and Management of Ecological Populations
2017-05-10
thinking about the importance of transients is to recognize the importance of serial autocorrelation in time of forcing terms over realistic ecological time...rich areas helps produce divergent home range responses between individuals from different age classes. This model has broad applications for
Depigmented skin and phantom color measurements for realistic prostheses.
Tanner, Paul; Leachman, Sancy; Boucher, Kenneth; Ozçelik, Tunçer Burak
2014-02-01
The purpose of this study was to test the hypothesis that regardless of human skin phototype, areas of depigmented skin, as seen in vitiligo, are optically indistinguishable among skin phototypes. The average of the depigmented skin measurements can be used to develop the base color of realistic prostheses. Data was analyzed from 20 of 32 recruited vitiligo study participants. Diffuse reflectance spectroscopy measurements were made from depigmented skin and adjacent pigmented skin, then compared with 66 pigmented polydimethylsiloxane phantoms to determine pigment concentrations in turbid media for making realistic facial prostheses. The Area Under spectral intensity Curve (AUC) was calculated for average spectroscopy measurements of pigmented sites in relation to skin phototype (P = 0.0505) and depigmented skin in relation to skin phototype (P = 0.59). No significant relationship exists between skin phototypes and depigmented skin spectroscopy measurements. The average of the depigmented skin measurements (AUC 19,129) was the closest match to phantom 6.4 (AUC 19,162). Areas of depigmented skin are visibly indistinguishable per skin phototype, yet spectrometry shows that depigmented skin measurements varied and were unrelated to skin phototype. Possible sources of optical variation of depigmented skin include age, body site, blood flow, quantity/quality of collagen, and other chromophores. The average of all depigmented skin measurements can be used to derive the pigment composition and concentration for realistic facial prostheses. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
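The AUC summary statistic used above is just the integrated spectral intensity; the sketch below computes it with the trapezoid rule for a fabricated reflectance spectrum (the wavelength grid and intensities are made up and are not the study's measurements, whose AUC values are on a different scale).

```python
# Sketch: area under a diffuse reflectance spectral intensity curve (AUC) via the
# trapezoid rule. Wavelengths and intensities are fabricated for illustration.
import numpy as np

wavelengths_nm = np.linspace(400, 700, 61)
# A made-up, smoothly rising reflectance spectrum.
intensity = 0.2 + 0.5 / (1.0 + np.exp(-(wavelengths_nm - 550) / 40.0))

# Trapezoid rule: sum of average adjacent intensities times wavelength spacing.
auc = np.sum((intensity[1:] + intensity[:-1]) / 2.0 * np.diff(wavelengths_nm))
print(f"AUC over 400-700 nm: {auc:.1f} (intensity x nm)")
```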
Powerful model for the point source sky: Far-ultraviolet and enhanced midinfrared performance
NASA Technical Reports Server (NTRS)
Cohen, Martin
1994-01-01
I report further developments of the Wainscoat et al. (1992) model originally created for the point source infrared sky. The already detailed and realistic representation of the Galaxy (disk, spiral arms and local spur, molecular ring, bulge, spheroid) has been improved, guided by CO surveys of local molecular clouds, and by the inclusion of a component to represent Gould's Belt. The newest version of the model is very well validated by Infrared Astronomical Satellite (IRAS) source counts. A major new aspect is the extension of the same model down to the far ultraviolet. I compare predicted and observed far-ultraviolet source counts from the Apollo 16 'S201' experiment (1400 Å) and the TD1 satellite (for the 1565 Å band).
The foodscape: classification and field validation of secondary data sources.
Lake, Amelia A; Burgoine, Thomas; Greenhalgh, Fiona; Stamp, Elaine; Tyrrell, Rachel
2010-07-01
The aims were to: develop a food environment classification tool and to test the acceptability and validity of three secondary sources of food environment data within a defined urban area of Newcastle-Upon-Tyne, using a field validation method. A 21 point (with 77 sub-categories) classification tool was developed. The fieldwork recorded 617 establishments selling food and/or food products. The sensitivity analysis of the secondary sources against fieldwork for the Newcastle City Council data was good (83.6%), while Yell.com and the Yellow Pages were low (51.2% and 50.9%, respectively). To improve the quality of secondary data, multiple sources should be used in order to achieve a realistic picture of the foodscape. 2010 Elsevier Ltd. All rights reserved.
An infrared sky model based on the IRAS point source data
NASA Technical Reports Server (NTRS)
Cohen, Martin; Walker, Russell; Wainscoat, Richard; Volk, Kevin; Walker, Helen; Schwartz, Deborah
1990-01-01
A detailed model for the infrared point source sky is presented that comprises geometrically and physically realistic representations of the galactic disk, bulge, spheroid, spiral arms, molecular ring, and absolute magnitudes. The model was guided by a parallel Monte Carlo simulation of the Galaxy. The content of the galactic source table constitutes an excellent match to the 12 micrometer luminosity function in the simulation, as well as the luminosity functions at V and K. Models are given for predicting the density of asteroids to be observed, and the diffuse background radiance of the Zodiacal cloud. The model can be used to predict the character of the point source sky expected for observations from future infrared space experiments.
Global and Regional Impacts of HONO on the Chemical Composition of Clouds and Aerosols
NASA Technical Reports Server (NTRS)
Elshorbany, Y. F.; Crutzen, P. J.; Steil, B.; Pozzer, A.; Tost, H.; Lelieveld, J.
2014-01-01
Recently, realistic simulation of nitrous acid (HONO) based on the HONO/NOx ratio of 0.02 was found to have a significant impact on the global budgets of HOx (OH + HO2) and gas phase oxidation products in polluted regions, especially in winter when other photolytic sources are of minor importance. It has been reported that chemistry-transport models underestimate sulphate concentrations, mostly during winter. Here we show that simulating realistic HONO levels can significantly enhance aerosol sulphate (S(VI)) due to the increased formation of H2SO4. Even though in-cloud aqueous phase oxidation of dissolved SO2 (S(IV)) is the main source of S(VI), it appears that HONO-related enhancement of H2O2 does not significantly affect sulphate because of the predominantly S(IV)-limited conditions, except over eastern Asia. Nitrate is also increased via enhanced gaseous HNO3 formation and N2O5 hydrolysis on aerosol particles. Ammonium nitrate is enhanced in ammonia-rich regions but not under ammonia-limited conditions. Furthermore, particle number concentrations are also higher, accompanied by the transfer from hydrophobic to hydrophilic aerosol modes. This implies a significant impact on the particle lifetime and cloud nucleating properties. The HONO-induced enhancements of all species studied are relatively strong in winter though negligible in summer. Simulating realistic HONO levels is found to improve the model-measurement agreement of sulphate aerosols, most apparent over the US. Our results underscore the importance of HONO for the atmospheric oxidizing capacity and corroborate the central role of cloud chemical processing in S(IV) formation.
A Hierarchy of Transport Approximations for High Energy Heavy (HZE) Ions
NASA Technical Reports Server (NTRS)
Wilson, John W.; Lamkin, Stanley L.; Hamidullah, Farhat; Ganapol, Barry D.; Townsend, Lawrence W.
1989-01-01
The transport of high energy heavy (HZE) ions through bulk materials is studied, neglecting the energy dependence of the nuclear cross sections. A three-term perturbation expansion appears to be adequate for most practical applications for which penetration depths are less than 30 g per sq cm of material. The differential energy flux is found for monoenergetic beams and for realistic ion beam spectral distributions. An approximate formalism is given to estimate higher-order terms.
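A minimal sketch of the kind of formalism involved, assuming the standard straight-ahead approximation with energy-independent nuclear cross sections; the exact notation of the paper may differ.

```latex
% Straight-ahead HZE transport with energy-independent nuclear cross sections
% (a standard simplified form, not necessarily the authors' exact equation):
\[
  \left[\frac{\partial}{\partial x} + \sigma_j\right]\phi_j(x,E)
    \;=\; \sum_{k>j} \sigma_{jk}\,\phi_k(x,E),
\]
% with the flux approximated by a perturbation (collision) expansion truncated
% at three terms,
\[
  \phi_j \;\approx\; \phi_j^{(0)} + \phi_j^{(1)} + \phi_j^{(2)},
\]
% reported as adequate for penetration depths below roughly 30 g/cm^2.
```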
Management of long term sickness absence: a systematic realist review.
Higgins, Angela; O'Halloran, Peter; Porter, Sam
2012-09-01
The increasing impact and costs of long term sickness absence have been well documented. However, the diversity and complexity of interventions and of the contexts in which these take place makes a traditional review problematic. Therefore, we undertook a systematic realist review to identify the dominant programme theories underlying best practice, to assess the evidence for these theories, and to throw light on important enabling or disabling contextual factors. A search of the scholarly literature from 1950 to 2011 identified 5,576 articles, of which 269 formed the basis of the review. We found that the dominant programme theories in relation to effective management related to: early intervention or referral by employers; having proactive organisational procedures; good communication and cooperation between stakeholders; and workplace-based occupational rehabilitation. Significant contextual factors were identified as the level of support for interventions from top management, the size and structure of the organisation, the level of financial and organisational investment in the management of long-term sickness absence, and the quality of relationships between managers and staff. Consequently, those with responsibility for managing absence should bear in mind the contextual factors that are likely to have an impact on interventions, and do what they can to ensure stakeholders have at least a mutual understanding (if not a common purpose) in relation to their perceptions of interventions, goals, culture and practice in the management of long term sickness absence.
Meads, C; Nyssen, O P; Wong, G; Steed, L; Bourke, L; Ross, C A; Hayman, S; Field, V; Lord, J; Greenhalgh, T; Taylor, S J C
2014-01-01
Introduction: Long-term medical conditions (LTCs) cause reduced health-related quality of life and considerable health service expenditure. Writing therapy has potential to improve physical and mental health in people with LTCs, but its effectiveness is not established. This project aims to establish the clinical and cost-effectiveness of therapeutic writing in LTCs by systematic review and economic evaluation, and to evaluate context and mechanisms by which it might work, through realist synthesis. Methods: Included are any comparative study of therapeutic writing compared with no writing, waiting list, attention control or placebo writing in patients with any diagnosed LTCs that report at least one of the following: relevant clinical outcomes; quality of life; health service use; psychological, behavioural or social functioning; adherence or adverse events. Searches will be conducted in the main medical databases including MEDLINE, EMBASE, PsycINFO, The Cochrane Library and Science Citation Index. For the realist review, further purposive and iterative searches through snowballing techniques will be undertaken. Inclusions, data extraction and quality assessment will be in duplicate with disagreements resolved through discussion. Quality assessment will include using Grading of Recommendations Assessment, Development and Evaluation (GRADE) criteria. Data synthesis will be narrative and tabular with meta-analysis where appropriate. De novo economic modelling will be attempted in one clinical area if sufficient evidence is available and performed according to the National Institute for Health and Care Excellence (NICE) reference case. PMID:24549165
Realistic thermodynamic and statistical-mechanical measures for neural synchronization.
Kim, Sang-Yoon; Lim, Woochang
2014-04-15
Synchronized brain rhythms, associated with diverse cognitive functions, have been observed in electrical recordings of brain activity. Neural synchronization may be well described by using the population-averaged global potential VG in computational neuroscience. The time-averaged fluctuation of VG plays the role of a "thermodynamic" order parameter O used for describing the synchrony-asynchrony transition in neural systems. Population spike synchronization may be well visualized in the raster plot of neural spikes. The degree of neural synchronization seen in the raster plot is well measured in terms of a "statistical-mechanical" spike-based measure Ms introduced by considering the occupation and the pacing patterns of spikes. The global potential VG is also used to give a reference global cycle for the calculation of Ms. Hence, VG becomes an important collective quantity because it is associated with the calculation of both O and Ms. However, it is practically difficult to obtain VG directly in real experiments. To overcome this difficulty, instead of VG, we employ the instantaneous population spike rate (IPSR), which can be obtained in experiments, and develop realistic thermodynamic and statistical-mechanical measures, based on the IPSR, to make a practical characterization of neural synchronization in both computational and experimental neuroscience. Particularly, more accurate characterization of weak sparse spike synchronization can be achieved in terms of the realistic statistical-mechanical IPSR-based measure, in comparison with the conventional measure based on VG. Copyright © 2014. Published by Elsevier B.V.
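A minimal sketch of how an IPSR-based order parameter could be computed from a spike raster, assuming Gaussian kernel smoothing of the population rate and using its time-averaged fluctuation as the synchrony measure. The kernel width, units, and toy spike trains below are illustrative assumptions, not the authors' exact definitions.

```python
import numpy as np

def ipsr(spike_times_per_neuron, t_grid, sigma=1.0):
    """Instantaneous population spike rate via Gaussian kernel smoothing (ms assumed)."""
    all_spikes = np.concatenate(spike_times_per_neuron)
    n = len(spike_times_per_neuron)
    kernel = np.exp(-0.5 * ((t_grid[:, None] - all_spikes[None, :]) / sigma) ** 2)
    return kernel.sum(axis=1) / (n * sigma * np.sqrt(2 * np.pi))

def order_parameter(rate):
    """Time-averaged fluctuation of the population rate (thermodynamic-style measure)."""
    return np.mean((rate - rate.mean()) ** 2)

# Toy example: 50 neurons firing near-synchronously every 20 ms vs. at random times.
rng = np.random.default_rng(0)
t = np.arange(0.0, 500.0, 0.5)
sync = [np.arange(10, 500, 20) + rng.normal(0, 1.0, 25) for _ in range(50)]
async_ = [np.sort(rng.uniform(0, 500, 25)) for _ in range(50)]
print(order_parameter(ipsr(sync, t)), ">", order_parameter(ipsr(async_, t)))
```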
Meads, C; Nyssen, O P; Wong, G; Steed, L; Bourke, L; Ross, C A; Hayman, S; Field, V; Lord, J; Greenhalgh, T; Taylor, S J C
2014-02-18
Long-term medical conditions (LTCs) cause reduced health-related quality of life and considerable health service expenditure. Writing therapy has potential to improve physical and mental health in people with LTCs, but its effectiveness is not established. This project aims to establish the clinical and cost-effectiveness of therapeutic writing in LTCs by systematic review and economic evaluation, and to evaluate context and mechanisms by which it might work, through realist synthesis. Included are any comparative study of therapeutic writing compared with no writing, waiting list, attention control or placebo writing in patients with any diagnosed LTCs that report at least one of the following: relevant clinical outcomes; quality of life; health service use; psychological, behavioural or social functioning; adherence or adverse events. Searches will be conducted in the main medical databases including MEDLINE, EMBASE, PsycINFO, The Cochrane Library and Science Citation Index. For the realist review, further purposive and iterative searches through snowballing techniques will be undertaken. Inclusions, data extraction and quality assessment will be in duplicate with disagreements resolved through discussion. Quality assessment will include using Grading of Recommendations Assessment, Development and Evaluation (GRADE) criteria. Data synthesis will be narrative and tabular with meta-analysis where appropriate. De novo economic modelling will be attempted in one clinical area if sufficient evidence is available and performed according to the National Institute for Health and Care Excellence (NICE) reference case.
Kreiner, A J; Baldo, M; Bergueiro, J R; Cartelli, D; Castell, W; Thatar Vento, V; Gomez Asoia, J; Mercuri, D; Padulo, J; Suarez Sandin, J C; Erhardt, J; Kesque, J M; Valda, A A; Debray, M E; Somacal, H R; Igarzabal, M; Minsky, D M; Herrera, M S; Capoulat, M E; Gonzalez, S J; del Grosso, M F; Gagetti, L; Suarez Anzorena, M; Gun, M; Carranza, O
2014-06-01
The activity in accelerator development for accelerator-based BNCT (AB-BNCT) both worldwide and in Argentina is described. Projects in Russia, UK, Italy, Japan, Israel, and Argentina to develop AB-BNCT around different types of accelerators are briefly presented. In particular, the present status and recent progress of the Argentine project will be reviewed. The topics will cover: intense ion sources, accelerator tubes, transport of intense beams, beam diagnostics, the ⁹Be(d,n) reaction as a possible neutron source, Beam Shaping Assemblies (BSA), a treatment room, and treatment planning in realistic cases. © 2013 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Dormans, Joris
2011-01-01
Realism remains a prominent topic in game design and industry research; yet, a strong academic case can be made that games are anything but realistic. This article frames realism in games in semiotic terms as iconic simulation and argues that games can gain expressiveness when they move beyond the current focus on iconic simulation. In parallel…
Agencies within communities, communities within ecosystems
Jane Kapler Smith; Kerry McMenus
2000-01-01
Can scientific information and intensive, extensive public involvement through facilitated meetings be expected to lead to agreement on natural resource issues? Communications and research in the Bitterroot Ecosystem Management Research Project indicate that, where people's values differ greatly, consensus is not a realistic goal for short term planning processes....
USDA-ARS?s Scientific Manuscript database
Increasing urbanization changes runoff patterns to be flashy and instantaneous with decreased base flow. A model with the ability to simulate sub-daily rainfall–runoff processes and continuous simulation capability is required to realistically capture the long-term flow and water quality trends in w...
SALMON RESTORATION: FORMULATING GOALS WITHIN A REALISTIC SCIENCE AND POLICY CONTEXT
Throughout the Pacific Northwest, since 1850, all wild salmon runs have declined and some have disappeared. Billions of dollars have been spent in a so-far failed attempt to reverse the long-term decline. Each year, hundreds of millions of dollars continue to be spent in variou...
Mapping algorithm for freeform construction using non-ideal light sources
NASA Astrophysics Data System (ADS)
Li, Chen; Michaelis, D.; Schreiber, P.; Dick, L.; Bräuer, A.
2015-09-01
Using conventional mapping algorithms for the construction of illumination freeform optics, arbitrary target patterns can be obtained for idealized sources, e.g. collimated light or point sources. Each freeform surface element generates an image point at the target, and the light intensity of an image point corresponds to the area of the freeform surface element that generates it. For sources with a pronounced extension and ray divergence, e.g. an LED with a small source-freeform distance, the image points are blurred, and the blurred patterns may differ from point to point. Besides, due to Fresnel losses and vignetting, the relationship between the light intensity of image points and the area of freeform surface elements becomes complicated. These individual light distributions of each freeform element are taken into account in a mapping algorithm. To this end, steepest descent procedures are used to adapt the mapping goal: a structured target pattern for an optics system with an ideal source is computed by applying the corresponding linear optimization matrices. Special weighting and smoothing factors are included in the procedure to achieve certain edge conditions and to ensure the manufacturability of the freeform surface. The corresponding linear optimization matrices, which are the lighting distribution patterns of each of the freeform surface elements, are obtained by conventional raytracing with a realistic source. Nontrivial source geometries, like LED irregularities due to bonding or source fine structures, and complex ray divergence behavior can easily be considered. Additionally, Fresnel losses, vignetting and even stray light are taken into account. After optimization iterations, the initial mapping goal can be achieved with a realistic source by the optics system that provides a structured target pattern with an ideal source. The algorithm is applied to several design examples. A few simple tasks are presented to discuss the ability and limitations of this method. A homogeneous LED-illumination system design is also presented in which, despite a strongly tilted incident direction, a homogeneous distribution is achieved with a rather compact optics system and short working distance, using a relatively large LED source. It is shown that the lighting distribution patterns from the freeform surface elements can differ significantly from one another. The generation of a structured target pattern, applying the weighting and smoothing factors, is discussed. Finally, freeform designs for much more complex sources like clusters of LED sources are presented.
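The sketch below illustrates the general idea under stated assumptions: the ray-traced illumination pattern of each freeform element (with the realistic source) forms a column of a linear response matrix, and per-element weights are adapted by steepest descent, with an optional smoothing term, until their superposition approaches the desired pattern. Function names, the random response matrix, and the step-size choice are hypothetical and not taken from the paper.

```python
import numpy as np

def adapt_target(A, desired, n_iter=500, step=None, smooth=0.0):
    """
    Schematic steepest-descent adaptation of per-element weights w so that the
    superposition A @ w of realistic (ray-traced) element patterns approaches
    the desired target pattern. A[:, j] = pattern of freeform element j.
    """
    m, n = A.shape
    w = np.full(n, desired.sum() / max(A.sum(), 1e-12))   # rough initial guess
    if step is None:
        step = 1.0 / np.linalg.norm(A, ord=2) ** 2        # conservative gradient step
    for _ in range(n_iter):
        residual = A @ w - desired
        grad = A.T @ residual + smooth * w                 # smoothing term for manufacturability
        w = np.clip(w - step * grad, 0.0, None)            # weights stay non-negative
    return w

# Toy usage with a random "ray-traced" response matrix (purely illustrative).
rng = np.random.default_rng(1)
A = np.abs(rng.normal(size=(200, 40)))
desired = np.ones(200)
w = adapt_target(A, desired)
print("residual RMS:", np.sqrt(np.mean((A @ w - desired) ** 2)))
```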
Cortical sources of ERP in prosaccade and antisaccade eye movements using realistic source models
Richards, John E.
2013-01-01
The cortical sources of event-related potentials (ERP) using realistic source models were examined in a prosaccade and antisaccade procedure. College-age participants were presented with a preparatory interval and a target that indicated the direction of the eye movement that was to be made. In some blocks a cue was given in the peripheral location where the target was to be presented and in other blocks no cue was given. In Experiment 1 the prosaccade and antisaccade trials were presented randomly within a block; in Experiment 2 procedures were compared in which either prosaccade and antisaccade trials were mixed in the same block, or trials were presented in separate blocks with only one type of eye movement. There was a central negative slow wave occurring prior to the target, a slow positive wave over the parietal scalp prior to the saccade, and a parietal spike potential immediately prior to saccade onset. Cortical source analysis of these ERP components showed a common set of sources in the ventral anterior cingulate and orbital frontal gyrus for the presaccadic positive slow wave and the spike potential. In Experiment 2 the same cued- and non-cued blocks were used, but prosaccade and antisaccade trials were presented in separate blocks. This resulted in a smaller difference in reaction time between prosaccade and antisaccade trials. Unlike the first experiment, the central negative slow wave was larger on antisaccade than on prosaccade trials, and this effect on the ERP component had its cortical source primarily in the parietal and mid-central cortical areas contralateral to the direction of the eye movement. These results suggest that blocked prosaccade and antisaccade trials result in preparatory or set effects that decrease reaction time, eliminate some cueing effects, and are based on contralateral parietal-central brain areas. PMID:23847476
Training the EFL Teacher--An Illustrated Commentary.
ERIC Educational Resources Information Center
Rees, Alun L. W.
1970-01-01
Despite current interest in the field of teaching English as a foreign language, there is still cause for dissatisfaction with the training of EFL teachers, both in Britain and abroad. The author presents, in the form of a "duologue," some pertinent views from a variety of sources, and stresses the need for a more realistic approach to…
Woody biomass outreach in the southern United States: A case study
Martha Monroe; Annie Oxarart
2011-01-01
Woody biomass is one potential renewable energy source that is technically feasible where environmental and economic factors are promising. It becomes a realistic option when it is also socially acceptable. Public acceptance and support of wood to energy proposals require community education and outreach. The Wood to Energy Outreach Program provides science-based...
Make Your Own Paint Chart: A Realistic Context for Developing Proportional Reasoning with Ratios
ERIC Educational Resources Information Center
Beswick, Kim
2011-01-01
Proportional reasoning has been recognised as a crucial focus of mathematics in the middle years and also as a frequent source of difficulty for students (Lamon, 2007). Proportional reasoning concerns the equivalence of pairs of quantities that are related multiplicatively; that is, equivalent ratios including those expressed as fractions and…
The Foreign Language Teacher as "Con Artist."
ERIC Educational Resources Information Center
Hughes, Jean S.
A teacher's experiences in acquiring realia (mostly food-related) for her junior high school French classes are described. The collection of realistic props proved to be both a small adventure in itself and the source of a rewarding change in classroom instruction. The use of a simulated store, in which students bought and sold the imitation food,…
Statistical techniques for sampling and monitoring natural resources
Hans T. Schreuder; Richard Ernst; Hugo Ramirez-Maldonado
2004-01-01
We present the statistical theory of inventory and monitoring from a probabilistic point of view. We start with the basics and show the interrelationships between designs and estimators illustrating the methods with a small artificial population as well as with a mapped realistic population. For such applications, useful open source software is given in Appendix 4....
NASA Astrophysics Data System (ADS)
Rau, U.; Bhatnagar, S.; Owen, F. N.
2016-11-01
Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1-2 GHz)) and 46-pointing mosaic (D-array, C-Band (4-8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
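For context, spectral indices in such surveys are conventionally defined through I(ν) = I(ν₀)(ν/ν₀)^α. The sketch below shows a simple log-log least-squares fit of α from multi-frequency intensities; the band, reference frequency, and numbers are purely illustrative, not values from the simulations.

```python
import numpy as np

def fit_spectral_index(freqs_ghz, intensities_jy, ref_ghz=1.5):
    """
    Least-squares fit of I(nu) = I0 * (nu/nu0)**alpha in log-log space,
    returning (I0 at the reference frequency, spectral index alpha).
    """
    x = np.log(np.asarray(freqs_ghz) / ref_ghz)
    y = np.log(np.asarray(intensities_jy))
    alpha, log_i0 = np.polyfit(x, y, 1)
    return np.exp(log_i0), alpha

# Toy example over a 1-2 GHz band with alpha = -0.7 (illustrative values only).
nu = np.linspace(1.0, 2.0, 16)
i_true = 1e-3 * (nu / 1.5) ** -0.7
print(fit_spectral_index(nu, i_true))   # ~ (0.001, -0.7)
```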
Rupture Dynamics and Seismic Radiation on Rough Faults for Simulation-Based PSHA
NASA Astrophysics Data System (ADS)
Mai, P. M.; Galis, M.; Thingbaijam, K. K. S.; Vyas, J. C.; Dunham, E. M.
2017-12-01
Simulation-based ground-motion predictions may augment PSHA studies in data-poor regions or provide additional shaking estimations, including seismic waveforms, for critical facilities. Validation and calibration of such simulation approaches, based on observations and GMPEs, is important for engineering applications, while seismologists push to include the precise physics of the earthquake rupture process and seismic wave propagation in a 3D heterogeneous Earth. Geological faults comprise both large-scale segmentation and small-scale roughness that determine the dynamics of the earthquake rupture process and its radiated seismic wavefield. We investigate how different parameterizations of fractal fault roughness affect the rupture evolution and resulting near-fault ground motions. Rupture incoherence induced by fault roughness generates realistic ω⁻² decay for high-frequency displacement amplitude spectra. Waveform characteristics and GMPE-based comparisons corroborate that these rough-fault rupture simulations generate realistic synthetic seismograms for subsequent engineering applications. Since dynamic rupture simulations are computationally expensive, we develop kinematic approximations that emulate the observed dynamics. Simplifying the rough-fault geometry, we find that perturbations in local moment tensor orientation are important, while perturbations in local source location are not. Thus, a planar fault can be assumed if the local strike, dip, and rake are maintained. The dynamic rake angle variations are anti-correlated with local dip angles. Based on a dynamically consistent Yoffe source-time function, we show that the seismic wavefield of the approximated kinematic rupture well reproduces the seismic radiation of the full dynamic source process. Our findings provide an innovative pseudo-dynamic source characterization that captures fault roughness effects on rupture dynamics. Including the correlations between kinematic source parameters, we present a new pseudo-dynamic rupture modeling approach for computing broadband ground-motion time-histories for simulation-based PSHA.
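For reference, the ω⁻² behaviour mentioned above is conventionally expressed through a Brune-type displacement spectrum; the form below is the generic reference shape, not a fit quoted from the paper.

```latex
% Generic omega-squared (Brune-type) displacement amplitude spectrum:
\[
  |u(f)| \;=\; \frac{\Omega_0}{1 + \left(f/f_c\right)^{2}},
\]
% so that displacement amplitudes fall off as f^{-2} above the corner frequency
% f_c, the high-frequency behaviour that rupture incoherence on rough faults
% is reported to reproduce.
```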
Model-based Bayesian signal extraction algorithm for peripheral nerves
NASA Astrophysics Data System (ADS)
Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.
2017-10-01
Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios and thus limited their utility to binary classification. In this work a new algorithm is proposed which combines previous source localization approaches to create a model based method which operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on this test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal to noise and signal to interference ratio of extracted test signals two to three fold, as well as increased the correlation coefficient between the original and recovered signals by 10-20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of controlling a prosthetic limb.
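A minimal sketch of the kind of evaluation metrics quoted above (signal-to-noise ratio and correlation between original and recovered signals); the waveform, noise levels, and labels are hypothetical and only illustrate how such a comparison could be computed, not the paper's data or algorithms.

```python
import numpy as np

def snr_db(signal, residual_noise):
    """Signal-to-noise ratio in dB from signal power and residual-noise power."""
    return 10 * np.log10(np.sum(signal**2) / np.sum(residual_noise**2))

def recovery_metrics(original, recovered):
    """Correlation coefficient and SNR of a recovered source against the known original."""
    r = np.corrcoef(original, recovered)[0, 1]
    return r, snr_db(original, recovered - original)

# Illustrative check (names and noise levels hypothetical):
rng = np.random.default_rng(2)
t = np.linspace(0, 5, 5000)                      # 5 s of synthetic activity
source = np.sin(2 * np.pi * 300 * t) * np.exp(-((t - 2.5) ** 2))
method_a = source + 0.5 * rng.normal(size=t.size)   # e.g. a beamforming-style estimate
method_b = source + 0.2 * rng.normal(size=t.size)   # e.g. a model-based estimate
print("method A:", recovery_metrics(source, method_a))
print("method B:", recovery_metrics(source, method_b))
```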
NASA Astrophysics Data System (ADS)
Huang, B.; Thorne, P.; Banzon, P. V. F.; Chepurin, G. A.; Lawrimore, J. H.; Menne, M. J.; Vose, R. S.; Smith, T. M.; Zhang, H. M.
2017-12-01
The monthly global 2°×2° Extended Reconstructed Sea Surface Temperature (ERSST) has been revised and updated from version 4 to version 5. This update incorporates a new release of ICOADS R3.0, a decade of near-surface data from Argo floats, and a new estimate of centennial sea-ice from HadISST2. A number of choices in aspects of quality control, bias adjustment and interpolation have been substantively revised. The resulting ERSST estimates have more realistic spatio-temporal variations, better representation of high latitude SSTs, and ship SST biases are now calculated relative to more accurate buoy measurements, while the global long-term trend remains about the same. Progressive experiments have been undertaken to highlight the effects of each change in data source and analysis technique upon the final product. The reconstructed SST is systematically decreased by 0.077°C, as the reference data source is switched from ship SST in v4 to modern buoy SST in v5. Furthermore, high latitude SSTs are decreased by 0.1°-0.2°C by using sea-ice concentration from HadISST2 over HadISST1. Changes arising from remaining innovations are mostly important at small space and time scales, primarily having an impact where and when input observations are sparse. Cross-validations and verifications with independent modern observations show that the updates incorporated in ERSSTv5 have improved the representation of spatial variability over the global oceans, the magnitude of El Niño and La Niña events, and the decadal nature of SST changes over 1930s-40s when observation instruments changed rapidly. Both long (1900-2015) and short (2000-2015) term SST trends in ERSSTv5 remain significant as in ERSSTv4.
Vacuum stress energy density and its gravitational implications
NASA Astrophysics Data System (ADS)
Estrada, Ricardo; Fulling, Stephen A.; Kaplan, Lev; Kirsten, Klaus; Liu, Zhonghai; Milton, Kimball A.
2008-04-01
In nongravitational physics the local density of energy is often regarded as merely a bookkeeping device; only total energy has an experimental meaning—and it is only modulo a constant term. But in general relativity the local stress-energy tensor is the source term in Einstein's equation. In closed universes, and those with Kaluza-Klein dimensions, theoretical consistency demands that quantum vacuum energy should exist and have gravitational effects, although there are no boundary materials giving rise to that energy by van der Waals interactions. In the lab there are boundaries, and in general the energy density has a nonintegrable singularity as a boundary is approached (for idealized boundary conditions). As pointed out long ago by Candelas and Deutsch, in this situation there is doubt about the viability of the semiclassical Einstein equation. Our goal is to show that the divergences in the linearized Einstein equation can be renormalized to yield a plausible approximation to the finite theory that presumably exists for realistic boundary conditions. For a scalar field with Dirichlet or Neumann boundary conditions inside a rectangular parallelepiped, we have calculated by the method of images all components of the stress tensor, for all values of the conformal coupling parameter and an exponential ultraviolet cutoff parameter. The qualitative features of contributions from various classes of closed classical paths are noted. Then the Estrada-Kanwal distributional theory of asymptotics, particularly the moment expansion, is used to show that the linearized Einstein equation with the stress-energy near a plane boundary as source converges to a consistent theory when the cutoff is removed. This paper reports work in progress on a project combining researchers in Texas, Louisiana and Oklahoma. It is supported by NSF Grants PHY-0554849 and PHY-0554926.
Cable equation for general geometry
NASA Astrophysics Data System (ADS)
López-Sánchez, Erick J.; Romero, Juan M.
2017-02-01
The cable equation describes the voltage in a straight cylindrical cable, and this model has been employed to model electrical potential in dendrites and axons. However, sometimes this equation might give incorrect predictions for some realistic geometries, in particular when the radius of the cable changes significantly. Cables with a nonconstant radius are important for some phenomena, for example, discrete swellings along the axons appear in neurodegenerative diseases such as Alzheimer's, Parkinson's, human immunodeficiency virus-associated dementia, and multiple sclerosis. In this paper, using the Frenet-Serret frame, we propose a generalized cable equation for a general cable geometry. This generalized equation depends on geometric quantities such as the curvature and torsion of the cable. We show that when the cable has a constant circular cross section, the first fundamental form of the cable can be simplified and the generalized cable equation depends on neither the curvature nor the torsion of the cable. Additionally, we find an exact solution for an ideal cable which has a particular variable circular cross section and zero curvature. For this case we show that when the cross section of the cable increases, the voltage decreases. Inspired by this ideal case, we rewrite the generalized cable equation as a diffusion equation with a source term generated by the cable geometry. This source term depends on the cable cross-sectional area and its derivatives. In addition, we study different cables with swelling and provide their numerical solutions. The numerical solutions show that when the cross section of the cable has abrupt changes, its voltage is smaller than the voltage in the cylindrical cable. Furthermore, these numerical solutions show that the voltage can be affected by geometrical inhomogeneities on the cable.
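For orientation, the classical cable equation for a uniform cylinder, which the generalized equation should recover when curvature, torsion and cross-sectional variation vanish, can be written as follows (a standard textbook form, not the paper's generalized expression).

```latex
% Classical cable equation for a straight cylinder of constant radius:
\[
  \lambda^{2}\,\frac{\partial^{2} V}{\partial x^{2}}
  \;-\; \tau\,\frac{\partial V}{\partial t} \;-\; V \;=\; 0,
\]
% with space constant \lambda and membrane time constant \tau. The generalized
% form described above adds a geometry-dependent source term built from the
% cross-sectional area and its derivatives along the cable.
```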
NASA Astrophysics Data System (ADS)
Ishijima, K.; Takigawa, M.; Sudo, K.; Toyoda, S.; Yoshida, N.; Röckmann, T.; Kaiser, J.; Aoki, S.; Morimoto, S.; Sugawara, S.; Nakazawa, T.
2015-07-01
This paper presents the development of an atmospheric N2O isotopocule model based on a chemistry-coupled atmospheric general circulation model (ACTM). We also describe a simple method to optimize the model and present its use in estimating the isotopic signatures of surface sources at the hemispheric scale. Data obtained from ground-based observations, measurements of firn air, and balloon and aircraft flights were used to optimize the long-term trends, interhemispheric gradients, and photolytic fractionation, respectively, in the model. This optimization successfully reproduced realistic spatial and temporal variations of atmospheric N2O isotopocules throughout the atmosphere from the surface to the stratosphere. The very small gradients associated with vertical profiles through the troposphere and the latitudinal and vertical distributions within each hemisphere were also reasonably simulated. The results of the isotopic characterization of the global total sources were generally consistent with previous one-box model estimates, indicating that the observed atmospheric trend is the dominant factor controlling the source isotopic signature. However, hemispheric estimates were different from those generated by a previous two-box model study, mainly due to the model accounting for the interhemispheric transport and latitudinal and vertical distributions of tropospheric N2O isotopocules. Comparisons of time series of atmospheric N2O isotopocule ratios between our model and observational data from several laboratories revealed the need for a more systematic and elaborate intercalibration of the standard scales used in N2O isotopic measurements in order to capture a more complete and precise picture of the temporal and spatial variations in atmospheric N2O isotopocule ratios. This study highlights the possibility that inverse estimation of surface N2O fluxes, including the isotopic information as additional constraints, could be realized.
NASA Astrophysics Data System (ADS)
Ishijima, K.; Takigawa, M.; Sudo, K.; Toyoda, S.; Yoshida, N.; Röckmann, T.; Kaiser, J.; Aoki, S.; Morimoto, S.; Sugawara, S.; Nakazawa, T.
2015-12-01
This work presents the development of an atmospheric N2O isotopocule model based on a chemistry-coupled atmospheric general circulation model (ACTM). We also describe a simple method to optimize the model and present its use in estimating the isotopic signatures of surface sources at the hemispheric scale. Data obtained from ground-based observations, measurements of firn air, and balloon and aircraft flights were used to optimize the long-term trends, interhemispheric gradients, and photolytic fractionation, respectively, in the model. This optimization successfully reproduced realistic spatial and temporal variations of atmospheric N2O isotopocules throughout the atmosphere from the surface to the stratosphere. The very small gradients associated with vertical profiles through the troposphere and the latitudinal and vertical distributions within each hemisphere were also reasonably simulated. The results of the isotopic characterization of the global total sources were generally consistent with previous one-box model estimates, indicating that the observed atmospheric trend is the dominant factor controlling the source isotopic signature. However, hemispheric estimates were different from those generated by a previous two-box model study, mainly due to the model accounting for the interhemispheric transport and latitudinal and vertical distributions of tropospheric N2O isotopocules. Comparisons of time series of atmospheric N2O isotopocule ratios between our model and observational data from several laboratories revealed the need for a more systematic and elaborate intercalibration of the standard scales used in N2O isotopic measurements in order to capture a more complete and precise picture of the temporal and spatial variations in atmospheric N2O isotopocule ratios. This study highlights the possibility that inverse estimation of surface N2O fluxes, including the isotopic information as additional constraints, could be realized.
Development, Evaluation, and Application of a Primary Aerosol Model.
Wang, I T; Chico, T; Huang, Y H; Farber, R J
1999-09-01
The Segmented-Plume Primary Aerosol Model (SPPAM) has been developed over the past several years. The earlier model development goals were simply to generalize the widely used Industrial Source Complex Short-Term (ISCST) model to simulate plume transport and dispersion under light wind conditions and to handle a large number of roadway or line sources. The goals have been expanded to include development of an improved algorithm for effective plume transport velocity, more accurate and efficient line and area source dispersion algorithms, and recently, a more realistic and computationally efficient algorithm for plume depletion due to particle dry deposition. A performance evaluation of the SPPAM has been carried out using the 1983 PNL dual tracer experimental data. The results show the model predictions to be in good agreement with observations in both plume advection-dispersion and particulate matter (PM) depletion by dry deposition. For PM2.5 impact analysis, the SPPAM has been applied to the Rubidoux area of California. Emission sources included in the modeling analysis are: paved road dust, diesel vehicular exhaust, gasoline vehicular exhaust, and tire wear particles from a large number of roadways in Rubidoux and surrounding areas. For the selected modeling periods, the predicted primary PM2.5 to primary PM10 concentration ratios for the Rubidoux sampling station are in the range of 0.39-0.46. The organic fractions of the primary PM2.5 impacts are estimated to be at least 34-41%. Detailed modeling results indicate that the relatively high organic fractions are primarily due to the proximity of heavily traveled roadways north of the sampling station. The predictions are influenced by a number of factors; principal among them are the receptor locations relative to major roadways, the volume and composition of traffic on these roadways, and the prevailing meteorological conditions.
Modelling of induced electric fields based on incompletely known magnetic fields
NASA Astrophysics Data System (ADS)
Laakso, Ilkka; De Santis, Valerio; Cruciani, Silvano; Campi, Tommaso; Feliziani, Mauro
2017-08-01
Determining the induced electric fields in the human body is a fundamental problem in bioelectromagnetics that is important for both evaluation of safety of electromagnetic fields and medical applications. However, existing techniques for numerical modelling of induced electric fields require detailed information about the sources of the magnetic field, which may be unknown or difficult to model in realistic scenarios. Here, we show how induced electric fields can accurately be determined in the case where the magnetic fields are known only approximately, e.g. based on field measurements. The robustness of our approach is shown in numerical simulations for both idealized and realistic scenarios featuring a personalized MRI-based head model. The approach allows for modelling of the induced electric fields in biological bodies directly based on real-world magnetic field measurements.
NASA Astrophysics Data System (ADS)
Astley, R. J.; Sugimoto, R.; Mustafi, P.
2011-08-01
Novel techniques are presented to reduce noise from turbofan aircraft engines by optimising the acoustic treatment in engine ducts. The application of Computational Aero-Acoustics (CAA) to predict acoustic propagation and absorption in turbofan ducts is reviewed and a critical assessment of performance indicates that validated and accurate techniques are now available for realistic engine predictions. A procedure for integrating CAA methods with state of the art optimisation techniques is proposed in the remainder of the article. This is achieved by embedding advanced computational methods for noise prediction within automated and semi-automated optimisation schemes. Two different strategies are described and applied to realistic nacelle geometries and fan sources to demonstrate the feasibility of this approach for industry scale problems.
NASA Astrophysics Data System (ADS)
Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.
1997-07-01
We apply to the specific case of images taken with the ROSAT PSPC detector our wavelet-based X-ray source detection algorithm presented in a companion paper. Such images are characterized by the presence of detector "ribs," strongly varying point-spread function, and vignetting, so that their analysis provides a challenge for any detection algorithm. First, we apply the algorithm to simulated images of a flat background, as seen with the PSPC, in order to calibrate the number of spurious detections as a function of significance threshold and to ascertain that the spatial distribution of spurious detections is uniform, i.e., unaffected by the ribs; this goal was achieved using the exposure map in the detection procedure. Then, we analyze simulations of PSPC images with a realistic number of point sources; the results are used to determine the efficiency of source detection and the accuracy of output quantities such as source count rate, size, and position, upon a comparison with input source data. It turns out that sources with 10 photons or less may be confidently detected near the image center in medium-length (~10^4 s), background-limited PSPC exposures. The positions of sources detected near the image center (off-axis angles < 15') are accurate to within a few arcseconds. Output count rates and sizes are in agreement with the input quantities, within a factor of 2 in 90% of the cases. The errors on position, count rate, and size increase with off-axis angle and for detections of lower significance. We have also checked that the upper limits computed with our method are consistent with the count rates of undetected input sources. Finally, we have tested the algorithm by applying it on various actual PSPC images, among the most challenging for automated detection procedures (crowded fields, extended sources, and nonuniform diffuse emission). The performance of our method in these images is satisfactory and outperforms that of other current X-ray detection techniques, such as those employed to produce the MPE and WGA catalogs of PSPC sources, in terms of both detection reliability and efficiency. We have also investigated the theoretical limit for point-source detection, with the result that even sources with only 2-3 photons may be reliably detected using an efficient method in images with sufficiently high resolution and low background.
PTTI applications at the limits of GPS
NASA Technical Reports Server (NTRS)
Douglas, Rob J.; Popelar, J.
1995-01-01
Canadian plans for precise time and time interval services are examined in the light of GPS capabilities developed for geodesy. We present our experience in establishing and operating a geodetic-type GPS station in a time laboratory setting, and show sub-nanosecond residuals for time transfer between geodetic sites. We present our approach to establishing realistic standard uncertainties for short-term frequency calibration services over time intervals of hours, and for longer-term frequency dissemination at better than the 10^-15 level of accuracy.
Physics-Based Hazard Assessment for Critical Structures Near Large Earthquake Sources
NASA Astrophysics Data System (ADS)
Hutchings, L.; Mert, A.; Fahjan, Y.; Novikova, T.; Golara, A.; Miah, M.; Fergany, E.; Foxall, W.
2017-09-01
We argue that for critical structures near large earthquake sources: (1) the ergodic assumption, recent history, and simplified descriptions of the hazard are not appropriate to rely on for earthquake ground motion prediction and can lead to a mis-estimation of the hazard and risk to structures; (2) a physics-based approach can address these issues; (3) a physics-based source model must be provided to generate realistic phasing effects from finite rupture and model near-source ground motion correctly; (4) wave propagations and site response should be site specific; (5) a much wider search of possible sources of ground motion can be achieved computationally with a physics-based approach; (6) unless one utilizes a physics-based approach, the hazard and risk to structures has unknown uncertainties; (7) uncertainties can be reduced with a physics-based approach, but not with an ergodic approach; (8) computational power and computer codes have advanced to the point that risk to structures can be calculated directly from source and site-specific ground motions. Spanning the variability of potential ground motion in a predictive situation is especially difficult for near-source areas, but that is the distance at which the hazard is the greatest. The basis of a "physical-based" approach is ground-motion syntheses derived from physics and an understanding of the earthquake process. This is an overview paper and results from previous studies are used to make the case for these conclusions. Our premise is that 50 years of strong motion records is insufficient to capture all possible ranges of site and propagation path conditions, rupture processes, and spatial geometric relationships between source and site. Predicting future earthquake scenarios is necessary; models that have little or no physical basis but have been tested and adjusted to fit available observations can only "predict" what happened in the past, which should be considered description as opposed to prediction. We have developed a methodology for synthesizing physics-based broadband ground motion that incorporates the effects of realistic earthquake rupture along specific faults and the actual geology between the source and site.
Bertotti, Marcello; Frostick, Caroline; Hutt, Patrick; Sohanpal, Ratna; Carnes, Dawn
2018-05-01
This article adopts a realist approach to evaluate a social prescribing pilot in the areas of Hackney and City in London (United Kingdom). It unpacks the contextual factors and mechanisms that influenced the development of this pilot for the benefits of GPs, commissioners and practitioners, and reflects on the realist approach to evaluation as a tool for the evaluation of health interventions. Primary care faces considerable challenges including the increase in long-term conditions, GP consultation rates, and widening health inequalities. With its emphasis on linking primary care to non-clinical community services via a social prescribing coordinator (SPC), some models of social prescribing could contribute to reduce the burden on primary care, tackle health inequalities and encourage people to make greater use of non-clinical forms of support. This realist analysis was based on qualitative interviews with users, commissioners, a GP survey, focus groups and learning events to explore stakeholders' experience. To enable a detailed analysis, we adapted the realist approach by subdividing the social prescribing pathway into stages, each with contextual factors, mechanisms and outcomes. SPCs were pivotal to the effective functioning of the social prescribing service and responsible for the activation and initial beneficial impact on users. Although social prescribing shows significant potential for the benefit of patients and primary care, several challenges need to be considered and overcome, including 'buy in' from some GPs, branding, and funding for the third sector in a context where social care cuts are severely affecting the delivery of health care. With its emphasis on context and mechanisms, the realist evaluation approach is useful in understanding how to identify and improve health interventions, and analyse in greater detail the contribution of different stakeholders. As the SPC is central to social prescribing, more needs to be done to understand their role conceptually and practically.
Coupled microrings data buffer using fast light
NASA Astrophysics Data System (ADS)
Scheuer, Jacob; Shahriar, Selim
2013-03-01
We present a theoretical study of a trap-door optical buffer based on a coupled microrings add/drop filter (ADF) utilizing the white light cavity (WLC). The buffer "trap-door" can be opened and closed by tuning the resonances of the microrings comprising the ADF, and thereby trap/release optical pulses. We show that the WLC based ADF yields a maximally flat filter which exhibits superior performance in terms of bandwidth and flatness compared to previous design approaches. We also present a realistic, Silicon-on-Insulator based, design and performance analysis taking into consideration the realistic properties and limitations of the materials and the fabrication process, leading to delays exceeding 850 ps for 80 GHz bandwidth, and a corresponding delay-bandwidth product of approximately 70.
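As a quick consistency check of the quoted figures (values taken directly from the abstract):

```latex
% Delay-bandwidth product from the reported delay and bandwidth:
\[
  \Delta\tau \times \Delta f \;\approx\; 850\ \mathrm{ps} \times 80\ \mathrm{GHz}
  \;=\; 850\times10^{-12}\,\mathrm{s} \times 80\times10^{9}\,\mathrm{Hz}
  \;=\; 68 \;\approx\; 70 .
\]
```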
Design of helicopter rotor blades for optimum dynamic characteristics
NASA Technical Reports Server (NTRS)
Peters, D. A.; Ko, T.; Korn, A. E.; Rossow, M. P.
1982-01-01
The possibilities and the limitations of tailoring blade mass and stiffness distributions to give an optimum blade design in terms of weight, inertia, and dynamic characteristics are investigated. Changes in mass or stiffness distribution used to place rotor frequencies at desired locations are determined. Theoretical limits to the amount of frequency shift are established. Realistic constraints on blade properties based on weight, mass moment of inertia, size, strength, and stability are formulated. The extent to which hub loads can be minimized by a proper choice of EI distribution is determined. Configurations that are simple enough to yield clear, fundamental insights into the structural mechanisms but which are sufficiently complex to result in a realistic result for an optimum rotor blade are emphasized.
From concepts to clinical reality: an essay on the benchmarking of biomedical terminologies.
Smith, Barry
2006-06-01
It is only by fixing on agreed meanings of terms in biomedical terminologies that we will be in a position to achieve that accumulation and integration of knowledge that is indispensable to progress at the frontiers of biomedicine. Standardly, the goal of fixing meanings is seen as being realized through the alignment of terms on what are called 'concepts.' Part I addresses three versions of the concept-based approach--by Cimino, by Wüster, and by Campbell and associates--and surveys some of the problems to which they give rise, all of which have to do with a failure to anchor the terms in terminologies to corresponding referents in reality. Part II outlines a new, realist solution to this anchorage problem, which sees terminology construction as being motivated by the goal of alignment not on concepts but on the universals (kinds, types) in reality and thereby also on the corresponding instances (individuals, tokens). We outline the realist approach and show how on its basis we can provide a benchmark of correctness for terminologies which will at the same time allow a new type of integration of terminologies and electronic health records. We conclude by outlining ways in which the framework thus defined might be exploited for purposes of diagnostic decision-support.
Toxicological Evaluation of Realistic Emission Source Aerosols (TERESA): Introduction and overview
Godleski, John J.; Rohr, Annette C.; Kang, Choong M.; Diaz, Edgar A.; Ruiz, Pablo A.; Koutrakis, Petros
2013-01-01
Determining the health impacts of sources and components of fine particulate matter (PM2.5) is an important scientific goal. PM2.5 is a complex mixture of inorganic and organic constituents that are likely to differ in their potential to cause adverse health outcomes. The Toxicological Evaluation of Realistic Emissions of Source Aerosols (TERESA) study focused on two PM sources—coal-fired power plants and mobile sources—and sought to investigate the toxicological effects of exposure to emissions from these sources. The set of papers published here document the power plant experiments. TERESA attempted to delineate health effects of primary particles, secondary (aged) particles, and mixtures of these with common atmospheric constituents. TERESA involved withdrawal of emissions from the stacks of three coal-fired power plants in the United States. The emissions were aged and atmospherically transformed in a mobile laboratory simulating downwind power plant plume processing. Toxicological evaluations were carried out in laboratory rats exposed to different emission scenarios with extensive exposure characterization. The approach employed in TERESA was ambitious and innovative. Technical challenges included the development of stack sampling technology that prevented condensation of water vapor from the power plant exhaust during sampling and transfer, while minimizing losses of primary particles; development and optimization of a photochemical chamber to provide an aged aerosol for animal exposures; development and evaluation of a denuder system to remove excess gaseous components; and development of a mobile toxicology laboratory. This paper provides an overview of the conceptual framework, design, and methods employed in the study. PMID:21639692
Efficient electromagnetic source imaging with adaptive standardized LORETA/FOCUSS.
Schimpf, Paul H; Liu, Hesheng; Ramon, Ceon; Haueisen, Jens
2005-05-01
Functional brain imaging and source localization based on the scalp's potential field require a solution to an ill-posed inverse problem with many solutions. This makes it necessary to incorporate a priori knowledge in order to select a particular solution. A computational challenge for some subject-specific head models is that many inverse algorithms require a comprehensive sampling of the candidate source space at the desired resolution. In this study, we present an algorithm that can accurately reconstruct details of localized source activity from a sparse sampling of the candidate source space. Forward computations are minimized through an adaptive procedure that increases source resolution as the spatial extent is reduced. With this algorithm, we were able to compute inverses using only 6% to 11% of the full resolution lead-field, with a localization accuracy that was not significantly different than an exhaustive search through a fully-sampled source space. The technique is, therefore, applicable for use with anatomically-realistic, subject-specific forward models for applications with spatially concentrated source activity.
Electron percolation in realistic models of carbon nanotube networks
NASA Astrophysics Data System (ADS)
Simoneau, Louis-Philippe; Villeneuve, Jérémie; Rochefort, Alain
2015-09-01
The influence of penetrable and curved carbon nanotubes (CNT) on the charge percolation in three-dimensional disordered CNT networks has been studied with Monte-Carlo simulations. By considering carbon nanotubes as solid objects but where the overlap between their electron clouds can be controlled, we observed that the structural characteristics of networks containing lower aspect ratio CNT are highly sensitive to the degree of penetration between crossed nanotubes. Following our efficient strategy to displace CNT to different positions to create more realistic statistical models, we conclude that the connectivity between objects increases with the hard-core/soft-shell radii ratio. In contrast, the presence of curved CNT in the random networks leads to an increasing percolation threshold and to a decreasing electrical conductivity at saturation. The waviness of CNT decreases the effective distance between the nanotube extremities, hence reducing their connectivity and degrading their electrical properties. We present the results of our simulation in terms of thickness of the CNT network from which simple structural parameters such as the volume fraction or the carbon nanotube density can be accurately evaluated with our more realistic models.
An Investigation of the Impact of Guessing on Coefficient α and Reliability
2014-01-01
Guessing is known to influence the test reliability of multiple-choice tests. Although there are many studies that have examined the impact of guessing, they used rather restrictive assumptions (e.g., parallel test assumptions, homogeneous inter-item correlations, homogeneous item difficulty, and homogeneous guessing levels across items) to evaluate the relation between guessing and test reliability. Based on the item response theory (IRT) framework, this study investigated the extent of the impact of guessing on reliability under more realistic conditions where item difficulty, item discrimination, and guessing levels actually vary across items with three different test lengths (TL). By accommodating multiple item characteristics simultaneously, this study also focused on examining interaction effects between guessing and other variables entered in the simulation to be more realistic. The simulation of the more realistic conditions and calculations of reliability and classical test theory (CTT) item statistics were facilitated by expressing CTT item statistics, coefficient α, and reliability in terms of IRT model parameters. In addition to the general negative impact of guessing on reliability, results showed interaction effects between TL and guessing and between guessing and test difficulty.
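As a minimal illustration of relating CTT reliability to IRT parameters, the sketch below simulates item responses from a three-parameter logistic (3PL) model with item-varying difficulty and discrimination, then computes coefficient α with and without a guessing floor; the parameter ranges and sample sizes are assumptions, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(2)

def p_3pl(theta, a, b, c):
    """3PL item response function: guessing floor c, discrimination a, difficulty b."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta[:, None] - b)))

# Hypothetical test: item parameters vary across items, as in the simulation design.
n_items, n_persons = 40, 5000
a = rng.uniform(0.5, 2.0, n_items)          # discrimination
b = rng.normal(0.0, 1.0, n_items)           # difficulty
theta = rng.normal(0.0, 1.0, n_persons)     # ability

def cronbach_alpha(X):
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))

for c_level in (0.0, 0.25):                 # no guessing vs. a 4-option guessing floor
    resp = (rng.uniform(size=(n_persons, n_items)) < p_3pl(theta, a, b, c_level)).astype(float)
    print(f"guessing c={c_level:.2f}  coefficient alpha={cronbach_alpha(resp):.3f}")
```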
Interactions and triggering in a 3D rate and state asperity model
NASA Astrophysics Data System (ADS)
Dublanchet, P.; Bernard, P.
2012-12-01
Precise relocation of micro-seismicity and careful analysis of seismic source parameters have progressively established the concept of seismic asperities embedded in a creeping fault segment as one of the most important aspects of a realistic representation of micro-seismic sources. Another important issue concerning micro-seismic activity is the existence of robust empirical laws describing the temporal and magnitude distribution of earthquakes, such as the Omori law, the distribution of inter-event times and the Gutenberg-Richter law. In this framework, this study aims at understanding the statistical properties of earthquakes by generating synthetic catalogs with a 3D, quasi-dynamic, continuous rate-and-state asperity model that takes into account a realistic geometry of asperities. Our approach contrasts with the ETAS models (Kagan and Knopoff, 1981) usually implemented to produce earthquake catalogs, in the sense that the nonlinearity observed in rock friction experiments (Dieterich, 1979) is fully taken into account by the use of a rate-and-state friction law. Furthermore, our model differs from discrete models of faults (Ziv and Cochard, 2006) because its continuity allows us to define realistic geometries and distributions of asperities by assembling sub-critical computational cells that always fail in a single event. Moreover, this model allows us to address the question of the influence of barriers and of the distribution of asperities on the event statistics. After recalling the main observations of asperities in the specific case of the Parkfield segment of the San Andreas Fault, we analyse the earthquake statistical properties computed for this area. Then, we present synthetic statistics obtained by our model that allow us to discuss the role of barriers in clustering and triggering phenomena among a population of sources. It appears that an effective barrier size, which depends on its frictional strength, controls the presence or absence, in the synthetic catalog, of statistical laws similar to those observed for real earthquakes. As an application, we attempt to draw a comparison between synthetic statistics and the observed statistics of Parkfield in order to characterize what a realistic frictional model of the Parkfield area could be. More generally, we obtained synthetic statistical properties in agreement with power-law decays characterized by exponents that match the observations at a global scale, showing that our mechanical model is able to provide new insights into the understanding of earthquake interaction processes in general.
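For readers unfamiliar with the friction law mentioned above, the following sketch integrates a standard Dieterich-type rate-and-state formulation (with the aging law) through an imposed velocity step; the parameter values are illustrative only and are not taken from the asperity model of the paper.

```python
import numpy as np

# Rate-and-state friction (Dieterich aging law), illustrative parameters only.
mu0, a, b = 0.6, 0.010, 0.015      # a < b: velocity-weakening, as expected on an asperity
V0, Dc = 1e-6, 1e-5                # reference slip rate (m/s), characteristic slip (m)

def friction(V, theta):
    return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

# Impose a velocity step V0 -> 10*V0 and integrate the state evolution (aging law).
dt, n = 1e-3, 10000
theta = Dc / V0                    # steady-state value at V0
V = 10 * V0
mus = []
for _ in range(n):
    theta += dt * (1.0 - V * theta / Dc)
    mus.append(friction(V, theta))

print("direct effect   :", friction(V, Dc / V0) - mu0)   # ~ a*ln(10), instantaneous strengthening
print("new steady state:", mus[-1] - mu0)                 # ~ (a-b)*ln(10) < 0, net weakening
```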
Methodological Developments in Geophysical Assimilation Modeling
NASA Astrophysics Data System (ADS)
Christakos, George
2005-06-01
This work presents recent methodological developments in geophysical assimilation research. We revisit the meaning of the term "solution" of a mathematical model representing a geophysical system, and we examine its operational formulations. We argue that an assimilation solution based on epistemic cognition (which assumes that the model describes incomplete knowledge about nature and focuses on conceptual mechanisms of scientific thinking) could lead to more realistic representations of the geophysical situation than a conventional ontologic assimilation solution (which assumes that the model describes nature as is and focuses on form manipulations). Conceptually, the two approaches are fundamentally different. Unlike the reasoning structure of conventional assimilation modeling that is based mainly on ad hoc technical schemes, the epistemic cognition approach is based on teleologic criteria and stochastic adaptation principles. In this way some key ideas are introduced that could open new areas of geophysical assimilation to detailed understanding in an integrated manner. A knowledge synthesis framework can provide the rational means for assimilating a variety of knowledge bases (general and site specific) that are relevant to the geophysical system of interest. Epistemic cognition-based assimilation techniques can produce a realistic representation of the geophysical system, provide a rigorous assessment of the uncertainty sources, and generate informative predictions across space-time. The mathematics of epistemic assimilation involves a powerful and versatile spatiotemporal random field theory that imposes no restriction on the shape of the probability distributions or the form of the predictors (non-Gaussian distributions, multiple-point statistics, and nonlinear models are automatically incorporated) and accounts rigorously for the uncertainty features of the geophysical system. In the epistemic cognition context the assimilation concept may be used to investigate critical issues related to knowledge reliability, such as uncertainty due to model structure error (conceptual uncertainty).
Neo-deterministic seismic hazard scenarios for India—a preventive tool for disaster mitigation
NASA Astrophysics Data System (ADS)
Parvez, Imtiyaz A.; Magrin, Andrea; Vaccari, Franco; Ashish; Mir, Ramees R.; Peresan, Antonella; Panza, Giuliano Francesco
2017-11-01
Current computational resources and physical knowledge of the seismic wave generation and propagation processes allow for reliable numerical and analytical models of waveform generation and propagation. From the simulation of ground motion, it is easy to extract the desired earthquake hazard parameters. Accordingly, a scenario-based approach to seismic hazard assessment has been developed, namely the neo-deterministic seismic hazard assessment (NDSHA), which allows for a wide range of possible seismic sources to be used in the definition of reliable scenarios by means of realistic waveform modelling. Such reliable and comprehensive characterization of expected earthquake ground motion is essential to improve building codes, particularly for the protection of critical infrastructures and for land use planning. Parvez et al. (Geophys J Int 155:489-508, 2003) published the first ever neo-deterministic seismic hazard map of India by computing synthetic seismograms with an input data set consisting of structural models, seismogenic zones, focal mechanisms and earthquake catalogues. As described in Panza et al. (Adv Geophys 53:93-165, 2012), the NDSHA methodology evolved with respect to the original formulation used by Parvez et al. (Geophys J Int 155:489-508, 2003): the computer codes were improved to better meet the need to produce realistic ground shaking maps and ground shaking scenarios, at different scale levels, exploiting the most significant relevant progress in data acquisition and modelling. Accordingly, the present study supplies a revised NDSHA map for India. The seismic hazard, expressed in terms of maximum displacement (Dmax), maximum velocity (Vmax) and design ground acceleration (DGA), has been extracted from the synthetic signals and mapped on a regular grid over the studied territory.
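As a simple illustration of extracting such hazard parameters from a synthetic signal, the sketch below differentiates a toy displacement trace to obtain peak displacement and velocity, and uses peak ground acceleration as a crude stand-in for DGA (which NDSHA actually derives by anchoring a design response spectrum to the computed motion); the trace and all numbers are invented.

```python
import numpy as np

# Toy synthetic ground displacement trace (the real input would be an NDSHA
# synthetic seismogram); units: metres, sampled every dt seconds.
dt = 0.01
t = np.arange(0, 40, dt)
disp = 0.02 * np.exp(-((t - 12) / 3.0) ** 2) * np.sin(2 * np.pi * 1.5 * t)

vel = np.gradient(disp, dt)          # velocity
acc = np.gradient(vel, dt)           # acceleration

Dmax = np.abs(disp).max()
Vmax = np.abs(vel).max()
PGA = np.abs(acc).max()              # simple proxy for DGA in this sketch

print(f"Dmax = {Dmax*100:.1f} cm, Vmax = {Vmax*100:.1f} cm/s, PGA = {PGA/9.81:.3f} g")
```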
Potential effects of the fire protection system sprays at Browns Ferry on fission product transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niemczyk, S.J.
1983-01-01
The fire protection system (FPS) sprays within any nuclear plant are not intended to mitigate radioactive releases to the environment resulting from severe core-damage accidents. However, it has been shown here that during certain postulated severe accident scenarios at the Browns Ferry Nuclear Plant, the functioning of FPS sprays could have a significant impact on the radioactive releases. Thus the effects of those sprays need to be taken into account for realistic estimation of source terms for some accident scenarios. The effects would include direct ones such as cooling of the reactor building atmosphere and scrubbing of radioactivity from it, as well as indirect effects such as an altered likelihood of hydrogen burning and flooding of various safety-related pumps in the reactor building basement. Thus some of the impacts of the sprays would be beneficial with respect to mitigating releases to the environment but some others might not be. The effects of the FPS would be very scenario dependent with a wide range of potential effects often existing for a given accident sequence. Any generalization of the specific results presented here for Browns Ferry to other nuclear plants must be done cautiously, as it appears from a preliminary investigation that the relevant physical and operational characteristics of FPS spray systems differ widely among even otherwise apparently similar plants. Likewise the standby gas treatment systems, which substantially impact the effects of the FPS, differ significantly among plants. More work for both Mark I plants and other plants, BWRs and PWRs alike, is indicated so the potential effects of FPS spray systems during severe accidents can be at least ball-parked for more realistic accident analyses.
Quantum energy teleportation in a quantum Hall system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yusa, Go; Izumida, Wataru; Hotta, Masahiro
2011-09-15
We propose an experimental method for a quantum protocol termed quantum energy teleportation (QET), which allows energy transportation to a remote location without physical carriers. Using a quantum Hall system as a realistic model, we discuss the physical significance of QET and estimate the order of energy gain using reasonable experimental parameters.
A Realist Evaluation Approach to Unpacking the Impacts of the Sentencing Guidelines
ERIC Educational Resources Information Center
Hunt, Kim Steven; Sridharan, Sanjeev
2010-01-01
Evaluations of complex interventions such as sentencing guidelines provide an opportunity to understand the mechanisms by which policies and programs can impact intermediate and long-term outcomes. There is limited previous discussion of the underlying frameworks by which sentencing guidelines can impact outcomes such as crime rates. Guided by a…
USDA-ARS's Scientific Manuscript database
Current fiber intakes are alarmingly low, with long-term implications for public health related to risk of coronary heart disease, stroke, hypertension, certain gastrointestinal disorders, obesity, and the continuum of metabolic dysfunctions including prediabetes and type 2 diabetes. With more than ...
ERIC Educational Resources Information Center
Franck, Olof
2015-01-01
In this article, a religious education, which combines a respect for relevant critical demands, when analysing religious truth-claims, and a sensitivity to the need to avoid unwarranted criteriological constraints in the analysing process, is examined. Starting with an analytical comment upon Andrew Wright's critical realist approach in terms of a…
Flush of CO2 as a short-term biological indicator of soil nitrogen mineralization in the Southeast
USDA-ARS's Scientific Manuscript database
Determining the appropriate nitrogen (N) rate is critical to farm economics and environmental protection. In North Carolina, N fertilizer recommendations are not modified by residual inorganic N or biologically active N, but only by realistic yield expectation set for each soil type by crop. However...
Setting the Stage for Physical Activity for Secondary Students
ERIC Educational Resources Information Center
Ciccomascolo, Lori; Riebe, Deborah
2006-01-01
Despite the positive long-term physiological and psychological effects of exercise, many young adults between the ages of 12 and 21 years do not participate in regular physical activity. With the time constraints and other challenges in teaching and assessing students, physical educators need realistic strategies that will help in their efforts to…
Pigeon homing from unfamiliar areas
Wallraff, Hans G
2014-01-01
The conclusion that pigeons and other birds can find their way home from unfamiliar areas by means of olfactory signals is well based on a variety of experiments and supporting investigations of the chemical atmosphere. Here I argue that alternative concepts proposing other sources of geopositional information are disproved by experimental findings or, at least, are not experimentally supported and hardly realistic. PMID:25346789
1989-03-01
management business with references for sources of more detailed information on each subject. Recipients are provided with updated and new fact sheets as...at Milestone...Force, strategic offense, strategic defense; Army,...by the Defense Acquisition Board...An Acquisi-...close combat heavy, close combat...assessment of producibility: realistic competition. Concepts such as teaming, leader-...industry surge and mobilization capacity: multiyear follower
NASA Technical Reports Server (NTRS)
Vilnrotter, Victor
2013-01-01
Recent interest in hybrid RF/Optical communications has led to the development and installation of a "polished-panel" optical receiver evaluation assembly on the 34-meter research antenna at Deep-Space Station 13 (DSS-13) at NASA's Goldstone Communications Complex. The test setup consists of a custom aluminum panel polished to optical smoothness, and a large-sensor CCD camera designed to image the point-spread function (PSF) generated by the polished aluminum panel. Extensive data have been obtained via real-time tracking and imaging of planets and stars at DSS-13. Both "on-source" and "off-source" data were recorded at various elevations, enabling the development of realistic simulations and analytic models to help determine the performance of future deep-space communications systems operating with on-off keying (OOK) or pulse-position-modulated (PPM) signaling formats with photon-counting detection, and compared with the ultimate quantum bound on detection performance for these modulations. Experimentally determined PSFs were scaled to provide realistic signal distributions across a photon-counting detector array when a pulse is received, and uncoded as well as block-coded performance was analyzed and evaluated for a well-known class of block codes.
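A minimal Monte-Carlo sketch of photon-counting PPM detection, in the spirit of the performance analysis described above, is shown below; the modulation order and the mean signal and background photon counts per slot are assumed values, not figures from the experiment.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative parameters: 16-ary PPM, mean signal and background photons per slot.
M, Ks, Kb, n_symbols = 16, 5.0, 0.2, 200000

sent = rng.integers(0, M, n_symbols)
counts = rng.poisson(Kb, size=(n_symbols, M))                          # background in every slot
counts[np.arange(n_symbols), sent] = rng.poisson(Ks + Kb, n_symbols)   # signal slot

# Maximum-count decision; ties resolved by the first index, a slight idealization.
decided = counts.argmax(axis=1)
ser = np.mean(decided != sent)
print(f"simulated PPM symbol error rate: {ser:.4f}")
```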
NASA Astrophysics Data System (ADS)
Popova, E. E.; Coward, A. C.; Nurser, G. A.; de Cuevas, B.; Fasham, M. J. R.; Anderson, T. R.
2006-12-01
A global general circulation model coupled to a simple six-compartment ecosystem model is used to study the extent to which global variability in primary and export production can be realistically predicted on the basis of advanced parameterizations of upper mixed layer physics, without recourse to introducing extra complexity in model biology. The "K profile parameterization" (KPP) scheme employed, combined with 6-hourly external forcing, is able to capture short-term periodic and episodic events such as diurnal cycling and storm-induced deepening. The model realistically reproduces various features of global ecosystem dynamics that have been problematic in previous global modelling studies, using a single generic parameter set. The realistic simulation of deep convection in the North Atlantic, and lack of it in the North Pacific and Southern Oceans, leads to good predictions of chlorophyll and primary production in these contrasting areas. Realistic levels of primary production are predicted in the oligotrophic gyres due to high frequency external forcing of the upper mixed layer (accompanying paper Popova et al., 2006) and novel parameterizations of zooplankton excretion. Good agreement is shown between model and observations at various JGOFS time series sites: BATS, KERFIX, Papa and HOT. One exception is the northern North Atlantic where lower grazing rates are needed, perhaps related to the dominance of mesozooplankton there. The model is therefore not globally robust in the sense that additional parameterizations are needed to realistically simulate ecosystem dynamics in the North Atlantic. Nevertheless, the work emphasises the need to pay particular attention to the parameterization of mixed layer physics in global ocean ecosystem modelling as a prerequisite to increasing the complexity of ecosystem models.
NASA Astrophysics Data System (ADS)
Kalinowski, Martin B.; Grosch, Martina; Hebel, Simon
2014-03-01
Emissions from medical isotope production are the most important source of background for atmospheric radioxenon measurements, which are an essential part of nuclear explosion monitoring. This article presents a new approach for estimating the global annual radioxenon emission inventory caused by medical isotope production, using the number of Tc-99m applications in hospitals as the basis. Tc-99m is the most commonly used isotope in radiology and dominates the medical isotope production. This paper presents the first estimate of the global production of Tc-99m. Depending on the production and transport scenario, global xenon emissions of 11-45 PBq/year can be derived from the global isotope demand. The lower end of this estimate is in good agreement with other estimates that make use of reported releases and realistic process simulations. This proves the validity of the complementary assessment method proposed in this paper. It may be of relevance for future emission scenarios and for estimating the contribution to the global source term from countries and operators that do not make sufficient radioxenon release information available. It depends on sound data on medical treatments with radio-pharmaceuticals and on technical information on the production process of the supplier. This might help in understanding the apparent underestimation of the global emission inventory that has been found by atmospheric transport modelling.
Error analysis of satellite attitude determination using a vision-based approach
NASA Astrophysics Data System (ADS)
Carozza, Ludovico; Bevilacqua, Alessandro
2013-09-01
Improvements in communication and processing technologies have opened the door to exploiting on-board cameras to compute objects' spatial attitude using only the visual information from sequences of remotely sensed images. The strategies and the algorithmic approach used to extract such information affect the estimation accuracy of the three-axis orientation of the object. This work presents a method for analyzing the most relevant error sources, including numerical ones, possible drift effects and their influence on the overall accuracy, referring to vision-based approaches. The method in particular focuses on the analysis of the image registration algorithm, carried out through dedicated simulations. The overall accuracy has been assessed on a challenging case study, for which accuracy represents the fundamental requirement. In particular, attitude determination has been analyzed for small satellites, by comparing theoretical findings to metric results from simulations on realistic ground-truth data. Significant laboratory experiments, using a numerical control unit, have further confirmed the outcome. We believe that our analysis approach, as well as our findings in terms of error characterization, can be useful at proof-of-concept design and planning levels, since they emphasize the main sources of error for vision-based approaches employed for satellite attitude estimation. Nevertheless, the approach we present is also of general interest for all related application domains that require an accurate estimation of three-dimensional orientation parameters (e.g., robotics, airborne stabilization).
The automatic neutron guide optimizer guide_bot
NASA Astrophysics Data System (ADS)
Bertelsen, Mads
2017-09-01
The guide optimization software guide_bot is introduced, the main purpose of which is to reduce the time spent programming when performing numerical optimization of neutron guides. A limited amount of information on the overall guide geometry and a figure of merit describing the desired beam is used to generate the code necessary to solve the problem. A generated McStas instrument file performs the Monte Carlo ray-tracing, which is controlled by iFit optimization scripts. The resulting optimal guide is thoroughly characterized, both in terms of brilliance transfer from an idealized source and on a more realistic source such as the ESS Butterfly moderator. Basic MATLAB knowledge is required from the user, but no experience with McStas or iFit is necessary. This paper briefly describes how guide_bot is used and some important aspects of the code. A short validation against earlier work is performed which shows the expected agreement. In addition a scan over the vertical divergence requirement, where individual guide optimizations are performed for each corresponding figure of merit, provides valuable data on the consequences of this parameter. The guide_bot software package is best suited for the start of an instrument design project as it excels at comparing a large amount of different guide alternatives for a specific set of instrument requirements, but is still applicable in later stages as constraints can be used to optimize more specific guides.
NASA Astrophysics Data System (ADS)
Amme, J.; Pleßmann, G.; Bühler, J.; Hülk, L.; Kötter, E.; Schwaegerl, P.
2018-02-01
The increasing integration of renewable energy into the electricity supply system creates new challenges for distribution grids. The planning and operation of distribution systems require appropriate grid models that consider the heterogeneity of existing grids. In this paper, we describe a novel method to generate synthetic medium-voltage (MV) grids, which we applied in our DIstribution Network GeneratOr (DINGO). DINGO is open-source software and uses freely available data. Medium-voltage grid topologies are synthesized based on location and electricity demand in defined demand areas. For this purpose, we use GIS data containing demand areas with high-resolution spatial data on physical properties, land use, energy, and demography. The grid topology is treated as a capacitated vehicle routing problem (CVRP) combined with a local search metaheuristic. We also consider the current planning principles for MV distribution networks, paying special attention to line congestion and voltage limit violations. In the modelling process, we included power flow calculations for validation. The resulting grid model datasets contain 3608 synthetic MV grids in high resolution, covering all of Germany and taking local characteristics into account. We compared the modelled networks with real network data. In terms of number of transformers and total cable length, we conclude that the method presented in this paper generates realistic grids that could be used to implement a cost-optimised electrical energy system.
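The following toy sketch illustrates the CVRP-plus-local-search idea in general terms (it is not DINGO's implementation or API): routes from a single "substation" to demand nodes are built greedily under a capacity limit and then shortened with a simple 2-opt pass; all instance data are random.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy instance: one substation at the origin, demand nodes, a per-route capacity limit.
n_nodes, capacity = 30, 8
xy = rng.uniform(-1, 1, size=(n_nodes, 2))
demand = rng.integers(1, 3, n_nodes)
depot = np.zeros(2)

def dist(a, b):
    return float(np.linalg.norm(a - b))

# Greedy construction: start a new route whenever the capacity limit is reached.
unvisited = set(range(n_nodes))
routes = []
while unvisited:
    load, pos, route = 0, depot, []
    while True:
        feasible = [i for i in unvisited if load + demand[i] <= capacity]
        if not feasible:
            break
        nxt = min(feasible, key=lambda i: dist(pos, xy[i]))
        route.append(nxt)
        unvisited.discard(nxt)
        load += demand[nxt]
        pos = xy[nxt]
    routes.append(route)

def route_length(route):
    path = [depot] + [xy[i] for i in route] + [depot]
    return sum(dist(path[k], path[k + 1]) for k in range(len(path) - 1))

# Simple 2-opt local search within each route (a stand-in for a fuller metaheuristic).
def two_opt(route):
    best = route[:]
    improved = True
    while improved:
        improved = False
        for i in range(len(best) - 1):
            for j in range(i + 1, len(best)):
                cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                if route_length(cand) < route_length(best) - 1e-12:
                    best, improved = cand, True
    return best

routes = [two_opt(r) for r in routes]
print("routes:", routes)
print("total length:", round(sum(route_length(r) for r in routes), 3))
```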
NASA Astrophysics Data System (ADS)
Zhang, Xiaoyang; Friedl, Mark A.; Schaaf, Crystal B.
2006-12-01
In the last two decades the availability of global remote sensing data sets has provided a new means of studying global patterns and dynamics in vegetation. The vast majority of previous work in this domain has used data from the Advanced Very High Resolution Radiometer, which until recently was the primary source of global land remote sensing data. In recent years, however, a number of new remote sensing data sources have become available that have significantly improved the capability of remote sensing to monitor global ecosystem dynamics. In this paper, we describe recent results using data from NASA's Moderate Resolution Imaging Spectroradiometer to study global vegetation phenology. Using a novel method based on fitting piecewise logistic models to time series data from MODIS, key transition dates in the annual cycle(s) of vegetation growth can be estimated in an ecologically realistic fashion. Using this method we have produced global maps of seven phenological metrics at 1-km spatial resolution for all ecosystems exhibiting identifiable annual phenologies. These metrics include the day of year for (1) the onset of greenness increase (greenup), (2) the onset of greenness maximum (maturity), (3) the onset of greenness decrease (senescence), and (4) the onset of greenness minimum (dormancy). The three remaining metrics are the growing season minimum, maximum, and summation of the enhanced vegetation index derived from MODIS. Comparison of vegetation phenology retrieved from MODIS with in situ measurements shows that these metrics provide realistic estimates of the four transition dates identified above. More generally, the spatial distribution of phenological metrics estimated from MODIS data is qualitatively realistic, and exhibits strong correspondence with temperature patterns in mid- and high-latitude climates, with rainfall seasonality in seasonally dry climates, and with cropping patterns in agricultural areas.
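A minimal sketch of the logistic-fitting idea is given below: a synthetic greenup segment of an EVI time series is fitted with a logistic curve, and transition dates are approximated from the extrema of a curvature-change proxy (here the third derivative of the fitted curve); the series, parameters, and the proxy are illustrative simplifications of the published method.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)

# Synthetic greenup segment of an EVI time series for one pixel (8-day composites).
t = np.arange(1, 181, 8.0)
def logistic(t, c, a, b, vmin):
    return vmin + c / (1.0 + np.exp(a + b * t))

evi = logistic(t, 0.45, 12.0, -0.10, 0.15) + 0.01 * rng.standard_normal(t.size)

# Fit the logistic segment, then approximate transition dates from the local maxima
# of a curvature-change proxy; the published method uses the rate of change of
# curvature of the fitted function.
popt, _ = curve_fit(logistic, t, evi, p0=(0.4, 10.0, -0.1, 0.1))
tt = np.linspace(t[0], t[-1], 2000)
fit = logistic(tt, *popt)
d3 = np.gradient(np.gradient(np.gradient(fit, tt), tt), tt)

interior = (tt > t[0] + 5) & (tt < t[-1] - 5)          # ignore edge effects
is_max = np.r_[False, (d3[1:-1] > d3[:-2]) & (d3[1:-1] > d3[2:]), False] & interior
peaks = tt[is_max]
print(f"onset of greenup ~ day {peaks[0]:.0f}, onset of maturity ~ day {peaks[-1]:.0f}")
```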
Global and Regional Impacts of HONO on the Chemical Composition of Clouds and Aerosols
NASA Technical Reports Server (NTRS)
Elshorbany, Y. F.; Crutzen, P. J.; Steil, B.; Pozzer, A.; Tost, H.; Lelieveld, J.
2014-01-01
Recently, realistic simulation of nitrous acid (HONO) based on the HONO/NO(sub x) ratio of 0.02 was found to have a significant impact on the global budgets of HO(sub x) (OH + HO2) and gas phase oxidation products in polluted regions, especially in winter when other photolytic sources are of minor importance. It has been reported that chemistry-transport models underestimate sulphate concentrations, mostly during winter. Here we show that simulating realistic HONO levels can significantly enhance aerosol sulphate (S(VI)) due to the increased formation of H2SO4. Even though in-cloud aqueous phase oxidation of dissolved SO2 (S(IV)) is the main source of S(VI), it appears that HONO related enhancement of H2O2 does not significantly affect sulphate because of the predominantly S(IV) limited conditions, except over eastern Asia. Nitrate is also increased via enhanced gaseous HNO3 formation and N2O5 hydrolysis on aerosol particles. Ammonium nitrate is enhanced in ammonia-rich regions but not under ammonia-limited conditions. Furthermore, particle number concentrations are also higher, accompanied by the transfer from hydrophobic to hydrophilic aerosol modes. This implies a significant impact on the particle lifetime and cloud nucleating properties. The HONO induced enhancements of all species studied are relatively strong in winter though negligible in summer. Simulating realistic HONO levels is found to improve the model measurement agreement of sulphate aerosols, most apparent over the US. Our results underscore the importance of HONO for the atmospheric oxidizing capacity and corroborate the central role of cloud chemical processing in S(IV) formation.
Fiberfox: facilitating the creation of realistic white matter software phantoms.
Neher, Peter F; Laun, Frederik B; Stieltjes, Bram; Maier-Hein, Klaus H
2014-11-01
Phantom-based validation of diffusion-weighted image processing techniques is an important key to innovation in the field and is widely used. Openly available and user friendly tools for the flexible generation of tailor-made datasets for the specific tasks at hand can greatly facilitate the work of researchers around the world. We present an open-source framework, Fiberfox, that enables (1) the intuitive definition of arbitrary artificial white matter fiber tracts, (2) signal generation from those fibers by means of the most recent multi-compartment modeling techniques, and (3) simulation of the actual MR acquisition that allows for the introduction of realistic MRI-related effects into the final image. We show that real acquisitions can be closely approximated by simulating the acquisition of the well-known FiberCup phantom. We further demonstrate the advantages of our framework by evaluating the effects of imaging artifacts and acquisition settings on the outcome of 12 tractography algorithms. Our findings suggest that experiments on a realistic software phantom might change the conclusions drawn from earlier hardware phantom experiments. Fiberfox may find application in validating and further developing methods such as tractography, super-resolution, diffusion modeling or artifact correction. Copyright © 2013 Wiley Periodicals, Inc.
Design principles and optimal performance for molecular motors under realistic constraints
NASA Astrophysics Data System (ADS)
Tu, Yuhai; Cao, Yuansheng
2018-02-01
The performance of a molecular motor, characterized by its power output and energy efficiency, is investigated in the motor design space spanned by the stepping rate function and the motor-track interaction potential. Analytic results and simulations show that a gating mechanism that restricts forward stepping in a narrow window in configuration space is needed for generating high power at physiologically relevant loads. By deriving general thermodynamics laws for nonequilibrium motors, we find that the maximum torque (force) at stall is less than its theoretical limit for any realistic motor-track interactions due to speed fluctuations. Our study reveals a tradeoff for the motor-track interaction: while a strong interaction generates a high power output for forward steps, it also leads to a higher probability of wasteful spontaneous back steps. Our analysis and simulations show that this tradeoff sets a fundamental limit to the maximum motor efficiency in the presence of spontaneous back steps, i.e., loose-coupling. Balancing this tradeoff leads to an optimal design of the motor-track interaction for achieving a maximum efficiency close to 1 for realistic motors that are not perfectly coupled with the energy source. Comparison with existing data and suggestions for future experiments are discussed.
Autumn Algorithm-Computation of Hybridization Networks for Realistic Phylogenetic Trees.
Huson, Daniel H; Linz, Simone
2018-01-01
A minimum hybridization network is a rooted phylogenetic network that displays two given rooted phylogenetic trees using a minimum number of reticulations. Previous mathematical work on their calculation has usually assumed the input trees to be bifurcating, correctly rooted, or that they both contain the same taxa. These assumptions do not hold in biological studies and "realistic" trees have multifurcations, are difficult to root, and rarely contain the same taxa. We present a new algorithm for computing minimum hybridization networks for a given pair of "realistic" rooted phylogenetic trees. We also describe how the algorithm might be used to improve the rooting of the input trees. We introduce the concept of "autumn trees", a nice framework for the formulation of algorithms based on the mathematics of "maximum acyclic agreement forests". While the main computational problem is hard, the run-time depends mainly on how different the given input trees are. In biological studies, where the trees are reasonably similar, our parallel implementation performs well in practice. The algorithm is available in our open source program Dendroscope 3, providing a platform for biologists to explore rooted phylogenetic networks. We demonstrate the utility of the algorithm using several previously studied data sets.
NASA Astrophysics Data System (ADS)
Šimkanin, Ján; Kyselica, Juraj
2017-12-01
Numerical simulations of the geodynamo are becoming more realistic because of advances in computer technology. Here, the geodynamo model is investigated numerically at extremely low Ekman and magnetic Prandtl numbers using the PARODY dynamo code. These parameters are more realistic than those used in previous numerical studies of the geodynamo. Our model is based on the Boussinesq approximation, and the temperature gradient between the upper and lower boundaries is the source of convection. This study attempts to answer the question of how realistic geodynamo models are. Numerical results show that our dynamo belongs to the strong-field dynamos. The generated magnetic field is dipolar and large-scale, while convection is small-scale and sheet-like flows (plumes) are preferred to columnar convection. The scales of the magnetic and velocity fields are separated, which enables hydromagnetic dynamos to maintain the magnetic field at low magnetic Prandtl numbers. The inner core rotation rate is lower than that in previous geodynamo models. On the other hand, the dimensional magnitudes of the velocity and magnetic fields, and those of the magnetic and viscous dissipation, are larger than those expected in the Earth's core due to the chosen parameter range.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Annette Rohr
2006-03-01
TERESA (Toxicological Evaluation of Realistic Emissions of Source Aerosols) involves exposing laboratory rats to realistic coal-fired power plant and mobile source emissions to help determine the relative toxicity of these PM sources. There are three coal-fired power plants in the TERESA program; this report describes the results of fieldwork conducted at the first plant, located in the Upper Midwest. The project was technically challenging by virtue of its novel design and requirement for the development of new techniques. By examining aged, atmospherically transformed aerosol derived from power plant stack emissions, we were able to evaluate the toxicity of PM derived from coal combustion in a manner that more accurately reflects the exposure of concern than existing methodologies. TERESA also involves assessment of actual plant emissions in a field setting--an important strength since it reduces the question of representativeness of emissions. A sampling system was developed and assembled to draw emissions from the stack; stack sampling conducted according to standard EPA protocol suggested that the sampled emissions are representative of those exiting the stack into the atmosphere. Two mobile laboratories were then outfitted for the study: (1) a chemical laboratory in which the atmospheric aging was conducted and which housed the bulk of the analytical equipment; and (2) a toxicological laboratory, which contained animal caging and the exposure apparatus. Animal exposures were carried out from May-November 2004 to a number of simulated atmospheric scenarios. Toxicological endpoints included (1) pulmonary function and breathing pattern; (2) bronchoalveolar lavage fluid cytological and biochemical analyses; (3) blood cytological analyses; (4) in vivo oxidative stress in heart and lung tissue; and (5) heart and lung histopathology. Results indicated no differences between exposed and control animals in any of the endpoints examined. Exposure concentrations for the scenarios utilizing secondary particles (oxidized emissions) ranged from 70-256 {micro}g/m{sup 3}, and some of the atmospheres contained high acidity levels (up to 49 {micro}g/m{sup 3} equivalent of sulfuric acid). However, caution must be used in generalizing these results to other power plants utilizing different coal types and with different plant configurations, as the emissions may vary based on these factors.
Mukherjee, Joyeeta Mitra; Hutton, Brian F; Johnson, Karen L; Pretorius, P Hendrik; King, Michael A
2014-01-01
Motion estimation methods in single photon emission computed tomography (SPECT) can be classified into methods which depend on just the emission data (data-driven), or those that use some other source of information such as an external surrogate. The surrogate-based methods estimate the motion exhibited externally which may not correlate exactly with the movement of organs inside the body. The accuracy of data-driven strategies on the other hand is affected by the type and timing of motion occurrence during acquisition, the source distribution, and various degrading factors such as attenuation, scatter, and system spatial resolution. The goal of this paper is to investigate the performance of two data-driven motion estimation schemes based on the rigid-body registration of projections of motion-transformed source distributions to the acquired projection data for cardiac SPECT studies. Comparison is also made of six intensity based registration metrics to an external surrogate-based method. In the data-driven schemes, a partially reconstructed heart is used as the initial source distribution. The partially-reconstructed heart has inaccuracies due to limited angle artifacts resulting from using only a part of the SPECT projections acquired while the patient maintained the same pose. The performance of different cost functions in quantifying consistency with the SPECT projection data in the data-driven schemes was compared for clinically realistic patient motion occurring as discrete pose changes, one or two times during acquisition. The six intensity-based metrics studied were mean-squared difference (MSD), mutual information (MI), normalized mutual information (NMI), pattern intensity (PI), normalized cross-correlation (NCC) and entropy of the difference (EDI). Quantitative and qualitative analysis of the performance is reported using Monte-Carlo simulations of a realistic heart phantom including degradation factors such as attenuation, scatter and system spatial resolution. Further the visual appearance of motion-corrected images using data-driven motion estimates was compared to that obtained using the external motion-tracking system in patient studies. Pattern intensity and normalized mutual information cost functions were observed to have the best performance in terms of lowest average position error and stability with degradation of image quality of the partial reconstruction in simulations. In all patients, the visual quality of PI-based estimation was either significantly better or comparable to NMI-based estimation. Best visual quality was obtained with PI-based estimation in 1 of the 5 patient studies, and with external-surrogate based correction in 3 out of 5 patients. In the remaining patient study there was little motion and all methods yielded similar visual image quality. PMID:24107647
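Three of the intensity-based similarity metrics compared above can be written compactly; the sketch below evaluates mean-squared difference, normalized cross-correlation, and a histogram-based mutual information on a toy reference projection and a shifted copy. The images and bin count are arbitrary, and the implementations are generic rather than those used in the study.

```python
import numpy as np

def msd(a, b):
    """Mean-squared difference (lower is better)."""
    return float(np.mean((a - b) ** 2))

def ncc(a, b):
    """Normalized cross-correlation (higher is better)."""
    a0, b0 = a - a.mean(), b - b.mean()
    return float((a0 * b0).sum() / np.sqrt((a0 ** 2).sum() * (b0 ** 2).sum()))

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information (higher is better)."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

# Toy "projections": a blurred disc and a slightly shifted copy of it.
y, x = np.mgrid[:64, :64]
proj_ref = np.exp(-(((x - 32) ** 2 + (y - 32) ** 2) / 120.0))
proj_shift = np.roll(proj_ref, 3, axis=1)            # simulated 3-pixel motion

for name, f in [("MSD", msd), ("NCC", ncc), ("MI", mutual_information)]:
    print(f"{name}: aligned={f(proj_ref, proj_ref):.4f}  shifted={f(proj_ref, proj_shift):.4f}")
```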
Wolf, M M; Braukmann, C J; Ramp, K A
1987-01-01
The past 20 years have been productive ones for the field of applied behavior analysis. A brief review of our own efforts during this period reveals that we have accomplished several but not all of our goals for the Teaching-Family approach. In this context, we note that the setting of realistic and appropriate goals is important for the field and for society. Moreover, we suggest that the realistic goal for some persons with serious delinquent behavior may be extended supportive and socializing treatment rather than permanent cure from conventional short-term treatment programs. We base this suggestion on the accumulating evidence that serious delinquent behavior may often be part of a significantly disabling and durable condition that consists of multiple antisocial and dysfunctional behaviors, often runs in families, and robustly eludes effective short-term treatment. Like other significant disabilities such as retardation, autism, and blindness, the effects of this condition may be a function of an interaction of environmental and constitutional variables. We argue that our field has the wherewithal to construct effective and humane long-term supportive environments for seriously delinquent youths. In this regard, we explore the dimensions, rationales, logistics, and beginnings of a new treatment direction that involves long-term supportive family treatment. We contend that such supportive families may be able to provide long, perhaps even lifetime, socializing influences through models, values, and contingencies that seem essential for developing and maintaining prosocial behavior in these high-risk youths. PMID:3323156
Higgins, Angela; O'Halloran, Peter; Porter, Sam
2015-09-01
The success of measures to reduce long-term sickness absence (LTSA) in public sector organisations is contingent on organisational context. This realist evaluation investigates how interventions interact with context to influence successful management of LTSA. Multi-method case study in three Health and Social Care Trusts in Northern Ireland comprising realist literature review, semi-structured interviews (61 participants), Process-Mapping and feedback meetings (59 participants), observation of training, analysis of documents. Important activities included early intervention; workplace-based occupational rehabilitation; robust sickness absence policies with clear trigger points for action. Used appropriately, in a context of good interpersonal and interdepartmental communication and shared goals, these are able to increase the motivation of staff to return to work. Line managers are encouraged to take a proactive approach when senior managers provide support and accountability. Hindering factors: delayed intervention; inconsistent implementation of policy and procedure; lack of resources; organisational complexity; stakeholders misunderstanding each other's goals and motives. Different mechanisms have the potential to encourage common motivations for earlier return from LTSA, such as employees feeling that they have the support of their line manager to return to work and having the confidence to do so. Line managers proactively engage when they have confidence in the support of seniors and in their own ability to address LTSA. Fostering these motivations calls for a thoughtful, diagnostic process, taking into account the contextual factors (and whether they can be modified) and considering how a given intervention can be used to trigger the appropriate mechanisms.
Radioisotope Power Sources for MEMS Devices,
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanchard, J.P.
2001-06-17
Microelectromechanical systems (MEMS) comprise a rapidly expanding research field with potential applications varying from sensors in airbags to more recent optical applications. Depending on the application, these devices often require an on-board power source for remote operation, especially in cases requiring operation for an extended period of time. Previously suggested power sources include fossil fuels and solar energy, but nuclear power sources may provide significant advantages for certain applications. Hence, the objective of this study is to establish the viability of using radioisotopes to power realistic MEMS devices. A junction-type battery was constructed using silicon and a {sup 63}Ni liquid source. A source volume containing 64 {micro}Ci provided a power of {approx}0.07 nW. A more novel application of nuclear sources for MEMS applications involves the creation of a resonator that is driven by charge collection in a cantilever beam. Preliminary results have established the feasibility of this concept, and future work will optimize the design for various applications.
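A back-of-the-envelope check of the reported figures, under the assumption of a mean Ni-63 beta energy of roughly 17 keV, is sketched below; it suggests the 64-microcurie source emits a few nanowatts of beta power, so the reported {approx}0.07 nW electrical output would correspond to a conversion efficiency on the order of one percent.

```python
# Back-of-the-envelope check (not from the paper): beta power emitted by 64 uCi of Ni-63,
# assuming a mean beta energy of ~17.4 keV (an assumed literature value).
CI_TO_BQ = 3.7e10
EV_TO_J = 1.602e-19

activity = 64e-6 * CI_TO_BQ              # decays per second
mean_beta_energy_J = 17.4e3 * EV_TO_J    # joules per decay (assumed mean)
power_emitted = activity * mean_beta_energy_J

print(f"emitted beta power ~ {power_emitted * 1e9:.1f} nW")
# A few nW emitted vs. ~0.07 nW electrical output implies roughly 1% conversion,
# plausible for a simple silicon junction.
```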
The impact of realistic source shape and flexibility on source mask optimization
NASA Astrophysics Data System (ADS)
Aoyama, Hajime; Mizuno, Yasushi; Hirayanagi, Noriyuki; Kita, Naonori; Matsui, Ryota; Izumi, Hirohiko; Tajima, Keiichi; Siebert, Joachim; Demmerle, Wolfgang; Matsuyama, Tomoyuki
2013-04-01
Source mask optimization (SMO) is widely used to make state-of-the-art semiconductor devices in high volume manufacturing. To realize mature SMO solutions in production, the Intelligent Illuminator, an illumination system on Nikon scanners, is useful because it can generate freeform sources with high fidelity to the target. Proteus SMO, which employs a co-optimization method and inserts a validation step using mask 3D effects and resist properties for an accurate prediction of wafer printing, can take the properties of the Intelligent Illuminator into account. We investigate the impact of the source properties on SMO for a static random-access memory pattern. The quality of a source generated on the scanner, compared with the SMO target, is evaluated through in-situ measurement and through aerial image simulation based on the measured source data. Furthermore, we discuss the universality of the source for use on multiple scanners, validated against estimated scanner errors.
NASA Astrophysics Data System (ADS)
Gebhart, T. E.; Martinez-Rodriguez, R. A.; Baylor, L. R.; Rapp, J.; Winfrey, A. L.
2017-08-01
To produce a realistic tokamak-like plasma environment in a linear plasma device, a transient source is needed to deliver heat and particle fluxes similar to those seen in an edge localized mode (ELM). ELMs in future large tokamaks will deliver heat fluxes of ~1 GW/m2 to the divertor plasma-facing components at a few Hz. An electrothermal plasma source can deliver heat fluxes of this magnitude. These sources operate in an ablative arc regime which is driven by a DC capacitive discharge. An electrothermal source was configured with two pulse lengths and tested under a solenoidal magnetic field to determine the resulting impact on liner ablation, plasma parameters, and delivered heat flux. The arc travels through and ablates a boron nitride liner and strikes a tungsten plate. The tungsten target plate is analyzed for surface damage using a scanning electron microscope.
Sparse EEG/MEG source estimation via a group lasso
Lim, Michael; Ales, Justin M.; Cottereau, Benoit R.; Hastie, Trevor
2017-01-01
Non-invasive recordings of human brain activity through electroencephalography (EEG) or magnetoencephalography (MEG) are of value for both basic science and clinical applications in sensory, cognitive, and affective neuroscience. Here we introduce a new approach to estimating the intra-cranial sources of EEG/MEG activity measured from extra-cranial sensors. The approach is based on the group lasso, a sparse-prior inverse that has been adapted to take advantage of functionally-defined regions of interest for the definition of physiologically meaningful groups within a functionally-based common space. Detailed simulations using realistic source geometries and data from a human Visual Evoked Potential experiment demonstrate that the group-lasso method has improved performance over traditional ℓ2 minimum-norm methods. In addition, we show that pooling source estimates across subjects over functionally defined regions of interest results in improvements in the accuracy of source estimates for both the group-lasso and minimum-norm approaches. PMID:28604790
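A minimal proximal-gradient sketch of the group-lasso estimator described above is given below, with sources grouped into hypothetical regions of interest; the problem sizes, regularization weight, and solver details are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def group_lasso(X, y, groups, lam=0.1, n_iter=3000):
    """Proximal-gradient (ISTA) solver for 0.5*||y - Xw||^2 + lam * sum_g ||w_g||_2.

    `groups` is a list of index arrays, one per group (e.g. one per region of interest).
    """
    w = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2        # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        z = w - step * grad
        for g in groups:                          # block soft-thresholding
            norm_g = np.linalg.norm(z[g])
            z[g] = 0.0 if norm_g == 0 else max(0.0, 1 - step * lam / norm_g) * z[g]
        w = z
    return w

# Toy example: 20 sensors, 60 sources in 10 groups of 6, one truly active group.
rng = np.random.default_rng(6)
X = rng.standard_normal((20, 60))
groups = [np.arange(i, i + 6) for i in range(0, 60, 6)]
w_true = np.zeros(60)
w_true[12:18] = rng.standard_normal(6)
y = X @ w_true + 0.05 * rng.standard_normal(20)

w_hat = group_lasso(X, y, groups, lam=2.0)
active = [i for i, g in enumerate(groups) if np.linalg.norm(w_hat[g]) > 1e-3]
print("recovered active group(s):", active, "(true active group: 2)")
```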
Re-Innovating Recycling for Turbulent Boundary Layer Simulations
NASA Astrophysics Data System (ADS)
Ruan, Joseph; Blanquart, Guillaume
2017-11-01
Historically, turbulent boundary layers along a flat plate have been expensive to simulate numerically, in part due to the difficulty of initializing the inflow with "realistic" turbulence, but also due to boundary layer growth. The former has been resolved in several ways, primarily by dedicating a region of at least 10 boundary layer thicknesses in width to rescaling and recycling the flow, or by extending the domain far enough downstream to allow a laminar flow to develop into turbulence. Both of these methods are relatively costly. We propose a new method to remove the need for an inflow region, thus reducing computational costs significantly. Leveraging the scale similarity of the mean flow profiles, we introduce a coordinate transformation so that the boundary layer problem can be solved as a parallel flow problem with additional source terms. The solutions in the new coordinate system are statistically homogeneous in the downstream direction, and so the problem can be solved with periodic boundary conditions. The present study shows the stability of this method, its implementation and its validation for a few laminar and turbulent boundary layer cases.
Assessment of oxidative stress in serum by d-ROMs test.
Kilk, K; Meitern, R; Härmson, O; Soomets, U; Hõrak, P
2014-08-01
Assessment of oxidative stress is an important but technically challenging procedure in medical and biological research. The reactive oxygen metabolites (d-ROMs) test is a simple assay marketed for analyzing the total amount of hydroperoxides in serum via the Fenton reaction. Earlier reports have raised a suspicion that part of the signal detected in the assay comes from sources other than metabolites generated by oxidative stress. The aim of this study was to identify which serum components interfere with the d-ROMs signal. By applying sodium azide, ethylenediaminetetraacetic acid, and sodium dodecyl sulphate, by varying temperature, and by spiking endogenous substances, we demonstrate that in the case of mammalian sera the assay determines ceruloplasmin (CP) activity with potential interferences from hydroperoxides, iron level, thiols, and albumin. In sera of avian species hydroperoxides contribute more to the test outcome, but the CP part is insensitive to inhibition by azide. In conclusion, this assay has deficiencies in terms of detecting realistic concentrations of hydroperoxides, mostly measures CP, and is also subject to interference from other serum components, making it very difficult to interpret in most biological systems.
NASA Technical Reports Server (NTRS)
Toksoz, M. Nafi
1987-01-01
The long term objective of this project is to interpret NASA's Crustal Dynamics measurements (SLR) in the Eastern Mediterranean region in terms of relative plate motions and intraplate deformation. The approach is to combine realistic modeling studies with an analysis of available geophysical and geological observations to provide a framework for interpreting NASA's measurements. This semi-annual report concentrates on recent results regarding the tectonics of Anatolia and surrounding regions from ground based observations. Also briefly reported on is progress made in using GPS measurements to densify SLR observations in the Eastern Mediterranean.
NASA Astrophysics Data System (ADS)
Nguyen, Thinh; Potter, Thomas; Grossman, Robert; Zhang, Yingchun
2018-06-01
Objective. Neuroimaging has been employed as a promising approach to advance our understanding of brain networks in both basic and clinical neuroscience. Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) represent two neuroimaging modalities with complementary features; EEG has high temporal resolution and low spatial resolution while fMRI has high spatial resolution and low temporal resolution. Multimodal EEG inverse methods have attempted to capitalize on these properties but have been subjected to localization error. The dynamic brain transition network (DBTN) approach, a spatiotemporal fMRI constrained EEG source imaging method, has recently been developed to address these issues by solving the EEG inverse problem in a Bayesian framework, utilizing fMRI priors in a spatial and temporal variant manner. This paper presents a computer simulation study to provide a detailed characterization of the spatial and temporal accuracy of the DBTN method. Approach. Synthetic EEG data were generated in a series of computer simulations, designed to represent realistic and complex brain activity at superficial and deep sources with highly dynamical activity time-courses. The source reconstruction performance of the DBTN method was tested against the fMRI-constrained minimum norm estimates algorithm (fMRIMNE). The performances of the two inverse methods were evaluated both in terms of spatial and temporal accuracy. Main results. In comparison with the commonly used fMRIMNE method, results showed that the DBTN method produces results with increased spatial and temporal accuracy. The DBTN method also demonstrated the capability to reduce crosstalk in the reconstructed cortical time-course(s) induced by neighboring regions, mitigate depth bias and improve overall localization accuracy. Significance. The improved spatiotemporal accuracy of the reconstruction allows for an improved characterization of complex neural activity. This improvement can be extended to any subsequent brain connectivity analyses used to construct the associated dynamic brain networks.
NASA Technical Reports Server (NTRS)
Quinlan, Jesse R.; Drozda, Tomasz G.; McDaniel, James C.; Lacaze, Guilhem; Oefelein, Joseph
2015-01-01
In an effort to make large eddy simulation of hydrocarbon-fueled scramjet combustors more computationally accessible using realistic chemical reaction mechanisms, a compressible flamelet/progress variable (FPV) model was proposed that extends current FPV model formulations to high-speed, compressible flows. Development of this model relied on observations garnered from an a priori analysis of the Reynolds-Averaged Navier-Stokes (RANS) data obtained for the Hypersonic International Flight Research and Experimentation (HI-FiRE) dual-mode scramjet combustor. The RANS data were obtained using a reduced chemical mechanism for the combustion of a JP-7 surrogate and were validated using available experimental data. These RANS data were then post-processed to obtain, in an a priori fashion, the scalar fields corresponding to an FPV-based modeling approach. In the current work, in addition to the proposed compressible flamelet model, a standard incompressible FPV model was also considered. Several candidate progress variables were investigated for their ability to recover static temperature and major and minor product species. The effects of pressure and temperature on the tabulated progress variable source term were characterized, and model coupling terms embedded in the Reynolds-averaged Navier-Stokes equations were studied. Finally, results for the novel compressible flamelet/progress variable model were presented to demonstrate the improvement attained by modeling the effects of pressure and flamelet boundary conditions on the combustion.
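As a schematic illustration of how a progress-variable approach maps local composition to tabulated flamelet quantities, the sketch below uses one common (assumed) progress-variable definition and an invented one-dimensional temperature table; it is not the compressible FPV model or the HIFiRE chemistry described above.

```python
import numpy as np

# Hypothetical flamelet table: temperature as a function of a normalized reaction
# progress variable c (table values invented for illustration only).
c_table = np.linspace(0.0, 1.0, 11)
T_table = 800.0 + 1700.0 * c_table ** 1.5         # placeholder temperatures [K]

def progress_variable(Y_co2, Y_co, Y_h2o, Y_h2):
    """One common (assumed) choice: a sum of product mass fractions."""
    return Y_co2 + Y_co + Y_h2o + Y_h2

def lookup_temperature(c, c_max=0.35):
    """Normalize c by an assumed equilibrium value and interpolate in the table."""
    return float(np.interp(np.clip(c / c_max, 0.0, 1.0), c_table, T_table))

print(lookup_temperature(progress_variable(0.10, 0.02, 0.08, 0.005)))
```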
NASA Astrophysics Data System (ADS)
Caetano, Marco Antonio Leonel; Gherardi, Douglas Francisco Marcolino; Yoneyama, Takashi
2013-11-01
Socioeconomic-driven processes such as deforestation, forest degradation, forest fires, overgrazing, overharvesting of fuelwood and slash-and-burn practices constitute the primary sources of Greenhouse Gases (GHG) emissions in developing countries. Climate policies can induce the development of clean technology and offer incentives to accelerate reforestation. The Brazilian government has already acknowledged the urgency to invest in policies to reduce anthropogenic CO2 emissions in the Legal Brazilian Amazon (BA). In this work, we propose a scheme to estimate the required investments in clean technology and reforestation to achieve a prescribed short term target value for the atmospheric CO2 emission. Initially, a mathematical model is fitted to the available data to allow forecasting the values of the short term emissions of CO2 under a combination of investments in clean technology and reforestation. The investments to reduce the emissions of CO2 below a target value (400 million tons/year, starting at the initial value of 450) in 3 years’ time are proportional to the regional GDP. Using computer simulation it is possible to generate a range of possible investment values in clean technology and reforestation, so that the prescribed emission reduction is achieved without hindering economic growth. This strategy provides the necessary investment flexibility for the implementation of realistic climate policies.
Environmental degradation of composites for marine structures: new materials and new applications
2016-01-01
This paper describes the influence of seawater ageing on composites used in a range of marine structures, from boats to tidal turbines. Accounting for environmental degradation is an essential element in the multi-scale modelling of composite materials but it requires reliable test data input. The traditional approach to account for ageing effects, based on testing samples after immersion for different periods, is evolving towards coupled studies involving strong interactions between water diffusion and mechanical loading. These can provide a more realistic estimation of long-term behaviour but still require some form of acceleration if useful data, for 20 year lifetimes or more, are to be obtained in a reasonable time. In order to validate extrapolations from short to long times, it is essential to understand the degradation mechanisms, so both physico-chemical and mechanical test data are required. Examples of results from some current studies on more environmentally friendly materials including bio-sourced composites will be described first. Then a case study for renewable marine energy applications will be discussed. In both cases, studies were performed first on coupons at the material level, then during structural testing and analysis of large components, in order to evaluate their long-term behaviour. This article is part of the themed issue ‘Multiscale modelling of the structural integrity of composite materials’. PMID:27242304
A realistic evaluation: the case of protocol-based care
2010-01-01
Background: 'Protocol based care' was envisioned by policy makers as a mechanism for delivering on the service improvement agenda in England. Realistic evaluation is an increasingly popular approach, but few published examples exist, particularly in implementation research. To fill this gap, within this paper we describe the application of a realistic evaluation approach to the study of protocol-based care, whilst sharing findings of relevance about standardising care through the use of protocols, guidelines, and pathways. Methods: Situated between positivism and relativism, realistic evaluation is concerned with the identification of underlying causal mechanisms, how they work, and under what conditions. Fundamentally it focuses attention on finding out what works, for whom, how, and in what circumstances. Results: In this research, we were interested in understanding the relationships between the type and nature of particular approaches to protocol-based care (mechanisms), within different clinical settings (context), and what impacts this resulted in (outcomes). An evidence review using the principles of realist synthesis resulted in a number of propositions, i.e., context, mechanism, and outcome threads (CMOs). These propositions were then 'tested' through multiple case studies, using multiple methods including non-participant observation, interviews, and document analysis through an iterative analysis process. The initial propositions (conjectured CMOs) only partially corresponded to the findings that emerged during analysis. From the iterative analysis process of scrutinising mechanisms, context, and outcomes we were able to draw out some theoretically generalisable features about what works, for whom, how, and in what circumstances in relation to the use of standardised care approaches (refined CMOs). Conclusions: As one of the first studies to apply realistic evaluation in implementation research, it was a good fit, particularly given the growing emphasis on understanding how context influences evidence-based practice. The strengths and limitations of the approach are considered, including how to operationalise it and some of the challenges. This approach provided a useful interpretive framework with which to make sense of the multiple factors that were simultaneously at play and being observed through various data sources, and for developing explanatory theory about using standardised care approaches in practice. PMID:20504293
Nyssen, Olga P; Taylor, Stephanie J C; Wong, Geoff; Steed, Elizabeth; Bourke, Liam; Lord, Joanne; Ross, Carol A; Hayman, Sheila; Field, Victoria; Higgins, Ailish; Greenhalgh, Trisha; Meads, Catherine
2016-04-01
Writing therapy to improve physical or mental health can take many forms. The most researched model of therapeutic writing (TW) is unfacilitated, individual expressive writing (written emotional disclosure). Facilitated writing activities are less widely researched. Databases, including MEDLINE, EMBASE, PsycINFO, Linguistics and Language Behaviour Abstracts, Allied and Complementary Medicine Database and Cumulative Index to Nursing and Allied Health Literature, were searched from inception to March 2013 (updated January 2015). Four TW practitioners provided expert advice. Study procedures were conducted by one reviewer and checked by a second. Randomised controlled trials (RCTs) and non-randomised comparative studies were included. Quality was appraised using the Cochrane risk-of-bias tool. Unfacilitated and facilitated TW studies were analysed separately under International Classification of Diseases, Tenth Revision chapter headings. Meta-analyses were performed where possible using RevMan version 5.2.6 (RevMan 2012, The Cochrane Collaboration, The Nordic Cochrane Centre, Copenhagen, Denmark). Costs were estimated from a UK NHS perspective and three cost-consequence case studies were prepared. Realist synthesis followed Realist and Meta-narrative Evidence Synthesis: Evolving Standards guidelines. To review the clinical effectiveness and cost-effectiveness of TW for people with long-term conditions (LTCs) compared with no writing, or other controls, reporting any relevant clinical outcomes. To conduct a realist synthesis to understand how TW might work, and for whom. From 14,658 unique citations, 284 full-text papers were reviewed and 64 studies (59 RCTs) were included in the final effectiveness reviews. Five studies examined facilitated TW; these were extremely heterogeneous with unclear or high risk of bias but suggested that facilitated TW interventions may be beneficial in individual LTCs. Unfacilitated expressive writing was examined in 59 studies of variable or unreported quality. Overall, there was very little or no evidence of any benefit reported in the following conditions (number of studies): human immunodeficiency virus (six); breast cancer (eight); gynaecological and genitourinary cancers (five); mental health (five); asthma (four); psoriasis (three); and chronic pain (four). In inflammatory arthropathies (six) there was a reduction in disease severity [n = 191, standardised mean difference (SMD) -0.61, 95% confidence interval (CI) -0.96 to -0.26] in the short term on meta-analysis of four studies. For all other LTCs there were either no data, or sparse data with no, or inconsistent, evidence of benefit. Meta-analyses conducted across all of the LTCs provided no evidence that unfacilitated emotional writing had any effect on depression at short- (n = 1563, SMD -0.06, 95% CI -0.29 to 0.17, substantial heterogeneity) or long-term (n = 778, SMD -0.04, 95% CI -0.18 to 0.10, little heterogeneity) follow-up, or on anxiety, physiological or biomarker-based outcomes. One study reported costs, no studies reported cost-effectiveness, and 12 studies reported resource use; meta-analysis suggested reduced medication use but no impact on health centre visits. Estimated costs of intervention were low, but there was insufficient evidence to judge cost-effectiveness. Realist synthesis findings suggested that facilitated TW is a complex intervention and group interaction contributes to the perception of benefit. It was unclear from the available data who might benefit most from facilitated TW.
Difficulties with developing realist synthesis programme theory meant that mechanisms operating during TW remain obscure. Overall, there is little evidence to support the therapeutic effectiveness or cost-effectiveness of unfacilitated expressive writing interventions in people with LTCs. Further research focused on facilitated TW in people with LTCs could be informative. This study is registered as PROSPERO CRD42012003343. Funding was provided by the National Institute for Health Research Health Technology Assessment programme.
Fundamental Investigations and Rational Design of Durable High-Performance SOFC Cathodes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yu; Ding, Dong; Wei, Tao
The main objective of this project is to unravel the degradation mechanism of LSCF cathodes under realistic operating conditions with different types of contaminants, aiming towards the rational design of cathodes with high performance and enhanced durability by combining a porous backbone (such as LSCF) with a thin catalyst coating. The mechanistic understanding will help us to optimize the composition and morphology of the catalyst layer and the microstructure of the LSCF backbone for better performance and durability. More specifically, the technical objectives include: (1) to unravel the degradation mechanism of LSCF cathodes under realistic operating conditions with different types of contaminants using in situ and ex situ measurements performed on specially-designed cathodes; (2) to examine the microstructural and compositional evolution of LSCF cathodes as well as the cathode/electrolyte interfaces under realistic operating conditions; (3) to correlate the fuel cell performance instability and degradation with the microstructural and morphological evolution and surface chemistry change of the cathode under realistic operating conditions; (4) to explore new catalyst materials and electrode structures to enhance the stability of the LSCF cathode under realistic operating conditions; and (5) to validate the long-term stability of the modified LSCF cathode in commercially available cells under realistic operating conditions. We have systematically evaluated LSCF cathodes in symmetrical cells and anode-supported cells under realistic conditions with different types of contaminants such as humidity, CO2, and Cr. Electrochemical models for the design of test cells and understanding of mechanisms have been developed for the exploration of fundamental properties of electrode materials. It is demonstrated that the activity and stability of LSCF cathodes can be degraded by the introduction of contaminants. The microstructural and compositional evolution of LSCF cathodes as well as the cathode/electrolyte interfaces under realistic operating conditions has been studied. It is found that SrO readily segregated/enriched on the LSCF surface. More severe contamination conditions cause more SrO on the surface. Novel catalyst coatings through particle depositions (PrOx) or continuous thin films (PNM) were successfully developed to improve the activity and stability of LSCF cathodes. Finally, we have demonstrated enhanced activity and stability of LSCF cathodes over longer periods of time in homemade and commercially available cells by an optimized PNM (dense film and particles) infiltration process, under clean air and realistic operating conditions (3% H2O, 5% CO2 and direct Crofer contact). Both the performance and durability of single cells with the PNM coating have been enhanced compared with those without the coating. Raman analysis of the cathode surface indicated that the intensity of SrCrO4 was significantly decreased.
Is islet transplantation a realistic approach to curing diabetes?
Jin, Sang-Man; Kim, Kwang-Won
2017-01-01
Since the report of type 1 diabetes reversal in seven consecutive patients by the Edmonton protocol in 2000, pancreatic islet transplantation has been reappraised based on accumulated clinical evidence. Although initially expected to therapeutically target long-term insulin independence, islet transplantation is now indicated for more specific clinical benefits. With the long-awaited report of the first phase 3 clinical trial in 2016, allogeneic islet transplantation is now transitioning from an experimental to a proven therapy for type 1 diabetes with problematic hypoglycemia. Islet autotransplantation has already been therapeutically proven in chronic pancreatitis with severe abdominal pain refractory to conventional treatments, and it holds promise for preventing diabetes after partial pancreatectomy due to benign pancreatic tumors. Based on current evidence, this review focuses on islet transplantation as a realistic approach to treating diabetes.
Conducting feasibilities in clinical trials: an investment to ensure a good study.
Rajadhyaksha, Viraj
2010-07-01
Conducting a clinical trial feasibility assessment is one of the first steps in clinical trial conduct. This process includes assessing internal and environmental capacity, the alignment of the clinical trial (in terms of study design, dose of investigational product, comparator, and patient type) with the local environment, and the potential for conducting the clinical trial in a specific country. A robust feasibility assessment also ensures a realistic appraisal of the capability to conduct the clinical trial. For local affiliates of pharmaceutical organizations and contract research organizations, this is a precursor to study placement and influences the placement decision. This article provides details on the different types of feasibilities, the information to be included, and the relevance of each. The article also aims to provide practical, hands-on suggestions to make feasibilities more realistic and informative.
Experimental Measurement-Device-Independent Quantum Key Distribution
NASA Astrophysics Data System (ADS)
Liu, Yang; Chen, Teng-Yun; Wang, Liu-Jun; Liang, Hao; Shentu, Guo-Liang; Wang, Jian; Cui, Ke; Yin, Hua-Lei; Liu, Nai-Le; Li, Li; Ma, Xiongfeng; Pelc, Jason S.; Fejer, M. M.; Peng, Cheng-Zhi; Zhang, Qiang; Pan, Jian-Wei
2013-09-01
Quantum key distribution is proven to offer unconditional security in communication between two remote users with ideal source and detection. Unfortunately, ideal devices never exist in practice and device imperfections have become the targets of various attacks. By developing up-conversion single-photon detectors with high efficiency and low noise, we faithfully demonstrate the measurement-device-independent quantum-key-distribution protocol, which is immune to all hacking strategies on detection. Meanwhile, we employ the decoy-state method to defend attacks on a nonideal source. By assuming a trusted source scenario, our practical system, which generates more than a 25 kbit secure key over a 50 km fiber link, serves as a stepping stone in the quest for unconditionally secure communications with realistic devices.
Experimental measurement-device-independent quantum key distribution.
Liu, Yang; Chen, Teng-Yun; Wang, Liu-Jun; Liang, Hao; Shentu, Guo-Liang; Wang, Jian; Cui, Ke; Yin, Hua-Lei; Liu, Nai-Le; Li, Li; Ma, Xiongfeng; Pelc, Jason S; Fejer, M M; Peng, Cheng-Zhi; Zhang, Qiang; Pan, Jian-Wei
2013-09-27
Quantum key distribution is proven to offer unconditional security in communication between two remote users with ideal source and detection. Unfortunately, ideal devices never exist in practice and device imperfections have become the targets of various attacks. By developing up-conversion single-photon detectors with high efficiency and low noise, we faithfully demonstrate the measurement-device-independent quantum-key-distribution protocol, which is immune to all hacking strategies on detection. Meanwhile, we employ the decoy-state method to defend attacks on a nonideal source. By assuming a trusted source scenario, our practical system, which generates more than a 25 kbit secure key over a 50 km fiber link, serves as a stepping stone in the quest for unconditionally secure communications with realistic devices.
T. Handley; Nancy Grulke
2008-01-01
We examined the short-term separate and combined effects of simulated nitrogen (N) deposition (fertilization) and ozone (O3) exposure on California black oak seedlings (Quercus kelloggii Newb.), an ecologically important tree of the San Bernardino Mountains downwind of Los Angeles. Realistic concentrations of O3...
Exploring the Usage of a Video Application Tool: Experiences in Film Studies
ERIC Educational Resources Information Center
Ali, Nazlena Mohamad; Smeaton, Alan F.
2011-01-01
This paper explores our experiences in deploying a video application tool in film studies, and its evaluation in terms of realistic contextual end-users who have real tasks to perform in a real environment. We demonstrate our experiences and core lesson learnt in deploying our novel movie browser application with undergraduate and graduate…
A More Literate Georgia: An Agenda for Action. A Report by the Dean's Literacy Task Force.
ERIC Educational Resources Information Center
Georgia Univ., Athens. Coll. of Education.
The essays contained in this document, which launches the University of Georgia Education Initiative, attempt to address Georgia's need for increased literacy in realistic and constructive terms. Taken together, these essays constitute an agenda for action--a challenge to all those who wish to provide Georgians with the quality education they…
ERIC Educational Resources Information Center
Cevik, Yasemin Demiraslan; Andre, Thomas
2013-01-01
This study was aimed at comparing the impact of three types of case-based approaches (worked example, faded work example, and case-based reasoning) on preservice teachers' decision making and reasoning skills related to realistic classroom management situations. Participants in this study received a short-term implementation of one of these three…
Worked Examples Leads to Better Performance in Analyzing and Solving Real-Life Decision Cases
ERIC Educational Resources Information Center
Cevik, Yasemin Demiraslan; Andre, Thomas
2012-01-01
This study compared the impact of three types of case-based methods (worked example, faded worked example, and case-based reasoning) on preservice teachers' (n=71) decision making and reasoning related to realistic classroom management situations. Participants in this study received a short-term implementation of one of these three major…
ERIC Educational Resources Information Center
King, Michael A.
2009-01-01
Business intelligence derived from data warehousing and data mining has become one of the most strategic management tools today, providing organizations with long-term competitive advantages. Business school curriculums and popular database textbooks cover data warehousing, but the examples and problem sets typically are small and unrealistic. The…
ERIC Educational Resources Information Center
Sanford, Rania
2009-01-01
Purpose. The purpose of this study was to determine the relationships between Sedlacek's (2004b) student noncognitive variables (positive self-concept, realistic self-appraisal, successfully handling the system, preference for long-term goals, leadership experience, presence of a strong support person, community service, and knowledge of the…
ERIC Educational Resources Information Center
Doernberg, Nanette L.
Counseling approaches for parents of young retarded children are reviewed. Guidelines are suggested for helping parents as they proceed through stages of protective denial (in which they need clear information about retardation), expectations of a cure (in which their attempts to further the child's development must be realistically balanced),…
ERIC Educational Resources Information Center
Friederichs, Hendrik; Weissenstein, Anne; Ligges, Sandra; Möller, David; Becker, Jan C.; Marschall, Bernhard
2014-01-01
Auscultation torsos are widely used to teach position-dependent heart sounds and murmurs. To provide a more realistic teaching experience, both whole body auscultation mannequins and torsos have been used in clinical examination skills training at the Medical Faculty of the University of Muenster since the winter term of 2008-2009. This training…
Violence in Music Videos: Examining the Prevalence and Context of Physical Aggression.
ERIC Educational Resources Information Center
Smith, Stacy L.; Boyson, Aaron R.
2002-01-01
Examines violence in music video programming. Reveals that 15% of music videos feature violence, and most of that aggression is sanitized, not chastised, and presented in realistic contexts. Discusses the findings in terms of the risk that exposure to violence in each channel and genre may be posing to viewers' learning of aggression, fear, and…
ERIC Educational Resources Information Center
Sager, Fritz
2008-01-01
The dual system of vocational training, utilising both company training and vocational school attendance, is generally acknowledged to be a successful model for reducing youth unemployment. However, the decreasing number of training opportunities in countries with this system poses a crisis for the approach. One strategy for overcoming the problem…
Feasibility study for a realistic training dedicated to radiological protection improvement
NASA Astrophysics Data System (ADS)
Courageot, Estelle; Reinald, Kutschera; Gaillard-Lecanu, Emmanuelle; Sylvie, Jahan; Riedel, Alexandre; Therache, Benjamin
2014-06-01
Any personnel involved in activities within the controlled area of a nuclear facility must be provided with appropriate radiological protection training. An evident purpose of this training is to know the regulations applicable to workplaces where ionizing radiation may be present, in order to properly carry out radiation monitoring, to use suitable protective equipment and to behave correctly if unexpected working conditions arise. A major difficulty of this training consists in providing the most realistic readings from the monitoring devices for a given exposure situation, but without using real radioactive sources. A new approach is being developed at EDF R&D for radiological protection training. This approach combines different technologies, in an environment representative of the workplace but geographically separated from the nuclear power plant: a training area representative of a workplace, a Man Machine Interface used by the trainer to define the source configuration and the training scenario, a geolocalization system, fictive radiation monitoring devices and a particle transport code able to calculate in real time the dose map due to the virtual sources. In a first approach, our real-time particle transport code, called Moderato, used only a straight-line attenuation law. To improve the realism further, we would like to switch to a code based on the Monte Carlo particle transport method, such as Geant4 or MCNPX, instead of Moderato. The aim of our study is the evaluation of such a code in our application, in particular the possibility of keeping the real-time response of our architecture.
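As a rough illustration of the straight-line attenuation approach initially used in Moderato, the sketch below estimates a point-source dose rate with inverse-square falloff and exponential attenuation; the dose-rate constant and attenuation coefficient are generic assumed values, not parameters of the EDF R&D system.

```python
# Minimal sketch of a straight-line attenuation dose-rate estimate.
# The dose-rate constant and attenuation coefficient are illustrative values.
import math

GAMMA_CONST = 0.35   # assumed dose-rate constant, (mSv/h)·m²/GBq (Co-60-like)
MU_CONCRETE = 14.0   # assumed linear attenuation coefficient in concrete, 1/m

def dose_rate(activity_gbq, distance_m, shield_thickness_m=0.0):
    """Point-source dose rate with 1/r² falloff and exponential attenuation
    along the straight line between source and detector."""
    unshielded = GAMMA_CONST * activity_gbq / distance_m ** 2
    return unshielded * math.exp(-MU_CONCRETE * shield_thickness_m)

print(f"{dose_rate(10.0, 2.0):.3f} mSv/h bare, "
      f"{dose_rate(10.0, 2.0, 0.10):.3f} mSv/h behind 10 cm of concrete")
```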
A synthetic dataset for evaluating soft and hard fusion algorithms
NASA Astrophysics Data System (ADS)
Graham, Jacob L.; Hall, David L.; Rimland, Jeffrey
2011-06-01
There is an emerging demand for the development of data fusion techniques and algorithms that are capable of combining conventional "hard" sensor inputs such as video, radar, and multispectral sensor data with "soft" data including textual situation reports, open-source web information, and "hard/soft" data such as image or video data that includes human-generated annotations. New techniques that assist in sense-making over a wide range of vastly heterogeneous sources are critical to improving tactical situational awareness in counterinsurgency (COIN) and other asymmetric warfare situations. A major challenge in this area is the lack of realistic datasets available for test and evaluation of such algorithms. While "soft" message sets exist, they tend to be of limited use for data fusion applications due to the lack of critical message pedigree and other metadata. They also lack corresponding hard sensor data that presents reasonable "fusion opportunities" to evaluate the ability to make connections and inferences that span the soft and hard data sets. This paper outlines the design methodologies, content, and some potential use cases of a COIN-based synthetic soft and hard dataset created under a United States Multi-disciplinary University Research Initiative (MURI) program funded by the U.S. Army Research Office (ARO). The dataset includes realistic synthetic reports from a variety of sources, corresponding synthetic hard data, and an extensive supporting database that maintains "ground truth" through logical grouping of related data into "vignettes." The supporting database also maintains the pedigree of messages and other critical metadata.
Reactive Swarm Formation Control Using Realistic Surface Vessel Dynamics and Environmental Effects
2012-05-10
ERIC Educational Resources Information Center
Hess, Carol Lakey
2009-01-01
This article argues that realist and tragic fiction can and should play a central role in Religious Education in communities of faith and in theological education in schools of theology--thereby contributing to theological construction--"because good fiction produces truth". Fiction is a vital source for producing the questions that theology needs…
Planar concentrators near the étendue limit.
Winston, Roland; Gordon, Jeffrey M
2005-10-01
Recently proposed aplanatic imaging designs are integrally combined with nonimaging flux boosters to produce an ultracompact planar glass-filled concentrator that performs near the étendue limit. Such optical devices are attractive for high-efficiency multijunction photovoltaics at high flux, with realistic power generation of 1 W from a 1 mm² cell. When deployed in reverse, our designs provide collimation even for high-numerical-aperture light sources.
Planar concentrators near the étendue limit
NASA Astrophysics Data System (ADS)
Winston, Roland; Gordon, Jeffrey M.
2005-10-01
Recently proposed aplanatic imaging designs are integrally combined with nonimaging flux boosters to produce an ultracompact planar glass-filled concentrator that performs near the étendue limit. Such optical devices are attractive for high-efficiency multijunction photovoltaics at high flux, with realistic power generation of 1 W from a 1 mm² cell. When deployed in reverse, our designs provide collimation even for high-numerical-aperture light sources.
An Acoustic Source Reactive to Tow Cable Strum
2012-09-21
[Patent drawing-sheet residue; recoverable content: FIG. 1 (Prior Art) contrasts an idealized tow cable (no transverse vibration) with a realistic tow cable (including transverse vibration) under tow, where cable curvature induces longitudinal motion and the sound wave radiates from the head mass.]
Cognitive and Neural Bases of Skilled Performance.
1987-10-04
…advantage is that this method is not computationally demanding, and model-specific analyses such as high-precision source localization with realistic… a two-high-threshold model satisfy theoretical and pragmatic independence. Discrimination and bias measures from these two models comparing… recognition memory of patients with dementing diseases, amnesics, and normal controls. We found the two-high-threshold model to be more sensitive…
The role of blood vessels in high-resolution volume conductor head modeling of EEG.
Fiederer, L D J; Vorwerk, J; Lucka, F; Dannhauer, M; Yang, S; Dümpelmann, M; Schulze-Bonhage, A; Aertsen, A; Speck, O; Wolters, C H; Ball, T
2016-03-01
Reconstruction of the electrical sources of human EEG activity at high spatio-temporal accuracy is an important aim in neuroscience and neurological diagnostics. Over the last decades, numerous studies have demonstrated that realistic modeling of head anatomy improves the accuracy of source reconstruction of EEG signals. For example, including a cerebro-spinal fluid compartment and the anisotropy of white matter electrical conductivity were both shown to significantly reduce modeling errors. Here, we for the first time quantify the role of detailed reconstructions of the cerebral blood vessels in volume conductor head modeling for EEG. To study the role of the highly arborized cerebral blood vessels, we created a submillimeter head model based on ultra-high-field-strength (7T) structural MRI datasets. Blood vessels (arteries and emissary/intraosseous veins) were segmented using Frangi multi-scale vesselness filtering. The final head model consisted of a geometry-adapted cubic mesh with over 17×10^6 nodes. We solved the forward model using a finite-element-method (FEM) transfer matrix approach, which allowed reducing computation times substantially, and quantified the importance of the blood vessel compartment by computing forward and inverse errors resulting from ignoring the blood vessels. Our results show that ignoring emissary veins piercing the skull leads to focal localization errors of approx. 5 to 15 mm. Large errors (>2 cm) were observed due to the carotid arteries and the dense arterial vasculature in areas such as the insula or the medial temporal lobe. Thus, in such predisposed areas, errors caused by neglecting blood vessels can reach similar magnitudes as those previously reported for neglecting white matter anisotropy, the CSF or the dura - structures which are generally considered important components of realistic EEG head models. Our findings thus imply that including a realistic blood vessel compartment in EEG head models will be helpful to improve the accuracy of EEG source analyses, particularly when high accuracies in brain areas with dense vasculature are required. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Fallon, J.
The author takes a broad look at funding for high energy physics programs from the 1960s to today, first in a large national and international perspective, and then with more detailed information for different laboratories and programs. In general, funding peaked in the 1960s and has declined since, not only in terms of adjusted dollars but also in the sense that programs are no longer funded on realistic time scales that allow them to come to a rapid completion.
Hardware Based Technology Assessment in Support of Near-Term Space Fission Missions
NASA Technical Reports Server (NTRS)
Houts, Mike; VanDyke, Melissa; Godfroy, Tom; Martin, James; BraggSitton, Shannon; Carter, Robert; Dickens, Ricky; Salvail, Pat; Williams, Eric; Harper, Roger
2003-01-01
Fission technology can enable rapid, affordable access to any point in the solar system. If fission propulsion systems are to be developed to their full potential, however, near-term customers must be identified and initial fission systems successfully developed, launched, and utilized. Successful utilization will most likely occur if frequent, significant hardware-based milestones can be achieved throughout the program. Achieving these milestones will depend on the capability to perform highly realistic non-nuclear testing of nuclear systems. This paper discusses ongoing and potential research that could help achieve these milestones.
Detecting large-scale networks in the human brain using high-density electroencephalography.
Liu, Quanying; Farahibozorg, Seyedehrezvan; Porcaro, Camillo; Wenderoth, Nicole; Mantini, Dante
2017-09-01
High-density electroencephalography (hdEEG) is an emerging brain imaging technique that can be used to investigate fast dynamics of electrical activity in the healthy and the diseased human brain. Its applications are however currently limited by a number of methodological issues, among which the difficulty in obtaining accurate source localizations. In particular, these issues have so far prevented EEG studies from reporting brain networks similar to those previously detected by functional magnetic resonance imaging (fMRI). Here, we report for the first time a robust detection of brain networks from resting state (256-channel) hdEEG recordings. Specifically, we obtained 14 networks previously described in fMRI studies by means of realistic 12-layer head models and exact low-resolution brain electromagnetic tomography (eLORETA) source localization, together with independent component analysis (ICA) for functional connectivity analysis. Our analyses revealed three important methodological aspects. First, brain network reconstruction can be improved by performing source localization using the gray matter as source space, instead of the whole brain. Second, conducting EEG connectivity analyses in individual space rather than on concatenated datasets may be preferable, as it permits the incorporation of realistic information on head modeling and electrode positioning. Third, the use of a wide frequency band leads to an unbiased and generally accurate reconstruction of several network maps, whereas filtering data in a narrow frequency band may enhance the detection of specific networks and penalize that of others. We hope that our methodological work will contribute to the rise of hdEEG as a powerful tool for brain research. Hum Brain Mapp 38:4631-4643, 2017. © 2017 Wiley Periodicals, Inc.
Using technology to engage hospitalised patients in their care: a realist review.
Roberts, Shelley; Chaboyer, Wendy; Gonzalez, Ruben; Marshall, Andrea
2017-06-06
Patient participation in health care is associated with improved outcomes for patients and hospitals. New technologies are creating vast potential for patients to participate in care at the bedside. Several studies have explored patient use, satisfaction and perceptions of health information technology (HIT) interventions in hospital. Understanding what works for whom, under what conditions, is important when considering interventions successfully engaging patients in care. This realist review aimed to determine key features of interventions using bedside technology to engage hospital patients in their care and analyse these in terms of context, mechanisms and outcomes. A realist review was chosen to explain how and why complex HIT interventions work or fail within certain contexts. The review was guided by Pawson's realist review methodology, involving: clarifying review scope; searching for evidence; data extraction and evidence appraisal; synthesising evidence and drawing conclusions. Author experience and an initial literature scope provided insight; review questions and theories (propositions) around why interventions worked were developed and iteratively refined. A purposive search was conducted to find evidence to support, refute or identify further propositions, which formed an explanatory model. Each study was 'mined' for evidence to further develop the propositions and model. Interactive learning was the overarching theme of studies using technology to engage patients in their care. Several propositions underpinned this, which were labelled: information sharing; self-assessment and feedback; tailored education; user-centred design; and support in use of HIT. As studies were mostly feasibility or usability studies, they reported patient-centred outcomes including patient acceptability, satisfaction and actual use of HIT interventions. For each proposition, outcomes were proposed to come about by mechanisms including improved communication, shared decision-making, empowerment and self-efficacy, which acted as facilitators to patient participation in care. Overall, there was a stronger representation of health than IT disciplines in the studies reviewed, with a lack of IT input in terms of theoretical underpinning, methodological design and reporting of outcomes. HIT interventions have great potential for engaging hospitalised patients in their care. However, stronger interdisciplinary collaboration between health and IT researchers is needed for effective design and evaluation of HIT interventions.
Young, Rachel
2015-09-01
Social media users post messages about health goals and behaviors to online social networks. Compared with more traditional sources of health communication such as physicians or health journalists, peer sources are likely to be perceived as more socially close or similar, which influences how messages are processed. This experimental study uses construal level theory of psychological distance to predict how mediated health messages from peers influence health-related cognition and behavioral intention. Participants were exposed to source cues that identified peer sources as being either highly attitudinally and demographically similar to or different from participants. As predicted by construal level theory, participants who perceived sources of social media health messages as highly similar listed a greater proportion of beliefs about the feasibility of health behaviors and a greater proportion of negative beliefs, while participants who perceived sources as more dissimilar listed a greater proportion of positive beliefs about the health behaviors. Results of the study could be useful in determining how health messages from peers could encourage individuals to set realistic health goals.
Courtiol, Alexandre; Ferdy, Jean Baptiste; Godelle, Bernard; Raymond, Michel; Claude, Julien
2010-05-01
Many studies use representations of human body outlines to study how individual characteristics, such as height and body mass, affect perception of body shape. These typically involve reality-based stimuli (e.g., pictures) or manipulated stimuli (e.g., drawings). These two classes of stimuli have important drawbacks that limit result interpretations. Realistic stimuli vary in terms of traits that are correlated, which makes it impossible to assess the effect of a single trait independently. In addition, manipulated stimuli usually do not represent realistic morphologies. We describe and examine a method based on elliptic Fourier descriptors to automatically predict and represent body outlines for a given set of predicted variables (e.g., sex, height, and body mass). We first estimate whether these predictive variables are significantly related to human outlines. We find that height and body mass significantly influence body shape. Unlike height, the effect of body mass on shape differs between sexes. Then, we show that we can easily build a regression model that creates hypothetical outlines for an arbitrary set of covariates. These statistically computed outlines are quite realistic and may be used as stimuli in future studies.
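The regression-and-reconstruction idea can be sketched as follows; the covariate and coefficient data are synthetic placeholders, and only the standard elliptic Fourier reconstruction formula is taken as given, so this is not the authors' fitted model.

```python
# Sketch of regressing elliptic Fourier descriptors (EFDs) on covariates and
# reconstructing an outline for an arbitrary covariate set (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_harmonics = 120, 8

# Covariates: intercept, sex (0/1), height (cm), body mass (kg).
X = np.column_stack([np.ones(n_subjects),
                     rng.integers(0, 2, n_subjects),
                     rng.normal(170, 10, n_subjects),
                     rng.normal(70, 12, n_subjects)])

# Placeholder EFD coefficients (a_n, b_n, c_n, d_n per harmonic) per subject.
Y = rng.normal(size=(n_subjects, 4 * n_harmonics))

beta, *_ = np.linalg.lstsq(X, Y, rcond=None)   # one linear model per coefficient

def reconstruct(coeffs, n_points=200):
    """Rebuild an (x, y) outline from stacked EFD coefficients."""
    t = np.linspace(0.0, 1.0, n_points)
    x = np.zeros(n_points)
    y = np.zeros(n_points)
    for n in range(1, n_harmonics + 1):
        a, b, c, d = coeffs[4 * (n - 1): 4 * n]
        x += a * np.cos(2 * np.pi * n * t) + b * np.sin(2 * np.pi * n * t)
        y += c * np.cos(2 * np.pi * n * t) + d * np.sin(2 * np.pi * n * t)
    return x, y

# Hypothetical outline for a 180 cm, 85 kg male.
x_out, y_out = reconstruct(np.array([1.0, 1.0, 180.0, 85.0]) @ beta)
```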
Bartlett, Yvonne K; Haywood, Annette; Bentley, Claire L; Parker, Jack; Hawley, Mark S; Mountain, Gail A; Mawson, Susan
2014-11-25
Technology has the potential to provide support for self-management to people with congestive heart failure (CHF). This paper describes the results of a realist evaluation of the SMART Personalised Self-Management System (PSMS) for CHF. The PSMS was used, at home, by seven people with CHF. Data describing system usage and usability as well as questionnaire and interview data were evaluated in terms of the context, mechanism and outcome hypotheses (CMOs) integral to realist evaluation. The CHF PSMS improved heart failure related knowledge in those with low levels of knowledge at baseline, through providing information and quizzes. Furthermore, participants perceived the self-regulatory aspects of the CHF PSMS as being useful in encouraging daily walking. The CMOs were revised to describe the context of use, and how this influences both the mechanisms and the outcomes. Participants with CHF engaged with the PSMS despite some technological problems. Some positive effects on knowledge were observed as well as the potential to assist with changing physical activity behaviour. Knowledge of CHF and physical activity behaviour change are important self-management targets for CHF, and this study provides evidence to direct the further development of a technology to support these targets.
Spin dynamics modeling in the AGS based on a stepwise ray-tracing method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dutheil, Yann
The AGS provides a polarized proton beam to RHIC. The beam is accelerated in the AGS from Gγ = 4.5 to Gγ = 45.5 and the polarization transmission is critical to the RHIC spin program. In recent years, various systems were implemented to improve the AGS polarization transmission. These upgrades include the double partial snakes configuration and the tune jumps system. However, 100% polarization transmission through the AGS acceleration cycle is not yet reached. The current efficiency of the polarization transmission is estimated to be around 85% in typical running conditions. Understanding the sources of depolarization in the AGS is critical to improving the AGS polarized proton performance. The complexity of beam and spin dynamics, which is in part due to the specialized Siberian snake magnets, drove a strong interest in original simulation methods. For that, the Zgoubi code, capable of direct particle and spin tracking through field maps, was used here to model the AGS. A model of the AGS using the Zgoubi code was developed and interfaced with the current system through a simple command: the AgsFromSnapRampCmd. Interfacing with the machine control system allows for fast modeling using actual machine parameters. Those developments allowed the model to realistically reproduce the optics of the AGS along the acceleration ramp. Additional developments on the Zgoubi code, as well as on post-processing and pre-processing tools, granted long-term multiturn beam tracking capabilities: the tracking of realistic beams along the complete AGS acceleration cycle. Beam multiturn tracking simulations in the AGS, using realistic beam and machine parameters, provided a unique insight into the mechanisms behind the evolution of the beam emittance and polarization during the acceleration cycle. Post-processing software was developed to allow the representation of the relevant quantities from the Zgoubi simulation data. The Zgoubi simulations proved particularly useful to better understand the polarization losses through horizontal intrinsic spin resonances. The Zgoubi model as well as the tools developed were also used for some direct applications. For instance, some beam experiment simulations allowed an accurate estimation of the expected polarization gains from machine changes. In particular, the simulations involving the tune jumps system provided an accurate estimation of polarization gains and the optimum settings that would improve the performance of the AGS.
Lifetime testing UV LEDs for use in the LISA charge management system
NASA Astrophysics Data System (ADS)
Hollington, D.; Baird, J. T.; Sumner, T. J.; Wass, P. J.
2017-10-01
As a future charge management light source, UV light-emitting diodes (UV LEDs) offer far superior performance in a range of metrics compared to the mercury lamps used in the past. As part of a qualification program a number of short wavelength UV LEDs have been subjected to a series of lifetime tests for potential use on the laser interferometer space antenna (LISA) mission. These tests were performed at realistic output levels for both fast and continuous discharging in either a DC or pulsed mode of operation and included a DC fast discharge test spanning 50 days, a temperature dependent pulsed fast discharge test spanning 21 days and a pulsed continuous discharge test spanning 507 days. Two types of UV LED have demonstrated lifetimes equivalent to over 25 years of realistic mission usage with one type providing a baseline for LISA and the other offering a backup solution.
Establishing Realistic Patient Expectations Following Total Knee Arthroplasty.
Husain, Adeel; Lee, Gwo-Chin
2015-12-01
Nearly 20% of patients are dissatisfied following well-performed total knee arthroplasty with good functional outcomes. Surgeons must understand the drivers of dissatisfaction to minimize the number of unhappy patients following surgery. Several studies have shown that unfulfilled expectations are a principal source of patient dissatisfaction. Patients contemplating total knee arthroplasty expect pain relief, improved walking ability, return to sports, and improvement in psychological well-being and social interactions. However, patients are typically overly optimistic with regard to expected outcomes following surgery. Patient expectations and satisfaction can be influenced by age, socioeconomic factors, sex, and race. The interplay of these factors can be complex and specific to each person. Published data on clinical and functional outcomes show that persistence of symptoms, such as pain, stiffness, and failure to return to preoperative levels of function, are common and normal. Therefore, the surgeon needs to help the patient to establish realistic expectations. Copyright 2015 by the American Academy of Orthopaedic Surgeons.
Malone, Emma; Jehl, Markus; Arridge, Simon; Betcke, Timo; Holder, David
2014-06-01
We investigate the application of multifrequency electrical impedance tomography (MFEIT) to imaging the brain in stroke patients. The use of MFEIT could enable early diagnosis and thrombolysis of ischaemic stroke, and therefore improve the outcome of treatment. Recent advances in the imaging methodology suggest that the use of spectral constraints could allow for the reconstruction of a one-shot image. We performed a simulation study to investigate the feasibility of imaging stroke in a head model with realistic conductivities. We introduced increasing levels of modelling errors to test the robustness of the method to the most common sources of artefact. We considered the case of errors in the electrode placement, spectral constraints, and contact impedance. The results indicate that errors in the position and shape of the electrodes can affect image quality, although our imaging method was successful in identifying tissues with sufficiently distinct spectra.
Seismic data acquisition through tubing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buettner, H M; Jervis, M
1999-07-01
We have collected good quality crosswell seismic data through production tubing in active oil fields at realistic interwell distances (300 ft). The data were collected at the Aera Cymric field (1998) and at a Chevron site (1997); both located in the Central Valley of California. The Aera data were used to produce travel-time tomographic images of the interwell region. Both sites have similar geology, namely siliceous shale (diatomite) with moderate to highly attenuating reservoir rocks. In addition we confirmed modeling predictions that typical tubing attenuation losses are on the order of 12 dB. We expect that the use of stronger sources and tube wave suppression will allow for crosswell imaging at realistic distances even for low Q or high noise situations. We are searching for an industrial partner now for a data collection in the gas wells of the San Juan Basin or South Texas.
Simulating a transmon implementation of the surface code, Part II
NASA Astrophysics Data System (ADS)
O'Brien, Thomas; Tarasinski, Brian; Rol, Adriaan; Bultink, Niels; Fu, Xiang; Criger, Ben; Dicarlo, Leonardo
The majority of quantum error correcting circuit simulations use Pauli error channels, as they can be efficiently calculated. This raises two questions: what is the effect of more complicated physical errors on the logical qubit error rate, and how much more efficient can decoders become when accounting for realistic noise? To answer these questions, we design a minimum-weight perfect matching (MWPM) decoder parametrized by a physically motivated noise model and test it on the full density matrix simulation of Surface-17, a distance-3 surface code. We compare performance against other decoders for a range of physical parameters. Particular attention is paid to realistic sources of error for transmon qubits in a circuit QED architecture, and the requirements for real-time decoding via an FPGA. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.
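As a minimal illustration of the decoder family mentioned above, the sketch below performs a toy minimum-weight perfect matching on a distance-3 repetition code using networkx; the error probability, weights, and boundary handling are assumptions, and the physically motivated noise model of the study is not reproduced.

```python
# Toy minimum-weight perfect matching (MWPM) decode of a distance-3
# repetition code. Probabilities and weights are illustrative assumptions.
import math
import networkx as nx

p = 0.05                                   # assumed flip probability per data qubit
w = -math.log(p / (1 - p))                 # log-likelihood weight per flipped qubit

# Data qubits 0..2 with checks Z0Z1 (check 0) and Z1Z2 (check 1).
# Suppose data qubit 1 flipped, so both checks fire ("defects").
syndrome = [1, 1]
defects = [i for i, s in enumerate(syndrome) if s]

def defect_distance(a, b): return abs(a - b)                     # qubits between defects
def boundary_distance(i): return min(i + 1, len(syndrome) - i)   # qubits to nearest edge

G = nx.Graph()
for a in defects:
    for b in defects:
        if a < b:
            G.add_edge(("d", a), ("d", b), weight=-defect_distance(a, b) * w)
for i in defects:
    G.add_edge(("d", i), ("b", i), weight=-boundary_distance(i) * w)
# Zero-weight edges between boundary nodes keep the matching perfect.
for a in defects:
    for b in defects:
        if a < b:
            G.add_edge(("b", a), ("b", b), weight=0.0)

# Maximizing the negated weights yields a minimum-weight perfect matching.
matching = nx.max_weight_matching(G, maxcardinality=True)
print(matching)   # expect the two defects matched together -> flip qubit 1 to correct
```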
Magnetic structure of the crust
NASA Technical Reports Server (NTRS)
Wasilewski, P.
1985-01-01
The nonuniqueness aspect of geophysical interpretation must be constrained by geological insight to limit the range of theoretically possible models. An additional, in-depth understanding of the relationship between rock magnetization and geological circumstances on a grand scale is required. Views about crustal structure and the distribution of lithologies suggest a complex situation with lateral and vertical variability at all levels in the crust. Volcanic, plutonic, and metamorphic processes together contribute to each of the observed anomalies. Important questions are addressed: (1) the location of the magnetic bottom; (2) whether the source is a discrete one or whether certain parts of the crust contribute cumulatively to the overall magnetization; (3) how, if the anomaly is localized to some recognizable surface expression, to arrive at a geologically realistic model incorporating realistic magnetization contrasts; (4) how the primary mineralogies are altered by metamorphism and the resulting magnetic contrasts; (5) the effects of temperature and pressure on magnetization.
NASA Astrophysics Data System (ADS)
Rodgers, Keith B.; Latif, Mojib; Legutke, Stephanie
2000-09-01
The sensitivity of the thermal structure of the equatorial Pacific and Indian Ocean pycnoclines to a model's representation of the Indonesian Straits connecting the two basins is investigated. Two integrations are performed using the global HOPE ocean model. The initial conditions and surface forcing for both cases are identical; the only difference between the runs is that one has an opening for the Indonesian Straits which spans the equator on the Pacific side, and the other has an opening which lies fully north of the equator. The resulting sensitivity throughout much of the upper ocean is greater than 0.5°C for both the equatorial Indian and Pacific. A realistic simulation of net Indonesian Throughflow (ITF) transport (measured in Sverdrups) is not sufficient for an adequate simulation of equatorial watermasses. The ITF must also contain a realistic admixture of northern and southern Pacific source water.
NASA Astrophysics Data System (ADS)
Power, C.; Gerhard, J. I.; Tsourlos, P.; Giannopoulos, A.
2011-12-01
Remediation programs for sites contaminated with dense non-aqueous phase liquids (DNAPLs) would benefit from an ability to non-intrusively map the evolving volume and extent of the DNAPL source zone. Electrical resistivity tomography (ERT) is a well-established geophysical tool, widely used outside the remediation industry, that has significant potential for mapping DNAPL source zones. However, that potential has not been realized due to challenges in data interpretation from contaminated sites - in either a qualitative or quantitative way. The objective of this study is to evaluate the potential of ERT to map realistic, evolving DNAPL source zones within complex subsurface environments during remedial efforts. For this purpose, a novel coupled model was developed that integrates a multiphase flow model (DNAPL3D-MT), which generates realistic DNAPL release scenarios, with 3DINV, an ERT model which calculates the corresponding resistivity response. This presentation will describe the developed model coupling methodology, which integrates published petrophysical relationships to generate an electrical resistivity field that accounts for both the spatial heterogeneity of subsurface soils and the evolving spatial distribution of fluids (including permeability, porosity, clay content and air/water/DNAPL saturation). It will also present an example in which the coupled model was employed to explore the ability of ERT to track the remediation of a DNAPL source zone. A field-scale, three-dimensional release of chlorinated solvent DNAPL into heterogeneous clayey sand was simulated, including the subsurface migration and subsequent removal of the DNAPL source zone via dissolution in groundwater. Periodic surveys of this site via ERT applied at the surface were then simulated and inversion programs were used to calculate the subsurface distribution of electrical properties. This presentation will summarize this approach and its potential as a research tool exploring the range of site conditions under which ERT may prove useful in aiding DNAPL site remediation. Moreover, it is expected to provide a cost-effective avenue to test optimum ERT data acquisition, inversion and interpretative tools at contaminated sites.
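One widely published petrophysical relationship of the kind the coupled model integrates is Archie's law; the sketch below shows how DNAPL saturation changes map to bulk resistivity under generic assumed exponents and pore-water resistivity (the study's actual relationships, including clay-content corrections, are not reproduced here).

```python
# Illustrative petrophysical mapping from porosity and water saturation to
# bulk resistivity using Archie's law; exponents and pore-water resistivity
# are generic textbook-style assumptions, not the coupled model's values.
def bulk_resistivity(porosity, water_saturation, rho_water=20.0, a=1.0, m=2.0, n=2.0):
    """Archie's law: rho = a * rho_w * phi**(-m) * Sw**(-n).
    DNAPL displacing pore water lowers Sw and therefore raises resistivity."""
    return a * rho_water * porosity ** (-m) * water_saturation ** (-n)

# Clean sand versus the same sand with 40% of the pore space occupied by DNAPL.
print(bulk_resistivity(0.35, 1.0))   # ~163 ohm·m
print(bulk_resistivity(0.35, 0.6))   # ~453 ohm·m
```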
NASA Astrophysics Data System (ADS)
Brogi, F.; Malaspinas, O.; Bonadonna, C.; Chopard, B.; Ripepe, M.
2015-12-01
Low frequency (<20 Hz) acoustic measurements have great potential for the real-time characterization of volcanic plume source parameters. Using classical source theory, acoustic data can be related to the exit velocity of the volcanic jet and to the mass eruption rate, based on the geometric constraint of the vent and the mixture density. However, the application of classical acoustic source models to volcanic explosive eruptions has proven challenging, and a better knowledge of the link between the acoustic radiation and the actual volcanic fluid dynamics processes is required. New insights into this subject could be given by the study of realistic aeroacoustic numerical simulations of a volcanic jet. Lattice Boltzmann strategies (LBS) provide the opportunity to develop an accurate, computationally fast, 3D physical model for a volcanic jet. In the field of aeroacoustic applications, dedicated LBS have been proven to have the low-dissipation properties needed for capturing the weak acoustic pressure fluctuations. However, due to the large disparity in magnitude between the flow and the acoustic disturbances, even weak spurious noise sources in simulations can ruin the accuracy of the acoustic predictions. Reflected waves from artificial boundaries defined around the flow region can have a significant influence on the flow field and overwhelm the acoustic field of interest. In addition, for highly multiscale turbulent flows, such as volcanic plumes, the number of grid points needed to represent the smallest scales might become intractable, and the most complicated physics happens only in small portions of the computational domain. The implementation of grid refinement in our model allows us to insert local finer grids only where they are actually needed and to increase the size of the computational domain for running more realistic simulations. 3D LBS model simulations for turbulent jet aeroacoustics have been accurately validated. Both mean flow and acoustic results are in good agreement with theory and experimental data available in the literature.
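For readers unfamiliar with the lattice Boltzmann update structure, a bare-bones single-grid D2Q9 BGK collide-and-stream step is sketched below; it omits the low-dissipation collision operators, boundary treatment, and grid refinement that the dedicated aeroacoustic LBS described above requires.

```python
# Minimal D2Q9 BGK lattice Boltzmann collide-and-stream step on a periodic
# domain, initialized with a small density (pressure) pulse. Illustrative only.
import numpy as np

NX, NY, TAU = 64, 64, 0.6
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])      # D2Q9 velocity set
w = np.array([4/9] + [1/9]*4 + [1/36]*4)                # lattice weights

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

x, y = np.meshgrid(np.arange(NX), np.arange(NY), indexing="ij")
rho = 1.0 + 0.01 * np.exp(-((x - NX/2)**2 + (y - NY/2)**2) / 20.0)
f = equilibrium(rho, np.zeros((NX, NY)), np.zeros((NX, NY)))

for _ in range(100):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / TAU            # BGK collision
    for i in range(9):                                     # streaming (periodic)
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)

print("density range after 100 steps:", rho.min(), rho.max())
```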
Thermal Nonequilibrium in Hypersonic Separated Flow
2014-12-22
…flow duration and steadiness. Subject terms: Hypersonic Flowfield Measurements, Laser Diagnostics of Gas Flow, Laser Induced… …extent than the NS computation. While it would be convenient to believe that the more physically realistic flow modeling of the DSMC gas-surface… …index and absorption coefficient. Each of the curves was produced assuming a 0.5% concentration of lithium at the Condition A nozzle exit conditions.
Verified compilation of Concurrent Managed Languages
2017-11-01
…designs for compiler intermediate representations that facilitate mechanized proofs and verification; and (d) a realistic case study that combines these ideas to prove the correctness of a state-of-the-art concurrent garbage collector. Subject terms: program verification, compiler design. Even though concurrency is a pervasive part of modern software and hardware systems, it has often been ignored in safety-critical system designs.
A revised econometric model of the domestic pallet market
Albert T. Schuler; Walter B. Wallin
1983-01-01
The purpose of this revised model is to project estimates of consumption and price of wooden pallets in the short term. This model differs from previous ones developed by Schuler and Wallin (1979 and 1980) in the following respects: The structure of the supply side of the market is more realistically identified (from an economic theory point of view) by including...
Harvesting cost model for small trees in natural stands in the interior northwest.
Bruce R. Hartsough; Xiaoshan Zhang; Roger D. Fight
2001-01-01
Realistic logging cost models are needed for long-term forest management planning. Data from numerous published studies were combined to estimate the costs of harvesting small trees in natural stands in the Interior Northwest of North America. Six harvesting systems were modeled. Four address gentle terrain: manual log-length, manual whole-tree, mechanized whole-tree,...
NASA Technical Reports Server (NTRS)
Owen, W. A.
1984-01-01
Low thermal efficiencies in solar receivers are discussed in terms of system design. It is recommended that careful attention be given to the overall thermal systems design, especially to conductive losses about the window and areas of relatively thin insulation. If the cavity design is carefully managed to insure a small, minimally reradiating aperture, the goal of a very high efficiency cavity receiver is a realistic one.
ERIC Educational Resources Information Center
Young, Michael
2007-01-01
This paper is part of the ongoing work of the author and others in developing a social realist theory of knowledge for educational studies. It contrasts Durkheim and Vygotsky's theories and why both are important for educational theory. It begins by emphasizing the similarities between them; that knowledge has to be understood in terms of its…
Moral Particularism and Deontic Logic
NASA Astrophysics Data System (ADS)
Parent, Xavier
The aim of this paper is to strengthen the point made by Horty about the relationship between reason holism and moral particularism. In the literature, prima facie obligations have been considered the only source of reason holism. I strengthen Horty's point in two ways. First, I show that contrary-to-duty obligations provide another, independent source of support for reason holism. Next, I outline a formal theory that is able to capture these two sources of holism. While in simple settings the proposed account coincides with Horty's, this is not true in more complicated or "realistic" settings in which more than two norms collide. My chosen formalism is so-called input/output logic.
Ambient fine particulate matter in China: Its negative impacts and possible countermeasures.
Qi, Zihan; Chen, Tingjia; Chen, Jiang; Qi, Xiaofei
2018-03-01
In recent decades, China has experienced rapid economic development accompanied by increasing concentrations of ambient PM 2.5 , particulate matter of less than 2.5 μm in diameter. PM 2.5 is now believed to be a carcinogen, raising lung cancer risks and generating losses to the economy and society. This meta-analysis evaluates the losses generated by ambient PM 2.5 in Suzhou from 2014 to 2016 and predicts losses at different concentrations. Estimates of total losses in Beijing, Shanghai, Hangzhou, Guangzhou, Dalian, and Xiamen are also presented, along with a total national loss for 2015. The authors then demonstrate that lowering ambient PM 2.5 concentrations would be a realistic way for China to reduce the evaluated social losses in the short term. Possible legal measures are listed for lowering ambient PM 2.5 concentrations. The present findings quantify the economic effects of ambient PM 2.5 due to the increased incidence rate and mortality rate of lung cancer. Lowering ambient PM 2.5 concentrations would be the most realistic way for China to reduce the evaluated social losses in the short term. Possible legal measures for lowering ambient PM 2.5 concentrations to reduce the total losses are identified.
Realistic absorption coefficient of each individual film in a multilayer architecture
NASA Astrophysics Data System (ADS)
Cesaria, M.; Caricato, A. P.; Martino, M.
2015-02-01
A spectrophotometric strategy, termed the multilayer-method (ML-method), is presented and discussed to realistically calculate the absorption coefficient of each individual layer embedded in multilayer architectures without reverse engineering, numerical refinements, or assumptions about layer homogeneity and thickness. The strategy extends, in a non-straightforward way, a consolidated route already published by the authors and here termed the basic-method, able to accurately characterize an absorbing film covering a transparent substrate. The ML-method inherently accounts for the non-measurable contribution of the interfaces (including multiple reflections), describes the specific film structure as determined by the multilayer architecture and the deposition approach and parameters used, exploits simple mathematics, and has a wide range of applicability (high-to-weak absorption regions, thick-to-ultrathin films). Reliability tests are performed on films and multilayers based on a well-known material (indium tin oxide) by deliberately changing the film structural quality through doping, thickness-tuning and the underlying supporting-film. Results are found to be consistent with information obtained by standard (optical and structural) analysis, the basic-method and band gap values reported in the literature. The discussed example applications demonstrate the ability of the ML-method to overcome the drawbacks commonly limiting an accurate description of multilayer architectures.
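As a point of reference, the single-film quantity that the basic-method targets can be sketched with the common single-pass relation between transmittance, reflectance and thickness; this is an illustrative simplification only and not the ML-method itself, which is precisely what corrects for interfaces and multiple reflections:

```python
import numpy as np

def absorption_coefficient(T, R, d_cm):
    """Single-pass estimate alpha = -(1/d) ln(T / (1 - R)), in cm^-1.
    Ignores the interface and multiple-reflection terms handled by the ML-method."""
    return -np.log(np.asarray(T, float) / (1.0 - np.asarray(R, float))) / d_cm

# made-up example: a 150 nm film with T = 0.70 and R = 0.15
print(f"alpha ~ {absorption_coefficient(0.70, 0.15, 150e-7):.2e} cm^-1")
```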
Minimal hardware Bluetooth tracking for long-term at-home elder supervision.
Kelly, Damian; McLoone, Sean; Farrell, Ronan
2010-01-01
The ability to automatically detect the location of an elder within their own home is a significant enabler of remote elder supervision and interaction applications. This location information is typically generated via a myriad of sensors throughout the home environment. Even with high sensor redundancy, there are still situations where traditional elder monitoring systems are unable to resolve the location of the elder. This work develops a minimal infrastructure radio-frequency localisation system for long-term elder location tracking. An RFID room-labelling technique is employed and with it, the localisation system developed in this work is shown to exhibit superior performance to more traditional localisation systems in realistic long-term deployments.
Pigeon homing from unfamiliar areas: An alternative to olfactory navigation is not in sight.
Wallraff, Hans G
2014-01-01
The conclusion that pigeons and other birds can find their way home from unfamiliar areas by means of olfactory signals is well based on a variety of experiments and supporting investigations of the chemical atmosphere. Here I argue that alternative concepts proposing other sources of geopositional information are disproved by experimental findings or, at least, are not experimentally supported and hardly realistic.
On Maximizing the Lifetime of Wireless Sensor Networks by Optimally Assigning Energy Supplies
Asorey-Cacheda, Rafael; García-Sánchez, Antonio Javier; García-Sánchez, Felipe; García-Haro, Joan; Gonzalez-Castaño, Francisco Javier
2013-01-01
The extension of the network lifetime of Wireless Sensor Networks (WSN) is an important issue that has not been appropriately solved yet. This paper addresses this concern and proposes some techniques to plan an arbitrary WSN. To this end, we suggest a hierarchical network architecture, similar to realistic scenarios, where nodes with renewable energy sources (denoted as primary nodes) carry out most message delivery tasks, and nodes equipped with conventional chemical batteries (denoted as secondary nodes) are those with less communication demands. The key design issue of this network architecture is the development of a new optimization framework to calculate the optimal assignment of renewable energy supplies (primary node assignment) to maximize network lifetime, obtaining the minimum number of energy supplies and their node assignment. We also conduct a second optimization step to additionally minimize the number of packet hops between the source and the sink. In this work, we present an algorithm that approaches the results of the optimization framework, but with much faster execution speed, which is a good alternative for large-scale WSN networks. Finally, the network model, the optimization process and the designed algorithm are further evaluated and validated by means of computer simulation under realistic conditions. The results obtained are discussed comparatively. PMID:23939582
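A rough feel for the primary-node assignment problem can be given by a greedy heuristic; the sketch below is not the paper's optimization framework (which jointly minimizes the number of supplies and the hop counts), and all node loads and capacities are made-up values:

```python
# Greedy illustration: give renewable supplies ("primary" status) to the nodes
# whose batteries would otherwise die first, until the lifetime target is met
# or the supply budget is exhausted.
def assign_primary_nodes(load_w, battery_j, target_lifetime_s, max_supplies):
    primary = set()
    lifetime = lambda n: float("inf") if n in primary else battery_j[n] / load_w[n]
    while len(primary) < max_supplies:
        worst = min(load_w, key=lifetime)
        if lifetime(worst) >= target_lifetime_s:
            break                       # every battery node already lasts long enough
        primary.add(worst)
    return primary

loads = {"a": 0.02, "b": 0.05, "c": 0.01}        # average drain in watts (assumed)
batteries = {"a": 1e4, "b": 1e4, "c": 1e4}       # capacities in joules (assumed)
print(assign_primary_nodes(loads, batteries, target_lifetime_s=4e5, max_supplies=2))
```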
Design and Test Plans for a Non-Nuclear Fission Power System Technology Demonstration Unit
NASA Technical Reports Server (NTRS)
Mason, Lee; Palac, Donald; Gibson, Marc; Houts, Michael; Warren, John; Werner, James; Poston, David; Qualls, Arthur Lou; Radel, Ross; Harlow, Scott
2012-01-01
A joint National Aeronautics and Space Administration (NASA) and Department of Energy (DOE) team is developing concepts and technologies for affordable nuclear Fission Power Systems (FPSs) to support future exploration missions. A key deliverable is the Technology Demonstration Unit (TDU). The TDU will assemble the major elements of a notional FPS with a non-nuclear reactor simulator (Rx Sim) and demonstrate system-level performance in thermal vacuum. The Rx Sim includes an electrical resistance heat source and a liquid metal heat transport loop that simulates the reactor thermal interface and expected dynamic response. A power conversion unit (PCU) generates electric power utilizing the liquid metal heat source and rejects waste heat to a heat rejection system (HRS). The HRS includes a pumped water heat removal loop coupled to radiator panels suspended in the thermal-vacuum facility. The basic test plan is to subject the system to realistic operating conditions and gather data to evaluate performance sensitivity, control stability, and response characteristics. Upon completion of the testing, the technology is expected to satisfy the requirements for Technology Readiness Level 6 (System Demonstration in an Operational and Relevant Environment) based on the use of high-fidelity hardware and prototypic software tested under realistic conditions and correlated with analytical predictions.
On maximizing the lifetime of Wireless Sensor Networks by optimally assigning energy supplies.
Asorey-Cacheda, Rafael; García-Sánchez, Antonio Javier; García-Sánchez, Felipe; García-Haro, Joan; González-Castano, Francisco Javier
2013-08-09
The extension of the network lifetime of Wireless Sensor Networks (WSN) is an important issue that has not been appropriately solved yet. This paper addresses this concern and proposes some techniques to plan an arbitrary WSN. To this end, we suggest a hierarchical network architecture, similar to realistic scenarios, where nodes with renewable energy sources (denoted as primary nodes) carry out most message delivery tasks, and nodes equipped with conventional chemical batteries (denoted as secondary nodes) are those with less communication demands. The key design issue of this network architecture is the development of a new optimization framework to calculate the optimal assignment of renewable energy supplies (primary node assignment) to maximize network lifetime, obtaining the minimum number of energy supplies and their node assignment. We also conduct a second optimization step to additionally minimize the number of packet hops between the source and the sink. In this work, we present an algorithm that approaches the results of the optimization framework, but with much faster execution speed, which is a good alternative for large-scale WSN networks. Finally, the network model, the optimization process and the designed algorithm are further evaluated and validated by means of computer simulation under realistic conditions. The results obtained are discussed comparatively.
Design and Test Plans for a Non-Nuclear Fission Power System Technology Demonstration Unit
NASA Astrophysics Data System (ADS)
Mason, L.; Palac, D.; Gibson, M.; Houts, M.; Warren, J.; Werner, J.; Poston, D.; Qualls, L.; Radel, R.; Harlow, S.
A joint National Aeronautics and Space Administration (NASA) and Department of Energy (DOE) team is developing concepts and technologies for affordable nuclear Fission Power Systems (FPSs) to support future exploration missions. A key deliverable is the Technology Demonstration Unit (TDU). The TDU will assemble the major elements of a notional FPS with a non-nuclear reactor simulator (Rx Sim) and demonstrate system-level performance in thermal vacuum. The Rx Sim includes an electrical resistance heat source and a liquid metal heat transport loop that simulates the reactor thermal interface and expected dynamic response. A power conversion unit (PCU) generates electric power utilizing the liquid metal heat source and rejects waste heat to a heat rejection system (HRS). The HRS includes a pumped water heat removal loop coupled to radiator panels suspended in the thermal-vacuum facility. The basic test plan is to subject the system to realistic operating conditions and gather data to evaluate performance sensitivity, control stability, and response characteristics. Upon completion of the testing, the technology is expected to satisfy the requirements for Technology Readiness Level 6 (System Demonstration in an Operational and Relevant Environment) based on the use of high-fidelity hardware and prototypic software tested under realistic conditions and correlated with analytical predictions.
NASA Astrophysics Data System (ADS)
Ahn, Woo Sang; Park, Sung Ho; Jung, Sang Hoon; Choi, Wonsik; Do Ahn, Seung; Shin, Seong Soo
2014-06-01
The purpose of this study is to determine the radial dose function of an HDR 192Ir source based on Monte Carlo simulation using an elliptic cylindrical phantom, similar to the realistic shape of the pelvis, for brachytherapy dosimetric studies. The elliptic phantom size and shape were determined by analyzing the dimensions of the pelvis on CT images of 20 patients treated with brachytherapy for cervical cancer. The radial dose function obtained using the elliptic cylindrical water phantom was compared with radial dose functions for different spherical phantom sizes, including the Williamson data loaded into a conventional planning system. The differences in the radial dose function for the different spherical water phantoms increase with radial distance, r, and the largest differences appear for the smallest phantom size. The radial dose function of the elliptic cylindrical phantom decreases significantly with radial distance in the vertical direction due to the different scatter conditions compared with Williamson's data. Considering doses to the ICRU rectum and bladder points, doses to reference points can be underestimated by up to 1-2% at distances from 3 to 6 cm. The radial dose function obtained in this study could be used as realistic data for brachytherapy dosimetry calculations for cervical cancer.
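For context, the radial dose function g_L(r) studied here enters dose calculations through the AAPM TG-43 formalism, assumed below as the underlying framework (the abstract does not write the equation out):

$$\dot{D}(r,\theta) \;=\; S_K \,\Lambda\, \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta),$$

where S_K is the air-kerma strength, Λ the dose-rate constant, G_L the line-source geometry function, and F(r,θ) the 2D anisotropy function. Because scatter conditions are folded into g_L(r), a bounded, pelvis-shaped phantom yields values that differ from those obtained in the large spherical phantoms behind standard consensus data.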
Fast Realistic MRI Simulations Based on Generalized Multi-Pool Exchange Tissue Model.
Liu, Fang; Velikina, Julia V; Block, Walter F; Kijowski, Richard; Samsonov, Alexey A
2017-02-01
We present MRiLab, a new comprehensive simulator for large-scale realistic MRI simulations on a regular PC equipped with a modern graphical processing unit (GPU). MRiLab combines realistic tissue modeling with numerical virtualization of an MRI system and scanning experiment to enable assessment of a broad range of MRI approaches, including advanced quantitative MRI methods inferring microstructure on a sub-voxel level. A flexible representation of tissue microstructure is achieved in MRiLab by employing a generalized tissue model with multiple exchanging water and macromolecular proton pools, rather than the system of independent proton isochromats typically used in previous simulators. The computational power needed for simulation of the biologically relevant tissue models in large 3D objects is gained using parallelized execution on the GPU. Three simulated experiments and one actual MRI experiment were performed to demonstrate the ability of the new simulator to accommodate a wide variety of voxel composition scenarios and to demonstrate the detrimental effects of the simplified treatment of tissue micro-organization adopted in previous simulators. GPU execution allowed a ∼200× improvement in computational speed over a standard CPU. As a cross-platform, open-source, extensible environment for customizing virtual MRI experiments, MRiLab streamlines the development of new MRI methods, especially those aiming to quantitatively infer tissue composition and microstructure.
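As a toy illustration of the multi-pool exchange idea (a sketch with assumed relaxation and exchange rates, longitudinal magnetization only, and no RF; it is not MRiLab's GPU implementation), two exchanging proton pools can be evolved with the Bloch-McConnell equations:

```python
# Two-pool Bloch-McConnell exchange, longitudinal components only (assumed parameters).
T1a, T1b = 1.0, 0.3          # s, free-water and macromolecular pool T1 (assumed)
M0a, M0b = 0.9, 0.1          # equilibrium pool fractions (assumed)
k_ab = 2.0                   # s^-1, exchange rate a -> b
k_ba = k_ab * M0a / M0b      # detailed balance: k_ab*M0a = k_ba*M0b

dt, n_steps = 1e-4, 20000    # simulate 2 s of saturation recovery
Ma = Mb = 0.0                # both pools start saturated
for _ in range(n_steps):
    dMa = (M0a - Ma)/T1a - k_ab*Ma + k_ba*Mb
    dMb = (M0b - Mb)/T1b + k_ab*Ma - k_ba*Mb
    Ma, Mb = Ma + dt*dMa, Mb + dt*dMb

print(f"total Mz after 2 s: {Ma + Mb:.3f} (equilibrium {M0a + M0b:.3f})")
```

A full simulator also evolves transverse components, applies RF pulses and gradients, and repeats this per voxel for millions of voxels, which is where the GPU parallelization described above pays off.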
Fast Realistic MRI Simulations Based on Generalized Multi-Pool Exchange Tissue Model
Velikina, Julia V.; Block, Walter F.; Kijowski, Richard; Samsonov, Alexey A.
2017-01-01
We present MRiLab, a new comprehensive simulator for large-scale realistic MRI simulations on a regular PC equipped with a modern graphical processing unit (GPU). MRiLab combines realistic tissue modeling with numerical virtualization of an MRI system and scanning experiment to enable assessment of a broad range of MRI approaches, including advanced quantitative MRI methods inferring microstructure on a sub-voxel level. A flexible representation of tissue microstructure is achieved in MRiLab by employing a generalized tissue model with multiple exchanging water and macromolecular proton pools, rather than the system of independent proton isochromats typically used in previous simulators. The computational power needed for simulation of the biologically relevant tissue models in large 3D objects is gained using parallelized execution on the GPU. Three simulated experiments and one actual MRI experiment were performed to demonstrate the ability of the new simulator to accommodate a wide variety of voxel composition scenarios and to demonstrate the detrimental effects of the simplified treatment of tissue micro-organization adopted in previous simulators. GPU execution allowed a ∼200× improvement in computational speed over a standard CPU. As a cross-platform, open-source, extensible environment for customizing virtual MRI experiments, MRiLab streamlines the development of new MRI methods, especially those aiming to quantitatively infer tissue composition and microstructure. PMID:28113746
Elliptic generation of composite three-dimensional grids about realistic aircraft
NASA Technical Reports Server (NTRS)
Sorenson, R. L.
1986-01-01
An elliptic method for generating composite grids about realistic aircraft is presented. A body-conforming grid is first generated about the entire aircraft by the solution of Poisson's differential equation. This grid has relatively coarse spacing, and it covers the entire physical domain. At boundary surfaces, cell size is controlled and cell skewness is nearly eliminated by inhomogeneous terms, which are found automatically by the program. Certain regions of the grid in which high gradients are expected, and which map into rectangular solids in the computational domain, are then designated for zonal refinement. Spacing in the zonal grids is reduced by adding points with a simple, algebraic scheme. Details of the grid generation method are presented along with results of the present application, a wing-body configuration based on the F-16 fighter aircraft.
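A bare-bones version of the elliptic idea is the homogeneous (Winslow) case below, with the inhomogeneous control terms, zonal refinement and 3D treatment described above omitted; the boundary shape and iteration count are arbitrary choices for illustration:

```python
# Homogeneous elliptic (Winslow) grid generation by Jacobi-style relaxation.
import numpy as np

ni, nj, n_iter = 41, 21, 2000

# initial algebraic grid between a wavy lower wall ("body") and a flat outer boundary
xi = np.linspace(0.0, 4.0, ni)
x = np.tile(xi, (nj, 1)).T
y_low = 0.2*np.sin(np.pi*xi)
y = np.array([np.linspace(y_low[i], 2.0, nj) for i in range(ni)])

for _ in range(n_iter):
    x_xi  = (x[2:, 1:-1] - x[:-2, 1:-1])/2;  y_xi  = (y[2:, 1:-1] - y[:-2, 1:-1])/2
    x_eta = (x[1:-1, 2:] - x[1:-1, :-2])/2;  y_eta = (y[1:-1, 2:] - y[1:-1, :-2])/2
    a = x_eta**2 + y_eta**2                  # alpha
    b = x_xi*x_eta + y_xi*y_eta              # beta
    g = x_xi**2 + y_xi**2                    # gamma
    for f in (x, y):                         # relax x(xi,eta) and y(xi,eta) in turn
        f_xixi   = f[2:, 1:-1] + f[:-2, 1:-1]
        f_etaeta = f[1:-1, 2:] + f[1:-1, :-2]
        f_xieta  = (f[2:, 2:] - f[:-2, 2:] - f[2:, :-2] + f[:-2, :-2])/4
        f[1:-1, 1:-1] = (a*f_xixi - 2*b*f_xieta + g*f_etaeta)/(2*(a + g))
```

Adding inhomogeneous source terms to the right-hand side, as the abstract describes, is what controls wall spacing and removes skewness at the body surface.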
Analysis of unregulated emissions from an off-road diesel engine during realistic work operations
NASA Astrophysics Data System (ADS)
Lindgren, Magnus; Arrhenius, Karine; Larsson, Gunnar; Bäfver, Linda; Arvidsson, Hans; Wetterberg, Christian; Hansson, Per-Anders; Rosell, Lars
2011-09-01
Emissions from vehicle diesel engines constitute a considerable share of anthropogenic emissions of pollutants, including many non-regulated compounds such as aromatic hydrocarbons and alkenes. One way to reduce these emissions might be to use fuels with low concentrations of aromatic hydrocarbons, such as Fischer-Tropsch (F-T) diesels. Therefore this study compared Swedish Environmental Class 1 diesel (EC1) with the F-T diesel fuel Ecopar™ in terms of emissions under varied conditions (steady state, controlled transients and realistic work operations) in order to identify factors influencing emissions in actual operation. Using F-T diesel reduced emissions of aromatic hydrocarbons, but not alkenes. Emissions were equally dependent on work operation character (load, engine speed, occurrence of transients) for both fuels. There were indications that the emissions originated from unburnt fuel, rather than from combustion products.
NASA Astrophysics Data System (ADS)
Hotta, Aira; Sasaki, Takashi; Okumura, Haruhiko
2007-02-01
In this paper, we propose a novel display method to realize a high-resolution image in a central visual field for a hyper-realistic head dome projector. The method uses image processing based on the characteristics of human vision, namely, high central visual acuity and low peripheral visual acuity, and pixel shift technology, which is one of the resolution-enhancing technologies for projectors. The projected image with our method is a fine wide-viewing-angle image with high definition in the central visual field. We evaluated the psychological effects of the projected images with our method in terms of sensation of reality. According to the result, we obtained 1.5 times higher resolution in the central visual field and a greater sensation of reality by using our method.
NASA Astrophysics Data System (ADS)
Chatrchyan, S.; Sirunyan, A. M.; Tumasyan, A.; Litomin, A.; Mossolov, V.; Shumeiko, N.; Van De Klundert, M.; Van Haevermaet, H.; Van Mechelen, P.; Van Spilbeeck, A.; Alves, G. A.; Aldá Júnior, W. L.; Hensel, C.; Carvalho, W.; Chinellato, J.; De Oliveira Martins, C.; Matos Figueiredo, D.; Mora Herrera, C.; Nogima, H.; Prado Da Silva, W. L.; Tonelli Manganote, E. J.; Vilela Pereira, A.; Finger, M.; Finger, M., Jr.; Kveton, A.; Tomsa, J.; Adamov, G.; Tsamalaidze, Z.; Behrens, U.; Borras, K.; Campbell, A.; Costanza, F.; Gunnellini, P.; Lobanov, A.; Melzer-Pellmann, I.-A.; Muhl, C.; Roland, B.; Sahin, M.; Saxena, P.; Hegde, V.; Kothekar, K.; Pandey, S.; Sharma, S.; Beri, S. B.; Bhawandeep, B.; Chawla, R.; Kalsi, A.; Kaur, A.; Kaur, M.; Walia, G.; Bhattacharya, S.; Ghosh, S.; Nandan, S.; Purohit, A.; Sharan, M.; Banerjee, S.; Bhattacharya, S.; Chatterjee, S.; Das, P.; Guchait, M.; Jain, S.; Kumar, S.; Maity, M.; Majumder, G.; Mazumdar, K.; Patil, M.; Sarkar, T.; Juodagalvis, A.; Afanasiev, S.; Bunin, P.; Ershov, Y.; Golutvin, I.; Malakhov, A.; Moisenz, P.; Smirnov, V.; Zarubin, A.; Chadeeva, M.; Chistov, R.; Danilov, M.; Popova, E.; Rusinov, V.; Andreev, Yu.; Dermenev, A.; Karneyeu, A.; Krasnikov, N.; Tlisov, D.; Toropin, A.; Epshteyn, V.; Gavrilov, V.; Lychkovskaya, N.; Popov, V.; Pozdnyakov, I.; Safronov, G.; Toms, M.; Zhokin, A.; Baskakov, A.; Belyaev, A.; Boos, E.; Dubinin, M.; Dudko, L.; Ershov, A.; Gribushin, A.; Kaminskiy, A.; Klyukhin, V.; Kodolova, O.; Lokhtin, I.; Miagkov, I.; Obraztsov, S.; Petrushanko, S.; Savrin, V.; Snigirev, A.; Andreev, V.; Azarkin, M.; Dremin, I.; Kirakosyan, M.; Leonidov, A.; Terkulov, A.; Bitioukov, S.; Elumakhov, D.; Kalinin, A.; Krychkine, V.; Mandrik, P.; Petrov, V.; Ryutin, R.; Sobol, A.; Troshin, S.; Volkov, A.; Sekmen, S.; Rumerio, P.; Adiguzel, A.; Bakirci, N.; Cerci, S.; Damarseckin, S.; Demiroglu, Z. S.; Dölek, F.; Dozen, C.; Dumanoglu, I.; Eskut, E.; Girgis, S.; Gokbulut, G.; Guler, Y.; Hos, I.; Kangal, E. E.; Kara, O.; Kayis Topaksu, A.; Işik, C.; Kiminsu, U.; Oglakci, M.; Onengut, G.; Ozdemir, K.; Ozturk, S.; Polatoz, A.; Sunar Cerci, D.; Tali, B.; Topakli, H.; Turkcapar, S.; Zorbakir, I. S.; Zorbilmez, C.; Bilin, B.; Isildak, B.; Karapinar, G.; Murat Guler, A.; Ocalan, K.; Yalvac, M.; Zeyrek, M.; Atakisi, I. O.; Gülmez, E.; Kaya, M.; Kaya, O.; Koseyan, O. K.; Ozcelik, O.; Ozkorucuklu, S.; Tekten, S.; Yetkin, E. A.; Yetkin, T.; Cankocak, K.; Sen, S.; Boyarintsev, A.; Grynyov, B.; Levchuk, L.; Popov, V.; Sorokin, P.; Flacher, H.; Borzou, A.; Call, K.; Dittmann, J.; Hatakeyama, K.; Liu, H.; Pastika, N.; Buccilli, A.; Cooper, S. I.; Henderson, C.; West, C.; Arcaro, D.; Gastler, D.; Hazen, E.; Rohlf, J.; Sulak, L.; Wu, S.; Zou, D.; Hakala, J.; Heintz, U.; Kwok, K. H. M.; Laird, E.; Landsberg, G.; Mao, Z.; Yu, D. R.; Gary, J. W.; Ghiasi Shirazi, S. M.; Lacroix, F.; Long, O. R.; Wei, H.; Bhandari, R.; Heller, R.; Stuart, D.; Yoo, J. H.; Chen, Y.; Duarte, J.; Lawhorn, J. M.; Nguyen, T.; Spiropulu, M.; Winn, D.; Abdullin, S.; Apresyan, A.; Apyan, A.; Banerjee, S.; Chlebana, F.; Freeman, J.; Green, D.; Hare, D.; Hirschauer, J.; Joshi, U.; Lincoln, D.; Los, S.; Pedro, K.; Spalding, W. J.; Strobbe, N.; Tkaczyk, S.; Whitbeck, A.; Linn, S.; Markowitz, P.; Martinez, G.; Bertoldi, M.; Hagopian, S.; Hagopian, V.; Kolberg, T.; Baarmand, M. M.; Noonan, D.; Roy, T.; Yumiceva, F.; Bilki, B.; Clarida, W.; Debbins, P.; Dilsiz, K.; Durgut, S.; Gandrajula, R. 
P.; Haytmyradov, M.; Khristenko, V.; Merlo, J.-P.; Mermerkaya, H.; Mestvirishvili, A.; Miller, M.; Moeller, A.; Nachtman, J.; Ogul, H.; Onel, Y.; Ozok, F.; Penzo, A.; Schmidt, I.; Snyder, C.; Southwick, D.; Tiras, E.; Yi, K.; Al-bataineh, A.; Bowen, J.; Castle, J.; McBrayer, W.; Murray, M.; Wang, Q.; Kaadze, K.; Maravin, Y.; Mohammadi, A.; Saini, L. K.; Baden, A.; Belloni, A.; Calderon, J. D.; Eno, S. C.; Feng, Y. B.; Ferraioli, C.; Grassi, T.; Hadley, N. J.; Jeng, G.-Y.; Kellogg, R. G.; Kunkle, J.; Mignerey, A.; Ricci-Tam, F.; Shin, Y. H.; Skuja, A.; Yang, Z. S.; Yao, Y.; Brandt, S.; D'Alfonso, M.; Hu, M.; Klute, M.; Niu, X.; Chatterjee, R. M.; Evans, A.; Frahm, E.; Kubota, Y.; Lesko, Z.; Mans, J.; Ruckstuhl, N.; Heering, A.; Karmgard, D. J.; Musienko, Y.; Ruchti, R.; Wayne, M.; Benaglia, A. D.; Medvedeva, T.; Mei, K.; Tully, C.; Bodek, A.; de Barbaro, P.; Galanti, M.; Garcia-Bellido, A.; Khukhunaishvili, A.; Lo, K. H.; Vishnevskiy, D.; Zielinski, M.; Agapitos, A.; Amouzegar, M.; Chou, J. P.; Hughes, E.; Saka, H.; Sheffield, D.; Akchurin, N.; Damgov, J.; De Guio, F.; Dudero, P. R.; Faulkner, J.; Gurpinar, E.; Kunori, S.; Lamichhane, K.; Lee, S. W.; Libeiro, T.; Mengke, T.; Muthumuni, S.; Undleeb, S.; Volobouev, I.; Wang, Z.; Goadhouse, S.; Hirosky, R.; Wang, Y.
2017-12-01
The Phase I upgrade of the CMS Hadron Endcap Calorimeters consists of new photodetectors (Silicon Photomultipliers in place of Hybrid Photo-Diodes) and front-end electronics. The upgrade will eliminate the noise and the calibration drift of the Hybrid Photo-Diodes and enable the mitigation of the radiation damage of the scintillators and the wavelength shifting fibers with a larger spectral acceptance of the Silicon Photomultipliers. The upgrade also includes increased longitudinal segmentation of the calorimeter readout, which allows pile-up mitigation and recalibration due to depth-dependent radiation damage. As a realistic operational test, the responses of the Hadron Endcap Calorimeter wedges were calibrated with a 60Co radioactive source with upgrade electronics. The test successfully established the procedure for future source calibrations of the Hadron Endcap Calorimeters. Here we describe the instrumentation details and the operational experiences related to the sourcing test.
NASA Astrophysics Data System (ADS)
Wein, Stephen; Lauk, Nikolai; Ghobadi, Roohollah; Simon, Christoph
2018-05-01
Highly efficient sources of indistinguishable single photons that can operate at room temperature would be very beneficial for many applications in quantum technology. We show that the implementation of such sources is a realistic goal using solid-state emitters and ultrasmall mode volume cavities. We derive and analyze an expression for photon indistinguishability that accounts for relevant detrimental effects, such as plasmon-induced quenching and pure dephasing. We then provide the general cavity and emitter conditions required to achieve efficient indistinguishable photon emission and also discuss constraints due to phonon sideband emission. Using these conditions, we propose that a nanodiamond negatively charged silicon-vacancy center combined with a plasmonic-Fabry-Pérot hybrid cavity is an excellent candidate system.
The pros and cons of code validation
NASA Technical Reports Server (NTRS)
Bobbitt, Percy J.
1988-01-01
Computational and wind tunnel error sources are examined and quantified using specific calculations of experimental data, and a substantial comparison of theoretical and experimental results, or a code validation, is discussed. Wind tunnel error sources considered include wall interference, sting effects, Reynolds number effects, flow quality and transition, and instrumentation such as strain gage balances, electronically scanned pressure systems, hot film gages, hot wire anemometers, and laser velocimeters. Computational error sources include math model equation sets, the solution algorithm, artificial viscosity/dissipation, boundary conditions, the uniqueness of solutions, grid resolution, turbulence modeling, and Reynolds number effects. It is concluded that, although improvements in theory are being made more quickly than in experiment, wind tunnel research retains the advantage of a more realistic transition process and, in effect, the "right" turbulence model in a free-transition test.
Zhang, Chun-Hui; Zhang, Chun-Mei; Guo, Guang-Can; Wang, Qin
2018-02-19
At present, most measurement-device-independent quantum key distribution (MDI-QKD) schemes are based on weak coherent sources (WCS) and are limited in transmission distance under realistic experimental conditions, e.g., when finite-size-key effects are taken into account. Hence, in this paper we propose a new biased decoy-state scheme using heralded single-photon sources (HSPS) for three-intensity MDI-QKD, where we prepare the decoy pulses only in the X basis and adopt both collective constraints and joint parameter estimation techniques. Compared with former schemes using WCS or HSPS, after implementing full parameter optimization, our scheme gives a distinctly reduced quantum bit error rate in the X basis and thus shows excellent performance, especially when the data size is relatively small.
Applying Open Source Game Engine for Building Visual Simulation Training System of Fire Fighting
NASA Astrophysics Data System (ADS)
Yuan, Diping; Jin, Xuesheng; Zhang, Jin; Han, Dong
There's a growing need for fire departments to adopt a safe and fair method of training to ensure that the firefighting commander is in a position to manage a fire incident. Visual simulation training systems, with their ability to replicate and interact with virtual fire scenarios through the use of computer graphics or VR, have become an effective and efficient method for fire ground education. This paper describes the system architecture and functions of a visual simulation training system for fighting oil storage fires, which adopts Delta3D, an open source game and simulation engine, to provide realistic 3D views. It shows that using open source technology provides not only commercial-level 3D effects but also a great reduction in cost.
Energy yields in the prebiotic synthesis of hydrogen cyanide and formaldehyde
NASA Technical Reports Server (NTRS)
Stribling, R.; Miller, S. L.
1986-01-01
Prebiotic experiments are usually reported in terms of carbon yields, i.e., the yield of product based on the total carbon in the system. These experiments usually involve a large input of energy and are designed to maximize the yields of product. However, large inputs of energy result in multiple activation of the reactants and products. A more realistic prebiotic experiment is to remove the products of the activation step so they are not exposed a second time to the energy source. This is equivalent to transporting the products synthesized in the primitive atmosphere to the ocean, and thereby protecting them from destruction by atmospheric energy sources. Experiments of this type, using lower inputs of energy, give energy yields (moles of products/joule) which can be used to estimate the relative importance of the different energy sources on the primitive earth. Simulated prebiotic atmospheres containing either CH4, CO or CO2 with N2, H2O and variable amounts of H2 were subjected to a high frequency Tesla coil. Samples of the aqueous phase were taken at various time intervals from 1 hr to 7 days, and the energy yields were obtained by extrapolation to zero time. The samples were analyzed for HCN with the cyanide electrode and for H2CO by chromotropic acid. The spark energy was estimated by calorimetry. The temperature rise in an insulated discharge flask was compared with the temperature rise from a resistance heater in the same flask. These results will be compared with calculated production rates of HCN and H2CO from lightning and a number of photochemical processes on the primitive Earth.
NASA Astrophysics Data System (ADS)
Rice, A. K.; McCray, J. E.; Singha, K.
2016-12-01
The development of directional drilling and stimulation of reservoirs by hydraulic fracturing has transformed the energy landscape in the U.S. by making recovery of hydrocarbons from shale formations not only possible but economically viable. Activities associated with hydraulic fracturing present a set of water-quality challenges, including the potential for impaired groundwater quality. In this project, we use a three-dimensional, multiphase, multicomponent numerical model to investigate hydrogeologic conditions that could lead to groundwater contamination from natural gas wellbore leakage. This work explores the fate of methane that enters a well annulus, possibly from an intermediate formation or from the production zone via a flawed cement seal, and leaves the annulus at one of two depths: at the elevation of groundwater or below a freshwater aquifer. The latter leakage scenario is largely ignored in the current scientific literature, where focus has been on leakage directly into freshwater aquifers, despite modern regulations requiring steel casings and cement sheaths at these depths. We perform a three-stage sensitivity analysis, examining (1) hydrogeologic parameters of media surrounding a methane leakage source zone, (2) geostatistical variations in intrinsic permeability, and (3) methane source zone pressurization. Results indicate that in all cases methane reaches groundwater within the first year of leakage. To our knowledge, this is the first study to consider natural gas wellbore leakage in the context of multiphase flow through heterogeneous permeable media; advantages of multiphase modeling include more realistic analysis of methane vapor-phase relative permeability as compared to single-phase models. These results can be used to inform assessment of aquifer vulnerability to hydrocarbon wellbore leakage at varying depths.
Review on solving the forward problem in EEG source analysis
Hallez, Hans; Vanrumste, Bart; Grech, Roberta; Muscat, Joseph; De Clercq, Wim; Vergult, Anneleen; D'Asseler, Yves; Camilleri, Kenneth P; Fabri, Simon G; Van Huffel, Sabine; Lemahieu, Ignace
2007-01-01
Background The aim of electroencephalogram (EEG) source localization is to find the brain areas responsible for EEG waves of interest. It consists of solving forward and inverse problems. The forward problem is solved by starting from a given electrical source and calculating the potentials at the electrodes. These evaluations are necessary to solve the inverse problem which is defined as finding brain sources which are responsible for the measured potentials at the EEG electrodes. Methods While other reviews give an extensive summary of both the forward and inverse problems, this review article focuses on different aspects of solving the forward problem and it is intended for newcomers in this research field. Results It starts with focusing on the generators of the EEG: the post-synaptic potentials in the apical dendrites of pyramidal neurons. These cells generate an extracellular current which can be modeled by Poisson's differential equation, and Neumann and Dirichlet boundary conditions. The compartments in which these currents flow can be anisotropic (e.g. skull and white matter). In a three-shell spherical head model an analytical expression exists to solve the forward problem. During the last two decades researchers have tried to solve Poisson's equation in a realistically shaped head model obtained from 3D medical images, which requires numerical methods. The following methods are compared with each other: the boundary element method (BEM), the finite element method (FEM) and the finite difference method (FDM). In the last two methods anisotropic conducting compartments can conveniently be introduced. The focus is then set on the use of reciprocity in EEG source localization. It is introduced to speed up the forward calculations, which are here performed for each electrode position rather than for each dipole position. Solving Poisson's equation utilizing FEM and FDM corresponds to solving a large sparse linear system. Iterative methods are required to solve these sparse linear systems. The following iterative methods are discussed: successive over-relaxation, the conjugate gradient method and the algebraic multigrid method. Conclusion Solving the forward problem has been well documented in the past decades. In the past, simplified spherical head models were used, whereas nowadays a combination of imaging modalities is used to accurately describe the geometry of the head model. Efforts have been made to realistically describe the shape of the head model, as well as the heterogeneity of the tissue types, and to realistically determine the conductivity. However, the determination and validation of the in vivo conductivity values is still an important topic in this field. In addition, more studies have to be done on the influence of all the parameters of the head model and of the numerical techniques on the solution of the forward problem. PMID:18053144
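As a minimal illustration of the FDM/SOR branch described above (homogeneous, isotropic conductivity, a grounded outer box instead of a realistic scalp boundary, and arbitrary grid and source values), Poisson's equation for a discrete current dipole can be relaxed as follows:

```python
# Toy EEG forward problem: sigma * laplacian(V) = -s, solved with finite
# differences and successive over-relaxation (SOR). All values are assumed.
import numpy as np

n, h, sigma = 25, 0.01, 0.33      # grid points per axis, spacing (m), conductivity (S/m)
omega, n_sweeps = 1.8, 200        # SOR factor and number of sweeps

s = np.zeros((n, n, n))           # current source density (A/m^3)
s[12, 12, 11] = +1e-6/h**3        # +I and -I one cell apart: a discrete dipole
s[12, 12, 13] = -1e-6/h**3

V = np.zeros((n, n, n))           # potential; V = 0 on the outer boundary (simplification)
for _ in range(n_sweeps):
    for i in range(1, n-1):
        for j in range(1, n-1):
            for k in range(1, n-1):
                gauss = (V[i+1, j, k] + V[i-1, j, k] + V[i, j+1, k] + V[i, j-1, k]
                         + V[i, j, k+1] + V[i, j, k-1] + h*h*s[i, j, k]/sigma)/6.0
                V[i, j, k] += omega*(gauss - V[i, j, k])

print("potential two cells below the dipole:", V[12, 12, 9])
```

A realistic forward solver replaces the grounded box with Neumann (no outward current) conditions at the scalp, uses tissue-dependent and possibly anisotropic conductivities, and typically switches to conjugate gradient or multigrid solvers for speed.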
A model of the 8-25 micron point source infrared sky
NASA Technical Reports Server (NTRS)
Wainscoat, Richard J.; Cohen, Martin; Volk, Kevin; Walker, Helen J.; Schwartz, Deborah E.
1992-01-01
We present a detailed model for the IR point-source sky that comprises geometrically and physically realistic representations of the Galactic disk, bulge, stellar halo, spiral arms (including the 'local arm'), molecular ring, and the extragalactic sky. We represent each of the distinct Galactic components by up to 87 types of Galactic source, each fully characterized by scale heights, space densities, and absolute magnitudes at BVJHK, 12, and 25 microns. The model is guided by a parallel Monte Carlo simulation of the Galaxy at 12 microns. The content of our Galactic source table constitutes a good match to the 12 micron luminosity function in the simulation, as well as to the luminosity functions at V and K. We are able to produce differential and cumulative IR source counts for any bandpass lying fully within the IRAS Low-Resolution Spectrometer's range (7.7-22.7 microns), as well as for the IRAS 12 and 25 micron bands. These source counts match the IRAS observations well. The model can be used to predict the character of the point source sky expected for observations from IR space experiments.
Eulerian Simulation of Acoustic Waves Over Long Range in Realistic Environments
NASA Astrophysics Data System (ADS)
Chitta, Subhashini; Steinhoff, John
2015-11-01
In this paper, we describe a new method for the computation of long-range acoustics. The approach is a hybrid of near-field and far-field methods, and is unique in its Eulerian treatment of the far-field propagation. The near field, generated by any existing method, is used to project an acoustic solution onto a spherical surface that surrounds the source. The acoustic field on this source surface is then extended to an arbitrarily large distance in an inhomogeneous far field. This would normally require an Eulerian solution of the wave equation. However, conventional Eulerian methods have prohibitive grid requirements. This problem is overcome by using a new method, ``Wave Confinement'' (WC), that propagates wave-identifying phase fronts as nonlinear solitary waves that live on the grid indefinitely. This involves modifying the wave equation by the addition of a nonlinear term without changing the basic conservation properties of the equation. These solitary waves can then be used to ``carry'' the essential integrals of the acoustic wave, for example, arrival time, centroid position and other properties that are invariant as the wave passes a grid point. Because of this property, the grid can be made as coarse as necessary, consistent with the overall accuracy needed to resolve atmospheric/ground variations. This work is being funded by the U.S. Army under a Small Business Innovation Research (SBIR) program (contract number # W911W6-12-C-0036). The authors would like to thank Dr. Frank Caradonna and Dr. Ben W. Sim for this support.
Technology commercialization cost model and component case study
NASA Astrophysics Data System (ADS)
1991-12-01
Fuel cells seem poised to emerge as a clean, efficient, and cost-competitive source of fossil-fuel-based electric power and thermal energy. Sponsors of fuel cell technology development need to determine the validity and the attractiveness of a technology to the market in terms of meeting requirements and providing value which exceeds the total cost of ownership. Sponsors of fuel cell development have addressed this issue by requiring the developers to prepare projections of the future production cost of their fuel cells in commercial quantities. These projected costs, together with performance and life projections, provide a preliminary measure of the total value and cost of the product to the customer. Booz-Allen & Hamilton Inc. and Michael A. Cobb & Company have been retained in several assignments over the years to audit these cost projections. The audits have gone well beyond a simple review of the numbers. They have probed the underlying technical and financial assumptions and the sources of data on material and equipment costs, and explored issues such as the realistic manufacturing yields which can be expected in various processes. Based on the experience gained from these audits, DOE gave Booz-Allen and Michael A. Cobb & Company the task of developing criteria to be used in the execution of future fuel cell manufacturing cost studies. It was thought that such criteria would make it easier to execute such studies in the future, as well as make them more understandable and comparable.
Synoptic, Global Mhd Model For The Solar Corona
NASA Astrophysics Data System (ADS)
Cohen, Ofer; Sokolov, I. V.; Roussev, I. I.; Gombosi, T. I.
2007-05-01
The common techniques for mimicking solar corona heating and solar wind acceleration in global MHD models are as follows: 1) additional terms in the momentum and energy equations derived from the WKB approximation for Alfvén wave turbulence; 2) an empirical heat source in the energy equation; 3) a non-uniform distribution of the polytropic index, γ, used in the energy equation. In our model, we choose the latter approach. However, in order to get a more realistic distribution of γ, we use the empirical Wang-Sheeley-Arge (WSA) model to constrain the MHD solution. The WSA model provides the distribution of the asymptotic solar wind speed from the potential field approximation; therefore it also provides the distribution of the kinetic energy. Assuming that far from the Sun the total energy is dominated by the energy of the bulk motion, and assuming the conservation of the Bernoulli integral, we can trace the total energy along a magnetic field line to the solar surface. On the surface the gravity is known and the kinetic energy is negligible. Therefore, we can obtain the surface distribution of γ as a function of the final speed originating from that point. By interpolating γ to a spherically uniform value at the source surface, we use this spatial distribution of γ in the energy equation to obtain a self-consistent, steady-state MHD solution for the solar corona. We present the model results for different Carrington Rotations.
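The link between the asymptotic WSA speed and the surface value of γ rests on the polytropic Bernoulli integral along a field line; the form below is a sketch of the relation described in the abstract rather than an equation quoted from the paper:

$$\frac{u^{2}}{2} + \frac{\gamma}{\gamma-1}\,\frac{p}{\rho} - \frac{G M_\odot}{r} = \mathrm{const.}$$

Far from the Sun the bulk kinetic term u_∞²/2 dominates; at the surface u ≈ 0 and the gravitational term is known, so the footpoint enthalpy, and hence γ, follows from the asymptotic speed u_∞ that the WSA model assigns to that field line.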
NASA Astrophysics Data System (ADS)
Bochner, Brett
1998-12-01
The LIGO project is part of a world-wide effort to detect the influx of Gravitational Waves upon the earth from astrophysical sources, via their interaction with laser beams in interferometric detectors that are designed for extraordinarily high sensitivity. Central to the successful performance of LIGO detectors is the quality of their optical components, and the efficient optimization of interferometer configuration parameters. To predict LIGO performance with optics possessing realistic imperfections, we have developed a numerical simulation program to compute the steady-state electric fields of a complete, coupled-cavity LIGO interferometer. The program can model a wide variety of deformations, including laser beam mismatch and/or misalignment, finite mirror size, mirror tilts, curvature distortions, mirror surface roughness, and substrate inhomogeneities. Important interferometer parameters are automatically optimized during program execution to achieve the best possible sensitivity for each new set of perturbed mirrors. This thesis includes investigations of two interferometer designs: the initial LIGO system, and an advanced LIGO configuration called Dual Recycling. For Initial-LIGO simulations, the program models carrier and sideband frequency beams to compute the explicit shot-noise-limited gravitational wave sensitivity of the interferometer. It is demonstrated that optics of exceptional quality (root-mean-square deformations of less than ~1 nm in the central mirror regions) are necessary to meet Initial-LIGO performance requirements, but that they can be feasibly met. It is also shown that improvements in mirror quality can substantially increase LIGO's sensitivity to selected astrophysical sources. For Dual Recycling, the program models gravitational-wave-induced sidebands over a range of frequencies to demonstrate that the tuned and narrow-banded signal responses predicted for this configuration can be achieved with imperfect optics. Dual Recycling has lower losses at the interferometer signal port than the Initial-LIGO system, though not significantly improved tolerance to mirror roughness deformations in terms of maintaining high signals. Finally, it is shown that 'Wavefront Healing', the claim that losses can be re-injected into the system to feed the gravitational wave signals, is successful in theory, but limited in practice for optics which cause large scattering losses. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
Computer Modeling of High-Intensity Cs-Sputter Ion Sources
NASA Astrophysics Data System (ADS)
Brown, T. A.; Roberts, M. L.; Southon, J. R.
The grid-point mesh program NEDLab has been used to computer-model the interior of the high-intensity Cs-sputter source used in routine operations at the Center for Accelerator Mass Spectrometry (CAMS), with the goal of improving negative ion output. NEDLab has several features that are important for realistic modeling of such sources. First, space-charge effects are incorporated in the calculations through an automated successive-iteration process between ion trajectories and the Poisson electric fields. Second, space-charge distributions can be averaged over successive iterations to suppress model instabilities. Third, space-charge constraints on ion emission from surfaces can be incorporated through Child's-Law-based algorithms. Fourth, the energy of ions emitted from a surface can be randomly chosen from within a thermal energy distribution. Finally, ions can be emitted from a surface at randomized angles. The results of our modeling effort indicate that significant modification of the interior geometry of the source will double Cs+ ion production from our spherical ionizer and produce a significant increase in negative ion output from the source.
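The Child's-Law constraint mentioned above caps the emitted current density at the space-charge limit; in the standard planar-diode form (quoted here for reference, not taken from the paper) it reads

$$J = \frac{4\,\epsilon_0}{9}\sqrt{\frac{2q}{m}}\;\frac{V^{3/2}}{d^{2}},$$

where V is the potential difference across an emission gap of width d and q/m is the charge-to-mass ratio of the emitted ions; an emission algorithm of this kind clamps the surface current wherever unconstrained emission would otherwise exceed the space-charge-limited value.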
Gebhart, T. E.; Martinez-Rodriguez, R. A.; Baylor, L. R.; ...
2017-08-11
To produce a realistic tokamak-like plasma environment in a linear plasma device, a transient source is needed to deliver heat and particle fluxes similar to those seen in an edge localized mode (ELM). ELMs in future large tokamaks will deliver heat fluxes of ~1 GW/m2 to the divertor plasma facing components at a few Hz. An electrothermal plasma source can deliver heat fluxes of this magnitude. These sources operate in an ablative arc regime which is driven by a DC capacitive discharge. In this paper an electrothermal source was configured with two pulse lengths and tested under a solenoidal magnetic field to determine the resulting impact on liner ablation, plasma parameters, and delivered heat flux. The arc travels through and ablates a boron nitride liner and strikes a tungsten plate. Finally, the tungsten target plate is analyzed for surface damage using a scanning electron microscope.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gebhart, T. E.; Martinez-Rodriguez, R. A.; Baylor, L. R.
To produce a realistic tokamak-like plasma environment in a linear plasma device, a transient source is needed to deliver heat and particle fluxes similar to those seen in an edge localized mode (ELM). ELMs in future large tokamaks will deliver heat fluxes of ~1 GW/m2 to the divertor plasma facing components at a few Hz. An electrothermal plasma source can deliver heat fluxes of this magnitude. These sources operate in an ablative arc regime which is driven by a DC capacitive discharge. In this paper an electrothermal source was configured with two pulse lengths and tested under a solenoidal magnetic field to determine the resulting impact on liner ablation, plasma parameters, and delivered heat flux. The arc travels through and ablates a boron nitride liner and strikes a tungsten plate. Finally, the tungsten target plate is analyzed for surface damage using a scanning electron microscope.
Statistical mechanics of neocortical interactions. Derivation of short-term-memory capacity
NASA Astrophysics Data System (ADS)
Ingber, Lester
1984-06-01
A theory developed by the author to describe macroscopic neocortical interactions demonstrates that empirical values of chemical and electrical parameters of synaptic interactions establish several minima of the path-integral Lagrangian as a function of excitatory and inhibitory columnar firings. The number of possible minima, their time scales of hysteresis and probable reverberations, and their nearest-neighbor columnar interactions are all consistent with well-established empirical rules of human short-term memory. Thus, aspects of conscious experience are derived from neuronal firing patterns, using modern methods of nonlinear nonequilibrium statistical mechanics to develop realistic explicit synaptic interactions.
Accurate determinations of alpha(s) from realistic lattice QCD.
Mason, Q; Trottier, H D; Davies, C T H; Foley, K; Gray, A; Lepage, G P; Nobes, M; Shigemitsu, J
2005-07-29
We obtain a new value for the QCD coupling constant by combining lattice QCD simulations with experimental data for hadron masses. Our lattice analysis is the first to (1) include vacuum polarization effects from all three light-quark flavors (using MILC configurations), (2) include third-order terms in perturbation theory, (3) systematically estimate fourth and higher-order terms, (4) use an unambiguous lattice spacing, and (5) use an O(a^2)-accurate QCD action. We use 28 different (but related) short-distance quantities to obtain the five-flavor MS-bar coupling α^(5)_MS(M_Z) = 0.1170(12).
Hut-Mossel, Lisanne; Welker, Gera; Ahaus, Kees; Gans, Rijk
2017-06-14
Many types of audits are commonly used in hospital care to promote quality improvements. However, the evidence on the effectiveness of audits is mixed. The objectives of this proposed realist review are (1) to understand how and why audits might, or might not, work in terms of delivering the intended outcome of improved quality of hospital care and (2) to examine under what circumstances audits could potentially be effective. This protocol will provide the rationale for using a realist review approach and outline the method. This review will be conducted using an iterative four-stage approach. The first and second steps have already been executed. The first step was to develop an initial programme theory, based on the literature, that explains how audits are supposed to work. Second, a systematic literature search was conducted using relevant databases. Third, data will be extracted and coded for concepts relating to context, outcomes and their interrelatedness. Finally, the data will be synthesised in a five-step process: (1) organising the extracted data into evidence tables, (2) theming, (3) formulating chains of inference from the identified themes, (4) linking the chains of inference and formulating context-mechanism-outcome (CMO) configurations and (5) refining the initial programme theory. The reporting of the review will follow the 'Realist and Meta-Review Evidence Synthesis: Evolving Standards' (RAMESES) publication standards. This review does not require formal ethical approval. A better understanding of how and why these audits work, and how context impacts their effectiveness, will inform stakeholders in deciding how to tailor and implement audits within their local context. We will use a range of dissemination strategies to ensure that findings from this realist review are broadly disseminated to academic and non-academic audiences. CRD42016039882.
Bayesian reconstruction of transmission within outbreaks using genomic variants.
De Maio, Nicola; Worby, Colin J; Wilson, Daniel J; Stoesser, Nicole
2018-04-01
Pathogen genome sequencing can reveal details of transmission histories and is a powerful tool in the fight against infectious disease. In particular, within-host pathogen genomic variants identified through heterozygous nucleotide base calls are a potential source of information to identify linked cases and infer direction and time of transmission. However, using such data effectively to model disease transmission presents a number of challenges, including differentiating genuine variants from those observed due to sequencing error, as well as the specification of a realistic model for within-host pathogen population dynamics. Here we propose a new Bayesian approach to transmission inference, BadTrIP (BAyesian epiDemiological TRansmission Inference from Polymorphisms), that explicitly models evolution of pathogen populations in an outbreak, transmission (including transmission bottlenecks), and sequencing error. BadTrIP enables the inference of host-to-host transmission from pathogen sequencing data and epidemiological data. By assuming that genomic variants are unlinked, our method does not require the computationally intensive and unreliable reconstruction of individual haplotypes. Using simulations we show that BadTrIP is robust in most scenarios and can accurately infer transmission events by efficiently combining information from genetic and epidemiological sources; thanks to its realistic model of pathogen evolution and the inclusion of epidemiological data, BadTrIP is also more accurate than existing approaches. BadTrIP is distributed as an open source package (https://bitbucket.org/nicofmay/badtrip) for the phylogenetic software BEAST2. We apply our method to reconstruct transmission history at the early stages of the 2014 Ebola outbreak, showcasing the power of within-host genomic variants to reconstruct transmission events.
Could Expanded Freight Rail Reduce Air Pollution from Trucks?
NASA Astrophysics Data System (ADS)
Bickford, E. E.; Holloway, T.; Johnston, M.
2010-12-01
Cars, trucks and trains are a significant source of emissions that impact both climate and air quality on regional to global scales. Diesel vehicles, most used for freight transport, account for 42% of on-road nitrogen oxide emissions, 58% of on-road fine particulate emissions, and 21% of on-road carbon dioxide emissions. With freight tonnage projected to increase 28% by 2018, and freight trucks the fastest growing source of transportation emissions, we evaluate the potential for increased rail capacity to reduce the environmental impacts of trucks. Most widely available mobile source emissions inventories contain insufficient spatial detail to quantify realistic emission scenario options, and none to date have been linked with commodity flow information in a manner appropriate to consider the true potential of rail substitution. To support a truck-to-rail analysis, and other policy assessments requiring roadway-by-roadway analysis, we have developed a freight emissions inventory for the Upper Midwest based on the Federal Highway Administration’s Freight Analysis Framework version 2.2 and the Environmental Protection Agency’s on-road emissions model, Mobile6.2. Using a Geographical Information System (GIS), we developed emissions scenarios for truck-to-rail modal shifts where 95% of freight tonnage on trips longer than 400 miles is shifted off of trucks and onto railways. Scenarios will be analyzed with the Community Multiscale Air Quality (CMAQ) regional model to assess air quality impacts of associated changes. By using well-respected transportation data and realistic assumptions, results from this study have the potential to inform decisions on transportation sustainability, carbon management, public health, and air quality.
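As a back-of-envelope sketch of the modal-shift arithmetic described above (95% of tonnage on trips longer than 400 miles moved to rail), the following uses purely illustrative emission factors and tonnages, not values from the study.

    # Illustrative modal-shift calculation; emission factors and tonnages are placeholders.
    shipments = [
        {"ton_miles": 1.2e6, "trip_miles": 650},   # long-haul: eligible for the shift
        {"ton_miles": 8.0e5, "trip_miles": 120},   # short-haul: stays on trucks
    ]
    EF_TRUCK_NOX = 0.30   # g NOx per ton-mile (assumed)
    EF_RAIL_NOX = 0.08    # g NOx per ton-mile (assumed)
    SHIFT_FRACTION, MIN_TRIP_MILES = 0.95, 400

    baseline = sum(s["ton_miles"] * EF_TRUCK_NOX for s in shipments)
    scenario = 0.0
    for s in shipments:
        if s["trip_miles"] > MIN_TRIP_MILES:
            shifted = SHIFT_FRACTION * s["ton_miles"]
            scenario += shifted * EF_RAIL_NOX + (s["ton_miles"] - shifted) * EF_TRUCK_NOX
        else:
            scenario += s["ton_miles"] * EF_TRUCK_NOX
    print(f"NOx reduction: {100 * (baseline - scenario) / baseline:.1f}%")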
Walonoski, Jason; Kramer, Mark; Nichols, Joseph; Quina, Andre; Moesel, Chris; Hall, Dylan; Duffett, Carlton; Dube, Kudakwashe; Gallagher, Thomas; McLachlan, Scott
2017-08-30
Our objective is to create a source of synthetic electronic health records that is readily available; suited to industrial, innovation, research, and educational uses; and free of legal, privacy, security, and intellectual property restrictions. We developed Synthea, an open-source software package that simulates the lifespans of synthetic patients, modeling the 10 most frequent reasons for primary care encounters and the 10 chronic conditions with the highest morbidity in the United States. Synthea adheres to a previously developed conceptual framework, scales via open-source deployment on the Internet, and may be extended with additional disease and treatment modules developed by its user community. One million synthetic patient records are now freely available online, encoded in standard formats (eg, Health Level-7 [HL7] Fast Healthcare Interoperability Resources [FHIR] and Consolidated-Clinical Document Architecture), and accessible through an HL7 FHIR application program interface. Health care lags other industries in information technology, data exchange, and interoperability. The lack of freely distributable health records has long hindered innovation in health care. Approaches and tools are available to inexpensively generate synthetic health records at scale without accidental disclosure risk, lowering current barriers to entry for promising early-stage developments. By engaging a growing community of users, the synthetic data generated will become increasingly comprehensive, detailed, and realistic over time. Synthetic patients can be simulated with models of disease progression and corresponding standards of care to produce risk-free realistic synthetic health care records at scale. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association.
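Because the records are exposed through a standard HL7 FHIR interface, reading them needs nothing beyond an ordinary FHIR REST search; the sketch below uses a placeholder base URL, not the project's actual server.

    # Minimal sketch of reading synthetic patients via a standard FHIR search.
    # FHIR_BASE is a hypothetical endpoint; substitute the address of a real FHIR server.
    import requests

    FHIR_BASE = "https://example.org/fhir"
    resp = requests.get(f"{FHIR_BASE}/Patient", params={"_count": 10}, timeout=30)
    resp.raise_for_status()
    bundle = resp.json()                      # a FHIR Bundle resource
    for entry in bundle.get("entry", []):
        patient = entry["resource"]
        name = patient.get("name", [{}])[0]
        print(patient.get("id"), name.get("family"), patient.get("birthDate"))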
Source and listener directivity for interactive wave-based sound propagation.
Mehra, Ravish; Antani, Lakulish; Kim, Sujeong; Manocha, Dinesh
2014-04-01
We present an approach to model dynamic, data-driven source and listener directivity for interactive wave-based sound propagation in virtual environments and computer games. Our directional source representation is expressed as a linear combination of elementary spherical harmonic (SH) sources. In the preprocessing stage, we precompute and encode the propagated sound fields due to each SH source. At runtime, we perform the SH decomposition of the varying source directivity interactively and compute the total sound field at the listener position as a weighted sum of precomputed SH sound fields. We propose a novel plane-wave decomposition approach based on higher-order derivatives of the sound field that enables dynamic HRTF-based listener directivity at runtime. We provide a generic framework to incorporate our source and listener directivity in any offline or online frequency-domain wave-based sound propagation algorithm. We have integrated our sound propagation system in Valve's Source game engine and use it to demonstrate realistic acoustic effects such as sound amplification, diffraction low-passing, scattering, localization, externalization, and spatial sound, generated by wave-based propagation of directional sources and listener in complex scenarios. We also present results from our preliminary user study.
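A minimal sketch of the runtime step described above, assuming hypothetical directivity samples and precomputed per-SH field values: project the current directivity onto spherical harmonics by numerical quadrature, then sum the precomputed fields weighted by those coefficients.

    # Sketch of the runtime SH step: the directivity pattern, quadrature grid and the
    # "precomputed" per-SH field values are all placeholders.
    import numpy as np
    from scipy.special import sph_harm

    order = 2
    theta = np.linspace(0.0, 2 * np.pi, 72, endpoint=False)     # azimuth samples
    phi = np.linspace(0.0, np.pi, 36)                            # polar samples
    TH, PH = np.meshgrid(theta, phi)
    directivity = 1.0 + 0.5 * np.cos(PH)                         # placeholder source pattern
    weights = np.sin(PH) * (2 * np.pi / 72) * (np.pi / 36)       # coarse quadrature weights on the sphere

    rng = np.random.default_rng(0)
    pressure_at_listener = 0.0 + 0.0j
    for l in range(order + 1):
        for m in range(-l, l + 1):
            Y = sph_harm(m, l, TH, PH)                           # scipy: sph_harm(m, l, azimuth, polar)
            coeff = np.sum(directivity * np.conj(Y) * weights)   # SH projection of the directivity
            field_lm = rng.standard_normal() + 1j * rng.standard_normal()  # stand-in for a precomputed field
            pressure_at_listener += coeff * field_lm
    print(pressure_at_listener)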
NASA Astrophysics Data System (ADS)
Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Norman, R.; Yuan, Y. B.; Fritzer, J.; Schwaerz, M.; Zhang, K.
2015-08-01
We introduce a new dynamic statistical optimization algorithm to initialize ionosphere-corrected bending angles of Global Navigation Satellite System (GNSS)-based radio occultation (RO) measurements. The new algorithm estimates background and observation error covariance matrices with geographically varying uncertainty profiles and realistic global-mean correlation matrices. The error covariance matrices estimated by the new approach are more accurate and realistic than in simplified existing approaches and can therefore be used in statistical optimization to provide optimal bending angle profiles for high-altitude initialization of the subsequent Abel transform retrieval of refractivity. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.6 (OPSv5.6) algorithm, using simulated data on two test days from January and July 2008 and real observed CHAllenging Minisatellite Payload (CHAMP) and Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) measurements from the complete months of January and July 2008. The following is achieved for the new method's performance compared to OPSv5.6: (1) significant reduction of random errors (standard deviations) of optimized bending angles down to about half of their size or more; (2) reduction of the systematic differences in optimized bending angles for simulated MetOp data; (3) improved retrieval of refractivity and temperature profiles; and (4) realistically estimated global-mean correlation matrices and realistic uncertainty fields for the background and observations. Overall the results indicate high suitability for employing the new dynamic approach in the processing of long-term RO data into a reference climate record, leading to well-characterized and high-quality atmospheric profiles over the entire stratosphere.
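For readers unfamiliar with statistical optimization of bending angles, the sketch below shows the generic optimal linear combination of a background and an observed profile using covariance matrices built from uncertainty profiles and a correlation matrix; all profiles, uncertainties and correlation lengths are synthetic placeholders rather than values from OPSv5.6 or the new algorithm.

    # Generic statistical optimization of a bending-angle profile (synthetic placeholders).
    import numpy as np

    z = np.linspace(40e3, 80e3, 81)                        # impact heights (m)
    alpha_b = 1e-3 * np.exp(-(z - 40e3) / 7.5e3)           # background bending angles (placeholder)
    sigma_b = 0.15 * alpha_b                               # background uncertainty profile (placeholder)
    sigma_o = 2e-6 * np.ones_like(z)                       # observation uncertainty (placeholder, rad)
    corr = np.exp(-np.abs(z[:, None] - z[None, :]) / 6e3)  # assumed vertical correlation matrix

    B = np.outer(sigma_b, sigma_b) * corr                  # background error covariance
    R = np.outer(sigma_o, sigma_o) * corr                  # observation error covariance
    alpha_o = alpha_b + np.random.default_rng(1).normal(0.0, sigma_o)   # noisy "observations"

    K = B @ np.linalg.inv(B + R)                           # weight given to the observation departure
    alpha_opt = alpha_b + K @ (alpha_o - alpha_b)          # statistically optimized profile
    print(alpha_opt[:3])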
Schaumberg, A
2015-04-01
Simulation often relies on a case-based learning approach and is used as a teaching tool for a variety of audiences. The knowledge transfer goes beyond the mere exchange of soft skills and practical abilities and also includes practical knowledge and decision-making behavior; however, verification of knowledge or practical skills seldom unfolds during simulations. Simulation-based learning seems to affect many learning domains and can, therefore, be considered to be multifactorial in nature. At present, studies examining the effects of learning environments with varying levels of reality on the cognitive long-term retention of students are lacking. The present study focused on the question of whether case scenarios with varying levels of reality produce differences in the cognitive long-term retention of students, in particular with regard to the learning dimensions knowledge, understanding and transfer. The study was conducted on 153 students in the first clinical semester at the Justus-Liebig University of Giessen. Students were randomly selected and subsequently assigned, also in a random fashion, to two practice groups, i.e. realistic and unrealistic. In both groups the students were presented with standardized case scenarios consisting of three case studies, which were accurately defined with a case report containing a detailed description of each scenario and all relevant values so as to ensure identical conditions for both groups. The unrealistic group sat in an unfurnished practice room as a learning environment. The realistic group sat in a furnished learning environment with various background pictures and ambient noise. Students received examination questions before, immediately following and 14 days after the practice. Examination questions were identical at each of the three time points, classified into three learning dimensions following Bloom's taxonomy and evaluated. Furthermore, examination questions were supplemented by a questionnaire concerning the individual perception of reality and of one's own learning success, to be filled in by students immediately after the practice. Examination questions and questionnaires were anonymous but associated with each other. Even with less experienced participants, realistic simulation design led to a significant increase in knowledge immediately after the end of the simulation. This effect, however, did not impact the cognitive long-term retention of students. While the realistic group showed a higher initial knowledge after the simulation, this "knowledge delta" was forgotten within 14 days, putting them back on par with the unrealistic comparison group. Two weeks after the practice, comprehension questions were answered significantly better than those testing pure knowledge. Therefore, it can be concluded that even vaguely realistic simulation scenarios affect the learning dimension of understanding. For simulation-based learning the outcome depends not only on knowledge, practical skills and motivational variables but also on the onset of negative emotions, perception of one's own ability and personality profile. Simulation training alone does not appear to guarantee learning success but it seems to be necessary to establish a simulation setting suitable for the education level, needs and personality characteristics of the students.
ERIC Educational Resources Information Center
Khudeir, Dua'a Ibrahim
2017-01-01
This research paper discusses women's status in Jordan in terms of rights, equality and personal liberties, freedom of choice in particular. It argues that, although Jordan is working hard to be open to Western values and civilization, it lags behind when it comes to women's liberty and equality. Jordan is a patriarchal…
Fourth International Congress on Industrial and Applied Mathematics. Book of Abstracts
1999-01-01
Dipartimento di Matematica, Universita' di Pavia, Italy) Logarithmic Sobolev inequalities for kinetic semiconductor equations. In this paper we analyze the... terms of Whitney forms. FERNANDES, Paolo (Istituto per la Matematica Applicata del Consiglio Nazionale delle Ricerche, Italy) Dealing with realistic... Matematica dell'Universita di Pavia, Italy. PERUGIA, Ilaria (Dipartimento di Matematica, Universita' di Pavia, Italy) An adaptive field-based method
NASA Technical Reports Server (NTRS)
Beach, B. E.
1980-01-01
Some of the concepts related to a line-oriented flight training program are discussed. The need to shift from training in manipulative skills to something closer to management skills is emphasized. The program is evaluated in terms of its realistic approaches which include the simulator's optimized motion and visual capabilities. The value of standard operating procedures as they affect the line pilot in everyday operations are also illustrated.
Automated Synthetic Scene Generation
2014-07-01
Using the Beard-Maxwell BRDF model, the BRDF from Equations (3.3) and (3.4) is composed of specular, diffuse, and volumetric terms such that... models help organizations developing new remote sensing instruments anticipate sensor performance by enabling the creation of synthetic imagery... for a proposed sensor before the sensor is built. One of the largest challenges in modeling realistic synthetic imagery, however, is generating the
Further Examination of the Immediate Impact of Television on Children's Executive Function
ERIC Educational Resources Information Center
Lillard, Angeline S.; Drell, Marissa B.; Richey, Eve M.; Boguszewski, Katherine; Smith, Eric D.
2015-01-01
Three studies examined the short-term impact of television (TV) on children's executive function (EF). Study 1 (N = 160) showed that 4- and 6-year-olds' EF is impaired after watching 2 different fast and fantastical shows, relative to that of children who watched a slow, realistic show or played. In Study 2 (N = 60), 4-year-olds' EF was as…
1993-11-01
way is to develop a crude but working model of an entire system. The other is by developing a realistic model of the user interface, leaving out most... devices or by incorporating software for a more user-friendly interface. Automation introduces the possibility of making data entry errors. Multimode... across various human-computer interfaces. Memory: Minimize the amount of information that the user must maintain in short-term memory
Estimation of Atlantic-Mediterranean netflow variability
NASA Astrophysics Data System (ADS)
Guerreiro, Catarina; Peliz, Alvaro; Miranda, Pedro
2016-04-01
The exchanges at the Strait of Gibraltar are extremely difficult to measure due to the strong temporal and across-strait variabilities; yet the Atlantic inflow into the Mediterranean is extremely important both for climate and for ecosystems. Most of the published numerical modeling studies do not resolve the Strait of Gibraltar realistically. Models that represent the strait at high resolution focus primarily on high-frequency dynamics, whereas long-term dynamics are studied with low-resolution models, and for that reason the Strait dynamics are poorly resolved. Estimating the variability of the exchanges requires long-term and high-resolution studies, thus an improved simulation with an explicit and realistic representation of the Strait is necessary. On seasonal to inter-annual timescales the flow is essentially driven by the net evaporation contribution, and consequently realistic fields of precipitation and evaporation are necessary for the model setup. A comparison between observations, reanalyses and combined products shows that the ERA-Interim Reanalysis is the most suitable product for the Mediterranean Sea. Its time and space variability are in close agreement with NOC 1.1 for the common period (1980 - 1993) and also with evaporation from OAFLUX (1989 - 2014). Subinertial fluctuations, with periods from days to a few months, are the second most energetic after tides, and are the response to atmospheric pressure fluctuations and local winds. Atmospheric pressure fluctuations in the Mediterranean cause sea level oscillations that induce a barotropic flow through the Strait. Candela's analytical model has been used to quantify this response in later studies, though comparison with observations points to an underestimation of the flow at the strait. An improved representation of this term's contribution to the Atlantic - Mediterranean exchange must be achieved on longer time-scales. We propose a new simulation for the last 36 years (1979 - 2014) for the Mediterranean - Atlantic domain with explicit representation of the Strait. The simulations are performed using the Regional Ocean Modeling System (ROMS) and forced with the different contributions of the freshwater budget, sea level pressure fluctuations and winds from the ERA-Interim Reanalysis. The model of sea level pressure induced barotropic fluctuations simulates the barotropic variability at the Strait of Gibraltar for the last decades.
Effect of the Ionosphere on Space and Terrestrial Systems
1978-01-01
adequately shielded and filtered... the grounding of all conductive elements... The Voyager spacecraft was modified to include arc discharge sources... dependence. Reasons for such a choice include the following: A realistic... a set of the associated "cutoff orbits". We see from Fig. 8 that the included angle... which had been modified to produce an approximately uniform flood beam up to 10 cm in diameter... synchronous-orbit spacecraft [De Forest, 1972]
ERIC Educational Resources Information Center
Croghan, Emma; Aveyard, Paul; Johnson, Carol
2005-01-01
Purpose: There is a discrepancy between the ease of purchase of cigarettes reported by young people themselves and the results of ease of purchase obtained by tests done by official sources such as Trading Standards Units. This discrepancy suggests that either data from young people or from trading standards are unreliable. This research set out…
An Investigation of the Influence of Waves on Sediment Processes in Skagit Bay
2012-09-30
parameterizations common to most surface wave models, including wave generation by wind, energy dissipation from whitecapping, and quadruplet wave-wave... supply and wind on tidal flat sediment transport. It will be used to evaluate the capabilities of state-of-the-art open source sediment models and to... N00014-08-1-1115, which supported the hydrodynamic model development. Wind forcing for the wave and hydrodynamic models for realistic experiments will
Ohly, Heather; Crossland, Nicola; Dykes, Fiona; Lowe, Nicola; Hall-Moran, Victoria
2017-04-21
To explore how low-income pregnant women use Healthy Start food vouchers, the potential impacts of the programme, and which women might experience these impacts and why. A realist review. Primary or empirical studies (of any design) were included if they contributed relevant evidence or insights about how low-income women use food vouchers from the Healthy Start (UK) or the Special Supplemental Nutrition Program for Women, Infants and Children (WIC) programmes. The assessment of 'relevance' was deliberately broad to ensure that reviewers remained open to new ideas from a variety of sources of evidence. A combination of evidence synthesis and realist analysis techniques was used to modify, refine and substantiate programme theories, which were constructed as explanatory 'context-mechanism-outcome'-configurations. 38 primary studies were included in this review: four studies on Healthy Start and 34 studies on WIC. Two main outcome strands were identified: dietary improvements (intended) and financial assistance (unintended). Three evidence-informed programme theories were proposed to explain how aspects of context (and mechanisms) may generate these outcomes: the 'relative value' of healthy eating (prioritisation of resources); retailer discretion (pressure to 'bend the rules'); the influence of other family members (disempowerment). This realist review suggests that some low-income pregnant women may use Healthy Start vouchers to increase their consumption of fruits and vegetables and plain cow's milk, whereas others may use them to reduce food expenditure and save money for other things. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Re-Evaluation of Development of the TMDL Using Long-Term Monitoring Data and Modeling
NASA Astrophysics Data System (ADS)
Squires, A.; Rittenburg, R.; Boll, J.; Brooks, E. S.
2012-12-01
Since 1996, 47,979 Total Maximum Daily Loads (TMDLs) have been approved throughout the United States for impaired water bodies. TMDLs are set by determining natural background loads for a given water body and then estimating contributions from point and nonpoint sources to create load allocations and determine acceptable pollutant levels that meet water quality standards. Monitoring data and hydrologic models may be used in this process. However, the data sets used are often limited in duration and frequency, and model simulations are not always accurate. The objective of this study is to retrospectively examine the development and accuracy of the TMDL for a stream in an agricultural area using long-term monitoring data and a robust modeling process. The study area is the Paradise Creek Watershed in northern Idaho. A sediment TMDL was determined for the Idaho section of Paradise Creek in 1997. Sediment TMDL levels were determined using a short-term data set and the Water Erosion Prediction Project (WEPP) model. Background loads used for the TMDL in 1997 were from pre-agricultural levels, based on WEPP model results. We modified the WEPP model for simulation of saturation excess overland flow, the dominant runoff generation mechanism, and analyzed more than 10 years of high resolution monitoring data from 2001 - 2012, including discharge and total suspended solids. Results will compare background loading and current loading based on present-day land use documented during the monitoring period and compare previous WEPP model results with the modified WEPP model results. This research presents a re-evaluation of the TMDL process with recommendations for a more scientifically sound methodology to attain realistic water quality goals.
NASA Astrophysics Data System (ADS)
Beskardes, G. D.; Hole, J. A.; Wang, K.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Michaelides, M.; Brown, L. D.; Quiros, D. A.
2016-12-01
Back-projection imaging has recently become a practical method for local earthquake detection and location due to the deployment of densely sampled, continuously recorded, local seismograph arrays. Back-projection is scalable to earthquakes with a wide range of magnitudes from very tiny to very large. Local dense arrays provide the opportunity to capture very tiny events for a range of applications, such as tectonic microseismicity, source scaling studies, wastewater injection-induced seismicity, hydraulic fracturing, CO2 injection monitoring, volcano studies, and mining safety. While back-projection sometimes utilizes the full seismic waveform, the waveforms are often pre-processed to overcome imaging issues. We compare the performance of back-projection using four previously used data pre-processing methods: full waveform, envelope, short-term averaging / long-term averaging (STA/LTA), and kurtosis. The goal is to identify an optimized strategy for an entirely automated imaging process that is robust in the presence of real-data issues, has the lowest signal-to-noise thresholds for detection and for location, has the best spatial resolution of the energy imaged at the source, preserves magnitude information, and considers computational cost. Real data issues include aliased station spacing, low signal-to-noise ratio (to <1), large noise bursts and spatially varying waveform polarity. For evaluation, the four imaging methods were applied to the aftershock sequence of the 2011 Virginia earthquake as recorded by the AIDA array with 200-400 m station spacing. These data include earthquake magnitudes from -2 to 3 with highly variable signal to noise, spatially aliased noise, and large noise bursts: realistic issues in many environments. Each of the four back-projection methods has advantages and disadvantages, and a combined multi-pass method achieves the best of all criteria. Preliminary imaging results from the 2011 Virginia dataset will be presented.
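For orientation, the three waveform transforms named above can be computed on a single trace as in the sketch below; the synthetic trace, sampling rate and window lengths are assumptions, not values from the study.

    # Envelope, STA/LTA and sliding-kurtosis characteristic functions on a synthetic trace.
    import numpy as np
    from scipy.signal import hilbert

    fs = 250.0                                   # samples per second (assumed)
    t = np.arange(0, 20.0, 1.0 / fs)
    rng = np.random.default_rng(0)
    trace = rng.normal(0.0, 1.0, t.size)
    trace[2500:2600] += 8.0 * np.sin(2 * np.pi * 12 * t[2500:2600])   # small "event"

    envelope = np.abs(hilbert(trace))            # instantaneous amplitude

    def sliding_mean(x, n):
        c = np.cumsum(np.insert(x, 0, 0.0))
        return (c[n:] - c[:-n]) / n

    sta_len, lta_len = int(0.5 * fs), int(5.0 * fs)
    sta = sliding_mean(trace**2, sta_len)
    lta = sliding_mean(trace**2, lta_len)
    sta_lta = sta[lta_len - sta_len:] / (lta + 1e-12)   # align windows ending on the same sample

    def sliding_kurtosis(x, n):
        out = np.zeros(x.size - n)
        for i in range(out.size):
            w = x[i:i + n]
            out[i] = np.mean((w - w.mean())**4) / (w.var() + 1e-12)**2
        return out

    kurt = sliding_kurtosis(trace, int(1.0 * fs))
    print(envelope.max(), sta_lta.max(), kurt.max())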
Spectral Characteristics of Wake Vortex Sound During Roll-Up
NASA Technical Reports Server (NTRS)
Booth, Earl R., Jr. (Technical Monitor); Zhang, Yan; Wang, Frank Y.; Hardin, Jay C.
2003-01-01
This report presents an analysis of the sound spectra generated by a trailing aircraft vortex during its rolling-up process. The study demonstrates that a rolling-up vortex could produce low frequency (less than 100 Hz) sound with very high intensity (60 dB above the threshold of human hearing) at a distance of 200 ft from the vortex core. The spectrum then drops off rapidly thereafter. A rigorous analytical approach has been adopted in this report to derive the spectrum of vortex sound. First, the sound pressure was solved from an alternative treatment of Lighthill's acoustic analogy approach [1]. After the application of Green's function for free space, a tensor analysis was applied to permit the removal of the source term singularity of the wave equation in the far field. Consequently, the sound pressure is expressed in terms of the retarded time that indicates the time history and spatial distribution of the sound source. The Fourier transformation is then applied to the sound pressure to compute its spectrum. As a result, the Fourier transformation greatly simplifies the expression of the vortex sound pressure involving the retarded time, so that the numerical computation is applicable with ease for axisymmetric line vortices during the rolling-up process. The vortex model assumes that the vortex circulation is proportional to time and the core radius is a constant. In addition, the velocity profile is assumed to be self-similar along the aircraft flight path, so that a benchmark vortex velocity profile can be devised to obtain a closed form solution, which is then used to validate the numerical calculations for other more realistic vortex profiles for which no closed form solutions are available. The study suggests that acoustic sensors operating in the low frequency band could be profitably deployed for detecting the vortex sound during the rolling-up process.
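For reference, the standard form of Lighthill's acoustic analogy underlying such treatments, together with the roll-up assumption stated in the abstract (notation assumed), is:

    \[
    \frac{\partial^2 \rho'}{\partial t^2} - c_0^2 \nabla^2 \rho'
      = \frac{\partial^2 T_{ij}}{\partial x_i\,\partial x_j},
    \qquad
    T_{ij} = \rho u_i u_j + \bigl(p' - c_0^2 \rho'\bigr)\delta_{ij} - \tau_{ij},
    \qquad
    \Gamma(t) = \dot{\Gamma}\,t \ \text{at fixed core radius.}
    \]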
Crowd Sourcing Approach for UAS Communication Resource Demand Forecasting
NASA Technical Reports Server (NTRS)
Wargo, Chris A.; Difelici, John; Roy, Aloke; Glaneuski, Jason; Kerczewski, Robert J.
2016-01-01
Congressional attention to Unmanned Aircraft Systems (UAS) has caused the Federal Aviation Administration (FAA) to move the National Airspace System (NAS) Integration project forward, but using guidelines, practices and procedures that are yet to be fully integrated with the FAA Aviation Management System. The real drive for change in the NAS will come from both UAS operators and the government jointly seeing an accurate forecast of UAS usage demand data. This solid forecast information would truly get the attention of planners. This requires not an aggregate demand, but rather a picture of how the demand is spread across small to large UAS, how it is spread across a wide range of missions, how it is expected over time and where, in terms of geospatial locations, the demand will appear. In 2012 the Volpe Center performed a study of the overall future demand for UAS. This was done by aggregate classes of aircraft types. However, the realistic expected demand will appear in clusters of aircraft activities grouped by similar missions on a smaller geographical footprint and then growing from those small cells. In general, there is not a demand forecast that is tightly coupled to the real purpose of the mission requirements (e.g. in terms of real locations and physical structures such as windmills to inspect, farms to survey, pipelines to patrol, etc.). Being able to present a solid basis for the demand is crucial to getting the attention of investment, government and other fiscal planners. To this end, Mosaic ATM under NASA guidance is developing a crowd-sourced demand forecast engine that can draw forecast details from commercial and government users and vendors. These forecasts will be vetted by a governance panel and will then provide a sharable, accurate set of projection data. Our paper describes the project and the technical approach we are using to design and create access for users to the forecast system.
NASA Astrophysics Data System (ADS)
Mori, K.; Tada, K.; Tawara, Y.; Tosaka, H.; Ohno, K.; Asami, M.; Kosaka, K.
2015-12-01
Since the Fukushima Dai-ichi Nuclear Power Plant (FDNPP) accident, intensive monitoring and modeling work on radionuclide transfer in the environment has been carried out. Although the Cesium (Cs) concentration has been attenuating due to both the physical and the environmental half-life (i.e., wash-off by water and sediment), the attenuation rate depends clearly on the type of land use and land cover. In the Fukushima case, studying the migration in forest land use is important for predicting the long-term behavior of Cs because most of the contaminated region is covered by forests. Atmospheric fallout shows complicated behavior in the biogeochemical cycle of forests, which can be described by biotic/abiotic interactions between many components. In developing conceptual and mathematical models of Cs transfer in forest ecosystems, defining the dominant components and their interactions is a crucial issue (BIOMASS, 1997-2001). However, the modeling of fate and transport in the geosphere after Cs is exported from the forest ecosystem is often ignored. An integrated watershed model for simulating the spatiotemporal redistribution of Cs, covering the entire region from source to mouth and from surface to subsurface, has recently been developed. Since the deposited Cs can migrate due to water and sediment movement, the different species (i.e., dissolved and suspended) and their interactions are key issues in the modeling. However, the initial inventory used as the source term was simplified to be homogeneous and time-independent, and the biogeochemical cycle in forests was not explicitly considered. Consequently, it was difficult to evaluate the regionally inherent characteristics, which differ according to land use, even when the model was well calibrated. In this study, we combine the respective advantages of forest-ecosystem and watershed modeling. This enables more realistic Cs deposition to be included, and time series of the inventory can be forced over the land surface. These processes are integrated into the watershed simulator GETFLOWS coupled with biogeochemical cycling in forests. We present a brief overview of the simulator and an application to a reservoir basin.
Breil, Cassandra; Abert Vian, Maryline; Zemb, Thomas; Kunz, Werner; Chemat, Farid
2017-03-27
Bligh and Dyer (B & D) or Folch procedures for the extraction and separation of lipids from microorganisms and biological tissues using chloroform/methanol/water have been used tens of thousands of times and are "gold standards" for the analysis of extracted lipids. Based on the Conductor-like Screening MOdel for realistic Solvatation (COSMO-RS), we select ethanol and ethyl acetate as being potentially suitable for the substitution of methanol and chloroform. We confirm this by performing solid-liquid extraction of yeast ( Yarrowia lipolytica IFP29 ) and subsequent liquid-liquid partition-the two steps of routine extraction. For this purpose, we consider similar points in the ternary phase diagrams of water/methanol/chloroform and water/ethanol/ethyl acetate, both in the monophasic mixtures and in the liquid-liquid miscibility gap. Based on high performance thin-layer chromatography (HPTLC) to obtain the distribution of lipids classes, and gas chromatography coupled with a flame ionisation detector (GC/FID) to obtain fatty acid profiles, this greener solvents pair is found to be almost as effective as the classic methanol-chloroform couple in terms of efficiency and selectivity of lipids and non-lipid material. Moreover, using these bio-sourced solvents as an alternative system is shown to be as effective as the classical system in terms of the yield of lipids extracted from microorganism tissues, independently of their apparent hydrophilicity.
Breil, Cassandra; Abert Vian, Maryline; Zemb, Thomas; Kunz, Werner; Chemat, Farid
2017-01-01
Bligh and Dyer (B & D) or Folch procedures for the extraction and separation of lipids from microorganisms and biological tissues using chloroform/methanol/water have been used tens of thousands of times and are “gold standards” for the analysis of extracted lipids. Based on the Conductor-like Screening MOdel for realistic Solvatation (COSMO-RS), we select ethanol and ethyl acetate as being potentially suitable for the substitution of methanol and chloroform. We confirm this by performing solid–liquid extraction of yeast (Yarrowia lipolytica IFP29) and subsequent liquid–liquid partition—the two steps of routine extraction. For this purpose, we consider similar points in the ternary phase diagrams of water/methanol/chloroform and water/ethanol/ethyl acetate, both in the monophasic mixtures and in the liquid–liquid miscibility gap. Based on high performance thin-layer chromatography (HPTLC) to obtain the distribution of lipids classes, and gas chromatography coupled with a flame ionisation detector (GC/FID) to obtain fatty acid profiles, this greener solvents pair is found to be almost as effective as the classic methanol–chloroform couple in terms of efficiency and selectivity of lipids and non-lipid material. Moreover, using these bio-sourced solvents as an alternative system is shown to be as effective as the classical system in terms of the yield of lipids extracted from microorganism tissues, independently of their apparent hydrophilicity. PMID:28346372
A discrete fracture model for two-phase flow in fractured porous media
NASA Astrophysics Data System (ADS)
Gläser, Dennis; Helmig, Rainer; Flemisch, Bernd; Class, Holger
2017-12-01
A discrete fracture model on the basis of a cell-centered finite volume scheme with multi-point flux approximation (MPFA) is presented. The fractures are included in a d-dimensional computational domain as (d - 1)-dimensional entities living on the element facets, which requires the grid to have the element facets aligned with the fracture geometries. However, the approach overcomes the problem of small cells inside the fractures when compared to equi-dimensional models. The system of equations considered is solved on both the matrix and the fracture domain, where on the former the fractures are treated as interior boundaries and on the latter the exchange term between fracture and matrix appears as an additional source/sink. This exchange term is represented by the matrix-fracture fluxes, computed as functions of the unknowns in both domains by applying adequate modifications to the MPFA scheme. The method is applicable to both low-permeable and highly conductive fractures. The quality of the results obtained by the discrete fracture model is studied by comparison to an equi-dimensional discretization on a simple geometry for both single- and two-phase flow. For the case of two-phase flow in a highly conductive fracture, good agreement in the solution and in the matrix-fracture transfer fluxes could be observed, while for a low-permeable fracture the discrepancies were more pronounced. The method is then applied to two-phase flow through a realistic fracture network in two and three dimensions.
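The matrix-fracture coupling idea can be summarized in a few lines: the transfer flux enters the fracture equation as a source and the matrix equation as a sink. The sketch below uses a simple two-point transmissibility with placeholder values, not the paper's MPFA construction.

    # Matrix-fracture exchange as a source/sink pair (two-point sketch, placeholder values).
    def matrix_fracture_flux(p_matrix, p_fracture, k_matrix, area, dist, mu=1.0):
        """Volumetric flux from a matrix cell into an adjacent fracture cell."""
        transmissibility = k_matrix * area / (mu * dist)
        return transmissibility * (p_matrix - p_fracture)

    q = matrix_fracture_flux(p_matrix=2.0e5, p_fracture=1.5e5,
                             k_matrix=1e-14, area=1.0, dist=0.5)
    source_fracture = +q    # added to the fracture-domain equation
    sink_matrix = -q        # added to the matrix-domain equation
    print(q)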
NASA Astrophysics Data System (ADS)
Sus, Oliver; Stengel, Martin; Stapelberg, Stefan; McGarragh, Gregory; Poulsen, Caroline; Povey, Adam C.; Schlundt, Cornelia; Thomas, Gareth; Christensen, Matthew; Proud, Simon; Jerg, Matthias; Grainger, Roy; Hollmann, Rainer
2018-06-01
We present here the key features of the Community Cloud retrieval for CLimate (CC4CL) processing algorithm. We focus on the novel features of the framework: the optimal estimation approach in general, explicit uncertainty quantification through rigorous propagation of all known error sources into the final product, and the consistency of our long-term, multi-platform time series provided at various resolutions, from 0.5 to 0.02°. By describing all key input data and processing steps, we aim to inform the user about important features of this new retrieval framework and its potential applicability to climate studies. We provide an overview of the retrieved and derived output variables. These are analysed for four, partly very challenging, scenes collocated with CALIOP (Cloud-Aerosol lidar with Orthogonal Polarization) observations in the high latitudes and over the Gulf of Guinea-West Africa. The results show that CC4CL provides very realistic estimates of cloud top height and cover for optically thick clouds but, where optically thin clouds overlap, returns a height between the two layers. CC4CL is a unique, coherent, multi-instrument cloud property retrieval framework applicable to passive sensor data of several EO missions. Through its flexibility, CC4CL offers the opportunity for combining a variety of historic and current EO missions into one dataset, which, compared to single sensor retrievals, is improved in terms of accuracy and temporal sampling.
NASA Astrophysics Data System (ADS)
Toliver, Paul; Ozdur, Ibrahim; Agarwal, Anjali; Woodward, T. K.
2013-05-01
In this paper, we describe a detailed performance comparison of alternative single-pixel, single-mode LIDAR architectures including (i) linear-mode APD-based direct-detection, (ii) optically-preamplified PIN receiver, (iii) PIN-based coherent-detection, and (iv) Geiger-mode single-photon-APD counting. Such a comparison is useful when considering next-generation LIDAR on a chip, which would allow one to leverage extensive waveguide-based structures and processing elements developed for telecom and apply them to small form-factor sensing applications. Models of four LIDAR transmit and receive systems are described in detail, which include not only the dominant sources of receiver noise commonly assumed in each of the four detection limits, but also additional noise terms present in realistic implementations. These receiver models are validated through the analysis of detection statistics collected from an experimental LIDAR testbed. The receiver is reconfigurable into four modes of operation, while transmit waveforms and channel characteristics are held constant. The use of a diffuse hard target highlights the importance of including speckle noise terms in the overall system analysis. All measurements are done at 1550 nm, which offers multiple system advantages including less stringent eye safety requirements and compatibility with available telecom components, optical amplification, and photonic integration. Ultimately, the experimentally-validated detection statistics can be used as part of an end-to-end system model for projecting rate, range, and resolution performance limits and tradeoffs of alternative integrated LIDAR architectures.
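The role of speckle in those statistics can be illustrated with a short Monte Carlo experiment: for a diffuse target the per-shot mean photocount is modulated by a gamma-distributed speckle factor before Poisson shot noise is applied, which broadens the count distribution well beyond Poisson. The parameters are illustrative, not taken from the testbed.

    # Monte Carlo sketch of speckle-broadened direct-detection photocount statistics.
    import numpy as np

    rng = np.random.default_rng(0)
    mean_photons = 20.0        # mean detected signal photons per shot (assumed)
    speckle_cells = 4          # effective number of speckle correlation cells (assumed)
    shots = 100_000

    speckle = rng.gamma(shape=speckle_cells, scale=1.0 / speckle_cells, size=shots)
    counts_speckle = rng.poisson(mean_photons * speckle)      # speckle + shot noise
    counts_poisson = rng.poisson(mean_photons, size=shots)    # shot noise only

    print("variance, shot noise only:   ", counts_poisson.var())
    print("variance, with speckle noise:", counts_speckle.var())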
Forward and inverse effects of the complete electrode model in neonatal EEG
Lew, S.; Wolters, C. H.
2016-01-01
This paper investigates finite element method-based modeling in the context of neonatal electroencephalography (EEG). In particular, the focus lies on electrode boundary conditions. We compare the complete electrode model (CEM) with the point electrode model (PEM), which is the current standard in EEG. In the CEM, the voltage experienced by an electrode is modeled more realistically as the integral average of the potential distribution over its contact surface, whereas the PEM relies on a point value. Consequently, the CEM takes into account the subelectrode shunting currents, which are absent in the PEM. In this study, we aim to find out how the electrode voltage predicted by these two models differ, if standard size electrodes are attached to a head of a neonate. Additionally, we study voltages and voltage variation on electrode surfaces with two source locations: 1) next to the C6 electrode and 2) directly under the Fz electrode and the frontal fontanel. A realistic model of a neonatal head, including a skull with fontanels and sutures, is used. Based on the results, the forward simulation differences between CEM and PEM are in general small, but significant outliers can occur in the vicinity of the electrodes. The CEM can be considered as an integral part of the outer head model. The outcome of this study helps understanding volume conduction of neonatal EEG, since it enlightens the role of advanced skull and electrode modeling in forward and inverse computations. NEW & NOTEWORTHY The effect of the complete electrode model on electroencephalography forward and inverse computations is explored. A realistic neonatal head model, including a skull structure with fontanels and sutures, is used. The electrode and skull modeling differences are analyzed and compared with each other. The results suggest that the complete electrode model can be considered as an integral part of the outer head model. To achieve optimal source localization results, accurate electrode modeling might be necessary. PMID:27852731
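The essential difference between the two electrode models can be stated in one line each: the PEM reads the potential at a single node, while the CEM electrode voltage is, to leading order, the area-weighted average of the potential over the contact surface (contact impedance and shunting currents are omitted here; the numbers are placeholders).

    # PEM point value versus CEM-style surface average (placeholder potentials and areas).
    import numpy as np

    node_potentials = np.array([12.4, 11.9, 12.1, 11.6, 12.8])   # microvolts under the electrode
    facet_areas = np.array([2.0, 1.5, 1.8, 1.2, 2.5])            # contact facet areas, mm^2

    u_pem = node_potentials[0]                                          # point electrode model
    u_cem = np.sum(facet_areas * node_potentials) / facet_areas.sum()   # integral (area) average
    print(f"PEM: {u_pem:.2f} uV, CEM-style average: {u_cem:.2f} uV")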
Navigation system for minimally invasive esophagectomy: experimental study in a porcine model.
Nickel, Felix; Kenngott, Hannes G; Neuhaus, Jochen; Sommer, Christof M; Gehrig, Tobias; Kolb, Armin; Gondan, Matthias; Radeleff, Boris A; Schaible, Anja; Meinzer, Hans-Peter; Gutt, Carsten N; Müller-Stich, Beat-Peter
2013-10-01
Navigation systems potentially facilitate minimally invasive esophagectomy and improve patient outcome by improving intraoperative orientation, position estimation of instruments, and identification of lymph nodes and resection margins. The authors' self-developed navigation system is highly accurate in static environments. This study aimed to test the overall accuracy of the navigation system in a realistic operating room scenario and to identify the different sources of error altering accuracy. To simulate a realistic environment, a porcine model (n = 5) was used with endoscopic clips in the esophagus as navigation targets. Computed tomography imaging was followed by image segmentation and target definition with the medical imaging interaction toolkit software. Optical tracking was used for registration and localization of animals and navigation instruments. Intraoperatively, the instrument was displayed relative to segmented organs in real time. The target registration error (TRE) of the navigation system was defined as the distance between the target and the navigation instrument tip. The TRE was measured on skin targets with the animal in the 0° supine and 25° anti-Trendelenburg position and on the esophagus during laparoscopic transhiatal preparation. On skin targets, the TRE was significantly higher in the 25° position, at 14.6 ± 2.7 mm, compared with the 0° position, at 3.2 ± 1.3 mm. The TRE on the esophagus was 11.2 ± 2.4 mm. The main source of error was soft tissue deformation caused by intraoperative positioning, pneumoperitoneum, surgical manipulation, and tissue dissection. The navigation system obtained acceptable accuracy with a minimally invasive transhiatal approach to the esophagus in a realistic experimental model. Thus the system has the potential to improve intraoperative orientation, identification of lymph nodes and adequate resection margins, and visualization of risk structures. Compensation methods for soft tissue deformation may lead to an even more accurate navigation system in the future.
The Climate Disruption Challenge for Water Security in a Growing World
NASA Astrophysics Data System (ADS)
Paxton, L. J.; Nix, M.; Ihde, A.; MacDonald, L. H.; Parker, C.; Schaefer, R. K.; Weiss, M.; Babin, S. M.; Swartz, W. H.; Schloman, J.
2012-12-01
Climate disruption, the increasingly large and erratic departures of weather and climate from the benign conditions of the last one hundred years, is the greatest challenge to the long-term stability of world governments. Population growth, food and water security, energy supplies, and economic factors are, to some degree, within the control of governance and policy and all of these are impacted by climate disruption. Climate disruption, on the other hand, is not amenable to direct modification on the short timescales that commonly dictate governmental policy and human response. Global average temperatures will continue to increase even if there were immediate, profound changes in emission scenarios. Policy makers are faced with the very practical and immediate problem of determining what can one reasonably do to ameliorate the impact of climate disruption. The issue from a policy viewpoint is: how does one make effective policy when faced with a situation in which there are varied viewpoints in competition. How does one establish a consensus for action? What information "speaks" to policy makers? Water security is one such issue and provides an important, immediate, and tangible device to use when we examine how one can determine what policies can be effectively pursued. The Global Assimilation of Information for Action (GAIA) project creates a support environment to address the impact of climate disruption on global, national, regional, and/or local interests. The basic research community is concerned with the scientific aspects of predicting climate change in terms of environmental parameters such as rainfall, temperature and humidity while decision makers must deal with planning for a world that may be very different from the one we have grown accustomed to. Decision makers must deal with the long-term impacts on public health, agriculture, economic productivity, security, extreme weather, etc in an environment that has come to focus on short-term issues. To complicate matters, the information available from the climate studies community is couched in terms of model projections with "uncertainties" and a choice of emission scenarios that are often expressed in terms of the results of computer simulations and model output. GAIA develops actionable information and explores the interactions of policy and practice. Part of this framework is the development of "games". These realistic games include the elements of both agent-based and role simulation games in which subject matter experts interact in a realistic scenario to explore courses of action and their outcomes based on realistic, projected environments. We will present examples of some of the past work done at APL and examples of collaborative or competitive games that could be used to explore climate disruption in terms of social, political, and economic impacts. These games provide immediate, "tactile" experience of the implications of a choice of policy. In this talk we will suggest how this tool can be applied to problems like the Colorado River Basin or the Brahmaputra.
NASA Technical Reports Server (NTRS)
Toksoz, M. Nafi
1988-01-01
The long-term objective of this project is to interpret NASA's Crustal Dynamics measurements (SLR) in the Eastern Mediterranean region in terms of relative plate movements and intraplate deformation. The approach is to combine realistic modeling studies with analysis of available geophysical and geological observations to provide a framework for interpreting NASA's measurements. This semi-annual report concentrates on recent results regarding the tectonics of Anatolia and surrounding regions from ground based observations. Also reported on briefly is progress in the use of the Global Positioning System to densify SLR observations in the Eastern Mediterranean. Reference is made to the previous annual report for a discussion of modeling results.
Realistic simulations of a cyclotron spiral inflector within a particle-in-cell framework
NASA Astrophysics Data System (ADS)
Winklehner, Daniel; Adelmann, Andreas; Gsell, Achim; Kaman, Tulin; Campo, Daniela
2017-12-01
We present an upgrade to the particle-in-cell ion beam simulation code opal that enables us to run highly realistic simulations of the spiral inflector system of a compact cyclotron. This upgrade includes a new geometry class and field solver that can handle the complicated boundary conditions posed by the electrode system in the central region of the cyclotron both in terms of particle termination, and calculation of self-fields. Results are benchmarked against the analytical solution of a coasting beam. As a practical example, the spiral inflector and the first revolution in a 1 MeV /amu test cyclotron, located at Best Cyclotron Systems, Inc., are modeled and compared to the simulation results. We find that opal can now handle arbitrary boundary geometries with relative ease. Simulated injection efficiencies and beam shape compare well with measured efficiencies and a preliminary measurement of the beam distribution after injection.
NASA Astrophysics Data System (ADS)
Escobar, Rodrigo; Akopian, David; Boppana, Rajendra
2015-03-01
Remote health monitoring systems involve energy-constrained devices, such as sensors and mobile gateways. Current data formats for communication of health data, such as DICOM and HL7, were not designed for multi-sensor applications or to enable the management of power-constrained devices in health monitoring processes. In this paper, a data format suitable for the collection of multiple sensor data, including readings and other operational parameters, is presented. By using the data format, system managers can assess energy consumption and plan realistic monitoring scenarios. The proposed data format not only outperforms other known data formats in terms of readability, flexibility, interoperability and validation of compliant documents, but also enables energy assessment capability for realistic data collection scenarios and maintains or even reduces the overhead introduced due to formatting. Additionally, we provide analytical methods to estimate incremental energy consumption by various sensors and experiments to measure the actual battery drain on smartphones.
Design for an efficient dynamic climate model with realistic geography
NASA Technical Reports Server (NTRS)
Suarez, M. J.; Abeles, J.
1984-01-01
Studies of long-term climate sensitivity which include realistic atmospheric dynamics are severely restricted by the expense of integrating atmospheric general circulation models, such as the models used at GSFC. An alternative is a dynamic model of much lower horizontal or vertical resolution. The model of Held and Suarez uses only two levels in the vertical and, although it has conventional grid resolution in the meridional direction, horizontal resolution is reduced by keeping only a few degrees of freedom in the zonal wavenumber spectrum. Without zonally asymmetric forcing this model simulates a day in roughly 1/2 second on a CRAY. The model under discussion is a fully finite-differenced, zonally asymmetric version of the Held-Suarez model. It is anticipated that speeds of a few seconds per simulated day can be obtained, roughly 50 times faster than moderate-resolution, multilayer GCMs.
North Alabama Lightning Mapping Array (LMA): VHF Source Retrieval Algorithm and Error Analyses
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Solakiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J.; Bailey, J.; Krider, E. P.; Bateman, M. G.; Boccippio, D.
2003-01-01
Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA Marshall Space Flight Center (MSFC) and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix Theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50 ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results. However, for many source locations, the Curvature Matrix Theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
Source-receptor matrix calculation with a Lagrangian particle dispersion model in backward mode
NASA Astrophysics Data System (ADS)
Seibert, P.; Frank, A.
2003-08-01
The possibility of calculating linear source-receptor relationships for the transport of atmospheric trace substances with a Lagrangian particle dispersion model (LPDM) running in backward mode is shown and presented with many tests and examples. The derivation includes the action of sources and of any first-order processes (transformation with prescribed rates, dry and wet deposition, radioactive decay, ...). The backward mode is computationally advantageous if the number of receptors is less than the number of sources considered. The combination of an LPDM with the backward (adjoint) methodology is especially attractive for the application to point measurements, which can be handled without artificial numerical diffusion. Practical hints are provided for source-receptor calculations with different settings, both in forward and backward mode. The equivalence of forward and backward calculations is shown in simple tests for release and sampling of particles, pure wet deposition, pure convective redistribution and realistic transport over a short distance. Furthermore, an application example explaining measurements of Cs-137 in Stockholm as transport from areas heavily contaminated in the Chernobyl disaster is included.
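The computational advantage of the backward mode follows directly from the linearity of the source-receptor relationship c = M s: a forward run fills one column of M per source, whereas a backward (adjoint) run fills one row per receptor. The toy example below, with a synthetic matrix, shows the equivalence and the cost argument.

    # Forward/backward equivalence for a linear source-receptor relationship c = M s.
    import numpy as np

    rng = np.random.default_rng(0)
    n_sources, n_receptors = 500, 3
    M = rng.random((n_receptors, n_sources))     # stand-in for the transport operator
    s = rng.random(n_sources)                    # source strengths

    c_forward = M @ s                                      # receptor values, forward view
    rows = [M.T[:, j] for j in range(n_receptors)]         # one backward run per receptor
    c_backward = np.array([row @ s for row in rows])

    print(np.allclose(c_forward, c_backward))    # True
    # Three backward runs replace five hundred forward runs for this configuration.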
Accuracy-preserving source term quadrature for third-order edge-based discretization
NASA Astrophysics Data System (ADS)
Nishikawa, Hiroaki; Liu, Yi
2017-09-01
In this paper, we derive a family of source term quadrature formulas for preserving third-order accuracy of the node-centered edge-based discretization for conservation laws with source terms on arbitrary simplex grids. A three-parameter family of source term quadrature formulas is derived, and as a subset, a one-parameter family of economical formulas is identified that does not require second derivatives of the source term. Among the economical formulas, a unique formula is then derived that does not require gradients of the source term at neighbor nodes, thus leading to a significantly smaller discretization stencil for source terms. All the formulas derived in this paper do not require a boundary closure, and therefore can be directly applied at boundary nodes. Numerical results are presented to demonstrate third-order accuracy at interior and boundary nodes for one-dimensional grids and linear triangular/tetrahedral grids over straight and curved geometries.
Fresnel's laws, ceteris paribus.
Wright, Aaron Sidney
2017-08-01
This article is about structural realism, historical continuity, laws of nature, and ceteris paribus clauses. Fresnel's Laws of optics support Structural Realism because they are a scientific structure that has survived theory change. However, the history of Fresnel's Laws which has been depicted in debates over realism since the 1980s is badly distorted. Specifically, claims that J. C. Maxwell or his followers believed in an ontologically-subsistent electromagnetic field, and gave up the aether, before Einstein's annus mirabilis in 1905 are indefensible. Related claims that Maxwell himself did not believe in a luminiferous aether are also indefensible. This paper corrects the record. In order to trace Fresnel's Laws across significant ontological changes, they must be followed past Einstein into modern physics and nonlinear optics. I develop the philosophical implications of a more accurate history, and analyze Fresnel's Laws' historical trajectory in terms of dynamic ceteris paribus clauses. Structuralists have not embraced ceteris paribus laws, but they continue to point to Fresnel's Laws to resist anti-realist arguments from theory change. Fresnel's Laws fit the standard definition of a ceteris paribus law as a law applicable only in particular circumstances. Realists who appeal to the historical continuity of Fresnel's Laws to combat anti-realists must incorporate ceteris paribus laws into their metaphysics. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yang, Junhua; Ji, Zhenming; Chen, Deliang; Kang, Shichang; Fu, Congshen; Duan, Keqin; Shen, Miaogen
2018-06-01
The application of satellite radiance assimilation can improve the simulation of precipitation by numerical weather prediction models. However, substantial quantities of satellite data, especially those derived from low-level (surface-sensitive) channels, are rejected for use because of the difficulty in realistically modeling land surface emissivity and energy budgets. Here, we used an improved land use and leaf area index (LAI) dataset in the WRF-3DVAR assimilation system to explore the benefit of higher-quality land surface information for rainfall simulation, taking the Shule River Basin in the northeastern Tibetan Plateau as a case study. The results for July 2013 show that, for low-level channels (e.g., channel 3), the underestimation of brightness temperature in the original simulation was largely removed by more realistic land surface information. In addition, the realistic land use and LAI data allowed more satellite radiance data to pass the deviation test and be used in the assimilation, which resulted in improved initial driving fields and better simulation in terms of temperature, relative humidity, vertical convection, and cumulative precipitation.
NASA Astrophysics Data System (ADS)
Ghara, Raghunath; Mellema, Garrelt; Giri, Sambit K.; Choudhury, T. Roy; Datta, Kanan K.; Majumdar, Suman
2018-05-01
Three-dimensional radiative transfer simulations of the epoch of reionization can produce realistic results, but are computationally expensive. On the other hand, simulations relying on one-dimensional radiative transfer solutions are faster but limited in accuracy due to their more approximate nature. Here, we compare the performance of the reionization simulation codes GRIZZLY and C2-RAY which use 1D and 3D radiative transfer schemes, respectively. The comparison is performed using the same cosmological density fields, halo catalogues, and source properties. We find that the ionization maps, as well as the 21-cm signal maps from these two simulations are very similar even for complex scenarios which include thermal feedback on low-mass haloes. Statistical quantities such as the power spectrum of the brightness temperature fluctuations agree between the two schemes to within 10 per cent throughout the entire reionization history. GRIZZLY seems to perform slightly better than the seminumerical approaches considered in Majumdar et al. which are based on the excursion set principle. We argue that GRIZZLY can be efficiently used for exploring parameter space, establishing observation strategies, and estimating parameters from 21-cm observations.
The Influence of Age and Skull Conductivity on Surface and Subdermal Bipolar EEG Leads
Wendel, Katrina; Väisänen, Juho; Seemann, Gunnar; Hyttinen, Jari; Malmivuo, Jaakko
2010-01-01
Bioelectric source measurements are influenced by the measurement location as well as the conductive properties of the tissues. Volume conductor effects such as the poorly conducting bones or the moderately conducting skin are known to affect the measurement precision and accuracy of the surface electroencephalography (EEG) measurements. This paper investigates the influence of age via skull conductivity upon surface and subdermal bipolar EEG measurement sensitivity conducted on two realistic head models from the Visible Human Project. Subdermal electrodes (a.k.a. subcutaneous electrodes) are implanted on the skull beneath the skin, fat, and muscles. We studied the effect of age upon these two electrode types according to the scalp-to-skull conductivity ratios of 5, 8, 15, and 30 : 1. The effects on the measurement sensitivity were studied by means of the half-sensitivity volume (HSV) and the region of interest sensitivity ratio (ROISR). The results indicate that the subdermal implantation notably enhances the precision and accuracy of EEG measurements by a factor of eight compared to the scalp surface measurements. In summary, the evidence indicates that both surface and subdermal EEG measurements yield recordings with better precision and accuracy in younger patients. PMID:20130812
NASA Astrophysics Data System (ADS)
Arneitz, P.; Leonhardt, R.; Fabian, K.; Egli, R.
2017-12-01
Historical and paleomagnetic data are the two main sources of information about the long-term geomagnetic field evolution. Historical observations extend to the late Middle Ages, and prior to the 19th century, they consisted mainly of pure declination measurements from navigation and orientation logs. Field reconstructions going back further in time rely solely on magnetization acquired by rocks, sediments, and archaeological artefacts. The combined dataset is characterized by a strongly inhomogeneous spatio-temporal distribution and highly variable data reliability and quality. Therefore, an adequate weighting of the data that correctly accounts for data density, type, and realistic error estimates represents the major challenge for an inversion approach. Until now, there has not been a fully self-consistent geomagnetic model that correctly recovers the variation of the geomagnetic dipole together with the higher-order spherical harmonics. Here we present a new geomagnetic field model for the last 4 kyrs based on historical, archeomagnetic and volcanic records. The iterative Bayesian inversion approach targets the implementation of reliable error treatment, which allows different record types to be combined in a fully self-consistent way. Modelling results will be presented along with a thorough analysis of model limitations, validity and sensitivity.
The non-uniformity of fossil preservation.
Holland, Steven M
2016-07-19
The fossil record provides the primary source of data for calibrating the origin of clades. Although minimum ages of clades are given by the oldest preserved fossil, these underestimate the true age, which must be bracketed by probabilistic methods based on multiple fossil occurrences. Although most of these methods assume uniform preservation rates, this assumption is unsupported over geological timescales. On geologically long timescales (more than 10 Myr), the origin and cessation of sedimentary basins, and long-term variations in tectonic subsidence, eustatic sea level and sedimentation rate control the availability of depositional facies that preserve the environments in which species lived. The loss of doomed sediments, those with a low probability of preservation, imparts a secular trend to fossil preservation. As a result, the fossil record is spatially and temporally non-uniform. Models of fossil preservation should reflect this non-uniformity by using empirical estimates of fossil preservation that are spatially and temporally partitioned, or by using indirect proxies of fossil preservation. Geologically realistic models of preservation will provide substantially more reliable estimates of the origination of clades. This article is part of the themed issue 'Dating species divergences using rocks and clocks'. © 2016 The Author(s).
Primary productivity and the prospects for biofuels in the United Kingdom
NASA Astrophysics Data System (ADS)
Lawson, G. J.; Callaghan, T. V.
1983-09-01
Estimates of land use and plant productivity are combined to predict total annual primary production in the UK as 252 million tonnes dry matter (10.5 t ha⁻¹ yr⁻¹). Annual above ground production is predicted to be 165 Mt (6.9 t ha⁻¹ yr⁻¹). Within these totals, intensive agriculture contributes 60%, productive woodland 8%, natural vegetation 26% and urban vegetation 5%. However, only 25% of total plant production is cropped by man and animals, and most of this is subsequently discarded as wastes and residues. 2112 PJ of organic material is available for fuel without reducing food or fibre production, but since much of this could not be economically collected, 859 PJ is calculated as a more realistic biofuel contribution by the year 2000. After deducting 50% conversion losses, this could save £1 billion (1979 prices) in oil imports. Short rotation energy plantations, forest residues, coppice woodlands, animal and crop wastes, industrial and domestic wastes, catch crops, natural vegetation and urban vegetation all have immediate or short term potential as biofuel sources. Sensitive planning is required to reduce environmental impact, but in some cases more diverse wildlife habitats may be created.
Space and time in ergodic geomorphology
NASA Astrophysics Data System (ADS)
Seyed Ebrahimi, S.
2009-04-01
Epistemological perspectives rank among the main factors contributing to word coinage in scientific literature. New words come into existence as new epistemological fields evolve (Sack 1992). Each field, however, tends to impose its own interpretation on the words associated with it. To illustrate the point, certain words retain formal structure and relate to the same subject area, but can nonetheless be employed in totally different epistemological fields. The semantic content undergoes a drastic change in each case. Granted that numerous theoretical stances can be identified within the discipline of geomorphology, acquaintance with the semantic content of certain words should help toward a more realistic understanding of what the theorists and practitioners in this field have in mind. This paper, which is based on an analysis of these three theoretical positions, aims to highlight the importance of the semantic content of the term equilibrium as utilized in ergodic geomorphology, and the ways in which it is construed from different viewpoints. For this purpose, the authors have of necessity availed themselves of dated sources, rather than up-to-date ones. The paper also argues that familiarity with concepts such as these will ensure a better grasp of the paradigms in question.
E-Mail Molecules—Individualizing the Large Lecture Class
NASA Astrophysics Data System (ADS)
Wamser, Carl C.
2003-11-01
All students in the organic chemistry class are assigned a unique set of nine molecules to report on as optional extra credit assignments. The molecules are taken from a list containing over 200 molecules on the class Web site; they represent an assortment of biologically relevant compounds, from acetaminophen to yohimbine. Once a week, students may submit information about one of the molecules for two points extra credit (where the course includes a total of over 600 points from traditional quizzes and exams). The information requested about the molecules varies slightly each term as student expertise grows, for example, molecular formula, hybridizations, functional groups, or number of stereocenters, but always includes biological relevance and sources of information. Initially students submitted data directly to the instructor by e-mail, but submissions now are handled by a Web-based course management system (WebCT). The goal is to give students individualized assignments that are relatively realistic in light of their future careers in health sciences. Nearly all of the students do some of the molecules, and many students do all of them. About 30-40% of the students who do the assignments regularly gain a grade benefit. Student responses to the exercise have been positive.
Hierarchical statistical modeling of xylem vulnerability to cavitation.
Ogle, Kiona; Barber, Jarrett J; Willson, Cynthia; Thompson, Brenda
2009-01-01
Cavitation of xylem elements diminishes the water transport capacity of plants, and quantifying xylem vulnerability to cavitation is important to understanding plant function. Current approaches to analyzing hydraulic conductivity (K) data to infer vulnerability to cavitation suffer from problems such as the use of potentially unrealistic vulnerability curves, difficulty interpreting parameters in these curves, a statistical framework that ignores sampling design, and an overly simplistic view of uncertainty. This study illustrates how two common curves (exponential-sigmoid and Weibull) can be reparameterized in terms of meaningful parameters: maximum conductivity (k(sat)), water potential (-P) at which percentage loss of conductivity (PLC) =X% (P(X)), and the slope of the PLC curve at P(X) (S(X)), a 'sensitivity' index. We provide a hierarchical Bayesian method for fitting the reparameterized curves to K(H) data. We illustrate the method using data for roots and stems of two populations of Juniperus scopulorum and test for differences in k(sat), P(X), and S(X) between different groups. Two important results emerge from this study. First, the Weibull model is preferred because it produces biologically realistic estimates of PLC near P = 0 MPa. Second, stochastic embolisms contribute an important source of uncertainty that should be included in such analyses.
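As a hedged sketch of such a reparameterization (one algebraic route for the Weibull curve; the function name and the exact form used by the authors are assumptions, not taken from the paper):

    import numpy as np

    def weibull_plc(P, P_X, S_X, X=50.0):
        # PLC(P) = 100 * (1 - exp(-(P/b)**c)) rewritten in terms of P_X (the water
        # potential magnitude, in MPa, at which PLC = X %) and S_X (the slope of
        # the PLC curve at P_X, in % per MPa).  Illustrative parameterization only.
        L = -np.log(1.0 - X / 100.0)                       # equals (P_X / b)**c
        c = S_X * P_X / (100.0 * L * (1.0 - X / 100.0))    # from dPLC/dP evaluated at P_X
        b = P_X / L ** (1.0 / c)
        return 100.0 * (1.0 - np.exp(-(P / b) ** c))

    # Example: a curve with P_50 = 4 MPa and a slope of 30 % per MPa at P_50;
    # hydraulic conductivity would then follow as K(P) = k_sat * (1 - PLC(P)/100).
    print(weibull_plc(np.array([2.0, 4.0, 6.0]), P_X=4.0, S_X=30.0))

Parameterizing directly in P_X and S_X, rather than the raw Weibull scale and shape, is what makes the fitted quantities biologically interpretable and comparable across groups.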
NASA Astrophysics Data System (ADS)
Käppeli, R.; Mishra, S.
2016-03-01
Context. Many problems in astrophysics feature flows which are close to hydrostatic equilibrium. However, standard numerical schemes for compressible hydrodynamics may be deficient in approximating this stationary state, where the pressure gradient is nearly balanced by gravitational forces. Aims: We aim to develop a second-order well-balanced scheme for the Euler equations. The scheme is designed to mimic a discrete version of the hydrostatic balance. It therefore can resolve a discrete hydrostatic equilibrium exactly (up to machine precision) and propagate perturbations, on top of this equilibrium, very accurately. Methods: A local second-order hydrostatic equilibrium preserving pressure reconstruction is developed. Combined with a standard central gravitational source term discretization and numerical fluxes that resolve stationary contact discontinuities exactly, the well-balanced property is achieved. Results: The resulting well-balanced scheme is robust and simple enough to be very easily implemented within any existing computer code that solves time explicitly or implicitly the compressible hydrodynamics equations. We demonstrate the performance of the well-balanced scheme for several astrophysically relevant applications: wave propagation in stellar atmospheres, a toy model for core-collapse supernovae, convection in carbon shell burning, and a realistic proto-neutron star.
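A minimal sketch of the kind of local hydrostatic pressure reconstruction the abstract describes (schematic notation; the actual scheme may differ in detail): within cell i the pressure used at the cell interfaces is extrapolated along the discrete hydrostatic balance,

\[
p_i^{\rm eq}(x) \;=\; p_i + \rho_i\, g_i\,(x - x_i),
\qquad\text{mimicking}\qquad
\left.\frac{dp}{dx}\right|_{\rm hydro} = \rho\, g ,
\]

so that for a state in discrete hydrostatic equilibrium the interface pressure differences exactly cancel the standard central discretization of the gravitational source term, and the equilibrium is preserved to machine precision while perturbations on top of it propagate accurately.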
Space Radiation and Human Exposures, A Primer.
Nelson, Gregory A
2016-04-01
The space radiation environment is a complex field comprised primarily of charged particles spanning energies over many orders of magnitude. The principal sources of these particles are galactic cosmic rays, the Sun and the trapped radiation belts around the earth. Superimposed on a steady influx of cosmic rays and a steady outward flux of low-energy solar wind are short-term ejections of higher energy particles from the Sun and an 11-year variation of solar luminosity that modulates cosmic ray intensity. Human health risks are estimated from models of the radiation environment for various mission scenarios, the shielding of associated vehicles and the human body itself. Transport models are used to propagate the ambient radiation fields through realistic shielding levels and materials to yield radiation field models inside spacecraft. Then, informed by radiobiological experiments and epidemiology studies, estimates are made for various outcome measures associated with impairments of biological processes, losses of function or mortality. Cancer-associated risks have been formulated in a probabilistic model while management of non-cancer risks are based on permissible exposure limits. This article focuses on the various components of the space radiation environment and the human exposures that it creates.
Development of a countywide recycling program for Polk County, Wisconsin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Designing a recycling program for a rural county presents many more challenges in terms of transportation of materials and funding sources. Rural counties and communities typically have much smaller budgets and resources to draw from. In order to create a program that could realistically be implemented and also have widespread support, it was decided to allow ample time for public involvement in the design process. A multi-faceted approach was adopted to facilitate participation by individuals involved in solid waste handling and the general public. The approach included the use of surveys, formation of an advisory committee, public meetings, presentations to civic groups, personal contacts, news releases, and a logo contest. The public involvement turned out to be invaluable. Throughout the year, many concepts and ideas were presented for feedback. Consequently, some aspects of the program were modified, some were scrapped altogether, and a few new ideas were added. Undoubtedly, the process of refinement will continue as the program moves into the implementation phase. The extensive public involvement has resulted in strong support for the countywide program from many sectors, including private haulers and recycling businesses, local officials and county board supervisors, civic groups, environmental groups, and the general public.
The non-uniformity of fossil preservation
2016-01-01
The fossil record provides the primary source of data for calibrating the origin of clades. Although minimum ages of clades are given by the oldest preserved fossil, these underestimate the true age, which must be bracketed by probabilistic methods based on multiple fossil occurrences. Although most of these methods assume uniform preservation rates, this assumption is unsupported over geological timescales. On geologically long timescales (more than 10 Myr), the origin and cessation of sedimentary basins, and long-term variations in tectonic subsidence, eustatic sea level and sedimentation rate control the availability of depositional facies that preserve the environments in which species lived. The loss of doomed sediments, those with a low probability of preservation, imparts a secular trend to fossil preservation. As a result, the fossil record is spatially and temporally non-uniform. Models of fossil preservation should reflect this non-uniformity by using empirical estimates of fossil preservation that are spatially and temporally partitioned, or by using indirect proxies of fossil preservation. Geologically realistic models of preservation will provide substantially more reliable estimates of the origination of clades. This article is part of the themed issue ‘Dating species divergences using rocks and clocks’. PMID:27325828
BenRedjem Romdhane, Yosr; Elbour, Monia; Carbone, Marianna; Ciavatta, Maria Letizia; Gavagnin, Margherita; Mathieu, Véronique; Lefranc, Florence; Ktari, Leila; Ben Mustapha, Karim; Boudabous, Abdellatif; Kiss, Robert
2016-01-01
Marine sponges of the Irciniidae family contain both bioactive furanosesterterpene tetronic acids (FTAs) and prenylated hydroquinones (PHQs). Both classes of compounds are known for their anti-inflammatory, antioxidant, and antimicrobial properties and known to display growth inhibitory effects against various human tumor cell lines. However, the different experimental conditions of the reported in vitro bioassays, carried out on different cancer cell lines within separate studies, prevent realistic actual discrimination between the two classes of compounds from being carried out in terms of growth inhibitory effects. In the present work, a chemical investigation of irciniid sponges from Tunisian coasts led to the purification of three known FTAs and three known PHQs. The in vitro growth inhibitory properties of the six purified compounds have been evaluated in the same experiment in a panel of five human and one murine cancer cell lines displaying various levels of sensitivity to proapoptotic stimuli. Surprisingly, FTAs and PHQs elicited distinct profiles of growth inhibitory-responses, differing by one to two orders of magnitude in favor of the PHQs in all cell lines. The obtained comparative results are discussed in the light of a better selection of drug candidates from natural sources. PMID:27597966
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul L. Wichlacz
2003-09-01
This source-term summary document is intended to describe the current understanding of contaminant source terms and the conceptual model for potential source-term release to the environment at the Idaho National Engineering and Environmental Laboratory (INEEL), as presented in published INEEL reports. The document presents a generalized conceptual model of the sources of contamination and describes the general categories of source terms, primary waste forms, and factors that affect the release of contaminants from the waste form into the vadose zone and Snake River Plain Aquifer. Where the information has previously been published and is readily available, summaries of the inventory of contaminants are also included. Uncertainties that affect the estimation of the source term release are also discussed where they have been identified by the Source Term Technical Advisory Group. Areas in which additional information is needed (i.e., research needs) are also identified.
Modelling remediation scenarios in historical mining catchments.
Gamarra, Javier G P; Brewer, Paul A; Macklin, Mark G; Martin, Katherine
2014-01-01
Local remediation measures, particularly those undertaken in historical mining areas, can often be ineffective or even deleterious because erosion and sedimentation processes operate at spatial scales beyond those typically used in point-source remediation. Based on realistic simulations of a hybrid landscape evolution model combined with stochastic rainfall generation, we demonstrate that similar remediation strategies may result in differing effects across three contrasting European catchments depending on their topographic and hydrologic regimes. Based on these results, we propose a conceptual model of catchment-scale remediation effectiveness based on three basic catchment characteristics: the degree of contaminant source coupling, the ratio of contaminated to non-contaminated sediment delivery, and the frequency of sediment transport events.
MCNP simulation of a Theratron 780 radiotherapy unit.
Miró, R; Soler, J; Gallardo, S; Campayo, J M; Díez, S; Verdú, G
2005-01-01
A Theratron 780 (MDS Nordion) 60Co radiotherapy unit has been simulated with the Monte Carlo code MCNP. The unit has been realistically modelled: the cylindrical source capsule and its housing, the rectangular collimator system, both the primary and secondary jaws and the air gaps between the components. Different collimator openings, ranging from 5 x 5 cm2 to 20 x 20 cm2 (narrow and broad beams) at a source-surface distance equal to 80 cm have been used during the study. In the present work, we have calculated spectra as a function of field size. A study of the variation of the electron contamination of the 60Co beam has also been performed.
NASA Astrophysics Data System (ADS)
Barnard, Harold S.; MacDowell, A. A.; Parkinson, D. Y.; Mandal, P.; Czabaj, M.; Gao, Y.; Maillet, E.; Blank, B.; Larson, N. M.; Ritchie, R. O.; Gludovatz, B.; Acevedo, C.; Liu, D.
2017-06-01
At the Advanced Light Source (ALS), Beamline 8.3.2 performs hard X-ray micro-tomography under conditions of high temperature, pressure, mechanical loading, and other realistic conditions using environmental test cells. With scan times of 10s-100s of seconds, the microstructural evolution of materials can be directly observed over multiple time steps spanning prescribed changes in the sample environment. This capability enables in-situ quasi-static mechanical testing of materials. We present an overview of our in-situ mechanical testing capabilities and recent hardware developments that enable flexural testing at high temperature and in combination with acoustic emission analysis.
Photometry in the dark: time dependent visibility of low intensity light sources.
Poelman, Dirk; Smet, Philippe F
2010-12-06
This paper aims at describing the perceived brightness of persistent luminescent materials for emergency signage. In case of emergency, typically, a fully light adapted person is left in the dark, except for the emergency sign. The available photometric models cannot describe the visibility of such a light source, as they do not consider the slow dark adaptation of the human eye. The model proposed here fully takes into account the shift from photopic to scotopic vision, the related shift in spectral sensitivity and the dark adaptation. The resulting metric is a 'visibility index' and preliminary tests show that it more realistically describes the perceived brightness of persistent luminescent materials than the common photometric standards.
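Purely as an illustration of the ingredients such a metric combines (the blending function, the adaptation time constant tau, and the function name below are placeholder assumptions, not the authors' model):

    import numpy as np

    def visibility_index(spectrum, wavelengths, t_minutes,
                         V_photopic, V_scotopic, tau=8.0):
        # Blend photopic and scotopic luminance as the eye dark-adapts.
        # V_photopic / V_scotopic are luminous-efficiency curves sampled on
        # `wavelengths` (nm); `spectrum` is the source spectrum on the same grid.
        # tau (minutes) is an illustrative adaptation time constant.
        w = 1.0 - np.exp(-t_minutes / tau)      # 0 = fully photopic, 1 = fully scotopic
        dlam = np.gradient(wavelengths)
        L_photopic = 683.0 * np.sum(spectrum * V_photopic * dlam)
        L_scotopic = 1700.0 * np.sum(spectrum * V_scotopic * dlam)
        return (1.0 - w) * L_photopic + w * L_scotopic

The key departure from standard photometry is that the weighting between the photopic and scotopic integrals is made time dependent, so a greenish-blue persistent phosphor gains apparent brightness as the observer dark-adapts.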
An asymptotic theory of supersonic propeller noise
NASA Technical Reports Server (NTRS)
Envia, Edmane
1992-01-01
A theory for predicting the noise field of supersonic propellers with realistic blade geometries is presented. The theory, which utilizes a large-blade-count approximation, provides an efficient formula for predicting the radiation of sound from all three sources of propeller noise. Comparisons with a full numerical integration indicate that the levels predicted by this formula are quite accurate. Calculations also show that, for high speed propellers, the noise radiated by the Lighthill quadrupole source is rather substantial when compared with the noise radiated by the blade thickness and loading sources. Results from a preliminary application of the theory indicate that the peak noise level generated by a supersonic propeller initially increases with increasing tip helical Mach number, but eventually reaches a plateau and does not increase further. The predicted trend shows qualitative agreement with the experimental observations.
Samadi, Samareh; Amini, Ladan; Cosandier-Rimélé, Delphine; Soltanian-Zadeh, Hamid; Jutten, Christian
2013-01-01
In this paper, we present a fast method to extract the sources related to interictal epileptiform state. The method is based on general eigenvalue decomposition using two correlation matrices during: 1) periods including interictal epileptiform discharges (IED) as a reference activation model and 2) periods excluding IEDs or abnormal physiological signals as background activity. After extracting the most similar sources to the reference or IED state, IED regions are estimated by using multiobjective optimization. The method is evaluated using both realistic simulated data and actual intracerebral electroencephalography recordings of patients suffering from focal epilepsy. These patients are seizure-free after the resective surgery. Quantitative comparisons of the proposed IED regions with the visually inspected ictal onset zones by the epileptologist and another method of identification of IED regions reveal good performance. PMID:23428609
Car indoor air pollution - analysis of potential sources
2011-01-01
The population of industrialized countries such as the United States or of countries from the European Union spends, on average, more than one hour each day in vehicles. In this respect, numerous studies have so far addressed outdoor air pollution that arises from traffic. By contrast, only little is known about indoor air quality in vehicles and influences by non-vehicle sources. Therefore the present article aims to summarize recent studies that address, e.g., particulate matter exposure. It can be stated that although there is a large amount of data present for outdoor air pollution, research in the area of indoor air quality in vehicles is still limited. Especially, knowledge on non-vehicular sources is missing. In this respect, an understanding of the effects and interactions of, e.g., tobacco smoke under realistic automobile conditions should be achieved in future. PMID:22177291
Economic challenges of hybrid microgrid: An analysis and approaches for rural electrification
NASA Astrophysics Data System (ADS)
Habibullah, Mohammad; Mahmud, Khizir; Koçar, Günnur; Islam, A. K. M. Sadrul; Salehin, Sayedus
2017-06-01
This paper focuses on the integration of three renewable resources: biogas, wind energy and solar energy, utilizing a biogas generator, a wind turbine, and solar PV panels, respectively, to analyze the technical and economic challenges of a hybrid micro-grid. The integration of these sources has been analyzed and optimized based on realistic data for a real location. Different combinations of these sources have been analyzed to find out the optimized combination based on the efficiency and the minimum cost of electricity (COE). Wind and solar energy are considered as the primary sources of power generation during off-peak hours, and any excess power is used to charge a battery bank. During peak hours, biogas generators produce power to support the additional demand. A business strategy to implement the integrated optimized system in rural areas is discussed.
Discontinuous model with semi analytical sheath interface for radio frequency plasma
NASA Astrophysics Data System (ADS)
Miyashita, Masaru
2016-09-01
Sumitomo Heavy Industries, Ltd. provides many products utilizing plasma. In this study, we focus on a radio frequency (RF) plasma source driven by an interior antenna. The plasma source is expected to deliver high density and low metal contamination. However, sputtering of the antenna cover by high-energy ions accelerated by the sheath voltage has remained problematic. We have developed a new model which can calculate the sheath voltage waveform in the RF plasma source within a realistic calculation time. The model is discontinuous: an electron fluid description of the plasma is connected to the usual Poisson equation in the antenna cover and chamber through a semi-analytical sheath interface. We estimate the sputtering distribution from the sheath voltage waveform calculated by this model, together with a sputtering yield model and an ion energy distribution function (IEDF) model. The estimated sputtering distribution reproduces the trend of the experimental results.
Modeling the response of small myelinated axons in a compound nerve to kilohertz frequency signals
NASA Astrophysics Data System (ADS)
Pelot, N. A.; Behrend, C. E.; Grill, W. M.
2017-08-01
Objective. There is growing interest in electrical neuromodulation of peripheral nerves, particularly autonomic nerves, to treat various diseases. Electrical signals in the kilohertz frequency (KHF) range can produce different responses, including conduction block. For example, EnteroMedics’ vBloc® therapy for obesity delivers 5 kHz stimulation to block the abdominal vagus nerves, but the mechanisms of action are unclear. Approach. We developed a two-part computational model, coupling a 3D finite element model of a cuff electrode around the human abdominal vagus nerve with biophysically-realistic electrical circuit equivalent (cable) model axons (1, 2, and 5.7 µm in diameter). We developed an automated algorithm to classify conduction responses as subthreshold (transmission), KHF-evoked activity (excitation), or block. We quantified neural responses across kilohertz frequencies (5-20 kHz), amplitudes (1-8 mA), and electrode designs. Main results. We found heterogeneous conduction responses across the modeled nerve trunk, both for a given parameter set and across parameter sets, although most suprathreshold responses were excitation, rather than block. The firing patterns were irregular near transmission and block boundaries, but otherwise regular, and mean firing rates varied with electrode-fibre distance. Further, we identified excitation responses at amplitudes above block threshold, termed ‘re-excitation’, arising from action potentials initiated at virtual cathodes. Excitation and block thresholds decreased with smaller electrode-fibre distances, larger fibre diameters, and lower kilohertz frequencies. A point source model predicted a larger fraction of blocked fibres and greater change of threshold with distance as compared to the realistic cuff and nerve model. Significance. Our findings of widespread asynchronous KHF-evoked activity suggest that conduction block in the abdominal vagus nerves is unlikely with current clinical parameters. Our results indicate that compound neural or downstream muscle force recordings may be unreliable as quantitative measures of neural activity for in vivo studies or as biomarkers in closed-loop clinical devices.
Modeling the response of small myelinated axons in a compound nerve to kilohertz frequency signals.
Pelot, N A; Behrend, C E; Grill, W M
2017-08-01
There is growing interest in electrical neuromodulation of peripheral nerves, particularly autonomic nerves, to treat various diseases. Electrical signals in the kilohertz frequency (KHF) range can produce different responses, including conduction block. For example, EnteroMedics' vBloc ® therapy for obesity delivers 5 kHz stimulation to block the abdominal vagus nerves, but the mechanisms of action are unclear. We developed a two-part computational model, coupling a 3D finite element model of a cuff electrode around the human abdominal vagus nerve with biophysically-realistic electrical circuit equivalent (cable) model axons (1, 2, and 5.7 µm in diameter). We developed an automated algorithm to classify conduction responses as subthreshold (transmission), KHF-evoked activity (excitation), or block. We quantified neural responses across kilohertz frequencies (5-20 kHz), amplitudes (1-8 mA), and electrode designs. We found heterogeneous conduction responses across the modeled nerve trunk, both for a given parameter set and across parameter sets, although most suprathreshold responses were excitation, rather than block. The firing patterns were irregular near transmission and block boundaries, but otherwise regular, and mean firing rates varied with electrode-fibre distance. Further, we identified excitation responses at amplitudes above block threshold, termed 're-excitation', arising from action potentials initiated at virtual cathodes. Excitation and block thresholds decreased with smaller electrode-fibre distances, larger fibre diameters, and lower kilohertz frequencies. A point source model predicted a larger fraction of blocked fibres and greater change of threshold with distance as compared to the realistic cuff and nerve model. Our findings of widespread asynchronous KHF-evoked activity suggest that conduction block in the abdominal vagus nerves is unlikely with current clinical parameters. Our results indicate that compound neural or downstream muscle force recordings may be unreliable as quantitative measures of neural activity for in vivo studies or as biomarkers in closed-loop clinical devices.
Yao, Jie; Li, Ming; Tang, Hua; Wang, Peng-Lai; Zhao, Yu-Xiao; McGrath, Colman; Mattheos, Nikos
2017-03-01
While research in terms of patient-centered care in implant therapy is growing, few studies have investigated patients' initial perceptions prior to consultation with the implant dentist. The aim of this cross-sectional study was to capture patients' initial information level, perceptions, as well as expectations from the implant therapy. A 34-item questionnaire was developed to investigate patients' preoperative information, perceptions and expectations from treatment with Dental Implants. The study was conducted in three locations (Hong Kong, SiChuan and JiangSu) during 2014-2015 with 277 patients. The main information source about implant therapy was the dentist or hygienist for less than half of the patients (n = 113, 42%). About 62.8% of participants considered that they were in general informed about implants, but only 17.7% felt confident with the information they had. More than 30% of the sample appeared to maintain dangerous misperceptions about Dental Implants: "Dental Implants require less care than natural teeth"; "Treatment with Dental Implants is appropriate for all patients with missing teeth"; "Dental Implants last longer than natural teeth"; and "Treatments with Dental Implants have no risks or complications." Patients were divided when asked whether "Dental Implants are as functional as natural teeth" (agreement frequency = 52.7%). Expectations from treatment outcome were commonly high, while there was a significant correlation between the overall mean of perception scores and outcome expectation scores (r = 0.32, P < 0.001). Overall, younger subjects (<45 years) and those with higher education level (bachelor and postgraduate) tended to present more realistic perceptions and lower outcome expectations. The majority of patients in this study presented relatively realistic perceptions. However, an alarming portion of the sample presented with inaccurate perceptions and unrealistic expectations, which the dental team would need to diagnose and correct prior to initiating implant treatment. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Spatial Fluctuations of the Intergalactic Temperature-Density Relation After Hydrogen Reionization
NASA Astrophysics Data System (ADS)
Keating, Laura C.; Puchwein, Ewald; Haehnelt, Martin G.
2018-04-01
The thermal state of the post-reionization IGM is sensitive to the timing of reionization and the nature of the ionizing sources. We have modelled here the thermal state of the IGM in cosmological radiative transfer simulations of a realistic, extended, spatially inhomogeneous hydrogen reionization process, carefully calibrated with Lyα forest data. We compare these with cosmological simulations run using a spatially homogeneous ionizing background. The simulations with a realistic growth of ionized regions and a realistic spread in reionization redshifts show, as expected, significant spatial fluctuations in the temperature-density relation (TDR) of the post-reionization IGM. The most recently ionized regions are hottest and exhibit a flatter TDR. In simulations consistent with the average TDR inferred from Lyα forest data, these spatial fluctuations have a moderate but noticeable effect on the statistical properties of the Lyα opacity of the IGM at z ˜ 4 - 6. This should be taken into account in accurate measurements of the thermal properties of the IGM and the free-streaming of dark matter from Lyα forest data in this redshift range. The spatial variations of the TDR predicted by our simulations are, however, smaller by about a factor two than would be necessary to explain the observed large spatial opacity fluctuations on large (≥ 50 h-1 comoving Mpc) scales at z ≳ 5.5.
Climate drift of AMOC, North Atlantic salinity and arctic sea ice in CFSv2 decadal predictions
NASA Astrophysics Data System (ADS)
Huang, Bohua; Zhu, Jieshun; Marx, Lawrence; Wu, Xingren; Kumar, Arun; Hu, Zeng-Zhen; Balmaseda, Magdalena A.; Zhang, Shaoqing; Lu, Jian; Schneider, Edwin K.; Kinter, James L., III
2015-01-01
There are potential advantages to extending operational seasonal forecast models to predict decadal variability but major efforts are required to assess the model fidelity for this task. In this study, we examine the North Atlantic climate simulated by the NCEP Climate Forecast System, version 2 (CFSv2), using a set of ensemble decadal hindcasts and several 30-year simulations initialized from realistic ocean-atmosphere states. It is found that a substantial climate drift occurs in the first few years of the CFSv2 hindcasts, which represents a major systematic bias and may seriously affect the model's fidelity for decadal prediction. In particular, it is noted that a major reduction of the upper ocean salinity in the northern North Atlantic weakens the Atlantic meridional overturning circulation (AMOC) significantly. This freshening is likely caused by the excessive freshwater transport from the Arctic Ocean and weakened subtropical water transport by the North Atlantic Current. A potential source of the excessive freshwater is the quick melting of sea ice, which also causes unrealistically thin ice cover in the Arctic Ocean. Our sensitivity experiments with adjusted sea ice albedo parameters produce a sustainable ice cover with realistic thickness distribution. It also leads to a moderate increase of the AMOC strength. This study suggests that a realistic freshwater balance, including a proper sea ice feedback, is crucial for simulating the North Atlantic climate and its variability.
Spatial fluctuations of the intergalactic temperature-density relation after hydrogen reionization
NASA Astrophysics Data System (ADS)
Keating, Laura C.; Puchwein, Ewald; Haehnelt, Martin G.
2018-07-01
The thermal state of the post-reionization IGM is sensitive to the timing of reionization and the nature of the ionizing sources. We have modelled here the thermal state of the IGM in cosmological radiative transfer simulations of a realistic, extended, spatially inhomogeneous hydrogen reionization process, carefully calibrated with Ly α forest data. We compare these with cosmological simulations run using a spatially homogeneous ionizing background. The simulations with a realistic growth of ionized regions and a realistic spread in reionization redshifts show, as expected, significant spatial fluctuations in the temperature-density relation (TDR) of the post-reionization IGM. The most recently ionized regions are hottest and exhibit a flatter TDR. In simulations consistent with the average TDR inferred from Ly α forest data, these spatial fluctuations have a moderate but noticeable effect on the statistical properties of the Ly α opacity of the IGM at z ˜ 4-6. This should be taken into account in accurate measurements of the thermal properties of the IGM and the free-streaming of dark matter from Ly α forest data in this redshift range. The spatial variations of the TDR predicted by our simulations are, however, smaller by about a factor of 2 than would be necessary to explain the observed large spatial opacity fluctuations on large (≥50 h-1 comoving Mpc) scales at z ≳ 5.5.
Secretary of The Navy Professor
1999-09-30
The goal of this research is to develop a predictive capability for the upper ocean circulation and atmospheric interactions using numerical models ... assimilation techniques to be used in these models. In addition, we are continuing the task of preparing long-term global surface fluxes for use in ocean ... NASA, NSF, and NOAA. APPROACH: We are using a suite of models forced with estimates of real winds, with very fine horizontal resolution and realistic ...
The Role of Sleep in the Health and Resiliency of Military Personnel
2011-04-01
enhancing biases, positive emotion, laughter, and repression of the trauma as a coping mechanism. Similar findings have been observed by others in ... computers, phones, video games, and other electronic devices. Realistic or perceived threat to life or of injury ... The need for instant ... Belenky & Balkin, 2006). 3.3 Resiliency in the Military: Resiliency is traditionally a term used in mechanical engineering to describe the physical ...
2013-05-01
Provide Other Designated Support; Fundamentals of Civil Support ... consist of new work in political science, political philosophy, and law. Moral Theory: The greatest test of any political idea or philosophy ... and hyperbole. Common beliefs, more aptly termed myths, were included in the pioneer research conducted in the early 1950s, when the social sciences ...
Seven ways to make a hypertext project fail
NASA Technical Reports Server (NTRS)
Glushko, Robert J.
1990-01-01
Hypertext is an exciting concept, but designing and developing hypertext applications of practical scale is hard. To make a project feasible and successful 'hypertext engineers' must overcome the following problems: (1) developing realistic expectations in the face of hypertext hype; (2) assembling a multidisciplinary project team; (3) establishing and following design guidelines; (4) dealing with installed base constraints; (5) obtaining usable source files; (6) finding appropriate software technology and methods; and (7) overcoming legal uncertainties about intellectual property concerns.
NASA Technical Reports Server (NTRS)
Teske, R. G.
1972-01-01
Type III solar bursts occurring in the absence of solar flares were observed to be accompanied by weak X-radiation. The energy scale of an OSO-3 soft X-ray ion chamber was assessed using realistic theoretical X-ray spectra. Relationships between soft solar X-rays and solar activity were investigated. These included optical studies, the role of the Type III acceleration mechanism in establishing the soft X-ray source volume, H alpha flare intensity variations, and gross magnetic field structure.
Photovoltaic power systems for rural areas of developing countries
NASA Technical Reports Server (NTRS)
Rosenblum, L.; Bifano, W. J.; Hein, G. F.; Ratajczak, A. F.
1979-01-01
Systems technology, reliability, and present and projected costs of photovoltaic systems are discussed using data derived from NASA, Lewis Research Center experience with photovoltaic systems deployed with a variety of users. Operating systems in two villages, one in Upper Volta and the other in southwestern Arizona are described. Energy cost comparisons are presented for photovoltaic systems versus alternative energy sources. Based on present system technology, reliability, and costs, photovoltaics provides a realistic energy option for developing nations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, Madison Theresa; Bates, Cameron Russell; Mckigney, Edward Allen
Accurate detector modeling is a requirement to design systems in many non-proliferation scenarios; by determining a Detector’s Response Function (DRF) to incident radiation, it is possible to characterize measurements of unknown sources. DRiFT is intended to post-process MCNP® output and create realistic detector spectra. Capabilities currently under development include the simulation of semiconductor, gas, and (as is discussed in this work) scintillator detector physics. Energy spectra and pulse shape discrimination (PSD) trends for incident photon and neutron radiation have been reproduced by DRiFT.
Pineda, Mari-Carmen; Strehlow, Brian; Kamp, Jasmine; Duckworth, Alan; Jones, Ross; Webster, Nicole S
2017-07-12
Dredging can cause increased suspended sediment concentrations (SSCs), light attenuation and sedimentation in marine communities. In order to determine the combined effects of dredging-related pressures on adult sponges, three species spanning different nutritional modes and morphologies were exposed to 5 treatment levels representing realistic dredging scenarios. Most sponges survived under low to moderate turbidity scenarios (SSCs of ≤33 mg L⁻¹, and a daily light integral of ≥0.5 mol photons m⁻² d⁻¹) for up to 28 d. However, under the highest turbidity scenario (76 mg L⁻¹, 0.1 mol photons m⁻² d⁻¹) there was 20% and 90% mortality of the phototrophic sponges Cliona orientalis and Carteriospongia foliascens respectively, and tissue regression in the heterotrophic Ianthella basta. All three sponge species exhibited mechanisms to effectively tolerate dredging-related pressures in the short term (e.g. oscula closure, mucus production and tissue regression), although reduced lipids and deterioration of sponge health suggest that longer term exposure to similar conditions is likely to result in higher mortality. These results suggest that the combination of high SSCs and low light availability can accelerate mortality, increasing the probability of biological effects, although there is considerable interspecies variability in how adult sponges respond to dredging pressures.
NASA Astrophysics Data System (ADS)
Pietikäinen, Joni-Pekka; Markkanen, Tiina; Sieck, Kevin; Jacob, Daniela; Korhonen, Johanna; Räisänen, Petri; Gao, Yao; Ahola, Jaakko; Korhonen, Hannele; Laaksonen, Ari; Kaurola, Jussi
2018-04-01
The regional climate model REMO was coupled with the FLake lake model to include an interactive treatment of lakes. Using this new version, the Fenno-Scandinavian climate and lake characteristics were studied in a set of 35-year hindcast simulations. Additionally, sensitivity tests related to the parameterization of snow albedo were conducted. Our results show that overall the new model version improves the representation of the Fenno-Scandinavian climate in terms of 2 m temperature and precipitation, but the downside is that an existing wintertime cold bias in the model is enhanced. The lake surface water temperature, ice depth and ice season length were analyzed in detail for 10 Finnish, 4 Swedish and 2 Russian lakes and 1 Estonian lake. The results show that the model can reproduce these characteristics with reasonably high accuracy. The cold bias during winter causes overestimation of ice layer thickness, for example, at several of the studied lakes, but overall the values from the model are realistic and represent the lake physics well in a long-term simulation. We also analyzed the snow depth on ice from 10 Finnish lakes and vertical temperature profiles from 5 Finnish lakes and the model results are realistic.
Dissipative dark matter halos: The steady state solution
NASA Astrophysics Data System (ADS)
Foot, R.
2018-02-01
Dissipative dark matter, where dark matter particle properties closely resemble familiar baryonic matter, is considered. Mirror dark matter, which arises from an isomorphic hidden sector, is a specific and theoretically constrained scenario. Other possibilities include models with more generic hidden sectors that contain massless dark photons [unbroken U (1 ) gauge interactions]. Such dark matter not only features dissipative cooling processes but also is assumed to have nontrivial heating sourced by ordinary supernovae (facilitated by the kinetic mixing interaction). The dynamics of dissipative dark matter halos around rotationally supported galaxies, influenced by heating as well as cooling processes, can be modeled by fluid equations. For a sufficiently isolated galaxy with a stable star formation rate, the dissipative dark matter halos are expected to evolve to a steady state configuration which is in hydrostatic equilibrium and where heating and cooling rates locally balance. Here, we take into account the major cooling and heating processes, and numerically solve for the steady state solution under the assumptions of spherical symmetry, negligible dark magnetic fields, and that supernova sourced energy is transported to the halo via dark radiation. For the parameters considered, and assumptions made, we were unable to find a physically realistic solution for the constrained case of mirror dark matter halos. Halo cooling generally exceeds heating at realistic halo mass densities. This problem can be rectified in more generic dissipative dark matter models, and we discuss a specific example in some detail.
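Schematically (in my notation, not the paper's), the steady state solved for is the simultaneous solution of hydrostatic support and local energy balance in the dark halo,

\[
\frac{dP}{dr} \;=\; -\,\rho(r)\,\frac{G\,M_{\rm tot}(r)}{r^{2}},
\qquad
\Gamma_{\rm heat}(r) \;=\; \Lambda_{\rm cool}(r),
\]

where \(\Gamma_{\rm heat}\) is the volumetric heating rate delivered to radius r by dark radiation sourced by ordinary supernovae (via the kinetic mixing interaction) and \(\Lambda_{\rm cool}\) is the dissipative cooling rate of the dark plasma; the abstract's conclusion is that for mirror dark matter parameters \(\Lambda_{\rm cool}\) generically exceeds \(\Gamma_{\rm heat}\) at realistic halo densities, which is why no physical steady state was found in that constrained case.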
Development of an interpretive simulation tool for the proton radiography technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levy, M. C., E-mail: levymc@stanford.edu; Lawrence Livermore National Laboratory, Livermore, California 94551; Ryutov, D. D.
2015-03-15
Proton radiography is a useful diagnostic of high energy density (HED) plasmas under active theoretical and experimental development. In this paper, we describe a new simulation tool that interacts realistic laser-driven point-like proton sources with three dimensional electromagnetic fields of arbitrary strength and structure and synthesizes the associated high resolution proton radiograph. The present tool’s numerical approach captures all relevant physics effects, including effects related to the formation of caustics. Electromagnetic fields can be imported from particle-in-cell or hydrodynamic codes in a streamlined fashion, and a library of electromagnetic field “primitives” is also provided. This latter capability allows users to add a primitive, modify the field strength, rotate a primitive, and so on, while quickly generating a high resolution radiograph at each step. In this way, our tool enables the user to deconstruct features in a radiograph and interpret them in connection to specific underlying electromagnetic field elements. We show an example application of the tool in connection to experimental observations of the Weibel instability in counterstreaming plasmas, using ∼10⁸ particles generated from a realistic laser-driven point-like proton source, imaging fields which cover volumes of ∼10 mm³. Insights derived from this application show that the tool can support understanding of HED plasmas.
Fitted Hanbury-Brown Twiss radii versus space-time variances in flow-dominated models
NASA Astrophysics Data System (ADS)
Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan
2006-04-01
The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data.
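A minimal one-dimensional illustration of the procedure described above (real analyses are three-dimensional and azimuthally dependent; the source profile and q-range below are arbitrary assumptions):

    import numpy as np

    hbarc = 0.1973                                   # GeV fm
    x = np.linspace(-30.0, 30.0, 4001)               # fm
    dx = x[1] - x[0]
    S = np.exp(-np.abs(x) / 4.0)                     # deliberately non-Gaussian emission profile

    # "Space-time variance" radius computed directly from the emission function
    mean_x = np.sum(x * S) / np.sum(S)
    R_var = np.sqrt(np.sum((x - mean_x) ** 2 * S) / np.sum(S))

    # Two-particle correlation function from the Fourier transform of S
    q = np.linspace(0.01, 0.20, 40)                  # GeV/c
    ft = np.array([np.sum(S * np.exp(1j * qi * x / hbarc)) * dx for qi in q])
    C = 1.0 + np.abs(ft) ** 2 / (np.sum(S) * dx) ** 2

    # Gaussian fit C = 1 + exp(-R^2 (q/hbarc)^2): linear fit of ln(C - 1) vs (q/hbarc)^2
    slope = np.polyfit((q / hbarc) ** 2, np.log(C - 1.0), 1)[0]
    R_fit = np.sqrt(-slope)
    print(R_var, R_fit)                              # the two radii differ for non-Gaussian sources

For a truly Gaussian profile the two radii coincide; the mismatch for non-Gaussian profiles is what the paper quantifies for realistic hydrodynamic emission functions.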
Fitted Hanbury-Brown-Twiss radii versus space-time variances in flow-dominated models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan
2006-04-15
The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown-Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data.
Performance Evaluation of 98 CZT Sensors for Their Use in Gamma-Ray Imaging
NASA Astrophysics Data System (ADS)
Dedek, Nicolas; Speller, Robert D.; Spendley, Paul; Horrocks, Julie A.
2008-10-01
98 SPEAR sensors from eV Products have been evaluated for their use in a portable Compton camera. The sensors have a 5 mm × 5 mm × 5 mm CdZnTe crystal and are provided together with a preamplifier. The energy resolution was studied in detail for all sensors and was found to be 6% on average at 59.5 keV and 3% on average at 662 keV. The standard deviations of the corresponding energy resolution distributions are remarkably small (0.6% at 59.5 keV, 0.7% at 662 keV) and reflect the uniformity of the sensor characteristics. For a possible outside use the temperature dependence of the sensor performances was investigated for temperatures between 15 and 45 °C. A linear shift in calibration with temperature was observed. The energy resolution at low energies (81 keV) was found to deteriorate exponentially with temperature, while it stayed constant at higher energies (356 keV). A Compton camera built of these sensors was simulated. To obtain realistic energy spectra a suitable detector response function was implemented. To investigate the angular resolution of the camera a 137Cs point source was simulated. Reconstructed images of the point source were compared for perfect and realistic energy and position resolutions. The angular resolution of the camera was found to be better than 10°.
Path integrals with higher order actions: Application to realistic chemical systems
NASA Astrophysics Data System (ADS)
Lindoy, Lachlan P.; Huang, Gavin S.; Jordan, Meredith J. T.
2018-02-01
Quantum thermodynamic parameters can be determined using path integral Monte Carlo (PIMC) simulations. These simulations, however, become computationally demanding as the quantum nature of the system increases, although their efficiency can be improved by using higher order approximations to the thermal density matrix, specifically the action. Here we compare the standard, primitive approximation to the action (PA) and three higher order approximations, the Takahashi-Imada action (TIA), the Suzuki-Chin action (SCA) and the Chin action (CA). The resulting PIMC methods are applied to two realistic potential energy surfaces, for H2O and HCN-HNC, both of which are spectroscopically accurate and contain three-body interactions. We further numerically optimise, for each potential, the SCA parameter and the two free parameters in the CA, obtaining more significant improvements in efficiency than seen previously in the literature. For both H2O and HCN-HNC, accounting for all required potential and force evaluations, the optimised CA formalism is approximately twice as efficient as the TIA formalism and approximately an order of magnitude more efficient than the PA. The optimised SCA formalism shows similar efficiency gains to the CA for HCN-HNC but has similar efficiency to the TIA for H2O at low temperature. In H2O and HCN-HNC systems, the optimal value of the a1 CA parameter is approximately 1/3 , corresponding to an equal weighting of all force terms in the thermal density matrix, and similar to previous studies, the optimal α parameter in the SCA was ˜0.31. Importantly, poor choice of parameter significantly degrades the performance of the SCA and CA methods. In particular, for the CA, setting a1 = 0 is not efficient: the reduction in convergence efficiency is not offset by the lower number of force evaluations. We also find that the harmonic approximation to the CA parameters, whilst providing a fourth order approximation to the action, is not optimal for these realistic potentials: numerical optimisation leads to better approximate cancellation of the fifth order terms, with deviation between the harmonic and numerically optimised parameters more marked in the more quantum H2O system. This suggests that numerically optimising the CA or SCA parameters, which can be done at high temperature, will be important in fully realising the efficiency gains of these formalisms for realistic potentials.
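For orientation (standard textbook forms, not lifted from the paper): the primitive approximation splits the Boltzmann operator symmetrically over P imaginary-time slices,

\[
e^{-\beta \hat H} \;\approx\; \Big[\, e^{-\tau \hat V /2}\, e^{-\tau \hat T}\, e^{-\tau \hat V /2} \Big]^{P},
\qquad \tau = \beta/P,
\]

with errors in thermal averages vanishing as O(P⁻²), while the Takahashi-Imada action promotes the potential to an effective one containing the double commutator (single-particle form shown),

\[
V_{\rm TI} \;=\; V \;+\; \frac{\tau^{2}}{24}\,\big[\hat V,[\hat T,\hat V]\big]
           \;=\; V \;+\; \frac{\tau^{2}\hbar^{2}}{24\,m}\,\big|\nabla V\big|^{2},
\]

achieving O(P⁻⁴) convergence; the Suzuki-Chin and Chin actions distribute such force-squared terms over the beads with adjustable weights, which is where the α and a_1 parameters optimised in the paper enter.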
Semi-automatic Data Integration using Karma
NASA Astrophysics Data System (ADS)
Garijo, D.; Kejriwal, M.; Pierce, S. A.; Houser, P. I. Q.; Peckham, S. D.; Stanko, Z.; Hardesty Lewis, D.; Gil, Y.; Pennington, D. D.; Knoblock, C.
2017-12-01
Data integration applications are ubiquitous in scientific disciplines. A state-of-the-art data integration system accepts both a set of data sources and a target ontology as input, and semi-automatically maps the data sources onto the concepts and relationships in the target ontology. Mappings can be both complex and highly domain-specific. Once such a semantic model, expressing the mapping using community-wide standards, is acquired, the source data can be stored in a single repository or database using the semantics of the target ontology. Acquiring the mapping, however, is a labor-intensive process, and state-of-the-art artificial intelligence systems are unable to fully automate it using heuristics and algorithms alone. Instead, a more realistic goal is to develop adaptive tools that minimize user feedback (e.g., by offering good mapping recommendations) while at the same time making it intuitive and easy for the user both to correct errors and to define complex mappings. We present Karma, a data integration system that has been developed over multiple years in the information integration group at the Information Sciences Institute, a research institute at the University of Southern California's Viterbi School of Engineering. Karma is a state-of-the-art data integration tool that supports an interactive graphical user interface and has been applied across multiple domains over the last five years, including geospatial, biological, humanities and bibliographic applications. Karma allows a user to import their own ontology and datasets using widely used formats such as RDF, XML, CSV and JSON; it can be set up either locally or on a server, supports a native backend database for prototyping queries, and can even be seamlessly integrated into external computational pipelines, including those ingesting data via streaming data sources, Web APIs and SQL databases. We illustrate a Karma workflow at a conceptual level, along with a live demo, and show use cases of Karma specifically for the geosciences. In particular, we show how Karma can be used intuitively to obtain the mapping model between case-study data sources and a publicly available and expressive target ontology that has been designed to capture a broad set of concepts in geoscience with standardized, easily searchable names.
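To illustrate what the resulting semantic model amounts to, the sketch below hand-builds, with rdflib, the kind of column-to-ontology mapping that Karma helps a user construct interactively: each CSV column is lifted into RDF triples that use classes and properties from a target ontology. The namespace, file name and property names are purely hypothetical placeholders, and this is not Karma's API, only an illustration of the mapping output.

```python
import csv
from rdflib import Graph, Namespace, URIRef, Literal, RDF
from rdflib.namespace import XSD

# Hypothetical target ontology namespace and source file (placeholders only).
GEO = Namespace("http://example.org/geoscience-ontology#")
SOURCE_CSV = "stream_gauges.csv"   # assumed columns: station_id, river_name, discharge_m3s

g = Graph()
g.bind("geo", GEO)

with open(SOURCE_CSV, newline="") as f:
    for row in csv.DictReader(f):
        station = URIRef(f"http://example.org/station/{row['station_id']}")
        # The "semantic model": each column is mapped to an ontology class or property.
        g.add((station, RDF.type, GEO.StreamGauge))
        g.add((station, GEO.observesRiver, Literal(row["river_name"])))
        g.add((station, GEO.dischargeCubicMetresPerSecond,
               Literal(row["discharge_m3s"], datatype=XSD.double)))

print(g.serialize(format="turtle"))
```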
An open-source framework for stress-testing non-invasive foetal ECG extraction algorithms.
Andreotti, Fernando; Behar, Joachim; Zaunseder, Sebastian; Oster, Julien; Clifford, Gari D
2016-05-01
Over the past decades, many studies have been published on the extraction of the non-invasive foetal electrocardiogram (NI-FECG) from abdominal recordings. Most of these contributions claim to obtain excellent results in detecting foetal QRS (FQRS) complexes in terms of location. A small subset of authors have investigated the extraction of morphological features from the NI-FECG. However, due to the shortage of available public databases, the large variety of performance measures employed and the lack of open-source reference algorithms, most contributions cannot be meaningfully assessed. This article attempts to address these issues by presenting a standardised methodology for stress testing NI-FECG algorithms, including absolute data as well as extraction and evaluation routines. To that end, a large database of realistic artificial signals was created, totaling 145.8 h of multichannel data and over one million FQRS complexes. An important characteristic of this dataset is the inclusion of several non-stationary events (e.g. foetal movements, uterine contractions and heart rate fluctuations) that are critical for evaluating extraction routines. To demonstrate our testing methodology, three classes of NI-FECG extraction algorithms were evaluated: blind source separation (BSS), template subtraction (TS) and adaptive methods (AM). Experiments were conducted to benchmark the performance of eight NI-FECG extraction algorithms on the artificial database, focusing on FQRS detection and morphological analysis (foetal QT and T/QRS ratio). The overall median FQRS detection accuracies (i.e. considering all non-stationary events) for the best performing methods in each group were 99.9% for BSS, 97.9% for AM and 96.0% for TS. Both FQRS detection and the morphological parameters were shown to depend heavily on the extraction technique and the signal-to-noise ratio. In particular, it is shown that their evaluation in the source domain, obtained after using a BSS technique, should be avoided. Data, extraction algorithms and evaluation routines were released as part of the fecgsyn toolbox on Physionet under a GNU GPL open-source license. This contribution provides a standard framework for benchmarking and regulatory testing of NI-FECG extraction algorithms.
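Detection accuracies of the kind quoted above are conventionally computed by matching detected beats to reference annotations within a fixed tolerance window. The sketch below shows that standard beat-by-beat scoring scheme in outline; it is an illustration of the general approach (with an assumed 50 ms tolerance), not the fecgsyn evaluation routines themselves.

```python
def score_fqrs(reference, detected, tolerance=0.05):
    """Greedy one-to-one matching of detected FQRS times (s) to reference
    times (s) within +/- tolerance; returns sensitivity, PPV and F1."""
    reference, detected = sorted(reference), sorted(detected)
    used = [False] * len(detected)
    tp = 0
    for r in reference:
        # find the closest unused detection within the tolerance window
        best, best_dist = None, tolerance
        for j, d in enumerate(detected):
            if not used[j] and abs(d - r) <= best_dist:
                best, best_dist = j, abs(d - r)
        if best is not None:
            used[best] = True
            tp += 1
    fn = len(reference) - tp
    fp = len(detected) - tp
    se = tp / (tp + fn) if reference else 0.0
    ppv = tp / (tp + fp) if detected else 0.0
    f1 = 2 * se * ppv / (se + ppv) if (se + ppv) else 0.0
    return se, ppv, f1

# toy example: one missed beat and one spurious detection
ref = [0.40, 0.82, 1.25, 1.68, 2.10]
det = [0.41, 0.83, 1.66, 2.11, 2.55]
print(score_fqrs(ref, det))   # (0.8, 0.8, 0.8)
```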
NASA Technical Reports Server (NTRS)
Weaver, Clark J.; Douglass, Anne R.; Rood, Richard B.
1995-01-01
A three-dimensional transport model, which uses winds from a stratospheric data assimilation system, is used to study the transport of supersonic aircraft exhaust in the lower stratosphere. A passive tracer is continuously injected into the transport model. The tracer source distribution is based on realistic scenarios for the daily emission rate of reactive nitrogen species along all forecast flight routes. Winds are taken from northern hemisphere winter/spring months for 1979 and 1989; there are minimal differences between the tracer integrations for the two years. During the integration, peak tracer mixing ratios in the flight corridors are compared with the zonal mean and found to be greater by a factor of 2 or less. This implies that the zonal mean assumption used in two-dimensional models is reasonable during winter and spring. There is a preference for pollutant buildup in the heavily traveled North Pacific and North Atlantic flight corridors. Pollutant concentration in the corridors depends on the position of the Aleutian anticyclone and the edge of the northern hemisphere polar vortex.
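The corridor-peak versus zonal-mean comparison can be illustrated with a toy calculation: a passive tracer is injected continuously along a "flight corridor" sector of a single latitude circle, advected zonally, and removed by a crude first-order sink, after which the sector maximum is compared with the zonal mean. This is only a schematic sketch with invented grid sizes and rates, not the assimilation-driven 3D model of the study.

```python
import numpy as np

# One latitude circle discretised in longitude; continuous emission along a
# corridor sector; uniform eastward advection; first-order sink standing in
# for transport out of the layer. All numbers are made up for illustration.
nlon, nsteps = 90, 5000
q = np.zeros(nlon)                       # tracer mixing ratio (arbitrary units)
source = np.zeros(nlon)
source[10:40] = 1.0                      # emission along the corridor sector
sink = 0.02                              # fractional removal per step

for _ in range(nsteps):
    q = (1.0 - sink) * (q + source)      # inject and remove ...
    q = np.roll(q, 1)                    # ... then advect one cell eastward

print("corridor peak / zonal mean: %.2f" % (q.max() / q.mean()))
# prints a ratio modestly above 1, i.e. "a factor of 2 or less", as in the study
```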
FDTD-based Transcranial Magnetic Stimulation model applied to specific neurodegenerative disorders.
Fanjul-Vélez, Félix; Salas-García, Irene; Ortega-Quijano, Noé; Arce-Diego, José Luis
2015-01-01
Non-invasive treatment of neurodegenerative diseases is particularly challenging in Western countries, where the population age is increasing. In this work, magnetic field propagation in the human head is modelled with the Finite-Difference Time-Domain (FDTD) method, taking into account specific characteristics of Transcranial Magnetic Stimulation (TMS) in neurodegenerative diseases. The model uses a realistic high-resolution three-dimensional human head mesh. The numerical method is applied to the analysis of the magnetic radiation distribution in the brain using two realistic magnetic source models: a circular coil and a figure-8 coil commonly employed in TMS. The complete model was applied to the study of magnetic stimulation in Alzheimer's and Parkinson's diseases (AD, PD). The results show the electric field distribution when magnetic stimulation is delivered to the brain areas of specific interest for each particular disease. The approach therefore has high potential for establishing the currently underdeveloped TMS dosimetry in its emerging application to AD and PD. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
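The FDTD method behind such head models advances electric and magnetic fields on a staggered (Yee) grid in leapfrog fashion. The sketch below shows a one-dimensional free-space version of that update scheme purely to make the algorithm concrete; the study itself uses a full 3D head mesh with tissue-specific properties and realistic coil sources, none of which are reproduced here.

```python
import numpy as np

# 1D FDTD (Yee) leapfrog update in free space. The electric field is stored in
# normalised units so that both update coefficients reduce to the Courant
# number S = c0*dt/dz. A soft Gaussian pulse is injected as the source.
nz, nsteps = 400, 900
S = 0.5                                   # Courant number (stable for S <= 1 in 1D)

ex = np.zeros(nz)                         # normalised electric field, integer grid points
hy = np.zeros(nz - 1)                     # magnetic field, half-integer grid points

for n in range(nsteps):
    hy += S * (ex[:-1] - ex[1:])          # H update from the spatial difference of E
    ex[1:-1] += S * (hy[:-1] - hy[1:])    # E update from the spatial difference of H
    ex[nz // 4] += np.exp(-((n - 40) / 12.0) ** 2)   # soft source at one grid point
    # ex[0] and ex[-1] stay zero: perfectly conducting walls at both ends

print("peak |E| after %d steps: %.3f (normalised units)" % (nsteps, np.abs(ex).max()))
```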
Mspire-Simulator: LC-MS shotgun proteomic simulator for creating realistic gold standard data.
Noyce, Andrew B; Smith, Rob; Dalgleish, James; Taylor, Ryan M; Erb, K C; Okuda, Nozomu; Prince, John T
2013-12-06
The most important step in any quantitative proteomic pipeline is feature detection (also known as peak picking). However, generating quality hand-annotated data sets to validate the algorithms, especially for lower-abundance peaks, is nearly impossible. An alternative for creating gold-standard data is to simulate it with features closely mimicking real data. We present Mspire-Simulator, a free, open-source shotgun proteomic simulator that goes beyond previous simulation attempts by generating LC-MS features with realistic m/z and intensity variance along with other noise components. It also includes machine-learned models for retention time and peak intensity prediction and a genetic algorithm to custom-fit model parameters to experimental data sets. We show that these methods are applicable to data from three different mass spectrometers, including two fundamentally different types, and show visually and analytically that simulated peaks are nearly indistinguishable from actual data. Researchers can use simulated data to rigorously test quantitation software, and proteomic researchers may benefit from overlaying simulated data on actual data sets.
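A toy version of the kind of synthetic feature generation described, a chromatographic elution profile crossed with an isotope-like m/z pattern plus m/z jitter and intensity noise, is sketched below. It illustrates the idea only; the actual Mspire-Simulator implementation, its machine-learned retention-time and intensity models, and its genetic-algorithm fitting are not reproduced, and all constants here are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_feature(mz0, charge, rt_apex, rt_sigma, intensity,
                     n_scans=40, scan_interval=1.0, mz_sigma=0.003):
    """Generate (rt, m/z, intensity) points for one LC-MS feature: a Gaussian
    elution profile, a crude isotope envelope spaced by ~1.00335/charge, and
    Gaussian m/z jitter plus multiplicative (log-normal) intensity noise."""
    isotope_spacing = 1.00335 / charge
    isotope_rel = np.array([1.0, 0.55, 0.20, 0.06])       # invented relative abundances
    points = []
    for s in range(n_scans):
        rt = rt_apex + (s - n_scans / 2) * scan_interval
        elution = np.exp(-0.5 * ((rt - rt_apex) / rt_sigma) ** 2)
        for i, rel in enumerate(isotope_rel):
            mz = mz0 + i * isotope_spacing + rng.normal(0.0, mz_sigma)
            inten = intensity * elution * rel * rng.lognormal(0.0, 0.15)
            if inten > 1.0:                                 # crude detection threshold
                points.append((rt, mz, inten))
    return points

feature = simulate_feature(mz0=524.265, charge=2, rt_apex=1200.0,
                           rt_sigma=8.0, intensity=5.0e4)
print("%d centroided points, e.g. %s" % (len(feature), feature[0]))
```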