Sample records for critical source parameters

  1. Uncertainty quantification and propagation of errors of the Lennard-Jones 12-6 parameters for n-alkanes

    PubMed Central

    Knotts, Thomas A.

    2017-01-01

    Molecular simulation has the ability to predict various physical properties that are difficult to obtain experimentally. For example, we implement molecular simulation to predict the critical constants (i.e., critical temperature, critical density, critical pressure, and critical compressibility factor) for large n-alkanes that thermally decompose experimentally (as large as C48). Historically, molecular simulation has been viewed as a tool that is limited to providing qualitative insight. One key reason for this perceived weakness is the difficulty of quantifying the uncertainty in the results, because molecular simulations have many sources of uncertainty that propagate and are difficult to quantify. We investigate one of the most important sources of uncertainty, namely, the intermolecular force field parameters. Specifically, we quantify the uncertainty in the Lennard-Jones (LJ) 12-6 parameters for the CH4, CH3, and CH2 united-atom interaction sites. We then demonstrate how the uncertainties in the parameters lead to uncertainties in the saturated liquid density and critical constant values obtained from Gibbs Ensemble Monte Carlo simulation. Our results suggest that the uncertainties attributed to the LJ 12-6 parameters are small enough that quantitatively useful estimates of the saturated liquid density and the critical constants can be obtained from molecular simulation. PMID:28527455
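
    The propagation step described here can be sketched in a few lines: draw the LJ parameters from their fitted uncertainty distributions and push the samples through a model of a critical constant. The numbers below are hypothetical placeholders, and a one-line corresponding-states relation stands in for the Gibbs Ensemble Monte Carlo simulations used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical LJ parameters for a united-atom site (mean, std); the real
# values and their uncertainties come from the paper's fitting procedure.
eps_kB = rng.normal(46.0, 0.5, size=10_000)   # epsilon/k_B in K
sigma = rng.normal(3.95, 0.01, size=10_000)   # sigma in Angstrom

# Corresponding-states surrogate: for a pure LJ fluid, kB*Tc/eps ~ 1.31,
# so Tc scales linearly with eps/kB. This is only an illustrative stand-in
# for the full molecular simulation.
Tc = 1.31 * eps_kB

print(f"Tc = {Tc.mean():.1f} +/- {Tc.std():.1f} K")
```

    The spread of the resulting `Tc` samples is the propagated parametric uncertainty; the study does the analogous propagation through full simulations.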

  2. REVIEW OF INDOOR EMISSION SOURCE MODELS: PART 2. PARAMETER ESTIMATION

    EPA Science Inventory

    This review consists of two sections. Part 1 provides an overview of 46 indoor emission source models. Part 2 (this paper) focuses on parameter estimation, a topic that is critical to modelers but has never been systematically discussed. A perfectly valid model may not be a usefu...

  3. Volume-energy parameters for heat transfer to supercritical fluids

    NASA Technical Reports Server (NTRS)

    Kumakawa, A.; Niino, M.; Hendricks, R. C.; Giarratano, P. J.; Arp, V. D.

    1986-01-01

    Reduced Nusselt numbers of supercritical fluids from different sources were grouped by several volume-energy parameters. A modified bulk expansion parameter was introduced based on a comparative analysis of data scatter. Heat transfer experiments on liquefied methane were conducted under near-critical conditions in order to confirm the usefulness of the parameters. It was experimentally revealed that heat transfer characteristics of near-critical methane are similar to those of hydrogen. It was shown that the modified bulk expansion parameter and the Gibbs-energy parameter grouped the heat transfer data of hydrogen, oxygen and methane including the present data on near-critical methane. It was also indicated that the effects of surface roughness on heat transfer were very important in grouping the data of high Reynolds numbers.

  4. Experimental criticality specifications. An annotated bibliography through 1977

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paxton, H.C.

    1978-05-01

    The compilation of approximately 300 references gives sources of experimental criticality parameters of systems containing ²³⁵U, ²³³U, and ²³⁹Pu. The intent is to cover basic data for criticality safety applications. The references are arranged by subject.

  5. Estimating winter wheat phenological parameters: Implications for crop modeling

    USDA-ARS?s Scientific Manuscript database

    Crop parameters, such as the timing of developmental events, are critical for accurate simulation results in crop simulation models, yet uncertainty often exists in determining the parameters. Factors contributing to the uncertainty include: a) sources of variation within a plant (i.e., within diffe...

  6. REANALYSIS OF F-STATISTIC GRAVITATIONAL-WAVE SEARCHES WITH THE HIGHER CRITICISM STATISTIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, M. F.; Melatos, A.; Delaigle, A.

    2013-04-01

    We propose a new method of gravitational-wave detection using a modified form of higher criticism, a statistical technique introduced by Donoho and Jin. Higher criticism is designed to detect a group of sparse, weak sources, none of which are strong enough to be reliably estimated or detected individually. We apply higher criticism as a second-pass method to synthetic F-statistic and C-statistic data for a monochromatic periodic source in a binary system and quantify the improvement relative to the first-pass methods. We find that higher criticism on C-statistic data is ~6% more sensitive than the C-statistic alone under optimal conditions (i.e., binary orbit known exactly), and the relative advantage increases as the error in the orbital parameters increases. Higher criticism is robust even when the source is not monochromatic (e.g., phase-wandering in an accreting system). Applying higher criticism to a phase-wandering source over multiple time intervals gives a ≳30% increase in detectability with few assumptions about the frequency evolution. By contrast, in all-sky searches for unknown periodic sources, which are dominated by the brightest source, second-pass higher criticism does not provide any benefits over a first-pass search.
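
    The Donoho-Jin statistic at the heart of this method can be sketched directly. The p-values below are synthetic illustrations, not F-statistic or C-statistic data:

```python
import numpy as np

def higher_criticism(pvalues):
    """Donoho-Jin higher criticism: scan the sorted p-values for the
    largest standardized excess of small p-values over the uniform null."""
    p = np.sort(np.asarray(pvalues))
    n = len(p)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    # Conventional restriction to the lower half of the ordered p-values
    return hc[: n // 2].max()

rng = np.random.default_rng(1)
null = rng.uniform(size=1000)                       # pure noise
sparse = np.concatenate([rng.uniform(size=990),     # noise plus a few
                         rng.uniform(0, 1e-4, 10)]) # weak sparse "sources"
print(higher_criticism(null), higher_criticism(sparse))
```

    A handful of weak signals, none individually significant, pushes the statistic well above its null range, which is exactly the regime the paper exploits.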

  7. [Nitrogen non-point source pollution identification based on ArcSWAT in Changle River].

    PubMed

    Deng, Ou-Ping; Sun, Si-Yang; Lü, Jun

    2013-04-01

    The ArcSWAT (Soil and Water Assessment Tool) model was adopted for non-point source (NPS) nitrogen pollution modeling and nitrogen source apportionment in the Changle River watershed, a typical agricultural watershed in Southeast China. Water quality and hydrological parameters were monitored, and the watershed's natural conditions (including soil, climate, land use, etc.) and pollution source information were investigated and collected for the SWAT database. The ArcSWAT model was established for the Changle River after calibration and validation of the model parameters. Based on the validated SWAT model, the contributions of different nitrogen sources to river TN loading were quantified, and the spatial-temporal distributions of NPS nitrogen export to rivers were addressed. The results showed that in the Changle River watershed, nitrogen fertilizer, atmospheric nitrogen deposition and the soil nitrogen pool were the predominant pollution sources, contributing 35%, 32% and 25% of the river TN loading, respectively. There were spatial-temporal variations in the critical sources of NPS TN export to the river. Natural sources, such as the soil nitrogen pool and atmospheric nitrogen deposition, should be targeted as the critical sources of river TN pollution during the rainy seasons, whereas chemical nitrogen fertilizer application should be targeted during the crop growing season. Chemical nitrogen fertilizer application, the soil nitrogen pool and atmospheric nitrogen deposition were the main sources of TN exported from the garden plot, forest and residential land, respectively, and all three were main sources of TN exported from both the upland and the paddy field. These results indicate that NPS pollution control measures should account for the spatio-temporal distribution of NPS pollution sources.

  8. A new qualitative acoustic emission parameter based on Shannon's entropy for damage monitoring

    NASA Astrophysics Data System (ADS)

    Chai, Mengyu; Zhang, Zaoxiao; Duan, Quan

    2018-02-01

    An important objective of acoustic emission (AE) non-destructive monitoring is to accurately identify approaching critical damage and to avoid premature failure by means of the evolution of AE parameters. One major drawback of most parameters, such as count and rise time, is that they depend strongly on the threshold and other settings employed in the AE data acquisition system. This may prevent a correct reflection of the original waveform generated by AE sources and consequently make accurate identification of critical damage and early failure difficult. In this investigation, a new qualitative AE parameter based on Shannon's entropy, the AE entropy, is proposed for damage monitoring. Since it derives from the uncertainty of the amplitude distribution of each AE waveform, it is independent of the threshold and other time-driven parameters and can characterize the original micro-structural deformations. A fatigue crack growth test on CrMoV steel and a three-point bending test on a ductile material were conducted to validate the feasibility and effectiveness of the proposed parameter. The results show that the new parameter, compared to AE amplitude, is more effective in discriminating the different damage stages and identifying the critical damage.
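
    The entropy itself is simple to compute from a digitized waveform. A minimal sketch with synthetic burst and noise signals (the bin count and signal parameters are arbitrary illustrations, not the paper's settings):

```python
import numpy as np

def ae_entropy(waveform, bins=64):
    """Shannon entropy of a waveform's amplitude distribution.
    Threshold-independent: it uses the full sampled waveform rather than
    triggered hit features such as count or rise time."""
    hist, _ = np.histogram(waveform, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return -np.sum(p * np.log2(p))

t = np.linspace(0, 1, 4096)
burst = np.exp(-5 * t) * np.sin(2 * np.pi * 150 * t)    # decaying AE burst
noise = 0.01 * np.random.default_rng(2).standard_normal(4096)
print(ae_entropy(noise), ae_entropy(burst + noise))
```

    Because the histogram is built from the waveform itself, changing the acquisition threshold leaves the statistic unchanged, which is the property the paper emphasizes.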

  9. International Natural Gas Model 2011, Model Documentation Report

    EIA Publications

    2013-01-01

    This report documents the objectives, analytical approach and development of the International Natural Gas Model (INGM). It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  10. Real-time Forensic Disaster Analysis

    NASA Astrophysics Data System (ADS)

    Wenzel, F.; Daniell, J.; Khazai, B.; Mühr, B.; Kunz-Plapp, T.; Markus, M.; Vervaeck, A.

    2012-04-01

    The Center for Disaster Management and Risk Reduction Technology (CEDIM, www.cedim.de) - an interdisciplinary research center founded by the German Research Centre for Geoscience (GFZ) and Karlsruhe Institute of Technology (KIT) - has embarked on a new style of disaster research known as Forensic Disaster Analysis. The notion was coined by the Integrated Research on Disaster Risk initiative (IRDR, www.irdrinternational.org) launched by ICSU in 2010. It has been defined as an approach to studying natural disasters that aims at uncovering the root causes of disasters through in-depth investigations that go beyond the reconnaissance reports and case studies typically conducted after disasters. In adopting this comprehensive understanding of disasters, CEDIM adds a real-time component to the assessment and evaluation process. By comprehensive we mean that most if not all relevant aspects of disasters are considered and jointly analysed. This includes the impact (human, economic, and infrastructure), comparisons with recent historic events, social vulnerability, reconstruction and long-term impacts on livelihood issues. The forensic disaster analysis research mode is thus best characterized as "event-based research" through systematic investigation of critical issues arising after a disaster across various inter-related areas. The forensic approach requires (a) availability of global databases regarding previous earthquake losses, socio-economic parameters, building stock information, etc.; (b) leveraging platforms such as the EERI clearinghouse, ReliefWeb, and the many local and international sources where information is organized; and (c) rapid access to critical information (e.g., crowd-sourcing techniques) to improve our understanding of the complex dynamics of disasters. The main scientific questions being addressed are: What are the critical factors that control losses of life, infrastructure, and economy?
What are the critical interactions between hazard, socio-economic systems and technological systems? What were the protective measures, and to what extent did they work? Can we predict patterns of losses and socio-economic implications for future extreme events from simple parameters: hazard parameters, historic evidence, socio-economic conditions? Can we predict implications for reconstruction from the same parameters? The M7.2 Van earthquake (Eastern Turkey) of 23 Oct. 2011 serves as an example of the forensic approach.

  11. Simulation verification techniques study. Task report 4: Simulation module performance parameters and performance standards

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Shuttle simulation software modules in the environment, crew station, vehicle configuration and vehicle dynamics categories are discussed. For each software module covered, a description of the module functions and operational modes, its interfaces with other modules, its stored data, inputs, performance parameters and critical performance parameters is given. Reference data sources which provide standards of performance are identified for each module. Performance verification methods are also discussed briefly.

  12. Time delay of critical images of a point source near the gravitational lens fold-caustic

    NASA Astrophysics Data System (ADS)

    Alexandrov, A.; Zhdanov, V.

    2016-06-01

    Within the framework of the analytical theory of gravitational lensing, we derive an asymptotic formula for the time delay between critical images of a point source situated near a fold caustic. We find corrections of the first and second order in powers of a parameter that describes the closeness of the source to the caustic. Our formula modifies an earlier result by Congdon, Keeton & Nordgren (MNRAS, 2008) obtained in the zero-order approximation. We prove the hypothesis put forward by those authors that the first-order correction to the relative time delay of two critical images is identically zero. The contribution of the corrections is illustrated in a model example by comparison with the exact expression.

  13. Modeling of the dolphin's clicking sound source: The influence of the critical parameters

    NASA Astrophysics Data System (ADS)

    Dubrovsky, N. A.; Gladilin, A.; Møhl, B.; Wahlberg, M.

    2004-07-01

    Physical and mathematical models of the dolphin’s source of echolocation clicks have recently been proposed. The physical model includes a bottle of pressurized air connected to the atmosphere through an underwater rubber tube. A compressing rubber ring is placed on the underwater portion of the tube. The ring blocks the air jet passing through the tube from the bottle and can be brought into self-oscillation by the jet. In the simplest case, the ring displacement follows a repeated triangular waveform. Because the acoustic pressure gradient is proportional to the second time derivative of the displacement, clicks arise at the bends of the displacement waveform. The mathematical model describes the dipole oscillations of a sphere “frozen” in the ring and calculates the waveform and the sound pressure of the generated clicks. The critical parameters of the mathematical model are the radius of the sphere and the peak value and duration of the triangular displacement curve. This model allows one to solve both the forward problem (deriving the properties of acoustic clicks from known source parameters) and the inverse problem (calculating the source parameters from the acoustic data). Data from click records of Odontocetes were used to derive both the displacement waveforms and the size of the “frozen sphere” or a structure functionally similar to it. The mathematical model predicts a maximum source level of up to 235 dB re 1 μPa at 1-m range when using a 5-cm radius of the “frozen” sphere and a 4-mm maximal displacement. The predicted sound pressure level is similar to that of the clicks produced by Odontocetes.
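
    The click-generation mechanism (pressure following the second time derivative of a triangular displacement, so impulses appear at the bends) is easy to verify numerically. The amplitude and timing values below are invented for illustration only:

```python
import numpy as np

fs = 10_000_000                       # sample rate, Hz (illustrative)
t = np.arange(0, 500e-6, 1 / fs)
period, peak = 100e-6, 4e-3           # hypothetical ring-oscillation values

# Repeated triangular displacement waveform of the ring, in meters
phase = (t % period) / period
disp = peak * (1 - 2 * np.abs(phase - 0.5))

# Pressure ~ second time derivative of displacement: impulses ("clicks")
# appear only at the bends of the triangle, where the slope changes sign.
pressure = np.gradient(np.gradient(disp, t), t)
n_click_samples = np.sum(np.abs(pressure) > 0.5 * np.abs(pressure).max())
print("samples at bends:", n_click_samples, "of", len(t))
```

    Almost all samples are near zero; only the few samples at the bends carry the impulses, reproducing the click-at-the-bend behavior the model describes.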

  14. Ion beam deposition of in situ superconducting Y-Ba-Cu-O films

    NASA Astrophysics Data System (ADS)

    Klein, J. D.; Yen, A.; Clauson, S. L.

    1990-01-01

    Oriented superconducting YBa2Cu3O7 thin films were deposited on yttria-stabilized zirconia substrates by ion beam sputtering of a nonstoichiometric oxide target. The films exhibited zero-resistance critical temperatures as high as 80.5 K without post-deposition anneals. Both the deposition rate and the c lattice parameter data displayed two distinct regimes of dependence on the beam power of the ion source. Low-power sputtering yielded films with large c dimensions and low Tc's. Higher power sputtering produced a continuous decrease in the c lattice parameter and an increase in critical temperatures.

  15. World Energy Projection System Plus Model Documentation: Coal Module

    EIA Publications

    2011-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Coal Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  16. World Energy Projection System Plus Model Documentation: Transportation Module

    EIA Publications

    2017-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) International Transportation model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  17. World Energy Projection System Plus Model Documentation: Residential Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Residential Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  18. World Energy Projection System Plus Model Documentation: Refinery Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Refinery Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  19. World Energy Projection System Plus Model Documentation: Main Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Main Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  20. World Energy Projection System Plus Model Documentation: Electricity Module

    EIA Publications

    2017-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) World Electricity Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  1. Identifying key sources of uncertainty in the modelling of greenhouse gas emissions from wastewater treatment.

    PubMed

    Sweetapple, Christine; Fu, Guangtao; Butler, David

    2013-09-01

    This study investigates sources of uncertainty in the modelling of greenhouse gas emissions from wastewater treatment, through the use of local and global sensitivity analysis tools, and contributes to an in-depth understanding of wastewater treatment modelling by revealing critical parameters and parameter interactions. One-factor-at-a-time sensitivity analysis is used to screen model parameters and identify those with significant individual effects on three performance indicators: total greenhouse gas emissions, effluent quality and operational cost. Sobol's method enables identification of parameters with significant higher order effects and of particular parameter pairs to which model outputs are sensitive. Use of a variance-based global sensitivity analysis tool to investigate parameter interactions enables identification of important parameters not revealed in one-factor-at-a-time sensitivity analysis. These interaction effects have not been considered in previous studies and thus provide a better understanding of wastewater treatment plant model characterisation. It was found that uncertainty in modelled nitrous oxide emissions is the primary contributor to uncertainty in total greenhouse gas emissions, due largely to the interaction effects of three nitrogen conversion modelling parameters. The higher order effects of these parameters are also shown to be a key source of uncertainty in effluent quality. Copyright © 2013 Elsevier Ltd. All rights reserved.
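
    One-factor-at-a-time screening is straightforward to sketch. The toy model below (three made-up parameters, with a deliberate x1-x2 interaction term) also shows why OAT alone misses the interaction effects that a variance-based method such as Sobol's captures:

```python
import numpy as np

def model(params):
    """Stand-in for a treatment-plant model returning one performance
    indicator; the real study evaluated GHG emissions, effluent quality
    and operational cost."""
    x1, x2, x3 = params
    return x1 * x2 + 0.1 * x3 ** 2   # deliberate x1-x2 interaction

base = np.array([1.0, 1.0, 1.0])

# One-factor-at-a-time screening: perturb each parameter by +/-10 %
# around the base point and record the change in the output.
oat = {}
for i in range(len(base)):
    hi, lo = base.copy(), base.copy()
    hi[i] *= 1.1
    lo[i] *= 0.9
    oat[f"x{i+1}"] = abs(model(hi) - model(lo))

print(oat)  # OAT ranks x1 and x2 but says nothing about their interaction
```

    The x1*x2 term never shows up as a joint effect in the OAT table; quantifying it requires varying parameters together, which is what the Sobol indices in the study do.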

  2. World Energy Projection System Plus Model Documentation: Greenhouse Gases Module

    EIA Publications

    2011-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Greenhouse Gases Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  3. World Energy Projection System Plus Model Documentation: Natural Gas Module

    EIA Publications

    2011-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Natural Gas Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  4. World Energy Projection System Plus Model Documentation: District Heat Module

    EIA Publications

    2017-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) District Heat Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  5. World Energy Projection System Plus Model Documentation: Industrial Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) World Industrial Model (WIM). It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  6. Hierarchical multistage MCMC follow-up of continuous gravitational wave candidates

    NASA Astrophysics Data System (ADS)

    Ashton, G.; Prix, R.

    2018-05-01

    Leveraging Markov chain Monte Carlo optimization of the F statistic, we introduce a method for the hierarchical follow-up of continuous gravitational wave candidates identified by wide-parameter space semicoherent searches. We demonstrate parameter estimation for continuous wave sources and develop a framework and tools to understand and control the effective size of the parameter space, critical to the success of the method. Monte Carlo tests of simulated signals in noise demonstrate that this method is close to the theoretical optimal performance.

  7. An Open-Source Auto-Calibration Routine Supporting the Stormwater Management Model

    NASA Astrophysics Data System (ADS)

    Tiernan, E. D.; Hodges, B. R.

    2017-12-01

    The stormwater management model (SWMM) is a clustered model that relies on subcatchment-averaged parameter assignments to correctly capture catchment stormwater runoff behavior. Model calibration is considered a critical step for SWMM performance, an arduous task that most stormwater management designers undertake manually. This research presents an open-source, automated calibration routine that increases the efficiency and accuracy of the model calibration process. The routine uses a preliminary sensitivity analysis to reduce the dimensions of the parameter space, at which point a multi-objective genetic algorithm (a modified Non-dominated Sorting Genetic Algorithm II) determines the Pareto front of the objective functions within the parameter space. The solutions on this Pareto front represent optimized parameter value sets for the catchment behavior that could not reasonably have been obtained through manual calibration.
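
    The non-dominated sorting at the core of NSGA-II-style calibration reduces to a dominance test over the objective values. A minimal sketch with hypothetical calibration errors (the objective names are illustrative, not SWMM outputs):

```python
import numpy as np

def pareto_front(objectives):
    """Indices of non-dominated points (all objectives minimized): the
    front an NSGA-II-style calibration reports as optimal parameter sets."""
    obj = np.asarray(objectives, dtype=float)
    keep = []
    for i, p in enumerate(obj):
        # p is dominated if some other point is <= in every objective
        # and strictly < in at least one.
        dominated = np.any(np.all(obj <= p, axis=1) & np.any(obj < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical errors for 6 candidate SWMM parameter sets:
# columns = (peak-flow error, runoff-volume error)
errors = [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9),
          (0.6, 0.6), (0.8, 0.8), (0.2, 0.95)]
print(pareto_front(errors))
```

    The surviving indices trade one objective against the other; no single "best" set exists, which is why the routine reports the whole front rather than a single calibration.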

  8. OpenMC In Situ Source Convergence Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aldrich, Garrett Allen; Dutta, Soumya; Woodring, Jonathan Lee

    2016-05-07

    We designed and implemented an in situ version of particle source convergence detection for the OpenMC particle transport simulator. OpenMC is a Monte Carlo-based particle simulator for neutron criticality calculations. For the transport simulation to be accurate, source particles must converge on a spatial distribution. Typically, convergence is obtained by iterating the simulation for a user-settable, fixed number of steps, and it is assumed that convergence is achieved. We instead implement a method to detect convergence, using a stochastic oscillator to identify convergence of source particles based on their accumulated Shannon entropy. Using our in situ convergence detection, we are able to detect and begin tallying results for the full simulation once the proper source distribution has been confirmed. Our method ensures that the simulation is not started too early, by a user setting too optimistic parameters, or too late, by setting too conservative a parameter.

  9. Ion beam sputtering of in situ superconducting Y-Ba-Cu-O films

    NASA Astrophysics Data System (ADS)

    Klein, J. D.; Yen, A.; Clauson, S. L.

    1990-05-01

    Oriented superconducting YBa2Cu3O7 thin films were deposited on yttria stabilized zirconia and SrTiO3 substrates by ion-beam sputtering of a nonstoichiometric oxide target. The films exhibited zero-resistance critical temperatures as high as 83.5 K without post-deposition anneals. Both the deposition rate and the c-lattice parameter data displayed two distinct regimes of dependence on the beam power of the ion source. Low-power sputtering yielded films with large c-dimensions and low Tc. Higher-power sputtering produced a continuous decrease in the c-lattice parameter and increase in critical temperature. Films having the smaller c-lattice parameters were Cu rich. The Cu content of films deposited at beam voltages of 800 V and above increased with increasing beam power.

  10. X-33 Telemetry Best Source Selection, Processing, Display, and Simulation Model Comparison

    NASA Technical Reports Server (NTRS)

    Burkes, Darryl A.

    1998-01-01

    The X-33 program requires the use of multiple telemetry ground stations to cover the launch, ascent, transition, descent, and approach phases for the flights from Edwards AFB to landings at Dugway Proving Grounds, UT and Malmstrom AFB, MT. This paper will discuss the X-33 telemetry requirements and design, including information on fixed and mobile telemetry systems, best source selection, and support for Range Safety Officers. A best source selection system will be utilized to automatically determine the best source based on the frame synchronization status of the incoming telemetry streams. These systems will be used to select the best source at the landing sites and at NASA Dryden Flight Research Center to determine the overall best source between the launch site, intermediate sites, and landing site sources. The best source at the landing sites will be decommutated to display critical flight safety parameters for the Range Safety Officers. The overall best source will be sent to Lockheed Martin's Operational Control Center at Edwards AFB for performance monitoring by X-33 program personnel and for monitoring of critical flight safety parameters by the primary Range Safety Officer. The real-time telemetry data (received signal strength, etc.) from each of the primary ground stations will also be compared during each mission with simulation data generated using the Dynamic Ground Station Analysis software program. An overall assessment of the accuracy of the model will occur after each mission. Acknowledgment: The work described in this paper was NASA supported through cooperative agreement NCC8-115 with Lockheed Martin Skunk Works.

  11. Using discharge data to reduce structural deficits in a hydrological model with a Bayesian inference approach and the implications for the prediction of critical source areas

    NASA Astrophysics Data System (ADS)

    Frey, M. P.; Stamm, C.; Schneider, M. K.; Reichert, P.

    2011-12-01

    A distributed hydrological model was used to simulate the distribution of fast runoff formation as a proxy for critical source areas for herbicide pollution in a small agricultural catchment in Switzerland. We tested to what degree predictions based on prior knowledge without local measurements could be improved by conditioning on observed discharge. This learning process consisted of five steps: For the prior prediction (step 1), knowledge of the model parameters was coarse and predictions were fairly uncertain. In the second step, discharge data were used to update the prior parameter distribution. Effects of uncertainty in input data and model structure were accounted for by an autoregressive error model. This step decreased the width of the marginal distributions of parameters describing the lower boundary (percolation rates) but hardly affected soil hydraulic parameters. Residual analysis (step 3) revealed model structure deficits. We modified the model, and in the subsequent Bayesian updating (step 4) the widths of the posterior marginal distributions were reduced for most parameters compared to those of the prior. This incremental procedure led to a strong reduction in the uncertainty of the spatial prediction. Thus, despite only using spatially integrated data (discharge), the improved model structure can be expected to improve the spatially distributed predictions as well. The fifth step consisted of a test with independent spatial data on herbicide losses and revealed ambiguous results. The comparison depended critically on the ratio of event to pre-event water that was discharged. This ratio cannot be estimated from hydrological data only. The results demonstrate that the value of local data is strongly dependent on a correct model structure. An iterative procedure of Bayesian updating, model testing, and model modification is suggested.
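
    The Bayesian updating step can be illustrated with a toy one-parameter rainfall-runoff relation in place of the distributed model; every number below is invented for illustration, and the autoregressive error model is replaced by independent Gaussian errors:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "observed discharge": q = k * rain + noise, a toy stand-in
# for the distributed hydrological model calibrated in the study.
rain = rng.uniform(0, 20, 50)
k_true = 0.35
q_obs = k_true * rain + rng.normal(0, 0.5, 50)

def log_post(k):
    if not (0.0 < k < 1.0):           # coarse prior: k ~ Uniform(0, 1)
        return -np.inf
    return -0.5 * np.sum((q_obs - k * rain) ** 2) / 0.5 ** 2

# Metropolis sampling: conditioning on discharge narrows the marginal
# for k far below its prior width, mirroring the updating in the paper.
k, chain = 0.5, []
for _ in range(5000):
    prop = k + rng.normal(0, 0.02)
    if np.log(rng.uniform()) < log_post(prop) - log_post(k):
        k = prop
    chain.append(k)
post = np.array(chain[1000:])
print(f"posterior k = {post.mean():.3f} +/- {post.std():.3f}")
```

    The posterior standard deviation ends up orders of magnitude smaller than the prior's, which is the sense in which spatially integrated discharge data still constrain the model.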

  12. Multiobjective Sensitivity Analysis Of Sediment And Nitrogen Processes With A Watershed Model

    EPA Science Inventory

    This paper presents a computational analysis for evaluating critical non-point-source sediment and nutrient (specifically nitrogen) processes and management actions at the watershed scale. In the analysis, model parameters that bear key uncertainties were presumed to reflect the ...

  13. Georgia Tech Studies of Sub-Critical Advanced Burner Reactors with a D-T Fusion Tokamak Neutron Source for the Transmutation of Spent Nuclear Fuel

    NASA Astrophysics Data System (ADS)

    Stacey, W. M.

    2009-09-01

    The possibility that a tokamak D-T fusion neutron source, based on ITER physics and technology, could be used to drive sub-critical, fast-spectrum nuclear reactors fueled with the transuranics (TRU) in spent nuclear fuel discharged from conventional nuclear reactors has been investigated at Georgia Tech in a series of studies which are summarized in this paper. It is found that sub-critical operation of such fast transmutation reactors is advantageous in allowing longer fuel residence time, hence greater TRU burnup between fuel reprocessing stages, and in allowing higher TRU loading without compromising safety, relative to what could be achieved in a similar critical transmutation reactor. The required plasma and fusion technology operating parameter range of the fusion neutron source is generally within the anticipated operational range of ITER. The implication of these results for fusion development policy, if they hold up under more extensive and detailed analysis, is that a D-T fusion tokamak neutron source for a sub-critical transmutation reactor, built on the basis of the ITER operating experience, could be a logical next step after ITER on the path to fusion electrical power reactors. At the same time, such an application would allow fusion to contribute to meeting the nation's energy needs at an earlier stage by helping to close the fission reactor nuclear fuel cycle.

  14. Error tolerance analysis of wave diagnostic based on coherent modulation imaging in high power laser system

    NASA Astrophysics Data System (ADS)

    Pan, Xingchen; Liu, Cheng; Zhu, Jianqiang

    2018-02-01

    Coherent modulation imaging, which offers fast convergence and high resolution from a single diffraction pattern, is a promising technique for meeting the urgent demand for on-line multi-parameter diagnostics with a single setup in high power laser facilities (HPLF). However, the influence of noise on the finally calculated parameters has not yet been investigated. Based on a series of simulations with twenty different sampling beams generated from the practical parameters and performance of HPLF, we carried out the first quantitative, statistics-based analysis considering five different error sources. We found that detector background noise and high quantization error seriously degrade the final accuracy, and that different parameters have different sensitivities to different noise sources. The simulation results and the corresponding analysis point to directions for further improving the accuracy of parameter diagnostics, which is critically important for its formal application in the daily routines of HPLF.

  15. Assessment of the instantaneous unit hydrograph derived from the theory of topologically random networks

    USGS Publications Warehouse

    Karlinger, M.R.; Troutman, B.M.

    1985-01-01

    An instantaneous unit hydrograph (iuh) based on the theory of topologically random networks (topological iuh) is evaluated in terms of sets of basin characteristics and hydraulic parameters. Hydrographs were computed using two linear routing methods for each of two drainage basins in the southeastern United States and are the basis of comparison for the topological iuh's. Elements in the sets of basin characteristics for the topological iuh's are the number of first-order streams only (N), or the number of sources together with the number of channel links in the topological diameter (N, D); the hydraulic parameters are values of the celerity and diffusivity constant. Sensitivity analyses indicate that the mean celerity of the internal links in the network is the critical hydraulic parameter for determining the shape of the topological iuh, while the diffusivity constant has minimal effect on the topological iuh. Asymptotic results (source-only) indicate the number of sources need not be large to approximate the topological iuh with the Weibull probability density function.
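    The closing claim, that the source-only topological iuh can be approximated by a Weibull probability density, is easy to illustrate with a short sketch. The shape and scale values below are purely hypothetical, not taken from the paper; in practice they would be fit from the number of sources N and the mean celerity of the internal links.

```python
import math

def weibull_pdf(t, shape, scale):
    """Weibull probability density; the asymptotic (source-only) form of the topological iuh."""
    if t <= 0:
        return 0.0
    return (shape / scale) * (t / scale) ** (shape - 1) * math.exp(-(t / scale) ** shape)

# Hypothetical parameter values for illustration only.
shape, scale = 2.0, 3.0
peak_time = scale * ((shape - 1) / shape) ** (1 / shape)  # mode (time to hydrograph peak)

# Midpoint-rule check that the density integrates to one, as a unit hydrograph must.
area = sum(weibull_pdf(0.1 * (i + 0.5), shape, scale) * 0.1 for i in range(2000))
```

The two free parameters make this a convenient closed-form stand-in for the full topological iuh when the network is large.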

  16. Investigation of chemical vapor deposition of garnet films for bubble domain memories

    NASA Technical Reports Server (NTRS)

    Besser, P. J.; Hamilton, T. N.

    1973-01-01

    The important process parameters and control required to grow reproducible device quality ferrimagnetic films by chemical vapor deposition (CVD) were studied. The investigation of the critical parameters in the CVD growth process led to the conclusion that the required reproducibility of film properties cannot be achieved with individually controlled separate metal halide sources. Therefore, the CVD growth effort was directed toward replacement of the halide sources with metallic sources with the ultimate goal being the reproducible growth of complex garnet compositions utilizing a single metal alloy source. The characterization of the YGdGaIG films showed that certain characteristics of this material, primarily the low domain wall energy and the large temperature sensitivity, severely limited its potential as a useful material for bubble domain devices. Consequently, at the time of the change from halide to metallic sources, the target film compositions were shifted to more useful materials such as YGdTmGaIG, YEuGaIG and YSmGaIG.

  17. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters.

    PubMed

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is not yet known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation.
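    The regularization choice discussed above can be made concrete with a toy sketch of the Tikhonov-regularized minimum norm estimate (this is not the authors' code; the leadfield, noise level, and lambda values are illustrative only).

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 10, 50

G = rng.standard_normal((n_sensors, n_sources))   # toy leadfield (gain) matrix
s_true = np.zeros(n_sources)
s_true[7] = 1.0                                   # a single active source
y = G @ s_true + 0.05 * rng.standard_normal(n_sensors)

def mne(G, y, lam):
    """Tikhonov-regularized minimum norm estimate:
    argmin_s ||y - G s||^2 + lam * ||s||^2  =>  s = G^T (G G^T + lam I)^{-1} y."""
    return G.T @ np.linalg.solve(G @ G.T + lam * np.eye(G.shape[0]), y)

s_power = mne(G, y, lam=1.0)    # heavier regularization, as suggested for power
s_coh = mne(G, y, lam=0.01)     # two orders of magnitude less, as for coherence
```

Smaller lambda fits the sensor data more closely at the cost of a larger-norm, noisier source estimate, which is why power detection and coupling detection can favor very different values.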

  18. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters

    PubMed Central

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is not yet known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation. PMID:27092179

  19. Comparison of Demineralized Dentin and Demineralized Freeze Dried Bone as Carriers for Enamel Matrix Proteins in a Rat Critical Size Defect

    DTIC Science & Technology

    2005-05-01

    matrix derivative or connective tissue. Part 1: comparison of clinical parameters. J Periodontol 2003;74:1110-1125. Minabe M.: A critical review of the... connective tissue, both bone and PDL can serve as sources of progenitor cells for regeneration. Surgical techniques started to evolve with the knowledge...regeneration was Prichard in 1977. This technique involved removal of overlying gingival tissue leaving interdental bone denuded (Prichard 1977). In 1983

  20. Time delay of critical images in the vicinity of cusp point of gravitational-lens systems

    NASA Astrophysics Data System (ADS)

    Alexandrov, A.; Zhdanov, V.

    2016-12-01

    We consider approximate analytical formulas for the time delays of critical images of a point source in the neighborhood of a cusp caustic. We discuss the zeroth, first, and second approximations in powers of a parameter that measures the proximity of the source to the cusp. These formulas link the time delay to characteristics of the lens potential. The zeroth-order formula was obtained by Congdon, Keeton & Nordgren (MNRAS, 2008). For a general lens potential we derive the first-order correction to it. If the potential is symmetric with respect to the cusp axis, this correction vanishes identically; for that case we obtain the second-order correction. The relations found are illustrated with a simple model example.

  1. Parameter Optimization of PAL-XFEL Injector

    NASA Astrophysics Data System (ADS)

    Lee, Jaehyun; Ko, In Soo; Han, Jang-Hui; Hong, Juho; Yang, Haeryong; Min, Chang Ki; Kang, Heung-Sik

    2018-05-01

    A photoinjector is used as the electron source to generate a high peak current, low emittance beam for an X-ray free electron laser (FEL). The beam emittance is one of the critical parameters determining FEL performance, together with the slice energy spread and the peak current. The Pohang Accelerator Laboratory X-ray Free Electron Laser (PAL-XFEL) was constructed in 2015, and beam commissioning was carried out in spring 2016. The injector is running routinely for PAL-XFEL user operation. The operational parameters of the injector have been optimized experimentally, and these differ somewhat from the originally designed ones. We therefore study the injector parameters numerically, starting from the empirically optimized values, and review the present operating condition.

  2. Enhancement of the in-field Jc of MgB2 via SiCl4 doping

    NASA Astrophysics Data System (ADS)

    Wang, Xiao-Lin; Dou, S. X.; Hossain, M. S. A.; Cheng, Z. X.; Liao, X. Z.; Ghorbani, S. R.; Yao, Q. W.; Kim, J. H.; Silver, T.

    2010-06-01

    We present the following results. (1) We introduce a doping source for MgB2, liquid SiCl4, which is free of C, that significantly enhances the irreversibility field (Hirr), the upper critical field (Hc2), and the critical current density (Jc), with little reduction in the critical temperature (Tc). (2) Although Si cannot be incorporated into the crystal lattice, a significant reduction in the a-axis lattice parameter was found, to the same extent as for carbon doping. (3) Based on first-principles calculations, we find that estimating the C concentration from the reduction in the a-lattice parameter is reliable for C-doped MgB2 polycrystalline samples prepared at high sintering temperatures, but not for those prepared at low sintering temperatures. Strain effects and magnesium deficiency might be responsible for the a-lattice reduction in non-C or some of the C-added MgB2 samples. (4) The SiCl4-doped MgB2 shows much higher Jc, with superior field dependence above 20 K, compared to undoped MgB2 and MgB2 doped with various carbon sources. (5) We introduce a parameter, RHH (Hc2/Hirr), which clearly reflects the degree of flux-pinning enhancement, providing guidance for further enhancing Jc. (6) Spatial variation in the charge-carrier mean free path was found to be responsible for the flux-pinning mechanism in SiCl4-treated MgB2 with large in-field Jc.

  3. Evaluation of light detector surface area for functional Near Infrared Spectroscopy.

    PubMed

    Wang, Lei; Ayaz, Hasan; Izzetoglu, Meltem; Onaral, Banu

    2017-10-01

    Functional Near Infrared Spectroscopy (fNIRS) is an emerging neuroimaging technique that utilizes near infrared light to detect cortical concentration changes of oxy-hemoglobin and deoxy-hemoglobin non-invasively. Using light sources and detectors over the scalp, multi-wavelength light intensities are recorded as time series and converted to concentration changes of hemoglobin via the modified Beer-Lambert law. Here, we describe a potential source of systematic error in the calculation of hemoglobin changes from light intensity measurements. Previous system characterization and analysis studies looked into various fNIRS parameters such as the type of light source, the number and selection of wavelengths, and the distance between light source and detector. In this study, we have analyzed the contribution of the light detector surface area to the overall outcome. Results from Monte Carlo based digital phantoms indicated that the choice of detector area is a critical system parameter for minimizing the error in concentration calculations. The findings here can guide the design of future fNIRS sensors. Copyright © 2017 Elsevier Ltd. All rights reserved.
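    The conversion step mentioned above, from light intensities to concentration changes via the modified Beer-Lambert law, can be sketched as follows. The extinction coefficients, source-detector separation, and differential pathlength factor (DPF) below are illustrative placeholders, not tabulated values.

```python
import numpy as np

# Illustrative extinction-coefficient matrix (rows: wavelengths, cols: [HbO, HbR]).
# Real analyses use tabulated spectra for the chosen wavelengths.
E = np.array([[1.5, 3.8],    # e.g. a ~730 nm channel (hypothetical numbers)
              [2.5, 1.8]])   # e.g. a ~850 nm channel
d = 3.0     # source-detector separation in cm (assumed)
dpf = 6.0   # differential pathlength factor (assumed)

def concentration_changes(I, I0):
    """Modified Beer-Lambert law: delta_OD = E @ delta_c * d * dpf, solved for delta_c."""
    delta_od = -np.log(I / I0)                # optical density change per wavelength
    return np.linalg.solve(E * (d * dpf), delta_od)

I0 = np.array([1.00, 1.00])   # baseline intensities at the two wavelengths
I = np.array([0.97, 0.95])    # measured intensities during activation
d_hbo, d_hbr = concentration_changes(I, I0)
```

A systematic bias in the measured intensities I (for instance, one tied to detector surface area) propagates directly into delta_OD and hence into both hemoglobin estimates.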

  4. Local Infrasound Variability Related to In Situ Atmospheric Observation

    NASA Astrophysics Data System (ADS)

    Kim, Keehoon; Rodgers, Arthur; Seastrand, Douglas

    2018-04-01

    Local infrasound is widely used to constrain source parameters of near-surface events (e.g., chemical explosions and volcanic eruptions). While atmospheric conditions are critical to infrasound propagation and source parameter inversion, local atmospheric variability is often ignored by assuming a homogeneous atmosphere, and its impact on source inversion uncertainty has never been accounted for, owing to the lack of a quantitative understanding of infrasound variability. We investigate atmospheric impacts on local infrasound propagation through repeated explosion experiments with a dense acoustic network and in situ atmospheric measurements. We perform full 3-D waveform simulations with local atmospheric data and a numerical weather forecast model to quantify atmosphere-dependent infrasound variability and to assess the advantages and limitations of local weather data and numerical weather models for sound propagation simulation. Numerical simulations with stochastic atmosphere models also showed a nonnegligible influence of atmospheric heterogeneity on infrasound amplitude, suggesting an important role for local turbulence.

  5. Assessing the Uncertainties on Seismic Source Parameters: Towards Realistic Estimates of Moment Tensor Determinations

    NASA Astrophysics Data System (ADS)

    Magnoni, F.; Scognamiglio, L.; Tinti, E.; Casarotti, E.

    2014-12-01

    Seismic moment tensor is one of the most important source parameters, defining the earthquake size and the style of the activated fault. Moment tensor catalogues are routinely used by geoscientists; however, few attempts have been made to assess the possible impact of moment magnitude uncertainties on their analyses. The 2012 May 20 Emilia mainshock is a representative event, since its moment magnitude (Mw) reported in the literature spans from 5.63 to 6.12. An uncertainty of ~0.5 magnitude units leaves the real size of the event in question. The uncertainty associated with this estimate can be critical for the inference of other seismological parameters, suggesting caution in seismic hazard assessment, Coulomb stress transfer determination, and other analyses where self-consistency is important. In this work, we focus on the variability of the moment tensor solution, highlighting the effects of four different velocity models, different types and ranges of filtering, and two different methodologies. Using a larger dataset, to better quantify the source parameter uncertainty, we also analyze the variability of the moment tensor solutions as a function of the number, epicentral distance, and azimuth of the stations used. We stress that the seismic moment estimated from moment tensor solutions, like the other kinematic source parameters, cannot be considered an absolute value; it should be reported with its uncertainties, in a reproducible framework with disclosed assumptions and explicit processing workflows.

  6. Pseudo-dynamic source characterization accounting for rough-fault effects

    NASA Astrophysics Data System (ADS)

    Galis, Martin; Thingbaijam, Kiran K. S.; Mai, P. Martin

    2016-04-01

    Broadband ground-motion simulations, ideally for frequencies up to ~10 Hz or higher, are important for earthquake engineering, for example in seismic hazard analysis for critical facilities. An issue with such simulations is the realistic generation of the radiated wave field in the desired frequency range. Numerical simulations of dynamic ruptures propagating on rough faults suggest that fault roughness is necessary for realistic high-frequency radiation. However, dynamic rupture simulations are too expensive for routine applications; therefore, simplified synthetic kinematic models are often used. These are usually based on rigorous statistical analysis of rupture models inferred by inversions of seismic and/or geodetic data. However, due to the limited resolution of the inversions, such models are valid only in the low-frequency range. In addition to the slip, parameters such as rupture-onset time, rise time, and source time functions are needed for a complete spatiotemporal characterization of the earthquake rupture, but these parameters are poorly resolved in source inversions. To obtain a physically consistent quantification of these parameters, we simulate and analyze spontaneous dynamic ruptures on rough faults. First, by analyzing the impact of fault roughness on the rupture and seismic radiation, we develop equivalent planar-fault kinematic analogues of the dynamic ruptures. Next, we investigate the spatial interdependencies between the source parameters to allow consistent modeling that emulates the observed behavior of dynamic ruptures, capturing the rough-fault effects. Based on these analyses, we formulate a framework for a pseudo-dynamic source model that is physically consistent with dynamic ruptures on rough faults.

  7. A flexible model of foraging by a honey bee colony: the effects of individual behaviour on foraging success.

    PubMed

    Cox, Melissa D; Myerscough, Mary R

    2003-07-21

    This paper develops and explores a model of foraging in honey bee colonies. The model may be applied to forage sources with various properties, and to colonies with different foraging-related parameters. In particular, we examine the effect of five foraging-related parameters on the foraging response and consequent nectar intake of a homogeneous colony. The parameters investigated affect different quantities critical to the foraging cycle--visit rate (affected by g), probability of dancing (mpd and bpd), duration of dancing (mcirc), or probability of abandonment (A). We show that one parameter, A, affects nectar intake in a nonlinear way. Further, we show that colonies with a midrange value of any foraging parameter perform better than the average of colonies with high- and low-range values, when profitable sources are available. Together these observations suggest that a heterogeneous colony, in which a range of parameter values are present, may perform better than a homogeneous colony. We modify the model to represent heterogeneous colonies and use it to show that the most important effect of heterogeneous foraging behaviour within the colony is to reduce the variance in the average quantity of nectar collected by heterogeneous colonies.

  8. Do Open Source LMSs Support Personalization? A Comparative Evaluation

    NASA Astrophysics Data System (ADS)

    Kerkiri, Tania; Paleologou, Angela-Maria

    A number of parameters that capture the LMSs' capabilities for content personalization are presented and substantiated. These parameters constitute critical criteria for an exhaustive investigation of the personalization capabilities of the most popular open source LMSs. Results are comparatively shown and commented upon, highlighting a course of conduct for implementing new personalization methodologies in these LMSs, aligned with their existing infrastructure, so as to maintain support for the numerous educational institutions entrusting a major part of their curricula to them. Meanwhile, new capabilities arise from a more efficient description of the existing resources, especially when organized into widely available repositories, leading to qualitatively advanced learner-oriented courses that would ideally meet the challenge of combining personification of demand and personalization of thematic content at once.

  9. [Real-time detection of quality of Chinese materia medica: strategy of NIR model evaluation].

    PubMed

    Wu, Zhi-sheng; Shi, Xin-yuan; Xu, Bing; Dai, Xing-xing; Qiao, Yan-jiang

    2015-07-01

    The definition of critical quality attributes of Chinese materia medica (CMM) was put forward based on a top-level design concept. Nowadays, coupled with the development of rapid analytical science, rapid assessment of the critical quality attributes of CMM, a secondary discipline branch of CMM, has been carried out for the first time. Taking near infrared (NIR) spectroscopy as an example, a rapid analytical technology applied in pharmaceutical processes over the past decade, the chemometric parameters used in NIR model evaluation are systematically reviewed. In view of the complexity of CMM and the demands of trace-component analysis, a multi-source information fusion strategy for NIR models was developed for assessing the critical quality attributes of CMM. The strategy provides a guideline for reliable NIR analysis of the critical quality attributes of CMM.

  10. Small lasers in flow cytometry.

    PubMed

    Telford, William G

    2004-01-01

    Laser technology has made tremendous advances in recent years, particularly in the area of diode and diode-pumped solid state sources. Flow cytometry has been a direct beneficiary of these advances, as these small, low-maintenance, inexpensive lasers with reasonable power outputs are integrated into flow cytometers. In this chapter we review the contribution and potential of solid-state lasers to flow cytometry, and show several examples of these novel sources integrated into production flow cytometers. Technical details and critical parameters for successful application of these lasers for biomedical analysis are reviewed.

  11. Ammonia concentrations at a site in Southern Scotland from 2 yr of continuous measurements

    NASA Astrophysics Data System (ADS)

    Burkhardt, J.; Sutton, M. A.; Milford, C.; Storeton-West, R. L.; Fowler, D.

    Atmospheric ammonia (NH3) concentrations were measured using a continuous-flow annular denuder over a period of 2 yr at a rural site near Edinburgh, Scotland. Meteorological parameters as well as sulphur dioxide (SO2) concentrations were also recorded. The overall arithmetic mean NH3 concentration was 1.4 μg m-3. Although an annual cycle with largest NH3 concentrations in summer was apparent for seasonal geometric mean concentrations, arithmetic mean concentrations were largest in the spring and autumn, indicating the increased importance of occasional high concentration events in these seasons. The NH3 concentrations were influenced by local sources as well as by background concentrations, dependent on wind direction, whereas SO2 geometric standard deviations indicated more distant sources. The daily cycle of NH3 and SO2 concentrations was dependent on wind speed (u). At u < 1 m s-1, NH3 concentrations were smallest and SO2 concentrations were largest around noon, whereas at u > 1 m s-1 this cycle was less pronounced for both gases and NH3 concentrations were largest around 1800 hours. These opposite diurnal cycles may be explained by the interaction of boundary layer mixing with local sources for NH3 and remote sources for SO2. Comparing the ammonia data with critical levels and critical loads shows that the critical level is not exceeded at this site over any averaging time. In contrast, the N critical load would probably be exceeded for moorland vegetation near this site, showing that the contribution of atmospheric NH3 to nitrogen deposition in the long term is a more significant issue than exceedance of critical levels.
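    The arithmetic-versus-geometric-mean point above, that occasional high-concentration episodes inflate the arithmetic mean far more than the geometric mean, is easy to verify numerically. The concentration values below are made up for illustration, not taken from the measurements.

```python
import math

# Hypothetical NH3 concentrations (ug m^-3) with one episodic peak.
conc = [0.5, 0.8, 1.0, 1.2, 0.9, 8.0, 0.7, 1.1]

arith = sum(conc) / len(conc)
geom = math.exp(sum(math.log(c) for c in conc) / len(conc))
# The single 8.0 episode dominates the arithmetic mean, while the geometric
# mean stays close to the typical ~1 ug m^-3 background level.
```

This is why a season with a few strong emission events can have the largest arithmetic mean even when its geometric mean is unremarkable.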

  12. Measures of GCM Performance as Functions of Model Parameters Affecting Clouds and Radiation

    NASA Astrophysics Data System (ADS)

    Jackson, C.; Mu, Q.; Sen, M.; Stoffa, P.

    2002-05-01

    This abstract is one of three related presentations at this meeting dealing with several issues surrounding optimal parameter and uncertainty estimation of model predictions of climate. Uncertainty in model predictions of climate depends in part on the uncertainty produced by model approximations or parameterizations of unresolved physics. Evaluating these uncertainties is computationally expensive because one needs to evaluate how arbitrary choices for any given combination of model parameters affect model performance. Because the computational effort grows exponentially with the number of parameters being investigated, it is important to choose parameters carefully. Evaluating whether a parameter is worth investigating depends on two considerations: 1) do reasonable choices of parameter values produce a large range in model response relative to observational uncertainty? and 2) does the model response depend non-linearly on various combinations of model parameters? We have decided to narrow our attention to parameters that affect clouds and radiation, as it is likely that these parameters will dominate uncertainties in model predictions of future climate. We present preliminary results of ~20 to 30 AMIPII-style climate model integrations using NCAR's CCM3.10 that show model performance as a function of individual parameters controlling 1) the critical relative humidity for cloud formation (RHMIN), and 2) the boundary layer critical Richardson number (RICR). We also explore various definitions of model performance that include some or all observational data sources (surface air temperature and pressure, meridional and zonal winds, clouds, long- and short-wave cloud forcings, etc.) and evaluate in a few select cases whether the model's response depends non-linearly on the parameter values we have selected.

  13. New insights on active fault geometries in the Mentawai region of Sumatra, Indonesia, from broadband waveform modeling of earthquake source parameters

    NASA Astrophysics Data System (ADS)

    WANG, X.; Wei, S.; Bradley, K. E.

    2017-12-01

    Global earthquake catalogs provide important first-order constraints on the geometries of active faults. However, the accuracies of both locations and focal mechanisms in these catalogs are typically insufficient to resolve detailed fault geometries. This issue is particularly critical in subduction zones, where most great earthquakes occur. The Slab 1.0 model (Hayes et al. 2012), which was derived from global earthquake catalogs, has smooth fault geometries and cannot adequately address local structural complexities that are critical for understanding earthquake rupture patterns, coseismic slip distributions, and geodetically monitored interseismic coupling. In this study, we conduct careful relocation and waveform modeling of earthquake source parameters to reveal fault geometries in greater detail. We take advantage of global data and conduct broadband waveform modeling for medium-size earthquakes (M>4.5) to refine their source parameters, which include locations and fault plane solutions. The refined source parameters can greatly improve the imaging of fault geometry (e.g., Wang et al., 2017). We apply these approaches to earthquakes recorded since 1990 in the Mentawai region offshore of central Sumatra. Our results indicate that the uncertainties of the horizontal location, depth, and dip angle estimates are as small as 5 km, 2 km, and 5 degrees, respectively. The refined catalog shows that the 2005 and 2009 "back-thrust" sequences in the Mentawai region actually occurred on a steeply landward-dipping fault, contradicting previous studies that inferred a seaward-dipping backthrust. We interpret these earthquakes as 'unsticking' of the Sumatran accretionary wedge along a backstop fault that separates accreted material of the wedge from the strong Sunda lithosphere, or as reactivation of an old normal fault buried beneath the forearc basin. We also find that the seismicity on the Sunda megathrust deviates in location from Slab 1.0 by up to 7 km, with along-strike variation. The refined megathrust geometry will improve our understanding of the tectonic setting in this region and place further constraints on rupture processes of the hazardous megathrust.

  14. Q-Φ criticality and microstructure of charged AdS black holes in f(R) gravity

    NASA Astrophysics Data System (ADS)

    Deng, Gao-Ming; Huang, Yong-Chang

    2017-12-01

    The phase transition and critical behaviors of charged AdS black holes in f(R) gravity with a conformally invariant Maxwell (CIM) source and constant curvature are further investigated. As a highlight, this research is carried out by employing new state parameters (T, Q, Φ) and contributes to a deeper understanding of the thermodynamics and phase structure of black holes. Our analyses show that the charged f(R)-CIM AdS black hole undergoes a first-order small-large black hole phase transition, and the critical behaviors qualitatively resemble those of a Van der Waals liquid-vapor system. However, differing from the case in Einstein's gravity, the phase structures of the black holes in f(R) theory exhibit an interesting dependence on the gravity modification parameters. Moreover, we adopt thermodynamic geometry to probe the black hole's microscopic properties. The results show that, on the one hand, both the Ruppeiner curvature and the heat capacity diverge exactly at the critical point; on the other hand, the f(R)-CIM AdS black hole possesses the properties of an ideal Fermi gas. Of special interest, we discover a microscopic similarity between the black holes and a Van der Waals liquid-vapor system.

  15. Method And Apparatus For Evaluation Of High Temperature Superconductors

    DOEpatents

    Fishman, Ilya M.; Kino, Gordon S.

    1996-11-12

    A technique for evaluation of high-Tc superconducting films and single crystals is based on measurement of temperature dependence of differential optical reflectivity of high-Tc materials. In the claimed method, specific parameters of the superconducting transition such as the critical temperature, anisotropy of the differential optical reflectivity response, and the part of the optical losses related to sample quality are measured. The apparatus for performing this technique includes pump and probe sources, cooling means for sweeping sample temperature across the critical temperature, and a polarization controller for controlling a state of polarization of a probe light beam.

  16. Identification of Watershed-scale Critical Source Areas Using Bayesian Maximum Entropy Spatiotemporal Analysis

    NASA Astrophysics Data System (ADS)

    Roostaee, M.; Deng, Z.

    2017-12-01

    The states' environmental agencies are required by the Clean Water Act to assess all waterbodies and evaluate potential sources of impairment. Spatial and temporal distributions of water quality parameters are critical in identifying Critical Source Areas (CSAs). However, owing to limited monetary resources and the large number of waterbodies, monitoring stations are typically sparse, with intermittent periods of data collection. Hence, scarcity of water quality data is a major obstacle to addressing sources of pollution through management strategies. In this study, the spatiotemporal Bayesian Maximum Entropy (BME) method is employed to model the inherent temporal and spatial variability of measured water quality indicators, such as dissolved oxygen (DO) concentration, for the Turkey Creek Watershed. Turkey Creek is located in northern Louisiana and has been on the 303(d) list for DO impairment since 2014 in Louisiana Water Quality Inventory Reports, owing to agricultural practices. The BME method has been shown to provide more accurate estimates than purely spatial analysis by incorporating the space/time distribution and the uncertainty in the available measured soft and hard data. This model is used to estimate DO concentration at unmonitored locations and times and subsequently to identify CSAs. The USDA's crop-specific land cover data layers for the watershed were then used to determine the practices/changes that led to low DO concentrations in the identified CSAs. Preliminary results revealed that cultivation of corn and soybean, as well as urban runoff, are the main sources contributing to low dissolved oxygen in the Turkey Creek Watershed.

  17. Rapid tsunami models and earthquake source parameters: Far-field and local applications

    USGS Publications Warehouse

    Geist, E.L.

    2005-01-01

    Rapid tsunami models have recently been developed to forecast far-field tsunami amplitudes from initial earthquake information (magnitude and hypocenter). Earthquake source parameters that directly affect tsunami generation as used in rapid tsunami models are examined, with particular attention to local versus far-field application of those models. First, the validity of the assumption that the focal mechanism and type of faulting for tsunamigenic earthquakes are similar in a given region can be evaluated by measuring the seismic consistency of past events. Second, the assumption that slip occurs uniformly over an area of rupture will most often underestimate the amplitude and leading-wave steepness of the local tsunami. Third, large-magnitude earthquakes will sometimes exhibit a high degree of spatial heterogeneity, such that tsunami sources are composed of distinct sub-events that can cause constructive and destructive interference in the wavefield away from the source. Using a stochastic source model, it is demonstrated that local tsunami amplitudes vary by as much as a factor of two or more, depending on the local bathymetry. If other earthquake source parameters such as focal depth or shear modulus are varied in addition to the slip distribution patterns, even greater uncertainty in local tsunami amplitude is expected for earthquakes of similar magnitude. Because of the short amount of time available to issue local warnings, and because of the high degree of uncertainty associated with local, model-based forecasts suggested by this study, direct wave height observations and a strong public education and preparedness program are critical for regions near suspected tsunami sources.

  18. Density estimation in tiger populations: combining information for strong inference

    USGS Publications Warehouse

    Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.

    2012-01-01

    A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km2 [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km2 and fecal DNA, 6.65 ± 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.
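    The abstract above reports a combined-information estimate (8.5 ± 1.95) that is more precise than either single-source estimate. The paper fits a joint spatial capture-recapture model rather than pooling summary statistics, but the textbook inverse-variance (precision) weighting of two independent estimates shows the same qualitative effect, using the numbers quoted in the abstract:

```python
def combine(mu1, sd1, mu2, sd2):
    """Inverse-variance weighted combination of two independent
    estimates. Illustrative only: the paper combines the raw
    photographic and fecal-DNA data in one joint model instead of
    pooling the two summary estimates."""
    w1, w2 = 1.0 / sd1**2, 1.0 / sd2**2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    sd = (w1 + w2) ** -0.5
    return mu, sd

# Photographic and fecal-DNA estimates from the abstract (tigers/100 km2)
mu, sd = combine(12.02, 3.02, 6.65, 2.37)
```

    The pooled standard deviation is necessarily smaller than either input standard deviation, which is the sense in which accounting for both data sources "significantly improves" the density estimate.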

  19. Density estimation in tiger populations: combining information for strong inference.

    PubMed

    Gopalaswamy, Arjun M; Royle, J Andrew; Delampady, Mohan; Nichols, James D; Karanth, K Ullas; Macdonald, David W

    2012-07-01

    A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture-recapture data. The model, which combined information, provided the most precise estimate of density (8.5 +/- 1.95 tigers/100 km2 [posterior mean +/- SD]) relative to a model that utilized only one data source (photographic, 12.02 +/- 3.02 tigers/100 km2 and fecal DNA, 6.65 +/- 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.

  20. Mean-field models for heterogeneous networks of two-dimensional integrate and fire neurons.

    PubMed

    Nicola, Wilten; Campbell, Sue Ann

    2013-01-01

    We analytically derive mean-field models for all-to-all coupled networks of heterogeneous, adapting, two-dimensional integrate and fire neurons. The class of models we consider includes the Izhikevich, adaptive exponential and quartic integrate and fire models. The heterogeneity in the parameters leads to different moment closure assumptions that can be made in the derivation of the mean-field model from the population density equation for the large network. Three different moment closure assumptions lead to three different mean-field systems. These systems can be used for distinct purposes such as bifurcation analysis of the large networks, prediction of steady state firing rate distributions, parameter estimation for actual neurons and faster exploration of the parameter space. We use the mean-field systems to analyze adaptation induced bursting under realistic sources of heterogeneity in multiple parameters. Our analysis demonstrates that the presence of heterogeneity causes the Hopf bifurcation associated with the emergence of bursting to change from sub-critical to super-critical. This is confirmed with numerical simulations of the full network for biologically reasonable parameter values. This change decreases the plausibility of adaptation being the cause of bursting in hippocampal area CA3, an area with a sizable population of heavily coupled, strongly adapting neurons.
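    One concrete member of the two-dimensional integrate-and-fire class analyzed above is the Izhikevich model, with a voltage-like variable v and an adaptation variable u. The sketch below simulates a single neuron with a plain Euler scheme; the parameter values are the standard regular-spiking set from the Izhikevich model literature, not values from this paper, and a full network plus moment closure would be needed to reproduce the mean-field analysis.

```python
def izhikevich(I=10.0, T=500.0, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Euler simulation of one Izhikevich neuron for T ms.
    v: fast voltage-like variable, u: slow adaptation variable.
    Returns the number of spikes fired under constant drive I."""
    v, u = -65.0, b * -65.0
    spikes = 0
    for _ in range(int(T / dt)):
        v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:            # spike: reset v, increment adaptation u
            v, u = c, u + d
            spikes += 1
    return spikes

spikes_driven = izhikevich(I=10.0)   # suprathreshold drive: fires
spikes_rest = izhikevich(I=0.0)      # no drive: settles to rest
```

    The adaptation jump `d` at each spike is the mechanism whose heterogeneity across a population drives the bursting transitions analyzed in the paper.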

  1. Mean-field models for heterogeneous networks of two-dimensional integrate and fire neurons

    PubMed Central

    Nicola, Wilten; Campbell, Sue Ann

    2013-01-01

    We analytically derive mean-field models for all-to-all coupled networks of heterogeneous, adapting, two-dimensional integrate and fire neurons. The class of models we consider includes the Izhikevich, adaptive exponential and quartic integrate and fire models. The heterogeneity in the parameters leads to different moment closure assumptions that can be made in the derivation of the mean-field model from the population density equation for the large network. Three different moment closure assumptions lead to three different mean-field systems. These systems can be used for distinct purposes such as bifurcation analysis of the large networks, prediction of steady state firing rate distributions, parameter estimation for actual neurons and faster exploration of the parameter space. We use the mean-field systems to analyze adaptation induced bursting under realistic sources of heterogeneity in multiple parameters. Our analysis demonstrates that the presence of heterogeneity causes the Hopf bifurcation associated with the emergence of bursting to change from sub-critical to super-critical. This is confirmed with numerical simulations of the full network for biologically reasonable parameter values. This change decreases the plausibility of adaptation being the cause of bursting in hippocampal area CA3, an area with a sizable population of heavily coupled, strongly adapting neurons. PMID:24416013

  2. Analysis of airframe/engine interactions - An integrated control perspective

    NASA Technical Reports Server (NTRS)

    Schmidt, David K.; Schierman, John D.; Garg, Sanjay

    1990-01-01

    Techniques for the analysis of the dynamic interactions between airframe and engine dynamical systems are presented. Critical coupling terms are developed that determine the significance of these interactions with regard to the closed-loop stability and performance of the feedback systems. A conceptual model is first used to indicate the potential sources of the coupling, how the coupling manifests itself, and how the magnitudes of these critical coupling terms are used to quantify the effects of the airframe/engine interactions. A case study is also presented involving an unstable airframe with thrust vectoring for attitude control. It is shown for this system with classical, decentralized control laws that there is little airframe/engine interaction, and that the stability and performance with those control laws are not affected. Implications of parameter uncertainty in the coupling dynamics are also discussed, and the effects of these parameter variations are demonstrated to be small for this vehicle configuration.

  3. Wildland fire emissions, carbon, and climate: wildland fire detection and burned area in the United States

    Treesearch

    Wei Min Hao; Narasimhan K. Larkin

    2014-01-01

    Biomass burning is a major source of greenhouse gases, aerosols, black carbon, and atmospheric pollutants that affect regional and global climate and air quality. The spatial and temporal extent of fires and the size of burned areas are critical parameters in the estimation of fire emissions. Tremendous efforts have been made in the past 12 years to characterize the...

  4. Localization of transient gravitational wave sources: beyond triangulation

    NASA Astrophysics Data System (ADS)

    Fairhurst, Stephen

    2018-05-01

    Rapid, accurate localization of gravitational wave transient events has proved critical to successful electromagnetic follow-up. In previous papers we have shown that localization estimates can be obtained through triangulation based on timing information at the detector sites. In practice, detailed parameter estimation routines use additional information and provide better localization than is possible from timing information alone. In this paper, we extend the timing-based localization approximation to incorporate consistency of observed signals with the two gravitational wave polarizations, and with an astrophysically motivated distribution of sources. Both provide significant improvements to source localization, allowing many sources to be restricted to a single sky region, with an area 40% smaller than predicted by timing information alone. Furthermore, we show that the vast majority of sources will be reconstructed to be circularly polarized or, equivalently, indistinguishable from face-on.
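    The timing-based baseline that the paper improves upon can be illustrated with the standard plane-wave relations: the inter-detector arrival-time difference fixes the angle to the baseline, and the timing uncertainty sets the width of the resulting sky ring. The baseline length, source angle, and timing error below are hypothetical round numbers, not values from the paper.

```python
import numpy as np

C = 299792.458  # speed of light, km/s

def delay(baseline_km, cos_theta):
    """Arrival-time difference (s) between two detectors for a plane
    wave arriving at angle theta to the baseline."""
    return baseline_km * cos_theta / C

def ring_halfwidth(baseline_km, cos_theta, sigma_t):
    """Angular half-width (rad) of the triangulation ring implied by a
    timing uncertainty sigma_t: the basic timing-only estimate that the
    paper sharpens with polarization and source-distribution priors."""
    sin_theta = np.sqrt(1.0 - cos_theta**2)
    return C * sigma_t / (baseline_km * sin_theta)

# Hypothetical: 3000 km baseline, source 60 deg off-axis, 0.1 ms timing error
dt = delay(3000.0, np.cos(np.radians(60.0)))
w = ring_halfwidth(3000.0, np.cos(np.radians(60.0)), 1e-4)
```

    The inverse dependence on baseline and timing precision is why adding detectors and sharpening timing estimates shrinks sky areas so effectively.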

  5. C-Depth Method to Determine Diffusion Coefficient and Partition Coefficient of PCB in Building Materials.

    PubMed

    Liu, Cong; Kolarik, Barbara; Gunnarsen, Lars; Zhang, Yinping

    2015-10-20

    Polychlorinated biphenyls (PCBs) have been found to be persistent in the environment and possibly harmful. Many buildings are characterized by high PCB concentrations. Knowledge about partitioning between primary sources and building materials is critical for exposure assessment and practical remediation of PCB contamination. This study develops a C-depth method to determine the diffusion coefficient (D) and partition coefficient (K), two key parameters governing the partitioning process. For concrete, a primary material studied here, relative standard deviations of results among five data sets are 5-22% for K and 42-66% for D. Compared with existing methods, the C-depth method overcomes the inability of nonlinear regression to yield unique estimates and does not require assumed correlations for D and K among congeners. Comparison with a more sophisticated two-term approach implies significant uncertainty for D, and smaller uncertainty for K. However, considering the uncertainties associated with sampling and chemical analysis, and the impact of environmental factors, the results are acceptable for engineering applications. This was supported by good agreement between model prediction and measurement. Sensitivity analysis indicated that the effective diffusion distance, the contact time of materials with primary sources, and the depth of measured concentrations are critical for determining D, while the PCB concentration in primary sources is critical for K.
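    The forward model underlying depth-profile methods of this kind is the classical solution for 1-D diffusion from a constant surface source into a semi-infinite medium, in which D sets how far the profile penetrates and K sets its amplitude. The sketch below is that standard textbook solution, not the paper's exact formulation, and every numerical value (exposure time, D, K, air concentration) is invented for illustration.

```python
from math import erfc, sqrt

def pcb_profile(x_m, t_s, D, K, c_air):
    """Concentration in the material at depth x (m) after exposure
    time t (s), for diffusion from a constant air-side source into a
    semi-infinite slab:  C(x,t) = K * c_air * erfc(x / (2*sqrt(D*t))).
    D (m2/s) controls penetration depth; K controls surface level."""
    return K * c_air * erfc(x_m / (2.0 * sqrt(D * t_s)))

# Hypothetical: 40 years exposure, D = 1e-13 m2/s, K = 1e5, air conc 100 ng/m3
t = 40 * 365.25 * 24 * 3600.0
surface = pcb_profile(0.0, t, 1e-13, 1e5, 100.0)   # K * c_air at the surface
deep = pcb_profile(0.05, t, 1e-13, 1e5, 100.0)     # decays with depth
```

    Measuring concentrations at several depths and fitting this shape is what lets a C-depth style analysis separate K (surface amplitude) from D (decay length), whereas a single bulk measurement confounds the two.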

  6. Resonance of a fluid-driven crack: radiation properties and implications for the source of long-period events and harmonic tremor.

    USGS Publications Warehouse

    Chouet, B.

    1988-01-01

    A dynamic source model is presented in which a 3-D crack containing a viscous compressible fluid is excited into resonance by an impulsive pressure transient applied over a small area ΔS of the crack surface. The crack excitation depends critically on two dimensionless parameters called the crack stiffness and the viscous damping loss. According to the model, the long-period event and harmonic tremor share the same source but differ in the boundary conditions for fluid flow and in the triggering mechanism setting up the resonance of the source, the former being viewed as the impulse response of the tremor-generating system and the latter representing the excitation due to more complex forcing functions. -from Author

  7. Self-organized criticality in a two-dimensional cellular automaton model of a magnetic flux tube with background flow

    NASA Astrophysics Data System (ADS)

    Dănilă, B.; Harko, T.; Mocanu, G.

    2015-11-01

    We investigate the transition to self-organized criticality in a two-dimensional model of a flux tube with a background flow. The magnetic induction equation, represented by a partial differential equation with a stochastic source term, is discretized and implemented on a two-dimensional cellular automaton. The energy released by the automaton during one relaxation event is the magnetic energy. As a result of the simulations, we obtain the time evolution of the energy release, of the system control parameter, of the event lifetime distribution, and of the event size distribution, respectively, and we establish that a self-organized critical state is indeed reached by the system. Moreover, energetic initial impulses in the magnetohydrodynamic flow can lead to one-dimensional signatures in the magnetic two-dimensional system, once the self-organized critical regime is established. The application of the model to the study of gamma-ray bursts (GRBs) is briefly considered, and it is shown that some astrophysical parameters of the bursts, such as the light curves, the maximum released energy, and the number of peaks in the light curve, can be reproduced and explained, at least on a qualitative level, by working in a framework in which the system settles into a self-organized critical state via magnetic reconnection processes in the magnetized GRB fireball.
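    The mechanism described above — slow stochastic driving, a local instability threshold, and avalanche-like relaxation — is shared with the classic Bak-Tang-Wiesenfeld sandpile, which makes a compact stand-in for the flux-tube automaton (the real model evolves magnetic field via the induction equation, not grain counts). Grid size, drive length, and threshold below are arbitrary demonstration values.

```python
import numpy as np

def sandpile(n=20, steps=2000, seed=0):
    """Abelian sandpile on an n x n grid: drop grains at random sites,
    topple any site holding >= 4 grains onto its four neighbours
    (grains fall off the edges).  Returns the avalanche size (number
    of topplings) triggered by each drop -- the quantity whose
    distribution becomes a power law in the self-organized critical
    state."""
    rng = np.random.default_rng(seed)
    z = np.zeros((n, n), dtype=int)
    sizes = []
    for _ in range(steps):
        i, j = rng.integers(0, n, size=2)
        z[i, j] += 1                      # slow stochastic driving
        size = 0
        while True:                       # fast avalanche relaxation
            over = np.argwhere(z >= 4)
            if len(over) == 0:
                break
            for a, b in over:
                z[a, b] -= 4
                size += 1
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ia, jb = a + da, b + db
                    if 0 <= ia < n and 0 <= jb < n:
                        z[ia, jb] += 1    # edge sites leak grains out
        sizes.append(size)
    return sizes

sizes = sandpile()
```

    The separation between slow driving and fast relaxation, plus dissipation only at the boundary, is what pins the system at its critical point without tuning any control parameter.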

  8. Modeling Source Water Threshold Exceedances with Extreme Value Theory

    NASA Astrophysics Data System (ADS)

    Rajagopalan, B.; Samson, C.; Summers, R. S.

    2016-12-01

    Variability in surface water quality, influenced by seasonal and long-term climate changes, can impact drinking water quality and treatment. In particular, temperature and precipitation can impact surface water quality directly or through their influence on streamflow and dilution capacity. Furthermore, they also impact land surface factors, such as soil moisture and vegetation, which can in turn affect surface water quality, in particular the levels of organic matter in surface waters, which are of concern. All of these effects will be exacerbated by anthropogenic climate change. While some source water quality parameters, particularly Total Organic Carbon (TOC) and bromide concentrations, are not directly regulated for drinking water, these parameters are precursors to the formation of disinfection byproducts (DBPs), which are regulated in drinking water distribution systems. These DBPs form when a disinfectant, added to the water to protect public health against microbial pathogens, most commonly chlorine, reacts with dissolved organic matter (DOM), measured as TOC or dissolved organic carbon (DOC), and inorganic precursor materials, such as bromide. Therefore, understanding and modeling the extremes of TOC and bromide concentrations is of critical interest for drinking water utilities. In this study we develop nonstationary extreme value analysis models for threshold exceedances of source water quality parameters, specifically TOC and bromide concentrations. The threshold exceedances are modeled with a Generalized Pareto Distribution (GPD) whose parameters vary as a function of climate and land surface variables, thus enabling the temporal nonstationarity to be captured. We apply these models to threshold exceedances of source water TOC and bromide concentrations at two locations with different climates and find very good performance.
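    A stationary simplification of the GPD modeling step can be sketched with the method-of-moments estimators for the shape (xi) and scale (sigma), which follow from the GPD mean and variance. The study instead lets these parameters vary with climate and land-surface covariates and would typically fit by maximum likelihood; the data below are synthetic exceedances, not TOC or bromide measurements.

```python
import numpy as np

def gpd_fit_moments(exceedances):
    """Method-of-moments estimates for the Generalized Pareto
    Distribution fitted to threshold exceedances y:
        xi    = (1 - mean^2/var) / 2
        sigma = mean * (mean^2/var + 1) / 2
    These invert the GPD relations mean = sigma/(1-xi) and
    var = sigma^2 / ((1-xi)^2 (1-2 xi))."""
    y = np.asarray(exceedances, dtype=float)
    m, v = y.mean(), y.var(ddof=1)
    xi = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (m * m / v + 1.0)
    return xi, sigma

# Synthetic exceedances from an exponential law (a GPD with xi = 0, sigma = 2)
rng = np.random.default_rng(1)
y = rng.exponential(scale=2.0, size=5000)
xi, sigma = gpd_fit_moments(y)
```

    Recovering xi close to 0 and sigma close to 2 on this synthetic sample is the sanity check; the nonstationary version replaces the two constants with regressions on covariates.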

  9. Recording and quantification of ultrasonic echolocation clicks from free-ranging toothed whales

    NASA Astrophysics Data System (ADS)

    Madsen, P. T.; Wahlberg, M.

    2007-08-01

    Toothed whales produce short, ultrasonic clicks of high directionality and source level to probe their environment acoustically. This process, termed echolocation, is in large part governed by the properties of the emitted clicks. Therefore, derivation of click source parameters from free-ranging animals is of increasing importance for understanding both how toothed whales use echolocation in the wild and how they may be monitored acoustically. This paper addresses how source parameters can be derived from free-ranging toothed whales in the wild using calibrated multi-hydrophone arrays and digital recorders. We outline the properties required of hydrophones, amplifiers, and analog-to-digital converters, and discuss the problems of recording echolocation clicks on the axis of a directional sound beam. For accurate localization, the hydrophone array apertures must be adapted and scaled to the behavior of, and the range to, the clicking animal, and precise information on hydrophone locations is critical. We provide examples of localization routines and outline sources of error that lead to uncertainties in localizing clicking animals in time and space. Furthermore, we explore approaches to time series analysis of discrete versions of toothed whale clicks that are meaningful in a biosonar context.

  10. Power flow analysis of two coupled plates with arbitrary characteristics

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.

    1990-01-01

    In the last progress report (Feb. 1988), some results were presented for a parametric analysis of the vibrational power flow between two coupled plate structures using the mobility power flow approach. The results reported then were for changes in the structural parameters of the two plates, but with the two plates identical in their structural characteristics. Herein, this limitation is removed. The vibrational power input and output are evaluated for different values of the structural damping loss factor for the source and receiver plates. In performing this parametric analysis, the source plate characteristics are kept constant. The purpose of this parametric analysis is to determine the most critical parameters that influence the flow of vibrational power from the source plate to the receiver plate. In the case of the structural damping parametric analysis, the influence of changes in the source plate damping is also investigated. The results obtained from the mobility power flow approach are compared to results obtained using a statistical energy analysis (SEA) approach. The significance of the power flow results is discussed, together with a comparison between the SEA results and the mobility power flow results. Furthermore, the benefits derived from using the mobility power flow approach are examined.

  11. A review of ADM1 extensions, applications, and analysis: 2002-2005.

    PubMed

    Batstone, D J; Keller, J; Steyer, J P

    2006-01-01

    Since publication of the Scientific and Technical Report (STR) describing the ADM1, the model has been extensively used, and analysed in both academic and practical applications. Adoption of the ADM1 in popular systems analysis tools such as the new wastewater benchmark (BSM2), and its use as a virtual industrial system can stimulate modelling of anaerobic processes by researchers and practitioners outside the core expertise of anaerobic processes. It has been used as a default structural element that allows researchers to concentrate on new extensions such as sulfate reduction, and new applications such as distributed parameter modelling of biofilms. The key limitations for anaerobic modelling originally identified in the STR were: (i) regulation of products from glucose fermentation, (ii) parameter values, and variability, and (iii) specific extensions. Parameter analysis has been widespread, and some detailed extensions have been developed (e.g., sulfate reduction). A verified extension that describes regulation of products from glucose fermentation is still limited, though there are promising fundamental approaches. This is a critical issue, given the current interest in renewable hydrogen production from carbohydrate-type waste. Critical analysis of the model has mainly focused on model structure reduction, hydrogen inhibition functions, and the default parameter set recommended in the STR. This default parameter set has largely been verified as a reasonable compromise, especially for wastewater sludge digestion. One criticism of note is that the ADM1 stoichiometry focuses on catabolism rather than anabolism. This means that inorganic carbon can be used unrealistically as a carbon source during some anabolic reactions. Advances and novel applications have also been made in the present issue, which focuses on the ADM1. These papers also explore a number of novel areas not originally envisaged in this review.

  12. Understanding critical factors for the quality and shelf-life of MAP fresh meat: a review.

    PubMed

    Singh, Preeti; Wani, Ali Abas; Saengerlaub, Sven; Langowski, Horst-Christian

    2011-02-01

    Due to increased demands for greater stringency in relation to hygiene and safety issues associated with fresh food products, coupled with ever-increasing demands by retailers for cost-effective extensions to product shelf-lives and the requirement to meet consumer expectations in relation to convenience and quality, the food packaging industry has rapidly developed to meet and satisfy expectations. One of the areas of research that has shown promise, and had success, is modified atmosphere packaging (MAP). The success of MAP for fresh meat depends on many factors, including good initial product quality, good hygiene at the source plants, correct packaging material selection, the appropriate gas mix for the product, reliable packaging equipment, and maintenance of controlled temperatures and humidity levels. Advances in plastic materials and equipment have propelled advances in MAP, but other technological and logistical considerations are needed for successful MAP systems for raw chilled meat. Although several parameters critical for the quality of MA-packed meat have been studied and each found to be crucial, an understanding of the interactions between the parameters is needed. This review was undertaken to present the most comprehensive and current overview of the widely available but scattered information about the various integrated critical factors responsible for the quality and shelf-life of MA-packed meat, with the aim of stimulating further research to optimize different quality parameters.

  13. Simulation verification techniques study. Subsystem simulation validation techniques

    NASA Technical Reports Server (NTRS)

    Duncan, L. M.; Reddell, J. P.; Schoonmaker, P. B.

    1974-01-01

    Techniques for validation of software modules which simulate spacecraft onboard systems are discussed. An overview of the simulation software hierarchy for a shuttle mission simulator is provided. A set of guidelines for the identification of subsystem/module performance parameters and critical performance parameters is presented. Various sources of reference data to serve as standards of performance for simulation validation are identified. Environment, crew station, vehicle configuration, and vehicle dynamics simulation software are briefly discussed from the point of view of their interfaces with subsystem simulation modules. A detailed presentation of results in the area of vehicle subsystems simulation modules is included. References, conclusions, and recommendations are also given.

  14. A primer on criticality safety

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, David A.; Cournoyer, Michael E.; Merhege, James F.

    Criticality is the state of a nuclear chain-reacting medium in which the chain reaction is just self-sustaining (or critical). Criticality depends on nine interrelated parameters. Criticality safety controls are designed to constrain these parameters so as to minimize fissions and maximize neutron leakage and absorption in other materials, making criticality more difficult or impossible to achieve. The consequences of criticality accidents are discussed, the nine interrelated parameters that combine to affect criticality are described, and the criticality safety controls used to minimize the likelihood of a criticality accident are presented.

  15. How sensitive is earthquake ground motion to source parameters? Insights from a numerical study in the Mygdonian basin

    NASA Astrophysics Data System (ADS)

    Chaljub, Emmanuel; Maufroy, Emeline; deMartin, Florent; Hollender, Fabrice; Guyonnet-Benaize, Cédric; Manakou, Maria; Savvaidis, Alexandros; Kiratzi, Anastasia; Roumelioti, Zaferia; Theodoulidis, Nikos

    2014-05-01

    Understanding the origin of the variability of earthquake ground motion is critical for seismic hazard assessment. Here we present the results of a numerical analysis of the sensitivity of earthquake ground motion to seismic source parameters, focusing on the Mygdonian basin near Thessaloniki (Greece). We use an extended model of the basin (65 km [EW] x 50 km [NS]) developed during the Euroseistest Verification and Validation Project. The numerical simulations are performed with two independent codes, both implementing the Spectral Element Method. They rely on a robust, semi-automated mesh design strategy together with a simple homogenization procedure to define a smooth velocity model of the basin. Our simulations are accurate up to 4 Hz and include the effects of surface topography and of intrinsic attenuation. Two kinds of simulations are performed: (1) direct simulations of the surface ground motion for real regional events having various back-azimuths with respect to the center of the basin; (2) reciprocity-based calculations where the ground motion due to 980 different seismic sources is computed at a few stations in the basin. In the reciprocity-based calculations, we consider epicentral distances varying from 2.5 km to 40 km and source depths from 1 km to 15 km, and we span the range of possible back-azimuths with a 10-degree bin. We will present results showing (1) the sensitivity of ground motion parameters to the location and focal mechanism of the seismic sources; and (2) the variability of the amplification caused by site effects, as measured by standard spectral ratios, with respect to the source characteristics.

  16. A primer on criticality safety

    DOE PAGES

    Costa, David A.; Cournoyer, Michael E.; Merhege, James F.; ...

    2017-05-01

    Criticality is the state of a nuclear chain-reacting medium in which the chain reaction is just self-sustaining (or critical). Criticality depends on nine interrelated parameters. Criticality safety controls are designed to constrain these parameters so as to minimize fissions and maximize neutron leakage and absorption in other materials, making criticality more difficult or impossible to achieve. The consequences of criticality accidents are discussed, the nine interrelated parameters that combine to affect criticality are described, and the criticality safety controls used to minimize the likelihood of a criticality accident are presented.
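    The balance between fission production, absorption, and leakage that the primer describes is conventionally summarized by the neutron multiplication factor k_eff; the standard six-factor formula is one common way to decompose it (the primer itself enumerates nine safety parameters such as mass, geometry, and moderation, which is a different breakdown). The factor values below are purely illustrative, not from the report.

```python
def k_effective(eta, f, p, eps, P_fnl, P_tnl):
    """Six-factor formula for the neutron multiplication factor:
        k_eff = eta * f * p * eps * P_fnl * P_tnl
    eta: neutrons per thermal absorption in fuel; f: thermal
    utilization; p: resonance escape probability; eps: fast fission
    factor; P_fnl, P_tnl: fast/thermal non-leakage probabilities.
    The system is critical at k_eff = 1; safety controls keep
    k_eff < 1 by promoting leakage and parasitic absorption."""
    return eta * f * p * eps * P_fnl * P_tnl

# Illustrative values for a well-controlled (subcritical) configuration
k = k_effective(eta=1.65, f=0.70, p=0.85, eps=1.02, P_fnl=0.96, P_tnl=0.97)
subcritical = k < 1.0
```

    Increasing leakage (lowering the non-leakage probabilities) or adding absorbers (lowering f) are exactly the levers that criticality safety controls pull to hold k_eff below one.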

  17. Operational resilience: concepts, design and analysis

    NASA Astrophysics Data System (ADS)

    Ganin, Alexander A.; Massaro, Emanuele; Gutfraind, Alexander; Steen, Nicolas; Keisler, Jeffrey M.; Kott, Alexander; Mangoubi, Rami; Linkov, Igor

    2016-01-01

    Building resilience into today’s complex infrastructures is critical to the daily functioning of society and its ability to withstand and recover from natural disasters, epidemics, and cyber-threats. This study proposes quantitative measures that capture and implement the definition of engineering resilience advanced by the National Academy of Sciences. The approach is applicable across physical, information, and social domains. It evaluates the critical functionality, defined as a performance function of time set by the stakeholders. Critical functionality is a source of valuable information, such as the integrated system resilience over a time interval, and its robustness. The paper demonstrates the formulation on two classes of models: 1) multi-level directed acyclic graphs, and 2) interdependent coupled networks. For both models synthetic case studies are used to explore trends. For the first class, the approach is also applied to the Linux operating system. Results indicate that desired resilience and robustness levels are achievable by trading off different design parameters, such as redundancy, node recovery time, and backup supply available. The nonlinear relationship between network parameters and resilience levels confirms the utility of the proposed approach, which is of benefit to analysts and designers of complex systems and networks.
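    One of the quantitative measures described above — integrated system resilience over a time interval — can be sketched as the normalized area under the critical functionality curve K(t). The disruption/recovery curve below is invented for illustration; the paper's models generate K(t) from network simulations rather than taking it as given.

```python
import numpy as np

def resilience(t, K, T=None):
    """Normalized integral of critical functionality K(t) over [0, T]
    (trapezoid rule).  K = 1 means full functionality, so the result
    is 1 for an undisrupted system and drops toward 0 as disruptions
    deepen or recovery slows."""
    t = np.asarray(t, dtype=float)
    K = np.asarray(K, dtype=float)
    if T is None:
        T = t[-1] - t[0]
    area = np.sum((K[1:] + K[:-1]) * 0.5 * np.diff(t))
    return float(area / T)

# Illustrative curve: sudden disruption at t=2, linear recovery by t=6
t = np.array([0.0, 2.0, 2.0, 4.0, 6.0, 10.0])
K = np.array([1.0, 1.0, 0.4, 0.6, 1.0, 1.0])
R = resilience(t, K)
```

    The same curve also exposes the trade-offs the paper reports: faster node recovery raises K(t) sooner, and redundancy or backup supply raises the post-disruption floor, both of which increase the integral.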

  18. Operational resilience: concepts, design and analysis

    PubMed Central

    Ganin, Alexander A.; Massaro, Emanuele; Gutfraind, Alexander; Steen, Nicolas; Keisler, Jeffrey M.; Kott, Alexander; Mangoubi, Rami; Linkov, Igor

    2016-01-01

    Building resilience into today’s complex infrastructures is critical to the daily functioning of society and its ability to withstand and recover from natural disasters, epidemics, and cyber-threats. This study proposes quantitative measures that capture and implement the definition of engineering resilience advanced by the National Academy of Sciences. The approach is applicable across physical, information, and social domains. It evaluates the critical functionality, defined as a performance function of time set by the stakeholders. Critical functionality is a source of valuable information, such as the integrated system resilience over a time interval, and its robustness. The paper demonstrates the formulation on two classes of models: 1) multi-level directed acyclic graphs, and 2) interdependent coupled networks. For both models synthetic case studies are used to explore trends. For the first class, the approach is also applied to the Linux operating system. Results indicate that desired resilience and robustness levels are achievable by trading off different design parameters, such as redundancy, node recovery time, and backup supply available. The nonlinear relationship between network parameters and resilience levels confirms the utility of the proposed approach, which is of benefit to analysts and designers of complex systems and networks. PMID:26782180

  19. Operational resilience: concepts, design and analysis.

    PubMed

    Ganin, Alexander A; Massaro, Emanuele; Gutfraind, Alexander; Steen, Nicolas; Keisler, Jeffrey M; Kott, Alexander; Mangoubi, Rami; Linkov, Igor

    2016-01-19

    Building resilience into today's complex infrastructures is critical to the daily functioning of society and its ability to withstand and recover from natural disasters, epidemics, and cyber-threats. This study proposes quantitative measures that capture and implement the definition of engineering resilience advanced by the National Academy of Sciences. The approach is applicable across physical, information, and social domains. It evaluates the critical functionality, defined as a performance function of time set by the stakeholders. Critical functionality is a source of valuable information, such as the integrated system resilience over a time interval, and its robustness. The paper demonstrates the formulation on two classes of models: 1) multi-level directed acyclic graphs, and 2) interdependent coupled networks. For both models synthetic case studies are used to explore trends. For the first class, the approach is also applied to the Linux operating system. Results indicate that desired resilience and robustness levels are achievable by trading off different design parameters, such as redundancy, node recovery time, and backup supply available. The nonlinear relationship between network parameters and resilience levels confirms the utility of the proposed approach, which is of benefit to analysts and designers of complex systems and networks.
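
The time-integrated view of critical functionality described in this abstract can be sketched numerically. The disruption profile and values below are hypothetical; resilience is taken as the time-averaged critical functionality over the control interval, one common reading of the NAS-style definition used here:

```python
import numpy as np

def resilience(t, K):
    """Time-averaged critical functionality K(t) over [t[0], t[-1]].

    t : sample times; K : normalized critical functionality in [0, 1].
    Integrates the piecewise-linear curve with the trapezoidal rule.
    """
    area = np.sum(0.5 * (K[1:] + K[:-1]) * np.diff(t))
    return area / (t[-1] - t[0])

# Hypothetical disruption profile: full function, sudden drop to 40%,
# a plateau, then linear recovery back to full function.
t = np.array([0.0, 1.0, 1.0, 3.0, 5.0])
K = np.array([1.0, 1.0, 0.4, 0.4, 1.0])
R = resilience(t, K)
```

Faster node recovery or a larger backup supply reshapes K(t) and raises R, which is the trade-off the paper explores.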

  20. Comparative study of surrogate models for groundwater contamination source identification at DNAPL-contaminated sites

    NASA Astrophysics Data System (ADS)

    Hou, Zeyu; Lu, Wenxi

    2018-05-01

    Knowledge of groundwater contamination sources is critical for effectively protecting groundwater resources, estimating risks, mitigating disasters, and designing remediation strategies. Many methods for groundwater contamination source identification (GCSI) have been developed in recent years, including the simulation-optimization technique. This study proposes utilizing a support vector regression (SVR) model and a kernel extreme learning machine (KELM) model to broaden the range of available surrogate models. The surrogate model is key because it replaces the simulation model, reducing the huge computational burden of the iterations in the simulation-optimization technique used to solve GCSI problems, especially GCSI problems in aquifers contaminated by dense nonaqueous phase liquids (DNAPLs). A comparative study between the Kriging, SVR, and KELM models is reported, together with an analysis of the influence of parameter optimization and of the structure of the training sample dataset on the approximation accuracy of the surrogate model. The KELM model was found to be the most accurate surrogate model, and its performance improved significantly after parameter optimization. The approximation accuracy of the surrogate model with respect to the simulation model did not always improve with increasing numbers of training samples; using the appropriate number of training samples was critical for improving the performance of the surrogate model and avoiding unnecessary computational workload. It was concluded that the KELM model developed in this work can reasonably predict system responses under given operating conditions. Replacing the simulation model with a KELM model considerably reduced the computational burden of the simulation-optimization process while maintaining high computational accuracy.
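
The surrogate idea in this abstract can be sketched in a few lines: train a cheap kernel regression on input/output pairs from the expensive simulator, then query the surrogate instead. The sketch below uses a plain RBF kernel ridge solve, which shares the closed-form structure of a KELM, and a toy analytic function standing in for the DNAPL transport simulation; all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(X):
    # Stand-in for an expensive groundwater simulation: maps source
    # parameters (e.g. location, release strength) to an observation.
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

def rbf(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between row sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Training set: simulator evaluated at sampled parameter vectors.
X_train = rng.uniform(-2, 2, size=(100, 2))
y_train = simulator(X_train)

lam = 1e-6  # ridge regularization (plays the role of KELM's 1/C)
K = rbf(X_train, X_train)
alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)

def surrogate(X):
    return rbf(X, X_train) @ alpha

# Approximation accuracy on unseen parameter vectors.
X_test = rng.uniform(-2, 2, size=(200, 2))
mae = np.mean(np.abs(surrogate(X_test) - simulator(X_test)))
```

Inside a simulation-optimization loop, the thousands of calls go to `surrogate` rather than `simulator`, which is where the computational saving comes from.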

  1. Spatial distribution and source apportionment of water pollution in different administrative zones of Wen-Rui-Tang (WRT) river watershed, China.

    PubMed

    Yang, Liping; Mei, Kun; Liu, Xingmei; Wu, Laosheng; Zhang, Minghua; Xu, Jianming; Wang, Fan

    2013-08-01

    Water quality degradation in river systems has caused great concern all over the world. Identifying the spatial distribution and sources of water pollutants is the very first step toward efficient water quality management. A set of water samples collected bimonthly at 12 monitoring sites in 2009 and 2010 was analyzed to determine the spatial distribution of critical parameters and to apportion the sources of pollutants in the Wen-Rui-Tang (WRT) river watershed, near the East China Sea. The 12 monitoring sites were divided into urban, suburban, and rural administrative zones, considering differences in land use and population density. Multivariate statistical methods [one-way analysis of variance, principal component analysis (PCA), and absolute principal component score-multiple linear regression (APCS-MLR)] were used to investigate the spatial distribution of water quality and to apportion the pollution sources. Results showed that most water quality parameters had no significant difference between the urban and suburban zones, whereas these two zones showed worse water quality than the rural zone. Based on the PCA and APCS-MLR analysis, urban domestic sewage and commercial/service pollution, suburban domestic sewage with fluorine point-source pollution, and agricultural nonpoint-source pollution with rural domestic sewage were identified as the main pollution sources in the urban, suburban, and rural zones, respectively. Understanding the water pollution characteristics of the different administrative zones can provide insights for effective water management policy-making, especially in areas that span multiple administrative zones.
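
The PCA step of the workflow named in this abstract can be sketched on synthetic data. The latent "sources", loadings, and noise level below are invented for illustration, and the final regression is a simplified stand-in for a full APCS-MLR apportionment (which regresses on absolute, rather than raw, component scores):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic water-quality matrix: 24 samples x 5 parameters, driven by
# two latent pollution "sources" plus measurement noise (hypothetical).
sources = rng.normal(size=(24, 2))
loadings = rng.normal(size=(2, 5))
X = sources @ loadings + 0.1 * rng.normal(size=(24, 5))

# PCA on standardized data (z-scores) via SVD.
Z = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)   # variance fraction per component
scores = Z @ Vt[:2].T             # leading two principal component scores

# Simplified MLR step: regress one measured parameter on the scores to
# apportion its variance between the two retained source factors.
y = X[:, 0]
A = np.column_stack([scores, np.ones(len(y))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```

With two dominant latent sources, the first two components should carry most of the variance, which is the pattern that justifies a two-source apportionment.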

  2. Evaluation of the impact of sodium lauryl sulfate source variability on solid oral dosage form development.

    PubMed

    Qiang, Dongmei; Gunn, Jocelyn A; Schultz, Leon; Li, Z Jane

    2010-12-01

    The objective of this study was to investigate the effects of sodium lauryl sulfate (SLS) from different sources on solubilization/wetting, the granulation process, and tablet dissolution of BILR 355, and the potential causes. The particle size distribution, morphology, and thermal behavior of two pharmaceutical grades of SLS, from Spectrum and Cognis, were characterized. The surface tension and drug solubility in SLS solutions were measured. BILR 355 tablets were prepared by a wet granulation process and their dissolution was evaluated. The critical micelle concentration was lower for Spectrum SLS, which resulted in a higher BILR 355 solubility. During wet granulation, less water was required to reach the same end point using Spectrum SLS than Cognis SLS. In general, BILR 355 tablets prepared with Spectrum SLS showed higher dissolution than tablets containing Cognis SLS. Micronization of SLS achieved the same improvement in tablet dissolution as micronizing the active pharmaceutical ingredient. The observed differences in wetting and solubilization were likely due to the different impurity levels in the SLS from the two sources. This study demonstrated that SLS from different sources can have a significant impact on the wet granulation process and dissolution. It is therefore critical to evaluate the properties of SLS from different suppliers, and then identify optimal formulation and process parameters, to ensure robustness of the drug product manufacturing process and performance.

  3. Development and Performance of a Filter Radiometer Monitor System for Integrating Sphere Sources

    NASA Technical Reports Server (NTRS)

    Ding, Leibo; Kowalewski, Matthew G.; Cooper, John W.; Smith, GIlbert R.; Barnes, Robert A.; Waluschka, Eugene; Butler, James J.

    2011-01-01

    The NASA Goddard Space Flight Center (GSFC) Radiometric Calibration Laboratory (RCL) maintains several large integrating sphere sources covering the visible to shortwave infrared wavelength range. Two critical functional requirements of an integrating sphere source are short- and long-term operational stability and repeatability. Monitoring the source is essential for determining the origin of systematic errors, thus increasing confidence in source performance and quantifying repeatability. If monitor data fall outside the established parameters, this can indicate that the source requires maintenance or re-calibration against the National Institute of Standards and Technology (NIST) irradiance standard. The GSFC RCL has developed a Filter Radiometer Monitoring System (FRMS) to continuously monitor the performance of its integrating sphere calibration sources in the 400 to 2400 nm region. Sphere output change mechanisms include lamp aging, coating (e.g. BaSO4) deterioration, and the ambient water vapor level. The FRMS wavelength bands are selected to quantify changes caused by these mechanisms. The FRMS design and operation are presented, as well as data from monitoring four of the RCL's integrating sphere sources.

  4. Parameter Sensitivity and Laboratory Benchmarking of a Biogeochemical Process Model for Enhanced Anaerobic Dechlorination

    NASA Astrophysics Data System (ADS)

    Kouznetsova, I.; Gerhard, J. I.; Mao, X.; Barry, D. A.; Robinson, C.; Brovelli, A.; Harkness, M.; Fisher, A.; Mack, E. E.; Payne, J. A.; Dworatzek, S.; Roberts, J.

    2008-12-01

    A detailed model to simulate trichloroethene (TCE) dechlorination in anaerobic groundwater systems has been developed and implemented through PHAST, a robust and flexible geochemical modeling platform. The approach is comprehensive but retains flexibility such that models of varying complexity can be used to simulate TCE biodegradation in the vicinity of nonaqueous phase liquid (NAPL) source zones. The complete model considers a full suite of biological (e.g., dechlorination, fermentation, sulfate and iron reduction, electron donor competition, toxic inhibition, pH inhibition), physical (e.g., flow and mass transfer), and geochemical processes (e.g., pH modulation, gas formation, mineral interactions). Example simulations with the model demonstrated that the feedback between biological, physical, and geochemical processes is critical. Successful simulation of a thirty-two-month column experiment, using site soil and complex groundwater chemistry and exhibiting both anaerobic dechlorination and endogenous respiration, provided confidence in the modeling approach. A comprehensive suite of batch simulations was then conducted to estimate the sensitivity of predicted TCE degradation to the 36 model input parameters. A local sensitivity analysis was first employed to rank the importance of parameters, revealing that 5 parameters consistently dominated model predictions across a range of performance metrics. A global sensitivity analysis was then performed to evaluate the influence of a variety of full parameter data sets available in the literature. The modeling study was performed as part of the SABRE (Source Area BioREmediation) project, a public/private consortium whose charter is to determine whether enhanced anaerobic bioremediation can result in effective and quantifiable treatment of chlorinated solvent DNAPL source areas. The modeling conducted has provided valuable insight into the complex interactions between processes in these evolving biogeochemical systems, particularly at the laboratory scale.
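
A local (one-at-a-time) sensitivity ranking like the one described above can be sketched with central finite differences on normalized coefficients. The model below is a deliberately tiny Monod-style stand-in for the full dechlorination model, and all nominal parameter values are hypothetical:

```python
import numpy as np

def model(p):
    # Toy stand-in for the biogeochemical model: fraction of TCE
    # remaining after time T under a Monod-modified first-order rate.
    kmax, Ks, S0, T = p
    rate = kmax * S0 / (Ks + S0)
    return np.exp(-rate * T)

p0 = np.array([0.5, 2.0, 4.0, 5.0])  # nominal values: kmax, Ks, S0, T

# Normalized local sensitivity S_i = (p_i / y) * dy/dp_i,
# estimated with central differences around the nominal point.
y0 = model(p0)
S = np.zeros(len(p0))
for i in range(len(p0)):
    h = 1e-6 * p0[i]
    up, dn = p0.copy(), p0.copy()
    up[i] += h
    dn[i] -= h
    S[i] = p0[i] / y0 * (model(up) - model(dn)) / (2 * h)

# Rank parameters by the magnitude of their normalized sensitivity.
ranking = np.argsort(-np.abs(S))
```

For this toy model, kmax and T dominate the ranking, mirroring how a local analysis singles out the handful of parameters that control the predictions.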

  5. Kantowski-Sachs Einstein-æther perfect fluid models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Latta, Joey; Leon, Genly; Paliathanasis, Andronikos, E-mail: lattaj@mathstat.dal.ca, E-mail: genly.leon@pucv.cl, E-mail: anpaliat@phys.uoa.gr

    We investigate Kantowski-Sachs models in Einstein-æther theory with a perfect fluid source, using singularity analysis to prove the integrability of the field equations and dynamical system tools to study the evolution. We find an inflationary source at early times, and an inflationary sink at late times, for a wide region in the parameter space. The results by A.A. Coley, G. Leon, P. Sandin and J. Latta (JCAP 12 (2015) 010) are then re-obtained as particular cases. Additionally, we select other values for the non-GR parameters which are consistent with current constraints, getting a very rich phenomenology. In particular, we find solutions with infinite shear, zero curvature, and infinite matter energy density in comparison with the Hubble scalar. We also have stiff-like future attractors, anisotropic late-time attractors, or both, in some special cases. Such results are developed analytically, and then verified by numerics. Finally, the physical interpretation of the new critical points is discussed.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayat, T.; Nonlinear Analysis and Applied Mathematics; Muhammad, Taseer

    Development of human society greatly depends upon solar energy. Heat, electricity and water can be obtained from nature through solar power. Sustainable energy generation is at present a critical issue in the development of human society, and solar energy is regarded as one of the best sources of renewable energy. Hence the purpose of the present study is to construct a model for radiative effects in three-dimensional flow of a nanofluid. Flow of a second grade fluid by an exponentially stretching surface is considered. Thermophoresis and Brownian motion effects are taken into account in the presence of a heat source/sink and chemical reaction. Results are derived for the dimensionless velocities, temperature and concentration. Graphs are plotted to examine the impacts of physical parameters on the temperature and concentration. Numerical computations are presented to examine the values of the skin-friction coefficients, Nusselt and Sherwood numbers. It is observed that the skin-friction coefficients are larger for larger values of the second grade parameter. Moreover, the radiative effects on the temperature and concentration are quite opposite.

  7. Investigation of kinetics of MOCVD systems

    NASA Astrophysics Data System (ADS)

    Anderson, Timothy J.

    1991-12-01

    Several issues related to epitaxy of III-V semiconductors by hydride VPE and MOCVD were investigated. A complex chemical equilibrium analysis was performed in order to investigate the controllability of hydride VPE. The critical control parameters for the deposition of InGaAsP lattice-matched to InP are deposition temperature, system pressure, group III molar ratio, and group V molar ratio. An experimental characterization of the Ga and In source reactors was accomplished. An MOCVD system was constructed for the deposition of AlGaAs. An investigation was performed to determine the controlling parameters of laser-enhanced deposition of GaAs and AlGaAs using an argon ion laser. Enhancement of deposition was observed when the system was operated in the reaction-limited regime. The use of a Ga/In alloy source was studied for the deposition of GaInAs by the hydride method. The system was used to produce state-of-the-art p-i-n photodetectors.

  8. Time-resolved brightness measurements by streaking

    NASA Astrophysics Data System (ADS)

    Torrance, Joshua S.; Speirs, Rory W.; McCulloch, Andrew J.; Scholten, Robert E.

    2018-03-01

    Brightness is a key figure of merit for charged particle beams, and time-resolved brightness measurements can elucidate the processes involved in beam creation and manipulation. Here we report on a simple, robust, and widely applicable method for the measurement of beam brightness with temporal resolution by streaking one-dimensional pepperpots, and demonstrate the technique to characterize electron bunches produced from a cold-atom electron source. We demonstrate brightness measurements with 145 ps temporal resolution and a minimum resolvable emittance of 40 nm rad. This technique provides an efficient method of exploring source parameters and will prove useful for examining the efficacy of techniques to counter space-charge expansion, a critical hurdle to achieving single-shot imaging of atomic scale targets.
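
The emittance underlying such brightness figures is typically the rms emittance, ε = sqrt(⟨x²⟩⟨x′²⟩ − ⟨xx′⟩²). A sketch on a synthetic correlated bunch (all beam parameters invented for illustration, standing in for one time slice of a streaked pepperpot measurement) shows that the correlated part of the divergence drops out:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic bunch slice: positions x (m) and divergences xp (rad).
# The divergence has a linear correlation with x plus an intrinsic spread.
n = 100_000
x = rng.normal(0.0, 1e-4, n)                 # 100 um rms beam size
xp = 0.03 * x + rng.normal(0.0, 1e-5, n)     # correlated + intrinsic part

def rms_emittance(x, xp):
    """Statistical (rms) emittance of a 2D phase-space distribution."""
    x = x - x.mean()
    xp = xp - xp.mean()
    return np.sqrt(np.mean(x * x) * np.mean(xp * xp) - np.mean(x * xp) ** 2)

eps = rms_emittance(x, xp)
```

Analytically ε = σ_x · σ_xp,intrinsic = 1e-4 × 1e-5 = 1 nm·rad here: the linear correlation (which a lens could remove) does not contribute, which is why rms emittance is the invariant figure of merit brightness is built on.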

  9. Comparison of different wavelength pump sources for Tm subnanosecond amplifier

    NASA Astrophysics Data System (ADS)

    Cserteg, Andras; Guillemet, Sébastien; Hernandez, Yves; Giannone, Domenico

    2012-06-01

    We report here a comparison of different pumping wavelengths for short-pulse thulium fibre amplifiers, comparing the results in terms of efficiency and required fibre length. As we operate the laser in the sub-nanosecond regime, the fibre length is a critical parameter regarding nonlinear effects. With 793 nm clad-pumping, a 4 m long active fibre was necessary, leading to strong spectral deformation through self-phase modulation (SPM). The core-pumping scheme was then investigated in more depth, with several wavelengths tested. Good results were obtained with erbium and Raman-shifted pump sources and very short fibre lengths, with the aim of reaching a few microjoules per pulse without (or with limited) SPM.
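
The sensitivity to fibre length comes from the accumulated SPM phase, φ = γ·P_peak·L with γ = 2πn₂/(λ·A_eff). The numbers below (mode area, peak power, fibre lengths) are illustrative order-of-magnitude values, not taken from the paper:

```python
import math

# Nonlinear coefficient gamma = 2*pi*n2 / (lambda * A_eff) for silica fibre.
n2 = 2.6e-20      # m^2/W, nonlinear refractive index of silica
lam = 2.0e-6      # m, Tm emission wavelength (~2 um)
a_eff = 8e-11     # m^2, effective mode area (~80 um^2, illustrative)
p_peak = 2.0e3    # W, peak power of a sub-ns, few-uJ pulse (illustrative)

gamma = 2 * math.pi * n2 / (lam * a_eff)

# Accumulated SPM phase for two fibre lengths: a long clad-pumped
# amplifier vs a short core-pumped one.
phi_clad = gamma * p_peak * 4.0   # ~4 m active fibre
phi_core = gamma * p_peak * 0.5   # ~0.5 m active fibre
```

With these numbers the 4 m fibre accumulates several radians of nonlinear phase (strong spectral broadening), while the 0.5 m fibre stays around one radian, illustrating why shortening the fibre suppresses SPM.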

  10. Macromolecular refinement by model morphing using non-atomic parameterizations.

    PubMed

    Cowtan, Kevin; Agirre, Jon

    2018-02-01

    Refinement is a critical step in the determination of a model which explains the crystallographic observations and thus best accounts for the missing phase components. The scattering density is usually described in terms of atomic parameters; however, in macromolecular crystallography the resolution of the data is generally insufficient to determine the values of these parameters for individual atoms. Stereochemical and geometric restraints are used to provide additional information, but produce interrelationships between parameters which slow convergence, resulting in longer refinement times. An alternative approach is proposed in which parameters are not attached to atoms, but to regions of the electron-density map. These parameters can move the density or change the local temperature factor to better explain the structure factors. Varying the size of the region which determines the parameters at a particular position in the map allows the method to be applied at different resolutions without the use of restraints. Potential applications include initial refinement of molecular-replacement models with domain motions, and potentially the use of electron density from other sources such as electron cryo-microscopy (cryo-EM) as the refinement model.

  11. Operation of large RF sources for H-: Lessons learned at ELISE

    NASA Astrophysics Data System (ADS)

    Fantz, U.; Wünderlich, D.; Heinemann, B.; Kraus, W.; Riedl, R.

    2017-08-01

    The goal of the ELISE test facility is to demonstrate that large RF-driven negative ion sources (1 × 1 m2 source area with 360 kW installed RF power) can achieve the parameters required for the ITER beam sources in terms of current densities and beam homogeneity at a filling pressure of 0.3 Pa for pulse lengths of up to one hour. With the experience in operation of the test facility, the beam source inspection and maintenance as well as with the results of the achieved source performance so far, conclusions are drawn for commissioning and operation of the ITER beam sources. Addressed are critical technical RF issues, extrapolations to the required RF power, Cs consumption and Cs ovens, the need of adjusting the magnetic filter field strength as well as the temporal dynamic and spatial asymmetry of the co-extracted electron current. It is proposed to relax the low pressure limit to 0.4 Pa and to replace the fixed electron-to-ion ratio by a power density limit for the extraction grid. This would be highly beneficial for controlling the co-extracted electrons.

  12. UV fatigue investigations with non-destructive tools in silica

    NASA Astrophysics Data System (ADS)

    Natoli, Jean-Yves; Beaudier, Alexandre; Wagner, Frank R.

    2017-08-01

    A fatigue effect is often observed under multiple laser irradiations, especially in the UV. This decrease of the laser-induced damage threshold (LIDT) is a critical parameter for high-repetition-rate laser sources that require long operational lifetimes, as in space applications at 355 nm. A further challenge is to replace excimer lasers with solid-state laser sources, which requires drastically improving the lifetime of optical materials at 266 nm. The main applications of these sources are material surface nanostructuration, spectroscopy, and medical surgery. In this work we focus on understanding the laser-matter interaction at 266 nm in silica in order to predict the lifetime of components, and we study the parameters linked to these lifetimes to give material suppliers keys for improvement. To study the mechanisms involved in multiple irradiations, an interesting approach is to follow the evolution of fluorescence, so as to observe the first stages of material change just before breakdown. We will show that it is sometimes possible to estimate the lifetime of a component from fluorescence measurements alone, saving time and materials. Moreover, the data from these diagnostics give relevant information for highlighting "defects" induced by multiple laser irradiations.

  13. The critical angle in seismic interferometry

    USGS Publications Warehouse

    Van Wijk, K.; Calvert, A.; Haney, M.; Mikesell, D.; Snieder, R.

    2008-01-01

    Limitations with respect to the characteristics and distribution of sources are inherent to any field seismic experiment, but in seismic interferometry these lead to spurious waves. Instead of trying to eliminate, filter, or otherwise suppress spurious waves, cross-correlation of receivers in a refraction experiment indicates that we can take advantage of spurious events for near-surface parameter extraction, for static corrections or near-surface imaging. We illustrate this with numerical examples and a field experiment from the CSM/Boise State University Geophysics Field Camp.

  14. The Effects of Channel Curvature and Protrusion Height on Nucleate Boiling and the Critical Heat Flux of a Simulated Electronic Chip

    DTIC Science & Technology

    1994-05-01

    ... parameters and geometry factor. 3.2 Laminar sublayer and buffer layer thicknesses for the geometry of Mudawar and Maddox. ... transfer from simulated electronic chip heat sources that are flush with the flow channel wall. Mudawar and Maddox have studied enhanced surfaces ... The bias error was not estimated; however, the percentage of heat loss measured compares with that previously reported by Mudawar and Maddox.

  15. On butterfly effect in higher derivative gravities

    NASA Astrophysics Data System (ADS)

    Alishahiha, Mohsen; Davody, Ali; Naseh, Ali; Taghavi, Seyed Farid

    2016-11-01

    We study butterfly effect in D-dimensional gravitational theories containing terms quadratic in Ricci scalar and Ricci tensor. One observes that due to higher order derivatives in the corresponding equations of motion there are two butterfly velocities. The velocities are determined by the dimension of operators whose sources are provided by the metric. The three dimensional TMG model is also studied where we get two butterfly velocities at generic point of the moduli space of parameters. At critical point two velocities coincide.

  16. [Optimize dropping process of Ginkgo biloba dropping pills by using design space approach].

    PubMed

    Shen, Ji-Chen; Wang, Qing-Qing; Chen, An; Pan, Fang-Lai; Gong, Xing-Chu; Qu, Hai-Bin

    2017-07-01

    In this paper, a design space approach was applied to optimize the dropping process of Ginkgo biloba dropping pills. Firstly, potential critical process parameters and potential process critical quality attributes were determined through literature research and pre-experiments. Secondly, experiments were carried out according to Box-Behnken design. Then the critical process parameters and critical quality attributes were determined based on the experimental results. Thirdly, second-order polynomial models were used to describe the quantitative relationships between critical process parameters and critical quality attributes. Finally, a probability-based design space was calculated and verified. The verification results showed that efficient production of Ginkgo biloba dropping pills can be guaranteed by operating within the design space parameters. The recommended operation ranges for the critical dropping process parameters of Ginkgo biloba dropping pills were as follows: dropping distance of 5.5-6.7 cm, and dropping speed of 59-60 drops per minute, providing a reference for industrial production of Ginkgo biloba dropping pills. Copyright© by the Chinese Pharmaceutical Association.
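
The second-order polynomial step described above can be sketched as an ordinary least-squares fit on coded factor levels. A Box-Behnken design is degenerate for only two factors, so the sketch uses a 3-level full factorial instead; the "true" response surface and its coefficients are invented for illustration:

```python
import numpy as np

# Two coded process parameters: x1 = dropping distance, x2 = dropping speed.
levels = [-1.0, 0.0, 1.0]
X = np.array([(a, b) for a in levels for b in levels])  # 9-run design

def true_response(x1, x2):
    # Hypothetical "true" quality attribute as a quadratic surface.
    return 5.0 - 1.2 * x1 + 0.8 * x2 + 0.6 * x1**2 + 0.3 * x2**2 - 0.5 * x1 * x2

y = true_response(X[:, 0], X[:, 1])

# Second-order polynomial model matrix: [1, x1, x2, x1^2, x2^2, x1*x2].
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```

With noiseless data and 9 runs for 6 coefficients, the fit recovers the quadratic exactly; the fitted model is then the object on which a probability-based design space can be computed.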

  17. A generic open-source software framework supporting scenario simulations in bioterrorist crises.

    PubMed

    Falenski, Alexander; Filter, Matthias; Thöns, Christian; Weiser, Armin A; Wigger, Jan-Frederik; Davis, Matthew; Douglas, Judith V; Edlund, Stefan; Hu, Kun; Kaufman, James H; Appel, Bernd; Käsbohrer, Annemarie

    2013-09-01

    Since the 2001 anthrax attack in the United States, awareness of threats originating from bioterrorism has grown. This led internationally to increased research efforts to improve knowledge of and approaches to protecting human and animal populations against the threat from such attacks. A collaborative effort in this context is the extension of the open-source Spatiotemporal Epidemiological Modeler (STEM) simulation and modeling software for agro- or bioterrorist crisis scenarios. STEM, originally designed to enable community-driven public health disease models and simulations, was extended with new features that enable integration of proprietary data as well as visualization of agent spread along supply and production chains. STEM now provides a fully developed open-source software infrastructure supporting critical modeling tasks such as ad hoc model generation, parameter estimation, simulation of scenario evolution, estimation of effects of mitigation or management measures, and documentation. This open-source software resource can be used free of charge. Additionally, STEM provides critical features like built-in worldwide data on administrative boundaries, transportation networks, or environmental conditions (eg, rainfall, temperature, elevation, vegetation). Users can easily combine their own confidential data with built-in public data to create customized models of desired resolution. STEM also supports collaborative and joint efforts in crisis situations by extended import and export functionalities. In this article we demonstrate specifically those new software features implemented to accomplish STEM application in agro- or bioterrorist crisis scenarios.

  18. Simulation verification techniques study

    NASA Technical Reports Server (NTRS)

    Schoonmaker, P. B.; Wenglinski, T. H.

    1975-01-01

    Results are summarized of the simulation verification techniques study which consisted of two tasks: to develop techniques for simulator hardware checkout and to develop techniques for simulation performance verification (validation). The hardware verification task involved definition of simulation hardware (hardware units and integrated simulator configurations), survey of current hardware self-test techniques, and definition of hardware and software techniques for checkout of simulator subsystems. The performance verification task included definition of simulation performance parameters (and critical performance parameters), definition of methods for establishing standards of performance (sources of reference data or validation), and definition of methods for validating performance. Both major tasks included definition of verification software and assessment of verification data base impact. An annotated bibliography of all documents generated during this study is provided.

  19. Temperature dependent DC characterization of InAlN/(AlN)/GaN HEMT for improved reliability

    NASA Astrophysics Data System (ADS)

    Takhar, K.; Gomes, U. P.; Ranjan, K.; Rathi, S.; Biswas, D.

    2015-02-01

    InxAl1-xN/AlN/GaN HEMT device performance is analysed at various temperatures with the help of physics-based 2-D simulation using the commercially available BLAZE and GIGA modules from SILVACO. Various material parameters, viz. band gap, low-field mobility, density of states, velocity saturation, and substrate thermal conductivity, are considered as critical parameters for predicting temperature effects in InxAl1-xN/AlN/GaN HEMTs. Reduction in drain current and transconductance has been observed due to the decrease of the 2-DEG mobility and effective electron velocity with increasing temperature. Degradation in cut-off frequency follows the transconductance profile, as the variation observed in the gate-source/gate-drain capacitances is very small.

  20. Evaluation of the Pool Critical Assembly Benchmark with Explicitly-Modeled Geometry using MCNP6

    DOE PAGES

    Kulesza, Joel A.; Martz, Roger Lee

    2017-03-01

    Despite being one of the most widely used benchmarks for qualifying light water reactor (LWR) radiation transport methods and data, no benchmark calculation of the Oak Ridge National Laboratory (ORNL) Pool Critical Assembly (PCA) pressure vessel wall benchmark facility (PVWBF) using MCNP6 with explicitly modeled core geometry exists. As such, this paper provides results for such an analysis. First, a criticality calculation is used to construct the fixed source term. Next, ADVANTG-generated variance reduction parameters are used within the final MCNP6 fixed source calculations. These calculations provide unadjusted dosimetry results using three sets of dosimetry reaction cross sections of varying ages (those packaged with MCNP6, from the IRDF-2002 multi-group library, and from the ACE-formatted IRDFF v1.05 library). These results are then compared to two different sets of measured reaction rates. The comparison agrees in an overall sense within 2% and on a specific reaction and dosimetry-location basis within 5%. Except for the neptunium dosimetry, the individual foil raw calculation-to-experiment comparisons usually agree within 10% but are typically greater than unity. Finally, in the course of developing these calculations, geometry that has previously not been completely specified is provided herein for the convenience of future analysts.

  1. A mathematical model for mixed convective flow of chemically reactive Oldroyd-B fluid between isothermal stretching disks

    NASA Astrophysics Data System (ADS)

    Hashmi, M. S.; Khan, N.; Ullah Khan, Sami; Rashidi, M. M.

    In this study, we have constructed a mathematical model to investigate heat source/sink effects in the mixed convection axisymmetric flow of an incompressible, electrically conducting Oldroyd-B fluid between two infinite isothermal stretching disks. The effects of viscous dissipation and Joule heating are also considered in the heat equation. The governing partial differential equations are converted into ordinary differential equations by using appropriate similarity variables. The series solution of these dimensionless equations is constructed by using the homotopy analysis method. The convergence of the obtained solution is carefully examined. The effects of the various involved parameters on the pressure, velocity, and temperature profiles are comprehensively studied, and a graphical analysis is presented for various values of the problem parameters. The numerical values of the wall shear stress and Nusselt number are computed at both the upper and lower disks. Moreover, a graphical and tabular explanation is presented for the critical values of the Frank-Kamenetskii parameter with respect to the other flow parameters.

  2. Critical behavior near the ferromagnetic phase transition in double perovskite Nd2NiMnO6

    NASA Astrophysics Data System (ADS)

    Ali, Anzar; Sharma, G.; Singh, Yogesh

    2018-05-01

    Knowledge of the critical exponents plays a crucial role in understanding the interaction mechanism near a phase transition. In this report, we present a detailed study of the critical behaviour near the ferromagnetic (FM) transition (TC ˜ 193 K) in Nd2NiMnO6 using temperature and magnetic field dependent isothermal magnetisation measurements. We used various analysis methods, such as the Arrott plot, modified Arrott plot, and Kouvel-Fisher plot, to estimate the critical parameters. The critical exponents β = 0.49±0.02 and γ = 1.05±0.04 and the critical isotherm exponent δ = 3.05±0.02 are in excellent agreement with Widom scaling. The analysis of the critical parameters emphasizes that mean-field interaction is the mechanism driving the FM transition in Nd2NiMnO6.
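
The Widom scaling check amounts to the exponent identity δ = 1 + γ/β. With the values quoted in this abstract, the derived δ agrees with the measured one within first-order propagated uncertainty:

```python
import math

# Critical exponents and uncertainties as quoted in the abstract.
beta, d_beta = 0.49, 0.02
gamma, d_gamma = 1.05, 0.04
delta_meas, d_delta = 3.05, 0.02

# Widom scaling relation: delta = 1 + gamma / beta.
delta_widom = 1.0 + gamma / beta

# First-order error propagation for the derived delta.
d_widom = math.sqrt((d_gamma / beta) ** 2 + (gamma * d_beta / beta ** 2) ** 2)
```

Here δ_Widom ≈ 3.14 with a propagated uncertainty of roughly ±0.12, so the measured δ = 3.05 ± 0.02 is consistent with the scaling relation, and β ≈ 0.5 with γ ≈ 1 is the mean-field signature the authors point to.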

  3. Development of a generalized perturbation theory method for sensitivity analysis using continuous-energy Monte Carlo methods

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.

    2016-03-01

    The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization and reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.
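
    The "reference direct perturbation sensitivity coefficients" mentioned above are defined as the relative change in the response per relative change in an input parameter, S = (dk/k)/(dx/x). A minimal sketch of how such a reference value is computed by central finite difference, using a toy infinite-medium eigenvalue model k = νΣf/Σa with illustrative numbers (not from the paper, and with none of the Monte Carlo machinery of SCALE):

```python
# Toy infinite-medium eigenvalue: k = (nu * Sigma_f) / Sigma_a.
def k_inf(nu_sigma_f, sigma_a):
    return nu_sigma_f / sigma_a

nu_sigma_f, sigma_a = 0.042, 0.030  # illustrative macroscopic cross sections

def direct_perturbation_sensitivity(k_of_x, x0, rel_step=0.01):
    """S = (dk/k)/(dx/x), estimated by a central finite difference at x0."""
    k_plus = k_of_x(x0 * (1 + rel_step))
    k_minus = k_of_x(x0 * (1 - rel_step))
    k0 = k_of_x(x0)
    return (k_plus - k_minus) / (2 * rel_step * k0)

S_fission = direct_perturbation_sensitivity(lambda x: k_inf(x, sigma_a), nu_sigma_f)
S_absorption = direct_perturbation_sensitivity(lambda x: k_inf(nu_sigma_f, x), sigma_a)

print(S_fission)     # -> 1.0 (k is linear in nu*Sigma_f)
print(S_absorption)  # -> about -1.0 (k varies inversely with Sigma_a)
```

In a real criticality calculation each perturbed k would require a separate Monte Carlo run, which is exactly why adjoint-based methods like CLUTCH and GEAR-MC are attractive.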

  4. Properties of two-temperature dissipative accretion flow around black holes

    NASA Astrophysics Data System (ADS)

    Dihingia, Indu K.; Das, Santabrata; Mandal, Samir

    2018-04-01

    We study the properties of two-temperature accretion flow around a non-rotating black hole in the presence of various dissipative processes, where a pseudo-Newtonian potential is adopted to mimic the effect of general relativity. The flow loses energy through radiative processes acting on the electrons and, at the same time, heats up as a consequence of viscous heating effective on the ions. We assume that the flow is threaded by stochastic magnetic fields, which lead to synchrotron emission from the electrons; these emissions are further strengthened by Compton scattering. We obtain the two-temperature global accretion solutions in terms of the dissipation parameters, namely the viscosity (α) and the accretion rate ({\dot{m}}), and find for the first time in the literature that such solutions may contain standing shock waves. Solutions of this kind are multitransonic in nature, as they simultaneously pass through both an inner critical point (xin) and an outer critical point (xout) before crossing the black hole horizon. We calculate the properties of the shock-induced global accretion solutions in terms of the flow parameters. We further show that the two-temperature shocked accretion flow is not a discrete solution; rather, such solutions exist for a wide range of flow parameters. We identify the effective domain of the parameter space for standing shocks and observe that the parameter space shrinks as the dissipation is increased. Since the post-shock region is hotter due to shock compression, it naturally emits hard X-rays, and therefore the two-temperature shocked accretion solution has the potential to explain the spectral properties of black hole sources.

  5. Biological Synthesis of Nanoparticles from Plants and Microorganisms.

    PubMed

    Singh, Priyanka; Kim, Yu-Jin; Zhang, Dabing; Yang, Deok-Chun

    2016-07-01

    Nanotechnology has become one of the most promising technologies applied in all areas of science. Metal nanoparticles produced by nanotechnology have received global attention due to their extensive applications in the biomedical and physiochemical fields. Recently, synthesizing metal nanoparticles using microorganisms and plants has been extensively studied and has been recognized as a green and efficient way for further exploiting microorganisms as convenient nanofactories. Here, we explore and detail the potential uses of various biological sources for nanoparticle synthesis and the application of those nanoparticles. Furthermore, we highlight recent milestones achieved for the biogenic synthesis of nanoparticles by controlling critical parameters, including the choice of biological source, incubation period, pH, and temperature. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Probabilities for gravitational lensing by point masses in a locally inhomogeneous universe

    NASA Technical Reports Server (NTRS)

    Isaacson, Jeffrey A.; Canizares, Claude R.

    1989-01-01

    Probability functions for gravitational lensing by point masses that incorporate Poisson statistics and flux conservation are formulated in the Dyer-Roeder construction. Optical depths to lensing for distant sources are calculated using both the method of Press and Gunn (1973) which counts lenses in an otherwise empty cone, and the method of Ehlers and Schneider (1986) which projects lensing cross sections onto the source sphere. These are then used as parameters of the probability density for lensing in the case of a critical (q0 = 1/2) Friedmann universe. A comparison of the probability functions indicates that the effects of angle-averaging can be well approximated by adjusting the average magnification along a random line of sight so as to conserve flux.
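
    The Poisson-statistics element described above has a simple core: if τ is the optical depth to lensing along the line of sight, the probability that a distant source is lensed by at least one point mass is P = 1 - exp(-τ). A minimal sketch with illustrative τ values (not the paper's computed depths):

```python
import math

# Probability of at least one lensing event for Poisson-distributed lenses
# with optical depth tau: P = 1 - exp(-tau). For tau << 1, P ~ tau.
def prob_lensed(tau):
    return 1.0 - math.exp(-tau)

for tau in (0.01, 0.1, 1.0):  # illustrative optical depths
    print(f"tau = {tau}: P(lensed) = {prob_lensed(tau):.4f}")
```

The small-τ limit P ≈ τ is why optical depth itself is often quoted directly as a lensing probability for distant sources.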

  7. A SPME-based method for rapidly and accurately measuring the characteristic parameter for DEHP emitted from PVC floorings.

    PubMed

    Cao, J; Zhang, X; Little, J C; Zhang, Y

    2017-03-01

    Semivolatile organic compounds (SVOCs) are present in many indoor materials. SVOC emissions can be characterized with a critical parameter, y0, the gas-phase SVOC concentration in equilibrium with the source material. To reduce the required time and improve the accuracy of existing methods for measuring y0, we developed a new method which uses solid-phase microextraction (SPME) to measure the concentration of an SVOC emitted by source material placed in a sealed chamber. Taking one typical indoor SVOC, di-(2-ethylhexyl) phthalate (DEHP), as the example, the experimental time was shortened from several days (even several months) to about 1 day, with relative errors of less than 5%. The measured y0 values agree well with the results obtained by independent methods. The saturated gas-phase concentration (ysat) of DEHP was also measured. Based on the Clausius-Clapeyron equation, a correlation was established that reveals the effects of temperature, the mass fraction of DEHP in the source material, and ysat on y0. The proposed method together with the correlation should be useful in estimating and controlling human exposure to indoor DEHP. The applicability of the present approach to other SVOCs and other SVOC source materials requires further study. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
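
    A hedged sketch of the kind of correlation described: a Clausius-Clapeyron form ln(ysat) = A - B/T for the saturated concentration, with y0 taken as proportional to the DEHP mass fraction in the source material (a Raoult-like assumption). The coefficients A and B below are hypothetical placeholders, not the fitted values from the paper:

```python
import math

# Hypothetical Clausius-Clapeyron coefficients: B ~ dH_vap/R in kelvin.
A, B = 30.0, 1.2e4

def y_sat(T_kelvin):
    """Saturated gas-phase concentration (arbitrary units)."""
    return math.exp(A - B / T_kelvin)

def y0(mass_fraction, T_kelvin):
    """Equilibrium gas-phase concentration next to the source material,
    assumed proportional to the DEHP mass fraction (Raoult-like)."""
    return mass_fraction * y_sat(T_kelvin)

# y0 rises steeply with temperature and linearly with DEHP content:
for T in (288.0, 298.0, 308.0):
    print(T, y0(0.15, T))
```

The steep exponential temperature dependence is the practical point: a few degrees of indoor warming can raise the equilibrium DEHP concentration substantially.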

  8. Effects of source spatial partial coherence on temporal fade statistics of irradiance flux in free-space optical links through atmospheric turbulence.

    PubMed

    Chen, Chunyi; Yang, Huamin; Zhou, Zhou; Zhang, Weizhi; Kavehrad, Mohsen; Tong, Shoufeng; Wang, Tianshu

    2013-12-02

    The temporal covariance function of irradiance-flux fluctuations for Gaussian Schell-model (GSM) beams propagating in atmospheric turbulence is theoretically formulated by making use of the method of effective beam parameters. Based on this formulation, new expressions for the root-mean-square (RMS) bandwidth of the irradiance-flux temporal spectrum due to GSM beams passing through atmospheric turbulence are derived. With the help of these expressions, the temporal fade statistics of the irradiance flux in free-space optical (FSO) communication systems, using spatially partially coherent sources, impaired by atmospheric turbulence are further calculated. Results show that with a given receiver aperture size, the use of a spatially partially coherent source can reduce both the fractional fade time and average fade duration of the received light signal; however, when atmospheric turbulence grows strong, the reduction in the fractional fade time becomes insignificant for both large and small receiver apertures, and the reduction in the average fade duration becomes negligible for small receiver apertures. It is also illustrated that if the receiver aperture size is fixed, changing the transverse correlation length of the source from a larger value to a smaller one can reduce the average fade frequency of the received light signal only when a threshold parameter in decibels greater than the critical threshold level is specified.

  9. Neutron radiography in Indian space programme

    NASA Astrophysics Data System (ADS)

    Viswanathan, K.

    1999-11-01

    Pyrotechnic devices are indispensable in any space programme for performing such critical operations as ignition, stage separation, and solar panel deployment. The design and configuration of the different types of pyrotechnic devices, and the materials used in their construction, make their inspection with thermal neutrons more favourable than any other non-destructive testing method. Although many types of neutron sources are available, the radiographic quality/exposure duration and the cost of the source have generally run in opposite directions, even after four decades of research and development. In space activity, however, by suitably combining the X-ray and neutron radiographic requirements, the inspection of components can be made economically viable. This is demonstrated in the Indian space programme by establishing a 15 MeV linear-accelerator-based neutron generator facility to inspect medium to giant solid propellant boosters by X-ray inspection, and all types of critical pyrotechnic and some electronic components by neutron radiography. Since the beam contains an unacceptable gamma component, a transfer imaging technique has been evolved and the various parameters have been optimised to obtain a good quality image.

  10. Moisture parameters and fungal communities associated with gypsum drywall in buildings.

    PubMed

    Dedesko, Sandra; Siegel, Jeffrey A

    2015-12-08

    Uncontrolled excess moisture in buildings is a common problem that can lead to changes in fungal communities. In buildings, moisture parameters can be classified by location and include assessments of moisture in the air, at a surface, or within a material. These parameters are not equivalent in dynamic indoor environments, which makes moisture-induced fungal growth in buildings a complex occurrence. In order to determine the circumstances that lead to such growth, it is essential to have a thorough understanding of in situ moisture measurement, the influence of building factors on moisture parameters, and the levels of these moisture parameters that lead to indoor fungal growth. Currently, there are disagreements in the literature on this topic. A literature review was conducted specifically on moisture-induced fungal growth on gypsum drywall. This review revealed that there is no consistent measurement approach used to characterize moisture in laboratory and field studies, with relative humidity measurements being most common. Additionally, many studies identify a critical moisture value, below which fungal growth will not occur. The values defined by relative humidity encompassed the largest range, while those defined by moisture content exhibited the highest variation. Critical values defined by equilibrium relative humidity were most consistent, and this is likely due to equilibrium relative humidity being the most relevant moisture parameter to microbial growth, since it is a reasonable measure of moisture available at surfaces, where fungi often proliferate. Several sources concur that surface moisture, particularly liquid water, is the prominent factor influencing microbial changes and that moisture in the air and within a material are of lesser importance. 
However, even if surface moisture is assessed, a single critical moisture level to prevent fungal growth cannot be defined, due to a number of factors, including variations in fungal genera and/or species, temperature, and nutrient availability. Despite these complexities, meaningful measurements can still be made to anticipate fungal growth by making localised, long-term, and continuous measurements of surface moisture. Such an approach will capture variations in a material's surface moisture, which could provide insight into the conditions that lead to fungal proliferation.

  11. Extending RTM Imaging With a Focus on Head Waves

    NASA Astrophysics Data System (ADS)

    Holicki, Max; Drijkoningen, Guy

    2016-04-01

    Conventional industry seismic imaging predominantly focuses on pre-critical reflections, muting post-critical arrivals in the process. This standard approach neglects a great deal of information present in the recorded wavefield. This omission has been partially remedied by the inclusion of head waves in more advanced techniques such as Full Waveform Inversion (FWI). We would like to see post-critical information leave the realm of labour-intensive travel-time picking and tomographic inversion and move towards full migration, to improve subsurface imaging and parameter estimation. We present a novel seismic imaging approach aimed at exploiting post-critical information, using the constant travel path of head waves between shots. To this end, we propose to generalize conventional Reverse Time Migration (RTM) to scenarios where the sources for the forward- and backward-propagated wavefields do not coincide. RTM functions on the principle that the backward-propagated receiver data, due to a source at some location, must overlap with the forward-propagated source wavefield, from the same source location, at subsurface scatterers. Where the wavefields overlap in the subsurface there is a peak in the zero-lag cross-correlation, and this peak is used for imaging. For the inclusion of head waves, we propose to relax the condition of coincident sources. This means that the wavefields, from non-coincident sources, will no longer overlap properly in the subsurface. We can make them overlap again by time-shifting either the forward- or backward-propagated wavefield until the two coincide. This is equivalent to imaging at non-zero cross-correlation lags, where the lag is the travel-time difference between the two wavefields for a given event. This allows us to steer which arrivals we use for imaging. 
In the simplest case we could use Eikonal travel times to generate the migration image, or exclusively image the subsurface with the head wave from the nth layer. To illustrate the method we apply it to a layered Earth model with five layers and compare it to conventional RTM. We show that conventional RTM highlights interfaces, while our head-wave-based images highlight layers, producing fundamentally different images. We also demonstrate that the proposed imaging scheme is more sensitive to the velocity model than conventional RTM, which is important for improved velocity model building in the future.
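
    The lagged imaging condition described above can be sketched in a few lines: conventional RTM stacks the zero-lag cross-correlation of the forward and backward wavefields, while the head-wave extension images at a non-zero lag equal to the travel-time difference between non-coincident sources. The 1-D "wavefields" here are synthetic stand-ins (random traces plus a known time shift), not the output of a wave-equation solver:

```python
import numpy as np

nt, nx = 400, 50  # time samples, spatial positions
rng = np.random.default_rng(0)

def image(S, R, lag=0):
    """Cross-correlation image I(x) = sum_t S(x, t) * R(x, t + lag)."""
    R_shifted = np.roll(R, -lag, axis=1)  # axis 1 is the time axis
    return np.sum(S * R_shifted, axis=1)

# Two wavefields whose events line up only after a 25-sample time shift,
# mimicking the constant head-wave travel-path difference between shots.
S = rng.standard_normal((nx, nt))
R = np.roll(S, 25, axis=1)

zero_lag = image(S, R, lag=0)    # conventional RTM condition: no focus
shifted = image(S, R, lag=25)    # lagged condition: energy focuses
print(shifted.mean() > zero_lag.mean())  # -> True
```

Scanning the lag thus acts as a steering parameter for which arrival contributes to the image, which is the core of the proposed generalization.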

  12. Formation of the center of ignition in a CH3Cl-Cl2 mixture under the action of UV light

    NASA Astrophysics Data System (ADS)

    Begishev, I. R.; Belikov, A. K.; Komrakov, P. V.; Nikitin, I. S.

    2016-07-01

    The dependence of temperature on time is investigated using a microthermocouple at different distances from a UV light source in a mixture of chlorine and chloromethane. These relationships give an idea of the size and location of a center of photoignition. It is found that if the size of the reaction vessel in the direction of the luminous flux is much greater than the dimensions of the ignition center, the thermal expansion of a reacting gas mixture has a huge impact on such photoignition parameters as the critical concentration limits and the critical intensity of UV radiation. It is found that by increasing the length of the vessel, some chlorinated combustible mixtures lose the ability to ignite when exposed to UV light.

  13. Imitation versus payoff: Duality of the decision-making process demonstrates criticality and consensus formation

    NASA Astrophysics Data System (ADS)

    Turalska, M.; West, B. J.

    2014-11-01

    We consider a dual model of decision making, in which an individual forms its opinion based on the contrasting mechanisms of imitation and rational calculation. The decision-making model (DMM) implements imitating behavior by means of a network of coupled two-state master equations that undergoes a phase transition at a critical value of a control parameter. The evolutionary spatial game, being a generalization of the prisoner's dilemma game, is used to determine in an objective fashion the cooperative or anticooperative strategy adopted by individuals. Interactions between the two sources of dynamics increase the domain of initial states attracted to phase transition dynamics beyond that of the DMM network in isolation. Additionally, on average the influence of the DMM on the game increases the final observed fraction of cooperators in the system.

  14. Influence of surfactant on the drop bag breakup in a continuous air jet stream

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Zhang, Wen-Bin; Xu, Jian-Liang; Li, Wei-Feng; Liu, Hai-Feng

    2016-05-01

    The deformation and breakup of surfactant-laden drops is a common phenomenon in nature and in numerous practical applications. We investigate the influence of surfactant on drop bag breakup in a continuous air jet stream. The airflow induces advective diffusion of surfactant between the interface and the bulk of the drop. Experiments indicate that the convective motions of the deforming drop induce a non-equilibrium distribution of surfactant, which leads to a change in surface tension. When the surfactant concentration is smaller than the critical micelle concentration (CMC), the surface tension of the liquid-air interface and the critical Weber number increase with the increasing surface area of the drop. When the surfactant concentration is larger than the CMC, the micelles can be considered a source term supplying monomers. Thus, in the presence of surfactant, there is a significant nonlinear variation in the critical Weber number for bag breakup. We theoretically construct the dynamic, non-monotonic relationship between surfactant concentration and the critical Weber number. In the range of parameters studied, the experimental results are consistent with the model estimates.
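
    The controlling group here is the Weber number, We = ρg·u²·d/σ, compared against a critical value (We_c ≈ 12 is a commonly quoted figure for bag breakup of a clean drop; the abstract's point is that surfactant makes the effective threshold non-monotonic in concentration). A minimal sketch with illustrative numbers, not the paper's data:

```python
def weber(rho_gas, velocity, diameter, surface_tension):
    """Weber number: ratio of aerodynamic to surface-tension forces."""
    return rho_gas * velocity**2 * diameter / surface_tension

rho_air = 1.2             # kg/m^3
d = 2.0e-3                # 2 mm drop
u = 15.0                  # m/s relative air speed
sigma_water = 0.072       # N/m, clean water
sigma_surfactant = 0.035  # N/m, surfactant-laden (illustrative)

for sigma in (sigma_water, sigma_surfactant):
    print(f"sigma = {sigma} N/m: We = {weber(rho_air, u, d, sigma):.1f}")
```

Lowering the surface tension at fixed airflow raises We (here from 7.5 to about 15.4), which is how a surfactant can push a drop across the bag-breakup threshold without any change in the gas stream.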

  15. Lévy-stable two-pion Bose-Einstein correlations in √sNN = 200 GeV Au + Au collisions

    DOE PAGES

    Adare, A.; Aidala, C.; Ajitanand, N. N.; ...

    2018-06-14

    Here, we present a detailed measurement of charged two-pion correlation functions in 0–30% centrality √sNN = 200 GeV Au + Au collisions by the PHENIX experiment at the Relativistic Heavy Ion Collider. The data are well described by Bose-Einstein correlation functions stemming from Lévy-stable source distributions. Using a fine transverse momentum binning, we extract the correlation strength parameter λ, the Lévy index of stability α, and the Lévy length scale parameter R as a function of the average transverse mass of the pair mT. We find that the positively and the negatively charged pion pairs yield consistent results, and their correlation functions are represented, within uncertainties, by the same Lévy-stable source functions. The λ(mT) measurements indicate a decrease of the strength of the correlations at low mT. The Lévy length scale parameter R(mT) decreases with increasing mT, following a hydrodynamically predicted type of scaling behavior. The values of the Lévy index of stability α are found to be significantly lower than the Gaussian case of α = 2, but also significantly larger than the conjectured value that may characterize the critical point of a second-order quark-hadron phase transition.
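
    A hedged sketch of the functional form such analyses fit: the Lévy-type Bose-Einstein correlation function C(q) = 1 + λ·exp(-|qR|^α), where α = 2 recovers the Gaussian case and α = 1 the exponential (Cauchy) case. The parameter values below are illustrative, not PHENIX fit results:

```python
import math

def levy_correlation(q, lam, R, alpha):
    """Levy-stable two-particle correlation: C(q) = 1 + lam*exp(-|q*R|^alpha)."""
    return 1.0 + lam * math.exp(-abs(q * R) ** alpha)

# Illustrative parameters: R in fm, q in 1/fm (hbar*c = 1 units).
lam, R, alpha = 0.8, 6.0, 1.2

print(levy_correlation(0.0, lam, R, alpha))   # intercept: 1 + lambda
print(levy_correlation(0.05, lam, R, alpha))  # falls toward 1 as q grows
print(levy_correlation(0.5, lam, R, alpha))
```

The intercept gives λ directly, the fall-off scale gives R, and the shape of the decay between Gaussian and exponential is what pins down α.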

  17. Sources of hydrocarbons in urban road dust: Identification, quantification and prediction.

    PubMed

    Mummullage, Sandya; Egodawatta, Prasanna; Ayoko, Godwin A; Goonetilleke, Ashantha

    2016-09-01

    Among urban stormwater pollutants, hydrocarbons are a significant environmental concern due to their toxicity and relatively stable chemical structure. This study focused on the identification of sources contributing hydrocarbons to urban road dust and on approaches for quantifying pollutant loads to enhance the design of source control measures. The study confirmed the validity of the mathematical techniques of principal component analysis (PCA) and hierarchical cluster analysis (HCA) for source identification, and of the principal component analysis/absolute principal component scores (PCA/APCS) receptor model for pollutant load quantification. The study outcomes identified non-combusted lubrication oils, non-combusted diesel fuels, and tyre and asphalt wear as the three most critical urban hydrocarbon sources. The site-specific variability of the source contributions was replicated using three mathematical models. The models employed the predictor variables of daily traffic volume (DTV), road surface texture depth (TD), slope of the road section (SLP), effective population (EPOP) and effective impervious fraction (EIF), which can be considered the five governing parameters of pollutant generation, deposition, and redistribution. The models were developed to be applicable for determining hydrocarbon contributions from urban sites, enabling effective design of source control measures. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. A sensitivity analysis of a surface energy balance model to LAI (Leaf Area Index)

    NASA Astrophysics Data System (ADS)

    Maltese, A.; Cannarozzo, M.; Capodici, F.; La Loggia, G.; Santangelo, T.

    2008-10-01

    The LAI is a key parameter in hydrological processes, especially in physically based distributed models. It is a critical ecosystem attribute, since physiological processes such as photosynthesis, transpiration, and evaporation depend on it. The diffusion of water vapor, momentum, heat, and light through the canopy is regulated by the distribution and density of the leaves, branches, twigs, and stems. The LAI influences the sensible heat flux H in single-source surface energy balance models through the calculation of the roughness length and the displacement height; the aerodynamic resistance between the soil and the within-canopy source height is a function of the LAI through the roughness length. This research carried out a sensitivity analysis of some of the most important parameters of surface energy balance models to the time variation of the LAI, in order to account for the effects of LAI variation over the phenological period. Finally, empirically retrieved relationships between field spectroradiometric data and LAI measured in the field with a light-sensitive instrument are presented for a cereal field.
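
    A hedged sketch of the dependency chain the abstract describes: LAI enters the single-source energy balance through the roughness length z0 and displacement height d, which set the aerodynamic resistance ra used in the sensible heat flux H = ρ·cp·(Ts - Ta)/ra. The z0(LAI) and d(LAI) relations below are simple illustrative parameterizations (as are all the numeric values), not the ones used in the paper:

```python
import math

RHO, CP, KARMAN = 1.2, 1005.0, 0.41  # air density, specific heat, von Karman

def aerodynamic_resistance(z_ref, d, z0, wind_speed):
    """Neutral-stability log-law aerodynamic resistance [s/m]."""
    return math.log((z_ref - d) / z0) ** 2 / (KARMAN**2 * wind_speed)

def sensible_heat(lai, h_canopy=0.8, z_ref=2.0, u=3.0, ts=303.0, ta=298.0):
    """Sensible heat flux [W/m^2] with LAI-dependent roughness terms."""
    cover = min(lai / 3.0, 1.0)            # illustrative canopy-closure proxy
    d = 0.67 * h_canopy * cover            # illustrative d(LAI)
    z0 = 0.123 * h_canopy * cover + 0.01   # illustrative z0(LAI)
    ra = aerodynamic_resistance(z_ref, d, z0, u)
    return RHO * CP * (ts - ta) / ra

for lai in (0.5, 1.5, 3.0):
    print(lai, round(sensible_heat(lai), 1))
```

Even in this crude form, H changes by a factor of a few as LAI varies over a growing season, which is why a sensitivity analysis of the energy balance to LAI time variation is worthwhile.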

  19. A Study of the Dependence of Microsegregation on Critical Solidification Parameters in Rapidly-Quenched Structures.

    DTIC Science & Technology

    1980-12-01

    ...a formulation given in many sources (Refs. 1-3). The laser is assumed to penetrate completely through the material (making a "keyhole") and the heat... absorbed laser power as determined from calorimetric measurements. The analytical predictions were brought to close agreement with the experimental... kW power setting would be about 45 kW/cm2. This value is close to the 50 kW/cm2 line predicted by the model. As in Fig. 13, the laser dwell time is...

  20. A Penning sputter ion source with very low energy spread

    NASA Astrophysics Data System (ADS)

    Nouri, Z.; Li, R.; Holt, R. A.; Rosner, S. D.

    2010-03-01

    We have developed a version of the Frankfurt Penning ion source that produces ion beams with very low energy spreads of ˜3 eV, while operating in a new discharge mode characterized by very high pressure, low voltage, and high current. The extracted ions also comprise substantial fractions of metastable and doubly charged species. Detailed studies of the operating parameters of the source showed that careful adjustment of the magnetic field and gas pressure is critical to achieving optimum performance. We used a laser-fluorescence method of energy analysis to characterize the properties of the extracted ion beam with a resolving power of 1×10^4, and to measure the absolute ion beam energy to an accuracy of 4 eV, in order to provide some insight into the distribution of the plasma potential within the ion source. This characterization method is widely applicable to accelerator beams, though not universal. The low energy spread, coupled with the ability to produce intense ion beams from almost any gas or conducting solid, makes this source very useful for high-resolution spectroscopic measurements on fast-ion beams.

  1. Improved response functions for gamma-ray skyshine analyses

    NASA Astrophysics Data System (ADS)

    Shultis, J. K.; Faw, R. E.; Deng, X.

    1992-09-01

    A computationally simple method, based on line-beam response functions, is refined for estimating gamma skyshine dose rates. Critical to this method is the availability of an accurate approximation for the line-beam response function (LBRF). In this study, the LBRF is evaluated accurately with the point-kernel technique using recent photon interaction data. Various approximations to the LBRF are considered, and a three-parameter formula is selected as the most practical approximation. By fitting the approximating formula to point-kernel results, a set of parameters is obtained that allows the LBRF to be quickly and accurately evaluated for energies between 0.01 and 15 MeV, for source-to-detector distances from 1 to 3000 m, and for beam angles from 0 to 180 degrees. This re-evaluation of the approximate LBRF gives better accuracy, especially at low energies, over a greater source-to-detector range than do previous LBRF approximations. A conical beam response function is also introduced for application to skyshine sources that are azimuthally symmetric about a vertical axis. The new response functions are then applied to three simple skyshine geometries (an open silo geometry, an infinite wall, and a rectangular four-wall building) and the results are compared to previous calculations and benchmark data.

  3. Challenges of UV light processing of low UVT foods and beverages

    NASA Astrophysics Data System (ADS)

    Koutchma, Tatiana

    2010-08-01

    Ultraviolet (UV) technology holds promise as a low-cost non-thermal alternative to the heat pasteurization of liquid foods and beverages. However, its application to foods is still limited due to low UV transmittance (LUVT). LUVT foods have a diverse range of chemical (pH, Brix, Aw), physical (density and viscosity) and optical (absorbance and scattering) properties that are critical for system and process design. The commercially available UV sources tested for foods include low- and medium-pressure mercury lamps (LPM and MPM), excimer lamps, and pulsed lamps (PUV). The LPM and excimer lamps are monochromatic sources, whereas the emission of MPM and PUV lamps is polychromatic. Optimized designs of UV systems and UV sources, with parameters matched to specific product spectra, have the potential to make UV treatments of LUVT foods more effective and will support further commercialization. In order to select a UV source for a specific food application, the processing effects on nutritional, quality, sensorial, and safety markers have to be evaluated. This paper reviews the current status of UV technology for food processing along with the regulatory requirements, followed by a discussion of approaches to, and results of, measurements of the chemico-physical and optical properties of various foods (fresh juices, milk, liquid whey proteins, and sweeteners) that are critical for UV process and system design. The available UV sources did not prove fully effective, resulting either in low microbial reduction or in UV over-dosing of the product, thereby leading to sensory changes. Beam shaping of UV light presents new opportunities to improve dose uniformity and the delivery of UV photons in LUVT foods.
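
    A minimal sketch of why low UV transmittance matters for dose delivery: for a well-mixed fluid layer of depth L with absorption coefficient α (base e), Beer-Lambert attenuation gives a path-averaged fluence rate of I_avg = I0·(1 - exp(-αL))/(αL), so most of a strongly absorbing beverage is under-dosed. The UVT values below are illustrative, not measurements from the paper:

```python
import math

def average_fluence_fraction(alpha, depth):
    """Path-averaged / incident fluence rate for a Beer-Lambert layer."""
    x = alpha * depth
    return (1.0 - math.exp(-x)) / x if x > 0 else 1.0

# UVT is conventionally the transmittance through a 1 cm path,
# so alpha = -ln(UVT) per cm.
for uvt in (0.90, 0.30, 0.05):  # clear-water-like down to a low-UVT juice
    alpha = -math.log(uvt)
    frac = average_fluence_fraction(alpha, depth=1.0)  # 1 cm layer
    print(f"UVT = {uvt:.2f}: average/incident fluence = {frac:.2f}")
```

At UVT = 0.05 the bulk of a 1 cm layer sees only about a third of the incident fluence, which is why thin-film reactors, turbulent mixing, or beam shaping are needed for LUVT products.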

  4. Utilization of Global Reference Atmosphere Model (GRAM) for shuttle entry

    NASA Technical Reports Server (NTRS)

    Joosten, Kent

    1987-01-01

    At high latitudes, dispersions in middle-atmosphere density values from the Global Reference Atmosphere Model (GRAM) are observed to be large, particularly in winter. Trajectories have been run for inclinations from 28.5 deg to 98 deg. The critical part of the atmosphere for reentry is 250,000 to 270,000 ft; 250,000 ft is the altitude where the shuttle trajectory levels out. For ascending passes the critical region occurs near the equator; for descending entries it lies at northern latitudes. The computed trajectory is input to the GRAM, which computes means and deviations of atmospheric parameters at each point along the trajectory. There is little latitudinal dispersion for the ascending passes, the strongest source of deviations being seasonal; for the descending passes, however, very wide seasonal and latitudinal deviations are exhibited at all orbital inclinations. For shuttle operations the problem is one of control: maintaining the correct entry corridor while avoiding both aerodynamic skipping and excessive heat loads.

  5. A critical review of noise production models for turbulent, gas-fueled burners

    NASA Technical Reports Server (NTRS)

    Mahan, J. R.

    1984-01-01

    The combustion noise literature for the period between 1952 and early 1984 is critically reviewed. Primary emphasis is placed on past theoretical and semi-empirical attempts to predict or explain the observed direct combustion noise characteristics of turbulent, gas-fueled burners; works involving liquid-fueled burners are reviewed only when they present ideas equally applicable to gas-fueled burners. The historical development of the most important contemporary direct combustion noise theories is traced, and the theories themselves are compared and criticized. While most theories explain combustion noise production by turbulent flames in terms of randomly distributed acoustic monopoles produced by turbulent mixing of products and reactants, none is able to predict the sound pressure in the acoustic far field of a practical burner, because of the lack of a proven model relating the combustion noise source strength at a given frequency to the design and operating parameters of the burner. Recommendations are given for establishing a benchmark-quality database needed to support the development of such a model.

  6. The critical role of uncertainty in projections of hydrological extremes

    NASA Astrophysics Data System (ADS)

    Meresa, Hadush K.; Romanowicz, Renata J.

    2017-08-01

    This paper aims to quantify the uncertainty in projections of future hydrological extremes in the Biala Tarnowska River at the Koszyce gauging station, southern Poland. The approach followed is based on several climate projections obtained from the EURO-CORDEX initiative, raw and bias-corrected realizations of catchment precipitation, and flow simulations derived using multiple hydrological model parameter sets. The projections cover the 21st century. Three sources of uncertainty are considered: one related to the climate projection ensemble spread, the second to the uncertainty in hydrological model parameters, and the third to the error in fitting theoretical distribution models to annual extreme flow series. The uncertainty of projected extreme indices related to hydrological model parameters was conditioned on flow observations from the reference period using the generalized likelihood uncertainty estimation (GLUE) approach, with separate criteria for high- and low-flow extremes. Extreme (low and high) flow quantiles were estimated using the generalized extreme value (GEV) distribution at different return periods and were based on two different lengths of the flow time series. A sensitivity analysis based on the analysis of variance (ANOVA) shows that, for the low-flow extremes, the uncertainty introduced by the hydrological model parameters can be larger than the climate model variability and the distribution fit uncertainty, whilst for the high-flow extremes higher uncertainty is observed from climate models than from hydrological parameter and distribution fit uncertainties. This implies that ignoring any one of the three uncertainty sources may pose a serious risk to future adaptation to hydrological extremes and to water resource planning and management.
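    The distribution-fitting step can be sketched with a standard GEV fit and a return-period quantile. The synthetic annual-maximum series and its parameters below are illustrative assumptions, not the Biala Tarnowska data:

    ```python
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(42)
    # Synthetic annual maximum flows, m^3/s (illustrative only)
    annual_max = genextreme.rvs(c=-0.1, loc=100, scale=30, size=60,
                                random_state=rng)

    # Fit GEV parameters (scipy's shape c, location, scale) to the series
    c, loc, scale = genextreme.fit(annual_max)

    # Flow quantile for a T-year return period: non-exceedance prob. 1 - 1/T
    T = 100
    q100 = genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
    print(round(q100, 1))
    ```

    The paper's distribution-fit uncertainty arises because c, loc, and scale estimated from a finite record are themselves uncertain, which is why the study compares fits from two record lengths.
    
    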

  7. Buckling of thermally fluctuating spherical shells: Parameter renormalization and thermally activated barrier crossing

    NASA Astrophysics Data System (ADS)

    Baumgarten, Lorenz; Kierfeld, Jan

    2018-05-01

    We study the influence of thermal fluctuations on the buckling behavior of thin elastic capsules with a spherical rest shape. Above a critical uniform pressure, an elastic capsule becomes mechanically unstable and spontaneously buckles into a shape with an axisymmetric dimple. Thermal fluctuations affect the buckling instability by two mechanisms. On the one hand, thermal fluctuations can renormalize the capsule's elastic properties and its pressure because of anharmonic couplings between normal displacement modes of different wavelengths. This effectively lowers its critical buckling pressure [Košmrlj and Nelson, Phys. Rev. X 7, 011002 (2017), 10.1103/PhysRevX.7.011002]. On the other hand, buckled shapes are energetically favorable already at pressures below the classical buckling pressure. At these pressures, however, buckling requires overcoming an energy barrier, which vanishes only at the critical buckling pressure. In the presence of thermal fluctuations, the capsule can overcome an energy barrier of the order of the thermal energy by thermal activation, already at pressures below the critical buckling pressure. We revisit parameter renormalization by thermal fluctuations and formulate a buckling criterion based on scale-dependent renormalized parameters to obtain a temperature-dependent critical buckling pressure. Then we quantify the pressure-dependent energy barrier for buckling below the critical buckling pressure using numerical energy minimization and analytical arguments. This allows us to obtain the temperature-dependent critical pressure for buckling by thermal activation over this energy barrier. Remarkably, both parameter renormalization and thermal activation lead to the same dependence of the critical buckling pressure on temperature, capsule radius and thickness, and Young's modulus. 
Finally, we study the combined effect of parameter renormalization and thermal activation by using renormalized parameters for the energy barrier in thermal activation to obtain our final result for the temperature-dependent critical pressure, which is significantly below the results if only parameter renormalization or only thermal activation is considered.

  8. A criticality result for polycycles in a family of quadratic reversible centers

    NASA Astrophysics Data System (ADS)

    Rojas, D.; Villadelprat, J.

    2018-06-01

    We consider the family of dehomogenized Loud's centers Xμ = y (x - 1)∂x + (x + Dx2 + Fy2)∂y, where μ = (D , F) ∈R2, and we study the number of critical periodic orbits that emerge or disappear from the polycycle at the boundary of the period annulus. This number is defined exactly the same way as the well-known notion of cyclicity of a limit periodic set and we call it criticality. The previous results on the issue for the family {Xμ , μ ∈R2 } distinguish between parameters with criticality equal to zero (regular parameters) and those with criticality greater than zero (bifurcation parameters). A challenging problem not tackled so far is the computation of the criticality of the bifurcation parameters, which form a set ΓB of codimension 1 in R2. In the present paper we succeed in proving that a subset of ΓB has criticality equal to one.

  9. Critical state of sand matrix soils.

    PubMed

    Marto, Aminaton; Tan, Choy Soon; Makhtar, Ahmad Mahir; Kung Leong, Tiong

    2014-01-01

    Critical State Soil Mechanics (CSSM) is a globally recognised framework, and the critical states for sand and clay are both well established. Nevertheless, the critical state of sand matrix soils remains undeveloped. This paper discusses the development of critical state lines and the corresponding critical state parameters for the investigated material, sand matrix soils, using sand-kaolin mixtures. The output of this paper can be used as an interpretation framework for future research on the liquefaction susceptibility of sand matrix soils. A strain-controlled triaxial test apparatus was used to apply monotonic loading to the reconstituted soil specimens. All tested soils were isotropically consolidated and sheared under undrained conditions until the critical state was ascertained. Based on the results of 32 test specimens, critical state lines for eight different sand matrix soils were developed together with the corresponding values of the critical state parameters M, λ, and Γ, whose ranges are 0.803-0.998, 0.144-0.248, and 1.727-2.279, respectively. These values are comparable to the critical state parameters of river sand and kaolin clay. However, the relationship between fines percentage and these critical state parameters is too scattered to be correlated.
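    The three parameters reported here enter the standard CSSM critical state relations, q_cs = M·p′ in stress space and v_cs = Γ − λ·ln p′ in compression space (with Γ conventionally the specific volume at p′ = 1 kPa). A minimal sketch using values from within the reported ranges:

    ```python
    import numpy as np

    # Critical state relations in standard CSSM form:
    #   q_cs = M * p'                 (deviator stress at critical state)
    #   v_cs = Gamma - lam * ln(p')   (specific volume at critical state)
    # Parameter values chosen from the ranges reported in the abstract.
    M, lam, Gamma = 0.9, 0.2, 2.0

    p = np.array([50.0, 100.0, 200.0, 400.0])   # mean effective stress, kPa
    q_cs = M * p
    v_cs = Gamma - lam * np.log(p)

    print(q_cs)                 # deviator stress along the critical state line
    print(np.round(v_cs, 3))    # specific volume decreasing with ln p'
    ```

    A soil state plotting above this v–ln p′ line is looser than critical and contracts (or, undrained, generates positive pore pressure) on shearing, which is the link to the liquefaction susceptibility work the authors anticipate.
    
    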

  10. Critical State of Sand Matrix Soils

    PubMed Central

    Marto, Aminaton; Tan, Choy Soon; Makhtar, Ahmad Mahir; Kung Leong, Tiong

    2014-01-01

    Critical State Soil Mechanics (CSSM) is a globally recognised framework, and the critical states for sand and clay are both well established. Nevertheless, the critical state of sand matrix soils remains undeveloped. This paper discusses the development of critical state lines and the corresponding critical state parameters for the investigated material, sand matrix soils, using sand-kaolin mixtures. The output of this paper can be used as an interpretation framework for future research on the liquefaction susceptibility of sand matrix soils. A strain-controlled triaxial test apparatus was used to apply monotonic loading to the reconstituted soil specimens. All tested soils were isotropically consolidated and sheared under undrained conditions until the critical state was ascertained. Based on the results of 32 test specimens, critical state lines for eight different sand matrix soils were developed together with the corresponding values of the critical state parameters M, λ, and Γ, whose ranges are 0.803–0.998, 0.144–0.248, and 1.727–2.279, respectively. These values are comparable to the critical state parameters of river sand and kaolin clay. However, the relationship between fines percentage and these critical state parameters is too scattered to be correlated. PMID:24757417

  11. Critical parameters for sterilization of oil palm fruit by microwave irradiation

    NASA Astrophysics Data System (ADS)

    Sarah, Maya; Taib, M. R.

    2017-08-01

    A study to evaluate the critical parameters of microwave irradiation for sterilizing oil palm fruit was carried out at power densities of 560 to 1120 W/kg. The critical parameters are important to ensure that the moisture loss during sterilization exceeds the critical moisture (Mc) but remains below the maximum moisture (Mmax). The critical moisture in this study was determined from the dielectric loss factor of heated oil palm fruits at 2450 MHz, obtained by characterizing the slope of the dielectric-loss-factor-versus-moisture-loss curve. The Mc was used to indicate the critical temperature (Tc) and critical time (tc) for microwave sterilization. To keep the moisture loss above the critical value without exceeding the maximum value, the time-temperature combinations for sterilization of oil palm fruits by microwave irradiation ranged from 6 min at 75°C to 17 min at 82°C.
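    The slope-characterization step can be sketched as locating the knee of the loss-factor curve, i.e. the moisture loss where the slope changes most sharply. The synthetic curve below, with an assumed knee at 12% moisture loss, is purely hypothetical:

    ```python
    import numpy as np

    # Hypothetical slope characterization: take the critical moisture as the
    # point where the slope of the dielectric-loss-factor-vs-moisture-loss
    # curve changes most abruptly. Curve and knee location are invented.
    moisture_loss = np.linspace(0, 30, 31)      # % moisture loss
    loss_factor = np.where(moisture_loss < 12,
                           10 - 0.2 * moisture_loss,
                           7.6 - 0.6 * (moisture_loss - 12))

    slope = np.gradient(loss_factor, moisture_loss)
    # Knee = point of largest (most negative) change in slope
    knee = moisture_loss[np.argmin(np.gradient(slope, moisture_loss))]
    print(knee)
    ```
    
    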

  12. Dosimetric characterization of the M−15 high‐dose‐rate Iridium−192 brachytherapy source using the AAPM and ESTRO formalism

    PubMed Central

    Thanh, Minh‐Tri Ho; Munro, John J.

    2015-01-01

    The Source Production & Equipment Co. (SPEC) model M−15 is a new Iridium−192 brachytherapy source model intended for use as a temporary high‐dose‐rate (HDR) brachytherapy source in the Nucletron microSelectron Classic afterloading system. The purpose of this study is to characterize this HDR source for clinical application by obtaining a complete set of Monte Carlo calculated dosimetric parameters for the M‐15, as recommended by the AAPM and ESTRO for isotopes with average energies greater than 50 keV. This was accomplished by using the MCNP6 Monte Carlo code to simulate the source dosimetry at various points within a pseudoinfinite water phantom. These dosimetric values were then converted into the AAPM and ESTRO dosimetry parameters, and the statistical uncertainty in each parameter was also calculated and is presented. The M−15 source was modeled in an MCNP6 Monte Carlo environment using the physical source specifications provided by the manufacturer. Iridium−192 photons were generated uniformly inside the iridium core of the model M−15, with photon and secondary electron transport replicated using the photoatomic cross‐sectional tables supplied with MCNP6. Simulations were performed for both water and air/vacuum computer models, with a total of 4×10^9 source photon histories for each simulation and the in‐air photon spectrum filtered to remove low‐energy photons below δ = 10 keV. Dosimetric data, including D(r,θ), gL(r), F(r,θ), Φan(r), and φ̄an, and their statistical uncertainties were calculated from the output of an MCNP model consisting of an M−15 source placed at the center of a spherical water phantom of 100 cm diameter. The air kerma strength in free space, SK, and the dose rate constant, Λ, were also computed from an MCNP model in which the M−15 Iridium−192 source was centered at the origin of an evacuated phantom and a critical volume containing air at STP was added 100 cm from the source center. 
The reference dose rate, D˙(r0,θ0)≡D˙(1 cm, π/2), is found to be 4.038±0.064 cGy mCi−1 h−1. The air kerma strength, SK, is reported to be 3.632±0.086 cGy cm2 mCi−1 h−1, and the dose rate constant, Λ, is calculated to be 1.112±0.029 cGy h−1 U−1. The normalized dose rate, radial dose function, and anisotropy function, with their uncertainties, were computed and are presented in both tabular and graphical format in the report. A dosimetric study of the new M−15 Iridium−192 HDR brachytherapy source was performed using the MCNP6 radiation transport code. Dosimetric parameters, including the dose‐rate constant, radial dose function, and anisotropy function, were calculated in accordance with the updated AAPM and ESTRO recommendations for brachytherapy sources of average energy greater than 50 keV. These data therefore may be applied toward the development of a treatment planning program and for clinical use of the source. PACS numbers: 87.56.bg, 87.53.Jw PMID:26103489
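    The TG-43 formalism these parameters feed combines them as D˙(r,θ) = S_K·Λ·[G_L(r,θ)/G_L(r0,θ0)]·g_L(r)·F(r,θ). A minimal sketch using the reported S_K and Λ, but with an assumed active source length and with g_L and F defaulted to 1 (the real values are the tabulated functions from the study):

    ```python
    import numpy as np

    SK = 3.632       # air kerma strength per unit activity (from the abstract)
    LAMBDA = 1.112   # dose rate constant, cGy h^-1 U^-1 (from the abstract)
    L = 0.35         # active source length, cm -- ASSUMED, not from the abstract

    def G_L(r, theta):
        """TG-43 line-source geometry function, beta / (L * r * sin(theta))."""
        z, y = r * np.cos(theta), r * np.sin(theta)
        beta = np.arctan2(z + L / 2, y) - np.arctan2(z - L / 2, y)
        return beta / (L * y)

    def dose_rate(r, theta, g_L=lambda r: 1.0, F=lambda r, t: 1.0, sk=1.0):
        """TG-43 dose rate; g_L and F default to 1 (real values are lookups)."""
        return sk * LAMBDA * G_L(r, theta) / G_L(1.0, np.pi / 2) \
               * g_L(r) * F(r, theta)

    # At the reference point (1 cm, pi/2) the formalism reduces to SK * Lambda,
    # reproducing the reported reference dose rate of 4.038 +/- 0.064
    print(round(dose_rate(1.0, np.pi / 2, sk=SK), 3))
    ```
    
    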

  13. Space Qualification Test of a-Silicon Solar Cell Modules

    NASA Technical Reports Server (NTRS)

    Kim, Q.; Lawton, R. A.; Manion, S. J.; Okuno, J. O.; Ruiz, R. P.; Vu, D. T.; Vu, D. T.; Kayali, S. A.; Jeffrey, F. R.

    2004-01-01

    The basic requirements of solar cell modules for space applications are generally described in MIL-S-83576 for the specific needs of the USAF. However, the specifications of solar cells intended for use in space terrestrial applications are not well defined. Therefore, this qualification test effort concentrated on critical areas specific to the microseismometer probe intended for inclusion in the Mars microprobe programs. The parameters evaluated included performance dependence on illumination angle, terrestrial temperature, lifetime, and impact-landing conditions. Our qualification efforts were limited to these most critical areas of concern. Most of the tested solar cell modules met the requirements of the program except in the impact tests. Surprisingly, one of the two single-PIN 2 x 1 amorphous solar cell modules continued to function even after the 80,000 G impact tests. The output power parameters Pout, FF, Isc and Voc of the single-PIN amorphous solar cell module were found to be 3.14 mW, 0.40, 9.98 mA and 0.78 V, respectively. These parameters are good enough to consider the solar module as a possible power source for the microprobe seismometer. Based on the results obtained from this intensive short-term laboratory test effort, some recommendations were made to improve the usefulness of amorphous silicon solar cell modules in space terrestrial applications.

  14. With-in host dynamics of L. monocytogenes and thresholds for distinct infection scenarios.

    PubMed

    Rahman, Ashrafur; Munther, Daniel; Fazil, Aamir; Smith, Ben; Wu, Jianhong

    2018-05-26

    The case fatality and illness rates associated with L. monocytogenes continue to pose a serious public health burden despite the significant efforts and control protocols administered by private and public sectors. Thanks to advances in surveillance and improvements in detection methodology, the sources, transmission routes, growth potential in food processing units and storage, and the effects of pH and temperature are well understood. However, the within-host growth and transmission mechanisms of L. monocytogenes, particularly in the human host, remain unclear, largely due to the limited scope for scientific experimentation on the human population. In order to provide insight into the human immune response to infection caused by L. monocytogenes, we develop a within-host mathematical model. The model explains, in terms of biological parameters, the states of asymptomatic infection, mild infection, and systemic infection leading to listeriosis. The activation and proliferation of T-cells are found to be critical in determining susceptibility to the infection. Utilizing stability analysis and numerical simulation, the ranges of the critical parameters relative to the infection states are established. Bifurcation analysis shows the impact of differences in these parameters on the dynamics of the model. Finally, we present model applications in regard to predicting the risk potential of listeriosis in the susceptible human population. Copyright © 2018. Published by Elsevier Ltd.
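    The abstract does not give the authors' equations, so the two-compartment sketch below (pathogen load versus activated T-cells, with invented rate constants) only illustrates the kind of within-host dynamics being described, including the role of T-cell activation in controlling the pathogen:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Hypothetical within-host sketch: pathogen load B and activated T-cells T.
    # All rates are invented for illustration; they are NOT the authors' model.
    r, k, a, d = 1.0, 0.5, 0.8, 0.3   # growth, killing, activation, decay

    def rhs(t, y):
        B, T = y
        dB = r * B - k * B * T   # pathogen growth minus T-cell killing
        dT = a * B - d * T       # activation driven by pathogen, natural decay
        return [dB, dT]

    sol = solve_ivp(rhs, (0, 50), [0.01, 0.1], max_step=0.1)
    B_end, T_end = sol.y[:, -1]
    # This toy system settles toward B* = d*r/(a*k), T* = r/k: a persistent
    # (chronic) load whose size falls as the activation rate a rises.
    print(np.round([B_end, T_end], 2))
    ```

    Lowering the activation rate a raises the equilibrium pathogen load d·r/(a·k), a caricature of the abstract's finding that T-cell activation is critical to susceptibility.
    
    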

  15. Development and evaluation of paclitaxel nanoparticles using a quality-by-design approach.

    PubMed

    Yerlikaya, Firat; Ozgen, Aysegul; Vural, Imran; Guven, Olgun; Karaagaoglu, Ergun; Khan, Mansoor A; Capan, Yilmaz

    2013-10-01

    The aims of this study were to develop and characterize paclitaxel nanoparticles, to identify and control critical sources of variability in the process, and to understand the impact of formulation and process parameters on the critical quality attributes (CQAs) using a quality-by-design (QbD) approach. For this, a risk assessment study was performed with various formulation and process parameters to determine their impact on CQAs of nanoparticles, which were determined to be average particle size, zeta potential, and encapsulation efficiency. Potential risk factors were identified using an Ishikawa diagram and screened by Plackett-Burman design and finally nanoparticles were optimized using Box-Behnken design. The optimized formulation was further characterized by Fourier transform infrared spectroscopy, X-ray diffractometry, differential scanning calorimetry, scanning electron microscopy, atomic force microscopy, and gas chromatography. It was observed that paclitaxel transformed from crystalline state to amorphous state while totally encapsulating into the nanoparticles. The nanoparticles were spherical, smooth, and homogenous with no dichloromethane residue. In vitro cytotoxicity test showed that the developed nanoparticles are more efficient than free paclitaxel in terms of antitumor activity (more than 25%). In conclusion, this study demonstrated that understanding formulation and process parameters with the philosophy of QbD is useful for the optimization of complex drug delivery systems. © 2013 Wiley Periodicals, Inc. and the American Pharmacists Association.

  16. TU-D-201-06: HDR Plan Prechecks Using Eclipse Scripting API

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palaniswaamy, G; Morrow, A; Kim, S

    Purpose: Automate brachytherapy treatment plan quality checks using the Eclipse v13.6 scripting API, based on pre-configured rules, to minimize human error and maximize efficiency. Methods: The HDR Precheck system is developed based on a rules-driven approach using the Eclipse scripting API. The system checks critical plan parameters such as channel length, first source position, source step size and channel mapping. The planned treatment time is verified independently by analytical methods: for interstitial or SAVI APBI treatment plans, a Patterson-Parker system calculation is performed to verify the planned treatment time, and for endobronchial treatments an analytical formula from TG-59 is used. Acceptable tolerances were defined based on clinical experience in our department. The system was designed to show PASS/FAIL status levels; additional information, if necessary, is indicated in a separate comments field in the user interface. Results: The HDR Precheck system has been developed and tested to verify the treatment plan parameters that are routinely checked by the clinical physicist. The report also serves as a reminder or checklist for the planner to perform any additional critical checks, such as applicator digitization, or for scenarios where the channel mapping was intentionally changed. It is expected to reduce the current manual plan check time from 15 minutes to <1 minute. Conclusion: Automating brachytherapy plan prechecks significantly reduces treatment plan precheck time and reduces human error. When fully developed, this system will be able to perform a TG-43-based second check of the treatment planning system's dose calculation using random points in the target and critical structures. A histogram will be generated along with tabulated mean and standard deviation values for each structure. 
    A knowledge database will also be developed for Brachyvision plans, which will then be used for knowledge-based plan quality checks to further reduce treatment planning errors and increase confidence in the planned treatment.
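    The rules-driven pattern described above can be sketched in a few lines. The actual system uses the Eclipse scripting API (C#); the parameter names and tolerances below are hypothetical stand-ins:

    ```python
    # Illustrative rules-driven plan precheck. Parameter names and tolerance
    # values are hypothetical, not clinical recommendations.
    TOLERANCES = {
        "channel_length_mm": (1300.0, 0.5),        # (expected, +/- tolerance)
        "first_source_position_mm": (1295.0, 0.5),
        "step_size_mm": (5.0, 0.0),                # must match exactly
    }

    def precheck(plan):
        """Compare each plan parameter against its rule; return PASS/FAIL."""
        return {name: "PASS" if abs(plan[name] - expected) <= tol else "FAIL"
                for name, (expected, tol) in TOLERANCES.items()}

    plan = {"channel_length_mm": 1300.2,
            "first_source_position_mm": 1297.0,   # out of tolerance
            "step_size_mm": 5.0}
    print(precheck(plan))
    ```

    Keeping the rules in a data table rather than in code is what lets tolerances be reconfigured per clinic without touching the checking logic.
    
    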

  17. Probing ultra-fast processes with high dynamic range at 4th-generation light sources: Arrival time and intensity binning at unprecedented repetition rates.

    PubMed

    Kovalev, S; Green, B; Golz, T; Maehrlein, S; Stojanovic, N; Fisher, A S; Kampfrath, T; Gensch, M

    2017-03-01

    Understanding dynamics on ultrafast timescales enables unique and new insights into important processes in the materials and life sciences. In this respect, the fundamental pump-probe approach based on ultra-short photon pulses aims at the creation of stroboscopic movies. Performing such experiments at one of the many recently established accelerator-based 4th-generation light sources such as free-electron lasers or superradiant THz sources allows an enormous widening of the accessible parameter space for the excitation and/or probing light pulses. Compared to table-top devices, critical issues of this type of experiment are fluctuations of the timing between the accelerator and external laser systems and intensity instabilities of the accelerator-based photon sources. Existing solutions have so far been only demonstrated at low repetition rates and/or achieved a limited dynamic range in comparison to table-top experiments, while the 4th generation of accelerator-based light sources is based on superconducting radio-frequency technology, which enables operation at MHz or even GHz repetition rates. In this article, we present the successful demonstration of ultra-fast accelerator-laser pump-probe experiments performed at an unprecedentedly high repetition rate in the few-hundred-kHz regime and with a currently achievable optimal time resolution of 13 fs (rms). Our scheme, based on the pulse-resolved detection of multiple beam parameters relevant for the experiment, allows us to achieve an excellent sensitivity in real-world ultra-fast experiments, as demonstrated for the example of THz-field-driven coherent spin precession.

  18. Explicitly integrating parameter, input, and structure uncertainties into Bayesian Neural Networks for probabilistic hydrologic forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xuesong; Liang, Faming; Yu, Beibei

    2011-11-09

    Estimating the uncertainty of hydrologic forecasting is valuable to water resources and other relevant decision-making processes. Recently, Bayesian Neural Networks (BNNs) have proved to be powerful tools for quantifying the uncertainty of streamflow forecasting. In this study, we propose a Markov Chain Monte Carlo (MCMC) framework to incorporate the uncertainties associated with input, model structure, and parameters into BNNs. This framework allows the structure of the neural networks to change by removing or adding connections between neurons, and enables scaling of input data through rainfall multipliers. The results show that the new BNNs outperform BNNs that only consider the uncertainties associated with parameters and model structure. Critical evaluation of the posterior distributions of the neural network weights, the number of effective connections, the rainfall multipliers, and the hyper-parameters shows that the assumptions held in our BNNs are not well supported. Further understanding of the characteristics of the different uncertainty sources, and inclusion of output error in the MCMC framework, are expected to enhance the application of neural networks to uncertainty analysis in hydrologic forecasting.
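    The core MCMC idea, sampling model weights jointly with an input (rainfall) multiplier, can be sketched with random-walk Metropolis on a toy linear rainfall-runoff model. Everything below (data, model, priors) is an illustrative assumption, far simpler than the paper's BNN:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy data: "observed" flow as a noisy linear response to rainfall
    rain = rng.uniform(0, 10, 50)
    flow = 2.0 * rain + rng.normal(0, 1.0, 50)

    def log_post(theta):
        w, log_m = theta  # model weight and log of the rainfall multiplier
        resid = flow - w * np.exp(log_m) * rain
        # Gaussian likelihood + N(0, 0.1^2) prior keeping the multiplier near 1
        return -0.5 * np.sum(resid**2) - 0.5 * (log_m / 0.1) ** 2

    # Random-walk Metropolis over (w, log_m)
    theta, lp = np.array([1.0, 0.0]), None
    lp = log_post(theta)
    samples = []
    for _ in range(5000):
        prop = theta + rng.normal(0, [0.05, 0.02])
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            theta, lp = prop, lp_prop
        samples.append(theta)
    samples = np.array(samples)
    print(np.round(samples[2500:].mean(axis=0), 2))  # posterior means after burn-in
    ```

    In the paper the parameter vector additionally encodes which network connections exist, so the sampler explores model structures as well as weights and multipliers.
    
    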

  19. QCD nature of dark energy at finite temperature: Cosmological implications

    NASA Astrophysics Data System (ADS)

    Azizi, K.; Katırcı, N.

    2016-05-01

    The Veneziano ghost field has been proposed as an alternative source of dark energy whose energy density is consistent with cosmological observations. In this model, the energy density of the QCD ghost field is expressed in terms of QCD degrees of freedom at zero temperature. We extend this model to finite temperature in order to trace its predictions from the late-time back to the early universe. We depict the variation with temperature, from zero up to a critical temperature, of the QCD parameters entering the calculations, the dark energy density, the equation of state, and the Hubble and deceleration parameters. We compare our results with the observations and theoretical predictions available for different eras. It is found that this model consistently describes the universe from quark condensation up to the present, and its predictions are not in tension with those of standard cosmology. The EoS parameter of dark energy is dynamical and evolves from -1/3 in the presence of radiation to -1 at late times. The finite-temperature ghost dark energy predictions for the Hubble parameter fit well with those of ΛCDM and with observations at late times.

  20. Gravitational waves from plunges into Gargantua

    NASA Astrophysics Data System (ADS)

    Compère, Geoffrey; Fransen, Kwinten; Hertog, Thomas; Long, Jiang

    2018-05-01

    We analytically compute time domain gravitational waveforms produced in the final stages of extreme mass ratio inspirals of non-spinning compact objects into supermassive nearly extremal Kerr black holes. Conformal symmetry relates all corotating equatorial orbits in the geodesic approximation to circular orbits through complex conformal transformations. We use this to obtain the time domain Teukolsky perturbations for generic equatorial corotating plunges in closed form. The resulting gravitational waveforms consist of an intermediate polynomial ringdown phase in which the decay rate depends on the impact parameters, followed by an exponential quasi-normal mode decay. The waveform amplitude exhibits critical behavior when the orbital angular momentum tends to a minimal value determined by the innermost stable circular orbit. We show that either near-critical or large angular momentum leads to a significant extension of the LISA observable volume of gravitational wave sources of this kind.

  1. Critical parameters of hard-core Yukawa fluids within the structural theory

    NASA Astrophysics Data System (ADS)

    Bahaa Khedr, M.; Osman, S. M.

    2012-10-01

    A purely statistical mechanical approach is proposed to account for the liquid-vapor critical point based on the mean density approximation (MDA) of the direct correlation function. The application to hard-core Yukawa (HCY) fluids facilitates the use of the series mean spherical approximation (SMSA). The location of the critical parameters for the HCY fluid with variable intermolecular range is accurately calculated. Good agreement is observed with computer simulation results and with inverse temperature expansion (ITE) predictions. The influence of the potential range on the critical parameters is demonstrated, and the universality of the critical compressibility ratio is discussed. The behavior of the isochoric and isobaric heat capacities along the equilibrium line and in the near vicinity of the critical point is discussed in detail.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khomkin, A. L., E-mail: alhomkin@mail.ru; Shumikhin, A. S.

    The conductivity of metal vapors at and near the critical point has been considered; liquid-metal conductivity originates in this region. The thermodynamic parameters of the critical point, the density of conduction electrons, and the conductivities of various metal vapors have been calculated within a unified approach. It is proposed that the conductivity at the critical point (the critical conductivity) be considered a fourth critical parameter, in addition to the density, temperature, and pressure.

  3. Optimum Design of Forging Process Parameters and Preform Shape under Uncertainties

    NASA Astrophysics Data System (ADS)

    Repalle, Jalaja; Grandhi, Ramana V.

    2004-06-01

    Forging is a highly complex non-linear process that is vulnerable to various uncertainties, such as variations in billet geometry, die temperature, material properties, workpiece and forging equipment positional errors and process parameters. A combination of these uncertainties could induce heavy manufacturing losses through premature die failure, final part geometric distortion and production risk. Identifying the sources of uncertainties, quantifying and controlling them will reduce risk in the manufacturing environment, which will minimize the overall cost of production. In this paper, various uncertainties that affect forging tool life and preform design are identified, and their cumulative effect on the forging process is evaluated. Since the forging process simulation is computationally intensive, the response surface approach is used to reduce time by establishing a relationship between the system performance and the critical process design parameters. Variability in system performance due to randomness in the parameters is computed by applying Monte Carlo Simulations (MCS) on generated Response Surface Models (RSM). Finally, a Robust Methodology is developed to optimize forging process parameters and preform shape. The developed method is demonstrated by applying it to an axisymmetric H-cross section disk forging to improve the product quality and robustness.
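    The response-surface-plus-Monte-Carlo pattern described above can be sketched end to end: fit a cheap quadratic surrogate to a handful of "simulation" runs, then propagate input randomness through the surrogate. The toy stress model and the two process parameters below are hypothetical, not the paper's forging simulation:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Stand-in for an expensive forging simulation: die stress as a function of
    # billet temperature T and friction factor mu (purely illustrative).
    def simulate(T, mu):
        return 500 - 0.3 * T + 800 * mu + 0.0002 * T**2

    # Fit a quadratic response surface model (RSM) from 30 "simulation" runs
    T = rng.uniform(900, 1100, 30)
    mu = rng.uniform(0.1, 0.4, 30)
    X = np.column_stack([np.ones_like(T), T, mu, T**2, mu**2, T * mu])
    coef, *_ = np.linalg.lstsq(X, simulate(T, mu), rcond=None)

    # Monte Carlo Simulation on the cheap surrogate: propagate input variability
    Ts = rng.normal(1000, 10, 100_000)     # temperature scatter
    mus = rng.normal(0.25, 0.02, 100_000)  # friction scatter
    Xs = np.column_stack([np.ones_like(Ts), Ts, mus, Ts**2, mus**2, Ts * mus])
    y = Xs @ coef
    print(round(y.mean(), 1), round(y.std(), 1))  # response mean and spread
    ```

    The surrogate makes the 100,000 Monte Carlo evaluations essentially free, which is exactly why the paper fits RSMs before running MCS.
    
    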

  4. Methods and pitfalls of measuring thermal preference and tolerance in lizards.

    PubMed

    Camacho, Agustín; Rusch, Travis W

    2017-08-01

    Understanding methodological and biological sources of bias during the measurement of thermal parameters is essential for the advancement of thermal biology. For more than a century, studies on lizards have deepened our understanding of thermal ecophysiology, employing multiple methods to measure thermal preferences and tolerances. We reviewed 129 articles concerned with measuring preferred body temperature (PBT), voluntary thermal tolerance, and critical temperatures of lizards to offer: a) an overview of the methods used to measure and report these parameters, b) a summary of the methodological and biological factors affecting thermal preference and tolerance, c) recommendations to avoid identified pitfalls, and d) directions for continued progress in our application and understanding of these thermal parameters. We emphasize the need for more methodological and comparative studies. Lastly, we urge researchers to provide more detailed methodological descriptions and suggest ways to make their raw data more informative to increase the utility of thermal biology studies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. A Semi-Quantitative Study of the Impact of Bacterial Pollutant Uptake Capability on Bioremediation in a Saturated Sand-Packed Two-Dimensional Microcosm: Experiments and Simulation

    NASA Astrophysics Data System (ADS)

    Zheng, S.; Ford, R.; Van den Berg, B.

    2016-12-01

    The transport of microorganisms through the saturated porous matrix of soil is critical to the success of bioremediation in polluted groundwater systems. Chemotaxis can direct the movement of microorganisms toward higher concentrations of pollutants, which they chemically transform and use as carbon and energy sources, resulting in enhanced bioremediation efficiency. In addition to accessibility and degradation kinetics, bacterial uptake of the pollutants is a critical step in bioremediation. In order to study the impact of bacterial pollutant uptake capability on bioremediation, a two-dimensional microcosm packed with saturated sand was set up to mimic a natural groundwater system in which mass-transfer limitation poses a barrier. A toluene source was continuously injected into the microcosm from an injection port. Pseudomonas putida F1, either wild-type (WT) or genetic mutants (TodX knockout; TodX and CymD knockout) that exhibited impaired toluene uptake capability, was co-injected with a conservative tracer into the microcosm either above or below the toluene. After each run, samples were collected from a dozen effluent ports to determine the concentration profiles of the bacteria and tracers. Toluene serves as the only carbon source throughout the microcosm, so percent recovery, the ratio of cells collected at the outlet to those injected at the inlet, can be used as an indicator of bioremediation efficiency. Comparisons were made between the WT and mutant strains, where PpF1 WT showed greater proliferation than the mutants. Comparisons at low and high toluene source concentrations showed that the PpF1 mutant strains exhibited a greater degree of growth inhibition than WT at the higher toluene concentration.
A mathematical model was applied to evaluate the impact of various parameters on toluene uptake, illustrating that, with reasonable parameter estimates, the bioremediation efficiency was more sensitive to proliferation than to transport. The results show that in a two-dimensional microcosm mimicking features of a natural groundwater system, the toluene uptake capability of bacteria can be the remediation-rate-limiting step, implying the potential of engineering bacterial uptake to enhance bioremediation efficiency.
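The percent-recovery indicator defined above is a simple ratio; the sketch below computes it for hypothetical inlet/outlet cell counts (the numbers are invented, not data from the study). A recovery above 100% signals net proliferation in the microcosm:

```python
# Percent recovery as a bioremediation-efficiency indicator: the ratio of
# cells collected at the effluent ports to cells injected at the inlet.
def percent_recovery(cells_out, cells_in):
    return 100.0 * cells_out / cells_in

# Illustrative counts only: WT proliferates, the uptake-impaired mutant lags.
wild_type = percent_recovery(cells_out=8.2e6, cells_in=5.0e6)
mutant = percent_recovery(cells_out=3.1e6, cells_in=5.0e6)
print(f"WT: {wild_type:.0f}%  mutant: {mutant:.0f}%")
```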

  6. Terrestrial photovoltaic cell process testing

    NASA Technical Reports Server (NTRS)

    Burger, D. R.

    1985-01-01

    The paper examines critical test parameters, criteria for selecting appropriate tests, and the use of statistical controls and test patterns to enhance PV-cell process test results. The coverage of critical test parameters is evaluated by examining available test methods and then screening these methods by considering the ability to measure those critical parameters which are most affected by the generic process, the cost of the test equipment and test performance, and the feasibility for process testing.

  7. Terrestrial photovoltaic cell process testing

    NASA Astrophysics Data System (ADS)

    Burger, D. R.

    The paper examines critical test parameters, criteria for selecting appropriate tests, and the use of statistical controls and test patterns to enhance PV-cell process test results. The coverage of critical test parameters is evaluated by examining available test methods and then screening these methods by considering the ability to measure those critical parameters which are most affected by the generic process, the cost of the test equipment and test performance, and the feasibility for process testing.

  8. Photon orbits and thermodynamic phase transition of d -dimensional charged AdS black holes

    NASA Astrophysics Data System (ADS)

    Wei, Shao-Wen; Liu, Yu-Xiao

    2018-05-01

    We study the relationship between the null geodesics and thermodynamic phase transition for the charged AdS black hole. In the reduced parameter space, we find that there exist nonmonotonic behaviors of the photon sphere radius and the minimum impact parameter for pressures below the critical value. The study also shows that the changes of the photon sphere radius and the minimum impact parameter can serve as order parameters for the small-large black hole phase transition. In particular, these changes have a universal exponent of 1/2 near the critical point for any spacetime dimension d. These results imply that there may exist universal critical behavior of gravity near the thermodynamic critical point of the black hole system.
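The quoted exponent can be checked numerically: if the change in the order parameter scales as Δr ∝ (1 − P/P_c)^(1/2), a log-log fit recovers 1/2. The amplitude and critical pressure below are arbitrary stand-ins, so this only illustrates the fitting procedure, not the black-hole result itself:

```python
import numpy as np

# Synthetic order-parameter data obeying the claimed square-root scaling
# near the critical point; A and Pc are arbitrary illustration values.
A, Pc = 2.0, 1.0
P = np.linspace(0.90, 0.999, 50)
delta_r = A * (1 - P / Pc) ** 0.5

# Fit log(delta_r) vs log(1 - P/Pc): the slope is the critical exponent.
slope, intercept = np.polyfit(np.log(1 - P / Pc), np.log(delta_r), 1)
print(f"fitted exponent: {slope:.3f}")
```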

  9. Risk assessment for tephra dispersal and sedimentation: the example of four Icelandic volcanoes

    NASA Astrophysics Data System (ADS)

    Biass, Sebastien; Scaini, Chiara; Bonadonna, Costanza; Smith, Kate; Folch, Arnau; Höskuldsson, Armann; Galderisi, Adriana

    2014-05-01

    In order to assist the elaboration of proactive measures for the management of future Icelandic volcanic eruptions, we developed a new approach to assess the impact associated with tephra dispersal and sedimentation at various scales and for multiple sources. Target volcanoes are Hekla, Katla, Eyjafjallajökull and Askja, selected for their high probabilities of eruption and/or their high potential impact. We combined stratigraphic studies, probabilistic strategies and numerical modelling to develop comprehensive eruption scenarios and compile hazard maps for local ground deposition and regional atmospheric concentration using both TEPHRA2 and FALL3D models. New algorithms for the identification of comprehensive probability density functions of eruptive source parameters were developed for both short and long-lasting activity scenarios. A vulnerability assessment of socioeconomic and territorial aspects was also performed at both national and continental scales. The identification of relevant vulnerability indicators allowed for the identification of the most critical areas and territorial nodes. At a national scale, the vulnerability of economic activities and the accessibility to critical infrastructures was assessed. At a continental scale, we assessed the vulnerability of the main airline routes and airports. Resulting impact and risk were finally assessed by combining hazard and vulnerability analysis.

  10. An Improved Method to Control the Critical Parameters of a Multivariable Control System

    NASA Astrophysics Data System (ADS)

    Subha Hency Jims, P.; Dharmalingam, S.; Wessley, G. Jims John

    2017-10-01

    The role of control systems is to cope with process deficiencies and the undesirable effects of external disturbances. Most multivariable processes are highly interactive and complex in nature. Aircraft systems, modern power plants, refineries and robotic systems are a few such complex systems that involve numerous critical parameters that need to be monitored and controlled. Control of these important parameters is not only tedious and cumbersome but also crucial from environmental, safety and quality perspectives. In this paper, one such multivariable system, namely a utility boiler, has been considered. A modern power plant is a complex arrangement of pipework and machinery with numerous interacting control loops and support systems. The calculation of controller parameters based on classical tuning concepts is presented, and the controller parameters thus obtained were employed to control the critical parameters of a boiler during fuel-switching disturbances. The proposed method can be applied to control critical parameters such as the elevator, aileron, rudder, elevator trim, rudder trim and aileron trim, and flap control systems of aircraft.
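The abstract does not say which classical tuning rule was used; as one hedged example, the Ziegler-Nichols ultimate-gain table converts an experimentally found ultimate gain Ku and ultimate period Tu into PID settings. The Ku and Tu values below are invented, not boiler data from the paper:

```python
# Classic Ziegler-Nichols PID tuning from a closed-loop oscillation test:
# Ku is the proportional gain at which the loop sustains oscillation,
# Tu is the period of that oscillation.
def zn_pid(Ku, Tu):
    """Return (Kp, Ti, Td) from the classic Ziegler-Nichols PID table."""
    return 0.6 * Ku, 0.5 * Tu, 0.125 * Tu

Kp, Ti, Td = zn_pid(Ku=4.0, Tu=10.0)   # illustrative test results
print(f"Kp={Kp:.2f}  Ti={Ti:.1f} s  Td={Td:.2f} s")
```

In practice these settings are a starting point and are detuned for loops, such as boiler pressure, where overshoot during fuel switching is unacceptable.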

  11. Elastomeric enriched biodegradable polyurethane sponges for critical bone defects: a successful case study reducing donor site morbidity.

    PubMed

    Lavrador, Catarina; Mascarenhas, Ramiro; Coelho, Paulo; Brites, Cláudia; Pereira, Alfredo; Gogolewski, Sylwester

    2016-03-01

    Bone substitutes have been a critical issue as the natural source can seldom provide enough bone to support full healing. No bone substitute complies with all necessary functions and characteristics that an autograft does. Polyurethane sponges have been used as a surgical alternative to cancellous bone grafts for critical bone defect donor sites. Critical bone defects were created on the tibial tuberosity and iliac crest using an ovine model. In group I (control-untreated), no bone regeneration was observed in any animal. In group II (defects left empty but covered with a microporous polymeric membrane), the new bone bridged the top ends in all animals. In groups III and IV, bone defects were implanted with polyurethane scaffolds modified with biologically active compounds, and bone regeneration was more efficient than in group II. In groups III and IV there were higher values of bone regeneration specific parameters used for evaluation (P < 0.05) although the comparison between these groups was not possible. The results obtained in this study suggest that biodegradable polyurethane substitutes modified with biologically active substances may offer an alternative to bone graft, reducing donor site morbidity associated with autogenous cancellous bone harvesting.

  12. Power flow analysis of two coupled plates with arbitrary characteristics

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.

    1988-01-01

    The limitation of keeping the two plates identical is removed, and the vibrational power input and output are evaluated for different area ratios, plate thickness ratios, and different values of the structural damping loss factor for the source plate (the plate with excitation) and the receiver plate. In performing this parametric analysis, the source plate characteristics are kept constant. The purpose of the parametric analysis is to determine the most critical parameters that influence the flow of vibrational power from the source plate to the receiver plate. In the case of the structural damping parametric analysis, the influence of changes in the source plate damping is also investigated. As was done previously, results obtained from the mobility power flow approach are compared to results obtained using a statistical energy analysis (SEA) approach. The significance of the power flow results is discussed, together with a comparison between the SEA results and the mobility power flow results. Furthermore, the benefits that can be derived from using the mobility power flow approach are also examined.
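For the SEA side of such a comparison, the steady-state power balance for two coupled subsystems reduces to a 2×2 linear system in the subsystem energies. The loss and coupling factors below are illustrative placeholders, not values from the paper:

```python
import numpy as np

# Two-subsystem SEA power balance, excitation on the source plate only:
#   P1 = w*(eta1 + eta12)*E1 - w*eta21*E2
#   0  = -w*eta12*E1 + w*(eta2 + eta21)*E2
w = 2 * np.pi * 500.0           # analysis (band-center) frequency, rad/s
eta1, eta2 = 0.01, 0.01         # damping loss factors (assumed)
eta12, eta21 = 0.002, 0.003     # coupling loss factors (assumed)
P1 = 1.0                        # input power to the source plate, W

A = w * np.array([[eta1 + eta12, -eta21],
                  [-eta12, eta2 + eta21]])
E1, E2 = np.linalg.solve(A, [P1, 0.0])

# Net vibrational power flowing from source plate to receiver plate.
P12 = w * (eta12 * E1 - eta21 * E2)
print(f"E1={E1:.3e} J  E2={E2:.3e} J  P12={P12:.3e} W")
```

A built-in consistency check: the net power flowing into the receiver must equal the power it dissipates, w·eta2·E2.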

  13. Critical bounds on noise and SNR for robust estimation of real-time brain activity from functional near infra-red spectroscopy.

    PubMed

    Aqil, Muhammad; Jeong, Myung Yung

    2018-04-24

    The robust characterization of real-time brain activity carries potential for many applications. However, the contamination of measured signals by various instrumental, environmental, and physiological sources of noise introduces a substantial amount of signal variance and, consequently, challenges real-time estimation of contributions from underlying neuronal sources. Functional near infra-red spectroscopy (fNIRS) is an emerging imaging modality whose real-time potential is yet to be fully explored. The objectives of the current study are to (i) validate a time-dependent linear model of hemodynamic responses in fNIRS, and (ii) test the robustness of this approach against measurement noise (instrumental and physiological) and misspecification of the hemodynamic response basis functions (amplitude, latency, and duration). We propose a linear hemodynamic model with time-varying parameters, which are estimated (adapted and tracked) using a dynamic recursive least squares algorithm. Owing to the linear nature of the activation model, the problem of achieving robust convergence to an accurate estimate of the model parameters is recast as a problem of parameter-error stability around the origin. We show that robust convergence of the proposed method is guaranteed in the presence of an acceptable degree of model misspecification, and we derive an upper bound on the noise under which reliable parameters can still be inferred. We also derive a lower bound on the signal-to-noise ratio above which reliable parameters can still be inferred from a channel/voxel. Whilst here applied to fNIRS, the proposed methodology is applicable to other hemodynamic-based imaging technologies such as functional magnetic resonance imaging. Copyright © 2018 Elsevier Inc. All rights reserved.
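A minimal sketch of recursive least squares with a forgetting factor, the estimator family named above, tracking the weights of a linear model y_t = φ_tᵀθ + noise. The two-basis regressor, noise level, and true weights are invented for illustration and are not the fNIRS-specific choices of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# RLS with forgetting factor lam: each new sample updates the parameter
# estimate theta and its (scaled) covariance P in O(p^2) time.
T, lam = 500, 0.98
theta_true = np.array([1.0, 0.5])       # assumed "true" model weights
theta = np.zeros(2)
P = np.eye(2) * 1000.0                  # large initial covariance

for t in range(T):
    phi = np.array([1.0, np.sin(0.05 * t)])      # regressor: drift + response
    y = phi @ theta_true + 0.01 * rng.standard_normal()
    k = P @ phi / (lam + phi @ P @ phi)          # gain vector
    theta = theta + k * (y - phi @ theta)        # parameter update
    P = (P - np.outer(k, phi @ P)) / lam         # covariance update

print("estimated parameters:", np.round(theta, 2))
```

The forgetting factor below 1 is what makes the estimator track slowly time-varying parameters rather than average over the whole record.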

  14. Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems

    DOE PAGES

    Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.; ...

    2018-04-30

    The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.
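The idea can be illustrated with a toy power iteration on a small stand-in "fission matrix": starting from a source close to the dominant eigenvector (mimicking the deterministic estimate) converges in fewer inactive cycles than a flat guess. The matrix and tolerance are invented, and the real MC iteration is stochastic rather than deterministic:

```python
import numpy as np

# Toy 3x3 "fission matrix"; its dominant eigenvector plays the role of
# the converged fission source distribution.
F = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.5, 0.2],
              [0.0, 0.2, 1.0]])

def inactive_cycles(source, tol=1e-8):
    """Iterate s <- F s (renormalized) until the source converges."""
    s = source / source.sum()
    for cycle in range(1, 1000):
        s_new = F @ s
        s_new /= s_new.sum()
        if np.abs(s_new - s).max() < tol:
            return cycle
        s = s_new
    return None

flat = np.ones(3)                       # naive uniform initial guess
eig = np.linalg.eigh(F)[1][:, -1]       # stand-in for a deterministic estimate
better = np.abs(eig) + 0.01             # slightly imperfect, as in practice
print(inactive_cycles(flat), inactive_cycles(better))
```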

  15. Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.

    The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.

  16. The use of sub-critical water hydrolysis for the recovery of peptides and free amino acids from food processing wastes. Review of sources and main parameters.

    PubMed

    Marcet, Ismael; Álvarez, Carlos; Paredes, Benjamín; Díaz, Mario

    2016-03-01

    Food industry processing wastes are produced in enormous amounts every year. Such wastes are usually disposed of, with the corresponding economic cost; in the best scenario they can be used for pet food or composting. However, new promising technologies and tools have been developed in recent years aimed at recovering valuable compounds from this type of material. In particular, sub-critical water hydrolysis (SWH) has been revealed as an interesting way of recovering high added-value molecules, and its applications have been broadly reported in the literature. Special interest has been focused on recovering protein hydrolysates, in the form of peptides or amino acids, from both animal and vegetable wastes by means of SWH. These recovered biomolecules are of capital importance in fields such as biotechnology research, nutraceuticals and, above all, the food industry, where such products can be applied with very different objectives. The present work reviews the current state of the art of using sub-critical water hydrolysis for protein recovery from food industry wastes. Key parameters such as reaction time, temperature, amino acid degradation and kinetic constants are discussed. Besides, the characteristics of the raw material and the type of products that can be obtained depending on the substrate are reviewed. Finally, the application of these hydrolysates based on their functional properties and antioxidant activity is described. Copyright © 2016 Elsevier Ltd. All rights reserved.
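Reaction time is a key parameter because hydrolysis is a series process: peptides are produced from protein and are themselves degraded. A hedged first-order series-kinetics sketch (the rate constants are invented, not values from the review) shows the peptide yield passing through a maximum:

```python
import numpy as np

# Series reaction protein -> peptides -> degradation products with
# first-order rate constants k1 (hydrolysis) and k2 (peptide degradation).
def yields(t, k1=0.10, k2=0.03):
    """Protein and peptide fractions at time t (min), unit initial protein."""
    protein = np.exp(-k1 * t)
    peptide = k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
    return protein, peptide

for t in (5, 20, 60):
    p, pep = yields(t)
    print(f"t={t:3d} min  protein={p:.2f}  peptides={pep:.2f}")
```

Too short a reaction leaves protein unhydrolyzed; too long a reaction degrades the peptides, which is why the review treats time and temperature together with amino acid degradation.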

  17. Searching for continuous gravitational wave sources in binary systems

    NASA Astrophysics Data System (ADS)

    Dhurandhar, Sanjeev V.; Vecchio, Alberto

    2001-06-01

    We consider the problem of searching for continuous gravitational wave (cw) sources orbiting a companion object. This issue is of particular interest because the low-mass X-ray binaries (LMXBs), and among them Sco X-1, the brightest X-ray source in the sky, might be marginally detectable with ~2 yr of coherent observation time by the Earth-based laser interferometers expected to come on line by 2002, and clearly observable by the second generation of detectors. Moreover, several radio pulsars, which could be deemed to be cw sources, are found to orbit a companion star or planet, and the LIGO-VIRGO-GEO600 network plans to continuously monitor such systems. We estimate the computational costs for a search launched over the additional five parameters describing generic elliptical orbits (up to e ≲ 0.8) using matched filtering techniques. These techniques provide the optimal signal-to-noise ratio and also a very clear and transparent theoretical framework. Since matched filtering will be implemented in the final and most computationally expensive stage of the hierarchical strategies, the theoretical framework provided here can be used to determine the computational costs. In order to disentangle the computational burden involved in the orbital motion of the cw source from the other source parameters (position in the sky and spin-down) and reduce the complexity of the analysis, we assume that the source is monochromatic (there is no intrinsic change in its frequency) and that its location in the sky is exactly known. The orbital elements, on the other hand, are either assumed to be completely unknown or only partly known. We provide ready-to-use analytical expressions for the number of templates required to carry out the searches in the astrophysically relevant regions of the parameter space, and for how the computational cost scales with the ranges of the parameters.
We also determine the critical accuracy to which a particular parameter must be known so that no search is needed for it; we provide rigorous statements, based on the geometrical formulation of data analysis, concerning the size of the parameter space for which a particular neutron star is a one-filter target. This result is formulated in a completely general form, independent of the particular kind of source, and can be applied to any class of signals whose waveform can be accurately predicted. We apply our theoretical analysis to Sco X-1 and the 44 neutron stars with binary companions listed in the most updated version of the radio pulsar catalog. For up to ~3 h of coherent integration time, Sco X-1 will need at most a few templates; for 1 week of integration time the number of templates rapidly rises to ≈5×10^6. This is due to the rather poor measurements available today of the projected semi-major axis and the orbital phase of the neutron star. If, however, the same search is to be carried out with only a few filters, then more refined measurements of the orbital parameters are called for: an improvement of about three orders of magnitude in accuracy is required. Further, we show that the five NSs (radio pulsars) for which the upper limits on the signal strength are highest require no more than a few templates each and can be targeted very cheaply in terms of CPU time. Blind searches over the parameter space of orbital elements are, in general, completely unaffordable for present or near-future dedicated computational resources when the coherent integration time is of the order of the orbital period or longer. For wide binary systems, when the observation covers only a fraction of one orbit, the computational burden reduces enormously and becomes affordable for a significant region of the parameter space.
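The matched-filtering idea underlying the template counts can be seen in a minimal numerical sketch: correlating a known template against noisy data recovers a signal buried in the noise. The waveform, amplitude, and offset below are invented for illustration and have nothing to do with real cw waveforms:

```python
import numpy as np

rng = np.random.default_rng(2)

# A known template, a hidden scaled copy of it in unit-variance Gaussian
# noise, and a sliding correlation normalized to the template norm.
n, offset = 4096, 1500
t = np.arange(256)
template = np.sin(2 * np.pi * 0.05 * t) * np.hanning(256)

data = rng.standard_normal(n)
data[offset:offset + 256] += 5.0 * template      # buried signal

snr = np.correlate(data, template, mode="valid") / np.sqrt(template @ template)
recovered = int(np.argmax(snr))
print("recovered offset:", recovered)
```

A real search must repeat this against one template per grid point of the (here five-dimensional orbital) parameter space, which is where the ≈5×10^6 template count and the associated CPU cost come from.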

  18. Quantum phase transitions between a class of symmetry protected topological states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsui, Lokman; Jiang, Hong-Chen; Lu, Yuan-Ming

    2015-07-01

    The subject of this paper is the phase transition between symmetry protected topological states (SPTs). We consider spatial dimension d and symmetry group G such that the cohomology group H^(d+1)(G, U(1)) contains at least one Z_2n or Z factor. We show that the phase transition between the trivial SPT and the root states that generate the Z_2n or Z groups can be induced on the boundary of a (d+1)-dimensional Z_2n-symmetric SPT by a Z_2n symmetry-breaking field. Moreover, we show these boundary phase transitions can be "transplanted" to d dimensions and realized in lattice models as a function of a tuning parameter. The price one pays is that at the critical value of the tuning parameter there is an extra non-local (duality-like) symmetry. In the case where the phase transition is continuous, our theory predicts the presence of unusual (sometimes fractionalized) excitations corresponding to delocalized boundary excitations of the non-trivial SPT on one side of the transition. This theory also predicts other phase transition scenarios, including a first-order transition and a transition via an intermediate symmetry-breaking phase.

  19. An experimental parametric study of VOC from flooring systems exposed to alkaline solutions.

    PubMed

    Sjöberg, A; Ramnäs, O

    2007-12-01

    This study outlined the influence of a number of parameters affecting the emission rate from one of the largest sources of VOC in the building stock of the Nordic countries. This source is flooring systems of polyvinyl chloride or linoleum attached to a substrate of moisture-damaged or insufficiently dried concrete. The secondary emission rate of degradation products was measured, with the Field and Laboratory Emission Cell, on different flooring systems consisting of three different floorings and three adhesives, exposed to three different aqueous solutions in the pH range 11-13.1. The conclusion drawn in this study is that the great majority of the secondary emission originates from the floor adhesive. The occurrence of adhesive and the amount of adhesive used have a significant influence on the emission rate. A critical pH value for degradation of the adhesive seems to lie somewhere between pH 11 and 13. When designing a floor system or a renovation of a damaged flooring system, it is important to bear in mind the influence of parameters that may drastically shorten the service life. Flooring adhesive may decompose in a moist alkaline environment and give rise to unacceptable secondary emission rates.

  20. Pre-seismic anomalies from optical satellite observations: a review

    NASA Astrophysics Data System (ADS)

    Jiao, Zhong-Hu; Zhao, Jing; Shan, Xinjian

    2018-04-01

    Detecting various anomalies using optical satellite data prior to strong earthquakes is key to understanding and forecasting earthquake activities because of its recognition of thermal-radiation-related phenomena in seismic preparation phases. Data from satellite observations serve as a powerful tool in monitoring earthquake preparation areas at a global scale and in a nearly real-time manner. Over the past several decades, many new different data sources have been utilized in this field, and progressive anomaly detection approaches have been developed. This paper reviews the progress and development of pre-seismic anomaly detection technology in this decade. First, precursor parameters, including parameters from the top of the atmosphere, in the atmosphere, and on the Earth's surface, are stated and discussed. Second, different anomaly detection methods, which are used to extract anomalous signals that probably indicate future seismic events, are presented. Finally, certain critical problems with the current research are highlighted, and new developing trends and perspectives for future work are discussed. The development of Earth observation satellites and anomaly detection algorithms can enrich available information sources, provide advanced tools for multilevel earthquake monitoring, and improve short- and medium-term forecasting, which play a large and growing role in pre-seismic anomaly detection research.

  1. A new, high-precision measurement of the X-ray Cu K α spectrum

    NASA Astrophysics Data System (ADS)

    Mendenhall, Marcus H.; Cline, James P.; Henins, Albert; Hudson, Lawrence T.; Szabo, Csilla I.; Windover, Donald

    2016-03-01

    One of the primary measurement issues addressed with NIST Standard Reference Materials (SRMs) for powder diffraction is that of line position. SRMs for this purpose are certified with respect to lattice parameter, traceable to the SI through precise measurement of the emission spectrum of the X-ray source. Therefore, accurate characterization of the emission spectrum is critical to a minimization of the error bounds on the certified parameters. The presently accepted sources for the SI traceable characterization of the Cu K α emission spectrum are those of Härtwig, Hölzer et al., published in the 1990s. The structure of the X-ray emission lines of the Cu K α complex has been remeasured on a newly commissioned double-crystal instrument, with six-bounce Si (440) optics, in a manner directly traceable to the SI definition of the meter. In this measurement, the entire region from 8020 eV to 8100 eV has been covered with a highly precise angular scale and well-defined system efficiency, providing accurate wavelengths and relative intensities. This measurement is in modest disagreement with reference values for the wavelength of the Kα1 line, and strong disagreement for the wavelength of the Kα2 line.

  2. Transparency of near-critical density plasmas under extreme laser intensities

    NASA Astrophysics Data System (ADS)

    Ji, Liangliang; Shen, Baifei; Zhang, Xiaomei

    2018-05-01

    We investigated the transparency of near-critical-density plasma targets for highly intense incident lasers and discovered that beyond relativistic transparency there exists an anomalous opacity regime, in which the plasma target tends to be opaque at extreme light intensities. This unexpected phenomenon is found to originate from the trapping of ions under exotic conditions. We determined the propagation velocity and the amplitude of the laser-driven charge-separation field over a large parameter range and derived the trapping probability of ions. The model successfully explains the emergence of anomalous opacity in simulations. The trend is more significant when radiation reaction comes into effect, leaving a transparency window in the intensity domain. The transparency of a plasma target defines the electron dynamics and thereby the emission mechanisms of gamma photons in the ultra-relativistic regime. Our findings are not only of fundamental interest but also suggest the proper mechanisms for generating desired electron/gamma sources.

  3. Probing ultra-fast processes with high dynamic range at 4th-generation light sources: Arrival time and intensity binning at unprecedented repetition rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kovalev, S.; Green, B.; Golz, T.

    Here, understanding dynamics on ultrafast timescales enables unique and new insights into important processes in the materials and life sciences. In this respect, the fundamental pump-probe approach based on ultra-short photon pulses aims at the creation of stroboscopic movies. Performing such experiments at one of the many recently established accelerator-based 4th-generation light sources such as free-electron lasers or superradiant THz sources allows an enormous widening of the accessible parameter space for the excitation and/or probing light pulses. Compared to table-top devices, critical issues of this type of experiment are fluctuations of the timing between the accelerator and external laser systems and intensity instabilities of the accelerator-based photon sources. Existing solutions have so far been only demonstrated at low repetition rates and/or achieved a limited dynamic range in comparison to table-top experiments, while the 4th generation of accelerator-based light sources is based on superconducting radio-frequency technology, which enables operation at MHz or even GHz repetition rates. In this article, we present the successful demonstration of ultra-fast accelerator-laser pump-probe experiments performed at an unprecedentedly high repetition rate in the few-hundred-kHz regime and with a currently achievable optimal time resolution of 13 fs (rms). Our scheme, based on the pulse-resolved detection of multiple beam parameters relevant for the experiment, allows us to achieve an excellent sensitivity in real-world ultra-fast experiments, as demonstrated for the example of THz-field-driven coherent spin precession.

  4. Probing ultra-fast processes with high dynamic range at 4th-generation light sources: Arrival time and intensity binning at unprecedented repetition rates

    DOE PAGES

    Kovalev, S.; Green, B.; Golz, T.; ...

    2017-03-06

    Here, understanding dynamics on ultrafast timescales enables unique and new insights into important processes in the materials and life sciences. In this respect, the fundamental pump-probe approach based on ultra-short photon pulses aims at the creation of stroboscopic movies. Performing such experiments at one of the many recently established accelerator-based 4th-generation light sources such as free-electron lasers or superradiant THz sources allows an enormous widening of the accessible parameter space for the excitation and/or probing light pulses. Compared to table-top devices, critical issues of this type of experiment are fluctuations of the timing between the accelerator and external laser systems and intensity instabilities of the accelerator-based photon sources. Existing solutions have so far been only demonstrated at low repetition rates and/or achieved a limited dynamic range in comparison to table-top experiments, while the 4th generation of accelerator-based light sources is based on superconducting radio-frequency technology, which enables operation at MHz or even GHz repetition rates. In this article, we present the successful demonstration of ultra-fast accelerator-laser pump-probe experiments performed at an unprecedentedly high repetition rate in the few-hundred-kHz regime and with a currently achievable optimal time resolution of 13 fs (rms). Our scheme, based on the pulse-resolved detection of multiple beam parameters relevant for the experiment, allows us to achieve an excellent sensitivity in real-world ultra-fast experiments, as demonstrated for the example of THz-field-driven coherent spin precession.

  5. An open source GIS-based tool to integrate the fragmentation mechanism in rockfall propagation

    NASA Astrophysics Data System (ADS)

    Matas, Gerard; Lantada, Nieves; Gili, Josep A.; Corominas, Jordi

    2015-04-01

    Rockfalls are frequent instability processes in road cuts, open pit mines and quarries, steep slopes and cliffs. Even though the stability of rock slopes can be determined using analytical approaches, the assessment of large rock cliffs requires simplifying assumptions due to the difficulty of working with a large number of joints and the scatter of both their orientations and strength parameters. The attitude and persistence of joints within the rock mass define the size of kinematically unstable rock volumes. Furthermore, a rock block will eventually split into several fragments during its propagation downhill due to its impacts with the ground surface. Knowledge of the size, energy, trajectory… of each block resulting from fragmentation is critical in determining the vulnerability of buildings and protection structures. The objective of this contribution is to present a simple and open source tool to simulate the fragmentation mechanism in rockfall propagation models and in the calculation of impact energies. This tool includes common modes of motion for falling boulders based on the previous literature. The final tool is being implemented in a GIS (Geographic Information System) using open source Python programming. The tool under development will be simple, modular, compatible with any GIS environment, open source, and able to model rockfall phenomena correctly. It could be used in any area susceptible to rockfalls once the model parameters have been adjusted to that area; a simulation can then be performed to obtain maps of kinetic energy, frequency, stopping density and passing heights. This GIS-based tool and the analysis of the fragmentation laws using data collected from recent rockfalls have been developed within the RockRisk Project (2014-2016). This project is funded by the Spanish Ministerio de Economía y Competitividad and entitled "Rockfalls in cliffs: risk quantification and its prevention" (BIA2013-42582-P).

  6. Criticality features in ULF magnetic fields prior to the 2011 Tohoku earthquake

    PubMed Central

    HAYAKAWA, Masashi; SCHEKOTOV, Alexander; POTIRAKIS, Stelios; EFTAXIAS, Kostas

    2015-01-01

    The criticality of ULF (ultra-low-frequency) magnetic variations is investigated for the 2011 March 11 Tohoku earthquake (EQ) by natural time analysis. For this attempt, three ULF parameters were considered: (1) Fh (horizontal magnetic field), (2) Fz (vertical magnetic field), and (3) Dh (inverse of horizontal magnetic field). The first two parameters refer to ULF radiation, while the last refers to another ULF effect of ionospheric signature. Nighttime (L.T. = 3 am ± 2 hours) data at Kakioka (KAK) were used, and the power of each quantity at a particular frequency band of 0.03–0.05 Hz was averaged over the nighttime hours. The analysis results indicate that Fh fulfilled all criticality conditions on March 3–5, 2011, and that the additional parameter, Dh, also reached criticality on March 6 or 7. In conclusion, criticality was reached in the pre-EQ fracture region a few days to one week before the main shock of the Tohoku EQ. PMID:25743063
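    The natural time analysis invoked above reduces, at its core, to computing the variance κ1 of "natural time". A minimal, illustrative Python sketch (the function name is ours; the criticality condition κ1 ≈ 0.070 is the standard value from the natural time literature, not a result reported in this record):

```python
def kappa1(energies):
    """Variance of natural time for a series of N events.

    Natural time assigns the k-th event the 'time' chi_k = k/N and the
    weight p_k = Q_k / sum(Q), where Q_k is the event's energy (here, the
    ULF power in the analysed band).  kappa_1 = <chi^2> - <chi>^2; in the
    natural time literature, kappa_1 approaching ~0.070 is taken as the
    signature of criticality.
    """
    n = len(energies)
    total = sum(energies)
    p = [q / total for q in energies]          # normalized energies
    chi = [(k + 1) / n for k in range(n)]      # natural time of each event
    mean = sum(pk * ck for pk, ck in zip(p, chi))
    mean_sq = sum(pk * ck * ck for pk, ck in zip(p, chi))
    return mean_sq - mean * mean
```

    For a flat (uninformative) energy series, kappa1 tends toward the variance of a uniform distribution, 1/12 ≈ 0.083; values dropping to about 0.070 would flag the approach to criticality.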

  7. Critical levels as applied to ozone for North American forests

    Treesearch

    Robert C. Musselman

    2006-01-01

    The United States and Canada have used concentration-based parameters for air quality standards for ozone effects on forests in North America. The European critical levels method for air quality standards uses an exposure-based parameter, a cumulative ozone concentration index with a threshold cutoff value. The critical levels method has not been used in North America...

  8. Hydrothermal carbonization of Opuntia ficus-indica cladodes: Role of process parameters on hydrochar properties.

    PubMed

    Volpe, Maurizio; Goldfarb, Jillian L; Fiori, Luca

    2018-01-01

    Opuntia ficus-indica cladodes are a potential source of solid biofuel from marginal, dry land. Experiments assessed the effects of temperature (180-250°C), reaction time (0.5-3h) and biomass to water ratio (B/W; 0.07-0.30) on chars produced via hydrothermal carbonization. Multivariate linear regression demonstrated that the three process parameters are critically important to hydrochar solid yield, while B/W drives energy yield. Heating value increased together with temperature and reaction time and was maximized at intermediate B/W (0.14-0.20). Microscopy shows evidence of secondary char formed at higher temperatures and B/W ratios. X-ray diffraction, thermogravimetric data, microscopy and inductively coupled plasma mass spectrometry suggest that calcium oxalate in the raw biomass remains in the hydrochar; at higher temperatures, the mineral decomposes into CO 2 and may catalyze char/tar decomposition. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. On a criterion of incipient motion and entrainment into suspension of a particle from cuttings bed in shear flow of non-Newtonian fluid

    NASA Astrophysics Data System (ADS)

    Ignatenko, Yaroslav; Bocharov, Oleg; May, Roland

    2017-10-01

    Solids transport is a major issue in high-angle wells. A bed load forms as sediment is transported in intermittent contact with the stream bed by rolling, sliding and bouncing. This study presents the results of a numerical simulation of laminar steady-state flow around a particle at rest and in free motion in a shear flow of Herschel-Bulkley fluid. The simulation was performed using the OpenFOAM open-source CFD package. A criterion for particle incipient motion and entrainment into suspension from the cuttings bed (the Shields criterion), based on a balance of forces and torques, is discussed. Deviation of the fluid parameters from those of a Newtonian fluid decreases the drag and lift forces and the hydrodynamic moment. Thus, the critical shear stress (Shields parameter) for the considered non-Newtonian fluid must be greater than that for a Newtonian fluid.
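    The two ingredients of the analysis above, the Herschel-Bulkley rheology and the Shields entrainment threshold, can be written down compactly. A minimal sketch (function names and the numeric example below are illustrative, not taken from the paper):

```python
def herschel_bulkley_stress(gamma_dot, tau_y, k, n):
    """Shear stress of a Herschel-Bulkley fluid:
    tau = tau_y + K * gamma_dot**n
    (yield stress tau_y, consistency K, flow index n)."""
    return tau_y + k * gamma_dot ** n


def shields_parameter(tau_b, rho_s, rho_f, g, d):
    """Dimensionless Shields parameter: bed shear stress tau_b scaled by
    the submerged particle weight per unit area, (rho_s - rho_f) * g * d."""
    return tau_b / ((rho_s - rho_f) * g * d)
```

    Setting tau_y = 0 and n = 1 recovers a Newtonian fluid with viscosity K, which is the reference case against which the non-Newtonian critical shear stress is compared.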

  10. High-Sensitivity GaN Microchemical Sensors

    NASA Technical Reports Server (NTRS)

    Son, Kyung-ah; Yang, Baohua; Liao, Anna; Moon, Jeongsun; Prokopuk, Nicholas

    2009-01-01

    Systematic studies have been performed on the sensitivity of GaN HEMT (high electron mobility transistor) sensors using various gate electrode designs and operational parameters. The results here show that a higher sensitivity can be achieved with a larger W/L ratio (W = gate width, L = gate length) at a given D (D = source-drain distance), and multi-finger gate electrodes offer a higher sensitivity than a one-finger gate electrode. In terms of operating conditions, sensor sensitivity is strongly dependent on the transconductance of the sensor. The highest sensitivity can be achieved at the gate voltage where the slope of the transconductance curve is the largest. This work provides critical information about how the gate electrode of a GaN HEMT, which has been identified as the most sensitive among GaN microsensors, needs to be designed, and what operating parameters should be used for high-sensitivity detection.
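    The operating rule stated above (bias the gate where the slope of the transconductance curve is steepest) is straightforward to apply numerically. A hypothetical helper, assuming the transconductance gm has been sampled on a grid of gate voltages:

```python
def best_gate_bias(vg, gm):
    """Return the gate voltage at which |d(gm)/dVg|, estimated by central
    differences over the sampled transconductance curve, is largest --
    the bias point the abstract identifies as most sensitive."""
    best_v, best_slope = None, -1.0
    for i in range(1, len(vg) - 1):
        slope = abs((gm[i + 1] - gm[i - 1]) / (vg[i + 1] - vg[i - 1]))
        if slope > best_slope:
            best_v, best_slope = vg[i], slope
    return best_v
```

    In practice the gm(Vg) samples would come from a DC sweep of the device; the helper simply picks the interior grid point with the steepest local slope.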

  11. Season-ahead water quality forecasts for the Schuylkill River, Pennsylvania

    NASA Astrophysics Data System (ADS)

    Block, P. J.; Leung, K.

    2013-12-01

    Anticipating and preparing for elevated water quality parameter levels in critical water sources, using weather forecasts, is not uncommon. In this study, we explore the feasibility of extending this prediction scale to a season-ahead for the Schuylkill River in Philadelphia, utilizing both statistical and dynamical prediction models, to characterize the season. This advance information has relevance for recreational activities, ecosystem health, and water treatment, as the Schuylkill provides 40% of Philadelphia's water supply. The statistical model associates large-scale climate drivers with streamflow and water quality parameter levels; numerous variables from NOAA's CFSv2 model are evaluated for the dynamical approach. A multi-model combination is also assessed. Results indicate moderately skillful prediction of average summertime total coliform and wintertime turbidity, using season-ahead oceanic and atmospheric variables, predominantly from the North Atlantic Ocean. Models predicting the number of elevated turbidity events across the wintertime season are also explored.

  12. Estimability of geodetic parameters from space VLBI observables

    NASA Technical Reports Server (NTRS)

    Adam, Jozsef

    1990-01-01

    The feasibility of space very long baseline interferometry (VLBI) observables for geodesy and geodynamics is investigated. A brief review of space VLBI systems from the point of view of potential geodetic application is given. A selected notational convention is used to jointly treat the VLBI observables of different types of baselines within a combined ground/space VLBI network. The basic equations of the space VLBI observables appropriate for covariance analysis are derived and included. The corresponding equations for the ground-to-ground baseline VLBI observables are also given for comparison. The simplified expressions of the mathematical models for both space VLBI observables (time delay and delay rate) include the ground station coordinates, the satellite orbital elements, the earth rotation parameters, the radio source coordinates, and clock parameters. The observation equations with these parameters were examined in order to determine which of them are separable or nonseparable. Singularity problems arising from coordinate system definition and critical configuration are studied. Linear dependencies between partials are analytically derived. The mathematical models for ground-space baseline VLBI observables were tested with simulated data in the frame of some numerical experiments. Singularity due to datum defect is confirmed.

  13. Evaluation of deep moonquake source parameters: Implication for fault characteristics and thermal state

    NASA Astrophysics Data System (ADS)

    Kawamura, Taichi; Lognonné, Philippe; Nishikawa, Yasuhiro; Tanaka, Satoshi

    2017-07-01

    While deep moonquakes are seismic events commonly observed on the Moon, their source mechanism is still unexplained. The two main issues are poorly constrained source parameters and incompatibilities between the thermal profiles suggested by many studies and the apparent need for brittle properties at these depths. In this study, we reinvestigated the deep moonquake data to re-estimate source parameters and uncover the characteristics of deep moonquake faults that differ from those on Earth. We first improve the estimation of source parameters through spectral analysis using "new" broadband seismic records made by combining those of the Apollo long- and short-period seismometers. We use the broader frequency band of the combined spectra to estimate corner frequencies and DC values of spectra, which are important parameters to constrain the source parameters. We further use the spectral features to estimate seismic moments and stress drops for more than 100 deep moonquake events from three different source regions. This study revealed that deep moonquake faults are extremely smooth compared to terrestrial faults. Second, we reevaluate the brittle-ductile transition temperature that is consistent with the obtained source parameters. We show that the source parameters imply that tidal stress is the main source of the stress glut causing deep moonquakes and that the large strain rate from tides raises the brittle-ductile transition temperature. Higher transition temperatures open a new possibility to construct a thermal model that is consistent with deep moonquake occurrence and pressure conditions, and thereby improve our understanding of the deep moonquake source mechanism.
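    The spectral analysis described above (corner frequencies and DC spectral levels constraining moment and stress drop) is conventionally based on the Brune omega-squared source model. The record does not give its exact equations, so the following is a generic sketch using the standard Brune relations:

```python
import math


def brune_spectrum(f, omega0, fc):
    """Brune (1970) omega-squared displacement spectrum: flat at the DC
    level omega0 (proportional to seismic moment) below the corner
    frequency fc, falling as f**-2 above it."""
    return omega0 / (1.0 + (f / fc) ** 2)


def stress_drop(m0, fc, beta):
    """Circular-crack stress drop from moment M0 and corner frequency fc:
    source radius a = 2.34 * beta / (2*pi*fc) (Brune), then
    delta_sigma = (7/16) * M0 / a**3."""
    a = 2.34 * beta / (2.0 * math.pi * fc)
    return 7.0 * m0 / (16.0 * a ** 3)
```

    Because the inferred source radius scales as 1/fc, the stress drop scales as fc**3 at fixed moment, which is why well-constrained corner frequencies are essential to the fault-smoothness argument.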

  14. Quantum critical fluctuations in disordered d-wave superconductors.

    PubMed

    Meyer, Julia S; Gornyi, Igor V; Altland, Alexander

    2003-03-14

    To explain the strong quasiparticle damping in the cuprates, Sachdev and collaborators proposed to couple the system to a critically fluctuating id(xy)- or is-order parameter mode. Here we generalize the approach to the presence of static disorder. In the id case, the order parameter dynamics becomes diffusive, but otherwise much of the phenomenology of the clean case remains intact. In contrast, the interplay of disorder and is-order parameter fluctuations leads to a secondary superconductor transition, with a critical temperature exponentially sensitive to the impurity concentration.

  15. Cell death, perfusion and electrical parameters are critical in models of hepatic radiofrequency ablation

    PubMed Central

    Hall, Sheldon K.; Ooi, Ean H.; Payne, Stephen J.

    2015-01-01

    Abstract Purpose: A sensitivity analysis has been performed on a mathematical model of radiofrequency ablation (RFA) in the liver. The purpose of this is to identify the most important parameters in the model, defined as those that produce the largest changes in the prediction. This is important in understanding the role of uncertainty and when comparing the model predictions to experimental data. Materials and methods: The Morris method was chosen to perform the sensitivity analysis because it is ideal for models with many parameters or that take a significant length of time to obtain solutions. A comprehensive literature review was performed to obtain ranges over which the model parameters are expected to vary, crucial input information. Results: The most important parameters in predicting the ablation zone size in our model of RFA are those representing the blood perfusion, electrical conductivity and the cell death model. The size of the 50 °C isotherm is sensitive to the electrical properties of tissue while the heat source is active, and to the thermal parameters during cooling. Conclusions: The parameter ranges chosen for the sensitivity analysis are believed to represent all that is currently known about their values in combination. The Morris method is able to compute global parameter sensitivities taking into account the interaction of all parameters, something that has not been done before. Research is needed to better understand the uncertainties in the cell death, electrical conductivity and perfusion models, but the other parameters are only of second order, providing a significant simplification. PMID:26000972
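    The Morris method used above screens model parameters through "elementary effects". A simplified, illustrative implementation (a randomized one-factor-at-a-time scheme returning the mu* importance measure; the full method builds structured trajectories, which we replace here with independent base points):

```python
import random


def morris_elementary_effects(model, lo, hi, n_traj=20, delta=0.5, seed=0):
    """For each parameter i, the elementary effect at a base point x in the
    unit hypercube is (f(x + delta*e_i) - f(x)) / delta.  Averaging the
    absolute effects over random base points gives mu*, which ranks
    parameter importance while accounting for interactions."""
    rng = random.Random(seed)
    k = len(lo)
    effects = [[] for _ in range(k)]

    def scaled(u):
        # map unit-hypercube coordinates to physical parameter ranges
        return [l + ui * (h - l) for ui, l, h in zip(u, lo, hi)]

    for _ in range(n_traj):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(k)]
        base = model(scaled(x))
        for i in range(k):
            xp = list(x)
            xp[i] += delta
            effects[i].append((model(scaled(xp)) - base) / delta)
    return [sum(abs(e) for e in es) / len(es) for es in effects]  # mu*
```

    For a linear model the effects are exact; for the RFA model of the paper, each model evaluation would be a full ablation simulation, which is why an economical screening design matters.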

  16. The Relationship Between Constraint and Ductile Fracture Initiation as Defined by Micromechanical Analyses

    NASA Technical Reports Server (NTRS)

    Panontin, Tina L.; Sheppard, Sheri D.

    1994-01-01

    The use of small laboratory specimens to predict the integrity of large, complex structures relies on the validity of single parameter fracture mechanics. Unfortunately, the constraint loss associated with large scale yielding, whether in a laboratory specimen because of its small size or in a structure because it contains shallow flaws loaded in tension, can cause the breakdown of classical fracture mechanics and the loss of transferability of critical, global fracture parameters. Although the issue of constraint loss can be eliminated by testing actual structural configurations, such an approach can be prohibitively costly. Hence, a methodology that can correct global fracture parameters for constraint effects is desirable. This research uses micromechanical analyses to define the relationship between global, ductile fracture initiation parameters and constraint in two specimen geometries (SECT and SECB with varying a/w ratios) and one structural geometry (circumferentially cracked pipe). Two local fracture criteria corresponding to ductile fracture micromechanisms are evaluated: a constraint-modified, critical strain criterion for void coalescence proposed by Hancock and Cowling and a critical void ratio criterion for void growth based on the Rice and Tracey model. Crack initiation is assumed to occur when the critical value in each case is reached over some critical length. The primary material of interest is A516-70, a high-hardening pressure vessel steel sensitive to constraint; however, a low-hardening structural steel that is less sensitive to constraint is also being studied. Critical values of local fracture parameters are obtained by numerical analysis and experimental testing of circumferentially notched tensile specimens of varying constraint (e.g., notch radius). 
These parameters are then used in conjunction with large strain, large deformation, two- and three-dimensional finite element analyses of the geometries listed above to predict crack initiation loads and to calculate the associated (critical) global fracture parameters. The loads are verified experimentally, and microscopy is used to measure pre-crack length, crack tip opening displacement (CTOD), and the amount of stable crack growth. Results for A516-70 steel indicate that the constraint-modified, critical strain criterion with a critical length approximately equal to the grain size (0.0025 inch) provides accurate predictions of crack initiation. The critical void growth criterion is shown to considerably underpredict crack initiation loads with the same critical length. The relationship between the critical value of the J-integral for ductile crack initiation and crack depth for SECT and SECB specimens has been determined using the constraint-modified, critical strain criterion, demonstrating that this micromechanical model can be used to correct in-plane constraint effects due to crack depth and bending vs. tension loading. Finally, the relationship developed for the SECT specimens is used to predict the behavior of circumferentially cracked pipe specimens.

  17. A consistent framework to predict mass fluxes and depletion times for DNAPL contaminations in heterogeneous aquifers under uncertainty

    NASA Astrophysics Data System (ADS)

    Koch, Jonas; Nowak, Wolfgang

    2013-04-01

    At many hazardous waste sites and accidental spills, dense non-aqueous phase liquids (DNAPLs) such as TCE, PCE, or TCA have been released into the subsurface. Once a DNAPL is released into the subsurface, it serves as a persistent source of dissolved-phase contamination. In chronological order, the DNAPL migrates through the porous medium and penetrates the aquifer, it forms a complex pattern of immobile DNAPL saturation, it dissolves into the groundwater and forms a contaminant plume, and it slowly depletes and bio-degrades in the long term. In industrial countries the number of such contaminated sites is so high that a ranking from most risky to least risky is advisable. Such a ranking helps to decide whether a site needs to be remediated or may be left to natural attenuation. Both the ranking and the design of proper remediation or monitoring strategies require a good understanding of the relevant physical processes and their inherent uncertainty. To this end, we conceptualize a probabilistic simulation framework that estimates probability density functions of mass discharge, source depletion time, and critical concentration values at crucial target locations. Furthermore, it supports the inference of contaminant source architectures from arbitrary site data. As an essential novelty, the mutual dependencies of the key parameters and interacting physical processes are taken into account throughout the whole simulation. In an uncertain and heterogeneous subsurface setting, we identify three key parameter fields: the local velocities, the hydraulic permeabilities and the DNAPL phase saturations. Obviously, these parameters depend on each other during DNAPL infiltration, dissolution and depletion. In order to highlight the importance of these mutual dependencies and interactions, we present results of several model setups where we vary the physical and stochastic dependencies of the input parameters and simulated processes.
Under these changes, the probability density functions demonstrate strong statistical shifts in their expected values and in their uncertainty. Considering the uncertainties of all key parameters but neglecting their interactions overestimates the output uncertainty. However, consistently using all available physical knowledge when assigning input parameters and simulating all relevant interactions of the involved processes reduces the output uncertainty significantly, back down to useful and plausible ranges. When using our framework in an inverse setting, omitting a parameter dependency within a crucial physical process would lead to physically meaningless identified parameters. Thus, we conclude that the additional complexity we propose is both necessary and adequate. Overall, our framework provides a tool for reliable and plausible prediction, risk assessment, and model-based decision support for DNAPL-contaminated sites.

  18. Infrasonic crackle and supersonic jet noise from the eruption of Nabro Volcano, Eritrea

    NASA Astrophysics Data System (ADS)

    Fee, David; Matoza, Robin S.; Gee, Kent L.; Neilsen, Tracianne B.; Ogden, Darcy E.

    2013-08-01

    The lowermost portion of an explosive volcanic eruption column is considered a momentum-driven jet. Understanding volcanic jets is critical for determining eruption column dynamics and mitigating volcanic hazards; however, volcanic jets are inherently difficult to observe due to their violence and opacity. Infrasound from the 2011 eruption of Nabro Volcano, Eritrea has waveform features highly similar to the "crackle" phenomenon uniquely produced by man-made supersonic jet engines and rockets and is characterized by repeated asymmetric compressions followed by weaker, gradual rarefactions. This infrasonic crackle indicates that infrasound source mechanisms in sustained volcanic eruptions are strikingly similar to jet noise sources from heated, supersonic jet engines and rockets, suggesting that volcanologists can utilize the modeling and physical understandings of man-made jets to understand volcanic jets. The unique, distinctive infrasonic crackle from Nabro highlights the use of infrasound to remotely detect and characterize hazardous eruptions and its potential to determine volcanic jet parameters.

  19. Cosmology, Cosmomicrophysics and Gravitation Properties of the Gravitational Lens Mapping in the Vicinity of a Cusp Caustic

    NASA Astrophysics Data System (ADS)

    Alexandrov, A. N.; Zhdanov, V. I.; Koval, S. M.

    We derive approximate formulas for the coordinates and magnification of critical images of a point source in a vicinity of a cusp caustic arising in the gravitational lens mapping. In the lowest (zero-order) approximation, these formulas were obtained in the classical work by Schneider & Weiss (1992) and then studied by a number of authors; first-order corrections in powers of the proximity parameter were treated by Congdon, Keeton and Nordgren. We have shown that the first-order corrections are solely due to the asymmetry of the cusp. We found expressions for the second-order corrections in the case of a general lens potential and for an arbitrary position of the source near a symmetric cusp. Applications to a lensing galaxy model represented by a singular isothermal sphere with an external shear γ are studied and the role of the second-order corrections is demonstrated.

  20. Utility of correlation techniques in gravity and magnetic interpretation

    NASA Technical Reports Server (NTRS)

    Chandler, V. W.; Koski, J. S.; Braice, L. W.; Hinze, W. J.

    1977-01-01

    Internal correspondence uses Poisson's Theorem in a moving-window linear regression analysis between the anomalous first vertical derivative of gravity and the total magnetic field reduced to the pole. The regression parameters provide critical information on source characteristics. The correlation coefficient indicates the strength of the relation between magnetics and gravity. The slope value gives Δj/Δσ estimates for the anomalous source. The intercept furnishes information on anomaly interference. Cluster analysis consists of the classification of subsets of data into groups of similarity based on correlation of selected characteristics of the anomalies. Model studies are used to illustrate implementation and interpretation procedures of these methods, particularly internal correspondence. Analysis of the results of applying these methods to data from the midcontinent and a transcontinental profile shows they can be useful in identifying crustal provinces, providing information on horizontal and vertical variations of physical properties over province-size zones, validating long-wavelength anomalies, and isolating geomagnetic field removal problems.
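    The internal-correspondence procedure amounts to a moving-window linear regression between two co-registered anomaly profiles. A minimal sketch (windowing and field preprocessing are greatly simplified relative to the actual method; the physical reading of the slope follows Poisson's theorem):

```python
import statistics


def moving_window_regression(x, y, window):
    """Slide a window along two co-registered profiles (e.g. the first
    vertical derivative of gravity and the pole-reduced magnetic field)
    and return, per window, the regression slope, intercept and
    correlation coefficient of y on x."""
    out = []
    for i in range(len(x) - window + 1):
        xw, yw = x[i:i + window], y[i:i + window]
        mx, my = statistics.fmean(xw), statistics.fmean(yw)
        sxx = sum((a - mx) ** 2 for a in xw)
        sxy = sum((a - mx) * (b - my) for a, b in zip(xw, yw))
        syy = sum((b - my) ** 2 for b in yw)
        slope = sxy / sxx                       # Poisson slope estimate
        intercept = my - slope * mx             # anomaly interference
        r = sxy / (sxx * syy) ** 0.5            # strength of correspondence
        out.append((slope, intercept, r))
    return out
```

    High |r| windows flag intervals where gravity and magnetics share a common source, and there the slope can be read as the ratio of magnetization contrast to density contrast.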

  1. Source encoding in multi-parameter full waveform inversion

    NASA Astrophysics Data System (ADS)

    Matharu, Gian; Sacchi, Mauricio D.

    2018-04-01

    Source encoding techniques alleviate the computational burden of sequential-source full waveform inversion (FWI) by considering multiple sources simultaneously rather than independently. The reduced data volume requires fewer forward/adjoint simulations per non-linear iteration. Applications of source-encoded full waveform inversion (SEFWI) have thus far focused on mono-parameter acoustic inversion. We extend SEFWI to the multi-parameter case with applications presented for elastic isotropic inversion. Estimating multiple parameters can be challenging as perturbations in different parameters can prompt similar responses in the data. We investigate the relationship between source encoding and parameter trade-off by examining the multi-parameter source-encoded Hessian. Probing of the Hessian demonstrates the convergence of the expected source-encoded Hessian to that of conventional FWI. The convergence implies that the parameter trade-off in SEFWI is comparable to that observed in FWI. A series of synthetic inversions are conducted to establish the feasibility of source-encoded multi-parameter FWI. We demonstrate that SEFWI requires fewer overall simulations than FWI to achieve a target model error for a range of first-order optimization methods. An inversion for spatially inconsistent P-wave (α) and S-wave (β) velocity models corroborates the expectation of comparable parameter trade-off in SEFWI and FWI. The final example demonstrates a shortcoming of SEFWI when confronted with time-windowing in data-driven inversion schemes. The limitation is a consequence of the implicit fixed-spread acquisition assumption in SEFWI. Alternative objective functions, namely the normalized cross-correlation and L1 waveform misfit, do not enable SEFWI to overcome this limitation.
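    The core of source encoding, stacking randomly encoded shots into a single "supershot", can be sketched simply. An illustrative random-polarity scheme (the encoding functions used in the paper may differ):

```python
import random


def encode_sources(shots, seed=0):
    """Random-polarity source encoding: multiply each shot gather by a
    code of +/-1 and sum, so one 'supershot' simulation replaces many
    sequential-source simulations.  Cross-terms between different shots
    average out as the codes are redrawn across iterations."""
    rng = random.Random(seed)
    codes = [rng.choice((-1.0, 1.0)) for _ in shots]
    n = len(shots[0])
    supershot = [0.0] * n
    for c, shot in zip(codes, shots):
        for j, s in enumerate(shot):
            supershot[j] += c * s
    return codes, supershot
```

    The fixed-spread assumption noted in the abstract appears here implicitly: the sample-by-sample summation only makes sense if every receiver records every shot, which is why time-windowed, data-driven schemes break the method.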

  2. Recent updates in developing a statistical pseudo-dynamic source-modeling framework to capture the variability of earthquake rupture scenarios

    NASA Astrophysics Data System (ADS)

    Song, Seok Goo; Kwak, Sangmin; Lee, Kyungbook; Park, Donghee

    2017-04-01

    Predicting the intensity and variability of strong ground motions is a critical element of seismic hazard assessment. The characteristics and variability of the earthquake rupture process may be a dominant factor in determining the intensity and variability of near-source strong ground motions. Song et al. (2014) demonstrated that the variability of earthquake rupture scenarios can be effectively quantified in the framework of 1-point and 2-point statistics of earthquake source parameters, constrained by rupture dynamics and past events. The developed pseudo-dynamic source modeling schemes were also validated against the recorded ground motion data of past events and empirical ground motion prediction equations (GMPEs) on the broadband platform (BBP) developed by the Southern California Earthquake Center (SCEC). Recently we improved the computational efficiency of the developed pseudo-dynamic source-modeling scheme by adopting a nonparametric co-regionalization algorithm, initially introduced and applied in geostatistics. We also investigated the effect of the earthquake rupture process on near-source ground motion characteristics in the framework of 1-point and 2-point statistics, particularly focusing on the forward directivity region. Finally, we will discuss whether pseudo-dynamic source modeling can reproduce the variability (standard deviation) of empirical GMPEs, and the efficiency of 1-point and 2-point statistics in addressing the variability of ground motions.

  3. Qualification Testing of Laser Diode Pump Arrays for a Space-Based 2-micron Coherent Doppler Lidar

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Meadows, Byron L.; Baker, Nathaniel R.; Barnes, Bruce W.; Singh, Upendra N.; Kavaya, Michael J.

    2007-01-01

    The 2-micron thulium- and holmium-based lasers being considered as the transmitter source for space-based coherent Doppler lidar require high-power laser diode pump arrays operating in a long-pulse regime of about 1 msec. Operating laser diode arrays over such long pulses drastically impacts their useful lifetime due to the excessive localized heating and substantial pulse-to-pulse thermal cycling of their active regions. This paper describes the long-pulse performance of laser diode arrays and their critical thermal characteristics. A viable approach is then offered that allows for determining the optimum operational parameters leading to the maximum attainable lifetime.

  4. Implications of arcing due to spacecraft charging on spacecraft EMI margins of immunity

    NASA Technical Reports Server (NTRS)

    Inouye, G. T.

    1981-01-01

    The effect of arcing due to spacecraft charging on spacecraft EMI margins of immunity was determined. The configuration of the P78-2 spacecraft of the SCATHA program was analyzed. A brushfire arc discharge model was developed, and a technique for initiating discharges with a spark plug trigger was used for data configuration. A set of best-estimate arc discharge parameters was defined. The effects of spacecraft potentials in limiting the discharge current blowout component are included. Arc discharge source models were incorporated into a SEMCAP EMI coupling analysis code for the DSP spacecraft. It is shown that no mission-critical circuits will be affected.

  5. Performance of a Diaphragmed Microlens for a Packaged Microspectrometer

    PubMed Central

    Lo, Joe; Chen, Shih-Jui; Fang, Qiyin; Papaioannou, Thanassis; Kim, Eun-Sok; Gundersen, Martin; Marcu, Laura

    2009-01-01

    This paper describes the design, fabrication, packaging and testing of a microlens integrated in a multi-layered MEMS microspectrometer. The microlens was fabricated using modified PDMS molding to form a suspended lens diaphragm. A Gaussian beam propagation model was used to measure the focal length and quantify the M2 value of the microlens. A tunable calibration source was set up to measure the response of the packaged device. Dual-wavelength separation by the packaged device was demonstrated by CCD imaging and beam profiling of the spectroscopic output. We demonstrated specific techniques to measure critical parameters of micro-optics systems for future optimization of spectroscopic devices. PMID:22399943
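    The Gaussian beam propagation model with a beam quality factor M2 mentioned above follows the standard embedded-Gaussian relation. A minimal sketch (the numeric values in the test are illustrative, not from the paper):

```python
import math


def beam_radius(z, w0, m2, wavelength):
    """Beam radius of a real (M^2 > 1) laser beam along the axis:
    w(z) = w0 * sqrt(1 + (z / z_R)**2), with an effective Rayleigh range
    z_R = pi * w0**2 / (M^2 * lambda).  Fitting measured w(z) to this
    curve yields the waist w0 and the beam quality factor M^2."""
    zr = math.pi * w0 ** 2 / (m2 * wavelength)
    return w0 * math.sqrt(1.0 + (z / zr) ** 2)
```

    In practice, one records beam profiles at several distances behind the lens and fits this hyperbola to extract w0 and M2, from which the focal length and focusability of the microlens follow.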

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weigand, Steven J.; Keane, Denis T.

    The DuPont-Northwestern-Dow Collaborative Access Team (DND-CAT) built and currently manages sector 5 at the Advanced Photon Source (APS), Argonne National Laboratory. One of the principal techniques supported by DND-CAT is Small and Wide-Angle X-ray Scattering (SAXS/WAXS), with an emphasis on simultaneous data collection over a wide azimuthal and reciprocal space range using a custom SAXS/WAXS detector system. A new triple detector system is now in development, and we describe the key parameters and characteristics of the new instrument, which will be faster, more flexible, more robust, and will improve q-space resolution in a critical reciprocal space regime between the traditional WAXS and SAXS ranges.

  7. Filter-Adapted Fluorescent In Situ Hybridization (FA-FISH) for Filtration-Enriched Circulating Tumor Cells.

    PubMed

    Oulhen, Marianne; Pailler, Emma; Faugeroux, Vincent; Farace, Françoise

    2017-01-01

    Circulating tumor cells (CTCs) may represent an easily accessible source of tumor material to assess genetic aberrations such as gene-rearrangements or gene-amplifications and screen cancer patients eligible for targeted therapies. As the number of CTCs is a critical parameter to identify such biomarkers, we developed fluorescent in situ hybridization (FISH) for CTCs enriched on filters (filter-adapted-FISH, FA-FISH). Here, we describe the FA-FISH protocol, the combination of immunofluorescent staining (DAPI/CD45) and FA-FISH techniques, as well as the semi-automated microscopy method that we developed to improve the feasibility and reliability of FISH analyses in filtration-enriched CTCs.

  8. Application of TiN/TiO2 coatings on stainless steel: composition and mechanical reliability

    NASA Astrophysics Data System (ADS)

    Nikolova, M. P.; Genov, A.; Valkov, S.; Yankov, E.; Dechev, D.; Ivanov, N.; Bezdushnyi, R.; Petrov, P.

    2018-03-01

    The paper reports on the effect of the substrate temperature (350 °C, 380 °C and 420 °C) during reactive magnetron sputtering of a TiN film on the phase composition, texture and mechanical properties of TiN/TiO2 coatings on 304L stainless steel substrates. Pure Ti was used as a cathode source of Ti. The texture and unit cell parameters of both TiN and TiO2 phases of the coating are discussed in relation with the tribological properties and adhesion of the coating. The scratch tests performed showed that the nitride deposited at 380 °C, having the highest unit cell parameter and a predominant (111) texture, possessed the lowest friction coefficient (μ), tangential force and brittleness. The anatase-type TiO2 with predominant (101) pole density and increased c unit cell parameter showed the highest stability on the nitride deposited at 420 °C. The results indicated that the friction coefficient, tangential force and critical forces of fracture could be varied by controlling the coating deposition temperature.

  9. Method of boundary testing of the electric circuits and its application for calculating electric tolerances. [electric equipment tests

    NASA Technical Reports Server (NTRS)

    Redkina, N. P.

    1974-01-01

    Boundary testing of electric circuits includes preliminary and limiting tests. Preliminary tests permit determination of the critical parameters causing the greatest deviation of the output parameter of the system. The boundary tests offer the possibility of determining the limits of the fitness of the system with simultaneous variation of its critical parameters.
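
    The boundary-test idea of simultaneously varying critical parameters can be sketched as a worst-case sweep over the corners of the tolerance box. The circuit, nominal values, and tolerances below are hypothetical stand-ins, not values from the paper.

```python
from itertools import product

def output_param(r1, r2, vcc):
    """Illustrative circuit response: a voltage-divider output (hypothetical)."""
    return vcc * r2 / (r1 + r2)

nominal = {"r1": 1000.0, "r2": 2000.0, "vcc": 5.0}
tol = {"r1": 0.05, "r2": 0.05, "vcc": 0.02}  # fractional tolerances

# Evaluate the output at every corner of the tolerance box (2^n combinations).
names = list(nominal)
corners = []
for signs in product((-1, 1), repeat=len(names)):
    vals = {n: nominal[n] * (1 + s * tol[n]) for n, s in zip(names, signs)}
    corners.append(output_param(**vals))

v_nom = output_param(**nominal)
worst_dev = max(abs(v - v_nom) for v in corners)  # worst-case output deviation
```

    For a monotonic response the worst case always sits at a corner, which is why simultaneous limiting of the critical parameters suffices.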

  10. Workflow for Criticality Assessment Applied in Biopharmaceutical Process Validation Stage 1.

    PubMed

    Zahel, Thomas; Marschall, Lukas; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Mueller, Eric M; Murphy, Patrick; Natschläger, Thomas; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph

    2017-10-12

    Identification of critical process parameters that impact product quality is a central task during regulatory requested process validation. Commonly, this is done via design of experiments and identification of parameters significantly impacting product quality (rejection of the null hypothesis that the effect equals 0). However, parameters that show a large uncertainty, and that might push product quality beyond a limit critical to the product, may be missed. This can occur during the evaluation of experiments when the residual/un-modelled variance in the experiments is larger than expected a priori. Estimating such a risk is the task of the presented novel retrospective power analysis permutation test. This is evaluated using a data set for two unit operations established during characterization of a biopharmaceutical process in industry. The results show that, for one unit operation, the observed variance in the experiments is much larger than expected a priori, resulting in low power levels for all non-significant parameters. Moreover, we present a workflow for mitigating the risk associated with overlooked parameter effects. This enables a statistically sound identification of critical process parameters. The developed workflow will substantially support industry in delivering constant product quality, reducing process variance and increasing patient safety.
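
    The core of a retrospective power analysis, recomputing power with the residual variance actually observed rather than the variance assumed a priori, can be sketched by Monte Carlo simulation. The two-level factor, effect size, and run counts below are illustrative assumptions and simplify the paper's permutation-based procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def estimated_power(effect, resid_sd, n_per_level, n_sim=2000, alpha=0.05):
    """Monte Carlo power of detecting a two-level factor effect by t-test."""
    hits = 0
    for _ in range(n_sim):
        lo = rng.normal(0.0, resid_sd, n_per_level)
        hi = rng.normal(effect, resid_sd, n_per_level)
        if stats.ttest_ind(lo, hi).pvalue < alpha:
            hits += 1
    return hits / n_sim

# Power collapses when the observed residual SD is much larger than planned,
# so a truly critical parameter can fail to reach significance.
power_planned = estimated_power(effect=1.0, resid_sd=0.5, n_per_level=4)
power_observed = estimated_power(effect=1.0, resid_sd=1.5, n_per_level=4)
```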

  11. Molecular dynamics study of combustion reactions in supercritical environment. Part 1: Carbon dioxide and water force field parameters refitting and critical isotherms of binary mixtures

    DOE PAGES

    Masunov, Artem E.; Atlanov, Arseniy Alekseyevich; Vasu, Subith S.

    2016-10-04

    Oxy-fuel combustion process is expected to drastically increase the energy efficiency and enable easy carbon sequestration. In this technology the combustion products (carbon dioxide and water) are used to control the temperature, and nitrogen is excluded from the combustion chamber so that nitrogen oxide pollutants do not form. Therefore, in oxycombustion the carbon dioxide and water are present in large concentrations in their transcritical state and may play an important role in kinetics. Computational chemistry methods may assist in understanding these effects, and Molecular Dynamics with the ReaxFF force field seems to be a suitable tool for such a study. Here we investigate the applicability of ReaxFF to describe the critical phenomena in carbon dioxide and water and find that several nonbonding parameters need adjustment. We report the new parameter set, capable of reproducing the critical temperatures and pressures. Furthermore, the critical isotherms of CO2/H2O binary mixtures are computationally studied here for the first time and their critical parameters are reported.

  13. Simulation models in population breast cancer screening: A systematic review.

    PubMed

    Koleva-Kolarova, Rositsa G; Zhan, Zhuozhao; Greuter, Marcel J W; Feenstra, Talitha L; De Bock, Geertruida H

    2015-08-01

    The aim of this review was to critically evaluate published simulation models for breast cancer screening of the general population and provide a direction for future modeling. A systematic literature search was performed to identify simulation models with more than one application. A framework for qualitative assessment was developed which incorporated model type; input parameters; modeling approach, transparency of input data sources/assumptions, sensitivity analyses and risk of bias; validation; and outcomes. Predicted mortality reduction (MR) and cost-effectiveness (CE) were compared to estimates from meta-analyses of randomized control trials (RCTs) and acceptability thresholds. Seven original simulation models were distinguished, all sharing common input parameters. The modeling approach was based on tumor progression (except one model) with internal and cross validation of the resulting models, but without any external validation. Differences in lead times for invasive or non-invasive tumors, and the option for cancers not to progress, were not explicitly modeled. The models tended to overestimate the MR (11-24%) due to screening, as compared to the 10% (95% CI: -2 to 21%) MR from optimal RCTs. Only recently, potential harms due to regular breast cancer screening were reported. Most scenarios resulted in acceptable cost-effectiveness estimates given current thresholds. The selected models have been repeatedly applied in various settings to inform decision making, and the critical analysis revealed high risk of bias in their outcomes. Given the importance of the models, there is a need for externally validated models which use systematic evidence for input data to allow for more critical evaluation of breast cancer screening. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Outburst of GX304-1 Monitored with INTEGRAL: Positive Correlation Between the Cyclotron Line Energy and Flux

    NASA Technical Reports Server (NTRS)

    Klochkov, D.; Doroshenko, V.; Santangelo, A.; Staubert, R.; Ferrigno, C.; Kretschmar, P.; Caballero, I.; Wilms, J.; Kreykenbohm, I.; Pottschmidt, I.; et al.

    2012-01-01

    Context. X-ray spectra of many accreting pulsars exhibit significant variations as a function of flux and thus of mass accretion rate. In some of these pulsars, the centroid energy of the cyclotron line(s), which characterizes the magnetic field strength at the site of the X-ray emission, has been found to vary systematically with flux. Aims. GX304-1 is a recently established cyclotron line source with a line energy around 50 keV. Since 2009, the pulsar shows regular outbursts with the peak flux exceeding one Crab. We analyze the INTEGRAL observations of the source during its outburst in January-February 2012. Methods. The observations covered almost the entire outburst, allowing us to measure the source's broad-band X-ray spectrum at different flux levels. We report on the variations in the spectral parameters with luminosity and focus on the variations in the cyclotron line. Results. The centroid energy of the line is found to be positively correlated with the luminosity. We interpret this result as a manifestation of the local sub-Eddington (sub-critical) accretion regime operating in the source.

  15. Quantifying the Contributions of Environmental Parameters to CERES Surface Net Radiation Error in China

    NASA Astrophysics Data System (ADS)

    Pan, X.; Yang, Y.; Liu, Y.; Fan, X.; Shan, L.; Zhang, X.

    2018-04-01

    Error source analyses are critical for satellite-retrieved surface net radiation (Rn) products. In this study, we evaluate the Rn error sources in the Clouds and the Earth's Radiant Energy System (CERES) project at 43 sites from July 2007 to December 2007 in China. The results show that cloud fraction (CF), land surface temperature (LST), atmospheric temperature (AT) and algorithm error dominate the Rn error, with error contributions of -20, 15, 10 and 10 W/m2 (net shortwave (NSW)/longwave (NLW) radiation), respectively. For NSW, the dominant error source is algorithm error (more than 10 W/m2), particularly in spring and summer with abundant cloud. For NLW, due to the high sensitivity of the algorithm and the large LST/CF error, LST and CF are the largest error sources, especially in northern China. The AT strongly influences the NLW error in southern China because of the large AT error there. The total precipitable water has a weak influence on the Rn error even though the algorithm is highly sensitive to it. In order to improve Rn quality, CF and LST (AT) error in northern (southern) China should be decreased.

  16. Critical point and phase behavior of the pure fluid and a Lennard-Jones mixture

    NASA Astrophysics Data System (ADS)

    Potoff, Jeffrey J.; Panagiotopoulos, Athanassios Z.

    1998-12-01

    Monte Carlo simulations in the grand canonical ensemble were used to obtain liquid-vapor coexistence curves and critical points of the pure fluid and a binary mixture of Lennard-Jones particles. Critical parameters were obtained from mixed-field finite-size scaling analysis and subcritical coexistence data from histogram reweighting methods. The critical parameters of the untruncated Lennard-Jones potential were obtained as Tc*=1.3120±0.0007, ρc*=0.316±0.001 and pc*=0.1279±0.0006. Our results for the critical temperature and pressure are not in agreement with the recent study of Caillol [J. Chem. Phys. 109, 4885 (1998)] on a four-dimensional hypersphere. Mixture parameters were ɛ1=2ɛ2 and σ1=σ2, with Lorentz-Berthelot combining rules for the unlike-pair interactions. We determined the critical point at T*=1.0 and pressure-composition diagrams at three temperatures. Our results have much smaller statistical uncertainties relative to comparable Gibbs ensemble simulations.
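
    A standard way to extract critical parameters from subcritical coexistence data, fitting the scaling law for the density difference together with the law of rectilinear diameters, can be sketched as follows. The coexistence data are synthetic, generated from assumed LJ-like reduced-unit values; the record's study uses mixed-field finite-size scaling instead.

```python
import numpy as np

BETA = 0.326  # 3D Ising critical exponent, assumed

# Synthetic subcritical coexistence densities (illustrative reduced units).
TC_TRUE, RHOC_TRUE, B_AMP, A_SLOPE = 1.312, 0.316, 0.55, 0.1
T = np.linspace(0.9, 1.25, 8)
rho_l = RHOC_TRUE + A_SLOPE * (TC_TRUE - T) + 0.5 * B_AMP * (TC_TRUE - T) ** BETA
rho_v = RHOC_TRUE + A_SLOPE * (TC_TRUE - T) - 0.5 * B_AMP * (TC_TRUE - T) ** BETA

# Scaling law: (rho_l - rho_v)^(1/beta) is linear in T and vanishes at Tc.
m, c = np.polyfit(T, (rho_l - rho_v) ** (1.0 / BETA), 1)
tc_fit = -c / m

# Law of rectilinear diameters: the mean density is linear in T; its value
# at Tc is the critical density.
m2, c2 = np.polyfit(T, 0.5 * (rho_l + rho_v), 1)
rhoc_fit = m2 * tc_fit + c2
```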

  17. Optimal critic learning for robot control in time-varying environments.

    PubMed

    Wang, Chen; Li, Yanan; Ge, Shuzhi Sam; Lee, Tong Heng

    2015-10-01

    In this paper, optimal critic learning is developed for robot control in a time-varying environment. The unknown environment is described as a linear system with time-varying parameters, and impedance control is employed for the interaction control. Desired impedance parameters are obtained in the sense of an optimal realization of the composite of trajectory tracking and force regulation. Q-function-based critic learning is developed to determine the optimal impedance parameters without the knowledge of the system dynamics. The simulation results are presented and compared with existing methods, and the efficacy of the proposed method is verified.

  18. Geometric parameter analysis to predetermine optimal radiosurgery technique for the treatment of arteriovenous malformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mestrovic, Ante; Clark, Brenda G.; Department of Medical Physics, British Columbia Cancer Agency, Vancouver, British Columbia

    2005-11-01

    Purpose: To develop a method of predicting the values of dose distribution parameters of different radiosurgery techniques for treatment of arteriovenous malformation (AVM) based on internal geometric parameters. Methods and Materials: For each of 18 previously treated AVM patients, four treatment plans were created: circular collimator arcs, dynamic conformal arcs, fixed conformal fields, and intensity-modulated radiosurgery. An algorithm was developed to characterize the target and critical structure shape complexity and the position of the critical structures with respect to the target. Multiple regression was employed to establish the correlation between the internal geometric parameters and the dose distribution for different treatment techniques. The results from the model were applied to predict the dosimetric outcomes of different radiosurgery techniques and select the optimal radiosurgery technique for a number of AVM patients. Results: Several internal geometric parameters showing statistically significant correlation (p < 0.05) with the treatment planning results for each technique were identified. The target volume and the average minimum distance between the target and the critical structures were the most effective predictors for normal tissue dose distribution. The structure overlap volume with the target and the mean distance between the target and the critical structure were the most effective predictors for critical structure dose distribution. The predicted values of dose distribution parameters of different radiosurgery techniques were in close agreement with the original data. Conclusions: A statistical model has been described that successfully predicts the values of dose distribution parameters of different radiosurgery techniques and may be used to predetermine the optimal technique on a patient-to-patient basis.
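
    The multiple-regression step, predicting a dose-distribution parameter from internal geometric parameters, can be sketched with an ordinary least-squares fit. The two predictors, the generating coefficients, and the noise level below are hypothetical stand-ins, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-patient geometric predictors: target volume (cc) and
# mean target-to-critical-structure distance (mm), for 18 patients.
X_geo = rng.uniform([1.0, 2.0], [20.0, 15.0], size=(18, 2))

# Hypothetical dose-distribution outcome from an assumed linear model.
coef_true = np.array([5.0, 0.8, -0.3])  # intercept, volume, distance
y = coef_true[0] + X_geo @ coef_true[1:] + rng.normal(0.0, 0.01, 18)

# Least-squares fit with an intercept column, then in-sample prediction.
A = np.column_stack([np.ones(len(X_geo)), X_geo])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
predicted = A @ coef
```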

  19. Quantifying errors without random sampling.

    PubMed

    Phillips, Carl V; LaPole, Luwanna M

    2003-06-12

    All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
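
    The Monte Carlo approach to combining multiple sources of uncertainty can be sketched as follows. The toy incidence-style calculation and every input distribution are illustrative assumptions, not the paper's foodborne-illness estimate.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo draws

# Hypothetical calculation: cases = rate * population * undercount multiplier,
# with each input uncertain (all numbers illustrative).
rate = rng.normal(1.2e-3, 0.2e-3, N)             # cases per person-year
population = rng.normal(2.9e8, 0.1e8, N)         # persons
multiplier = rng.lognormal(np.log(2.0), 0.3, N)  # correction for undercount

cases = rate * population * multiplier

# Report a central estimate with a 95% uncertainty interval instead of a
# spuriously precise point value.
median = np.median(cases)
lo, hi = np.percentile(cases, [2.5, 97.5])
```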

  20. Critical loads of nitrogen deposition and critical levels of atmospheric ammonia for mediterranean evergreen woodlands

    NASA Astrophysics Data System (ADS)

    Pinho, P.; Theobald, M. R.; Dias, T.; Tang, Y. S.; Cruz, C.; Martins-Loução, M. A.; Máguas, C.; Sutton, M.; Branquinho, C.

    2011-11-01

    Nitrogen (N) has emerged in recent years as a key factor associated with global changes, with impacts on biodiversity, ecosystem functioning and human health. In order to ameliorate the effects of excessive N, safety thresholds have been established, such as critical loads (deposition fluxes) and levels (concentrations). For Mediterranean ecosystems, few studies have been carried out to assess these parameters. Our objective was therefore to determine the critical loads of N deposition and long-term critical levels of atmospheric ammonia for Mediterranean evergreen woodlands. For that we have considered changes in epiphytic lichen communities, which have been shown to be among the most sensitive to excessive N. Based on a classification of lichen species according to their tolerance to N we grouped species into response functional groups, which we used as a tool to determine the critical loads and levels. This was done under Mediterranean climate, in evergreen cork-oak woodlands, by sampling lichen functional diversity and annual atmospheric ammonia concentrations and modelling N deposition downwind from a reduced N source (a cattle barn). By modelling the highly significant relationship between lichen functional groups and N deposition, the critical load was estimated to be below 26 kg N ha⁻¹ yr⁻¹, which is within the upper range established for other semi-natural ecosystems. By modelling the highly significant relationship of lichen functional groups with annual atmospheric ammonia concentration, the critical level was estimated to be below 1.9 μg m⁻³, in agreement with recent studies for other ecosystems. Given the high sensitivity of lichen communities to excessive N, these values should be taken into account in policies that aim to protect Mediterranean woodlands from the initial effects of excessive N.
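
    The threshold-setting step, regressing a lichen response variable on modelled N deposition and reading off where it crosses a protection limit, can be sketched as follows. The gradient data and the threshold value are hypothetical, not the study's measurements.

```python
import numpy as np

# Hypothetical gradient data: modelled N deposition (kg N/ha/yr) downwind of
# a source vs. the fraction of N-sensitive (oligotrophic) lichen species.
n_dep = np.array([5.0, 10.0, 15.0, 20.0, 30.0, 40.0, 60.0])
sensitive_frac = np.array([0.62, 0.55, 0.47, 0.41, 0.28, 0.18, 0.05])

# Simple linear response model (the study fits the significant relationship
# between functional groups and deposition).
slope, intercept = np.polyfit(n_dep, sensitive_frac, 1)

# Critical load: deposition at which the sensitive fraction drops below a
# chosen protection threshold (illustrative value).
THRESHOLD = 0.40
critical_load = (THRESHOLD - intercept) / slope
```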

  1. Application of Spatial Data Modeling and Geographical Information Systems (GIS) for Identification of Potential Siting Options for Various Electrical Generation Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mays, Gary T; Belles, Randy; Blevins, Brandon R

    2012-05-01

    Oak Ridge National Laboratory (ORNL) initiated an internal National Electric Generation Siting Study, which is an ongoing multiphase study addressing several key questions related to our national electrical energy supply. This effort has led to the development of a tool, OR-SAGE (Oak Ridge Siting Analysis for power Generation Expansion), to support siting evaluations. The objective in developing OR-SAGE was to use industry-accepted approaches and/or develop appropriate criteria for screening sites and employ an array of Geographic Information Systems (GIS) data sources at ORNL to identify candidate areas for a power generation technology application. The initial phase of the study examined nuclear power generation. These early nuclear phase results were shared with staff from the Electric Power Research Institute (EPRI), which formed the genesis and support for an expansion of the work to several other power generation forms, including advanced coal with carbon capture and storage (CCS), solar, and compressed air energy storage (CAES). Wind generation was not included in this scope of work for EPRI. The OR-SAGE tool is essentially a dynamic visualization database. The results shown in this report represent a single static set of results using a specific set of input parameters. In this case, the GIS input parameters were optimized to support an economic study conducted by EPRI. A single set of individual results should not be construed as an ultimate energy solution, since US energy policy is very complex. However, the strength of the OR-SAGE tool is that numerous alternative scenarios can be quickly generated to provide additional insight into electrical generation or other GIS-based applications. The screening process divides the contiguous United States into 100 x 100 m (1-hectare) squares (cells), applying successive power generation-appropriate site selection and evaluation criteria (SSEC) to each cell. 
There are just under 700 million cells representing the contiguous United States. If a cell meets the requirements of each criterion, the cell is deemed a candidate area for siting a specific power generation form relative to a reference plant for that power type. Some SSEC parameters preclude siting a power plant because of an environmental, regulatory, or land-use constraint. Other SSEC assist in identifying less favorable areas, such as proximity to hazardous operations. All of the selected SSEC tend to recommend against sites. The focus of the ORNL electrical generation source siting study is on identifying candidate areas from which potential sites might be selected, stopping short of performing any detailed site evaluations or comparisons. This approach is designed to quickly screen for and characterize candidate areas. Critical assumptions supporting this work include the supply of cooling water to thermoelectric power generation; a methodology to provide an adequate siting footprint for typical power plant applications; a methodology to estimate thermoelectric plant capacity while accounting for available cooling water; and a methodology to account for future (~2035) siting limitations as population increases and demands on freshwater sources change. OR-SAGE algorithms were built to account for these critical assumptions. Stream flow is the primary thermoelectric plant cooling source evaluated in this study. All cooling was assumed to be provided by a closed-cycle cooling (CCC) system requiring makeup water to account for evaporation and blowdown. Limited evaluations of shoreline cooling and the use of municipal processed water (gray) cooling were performed. Using a representative set of SSEC as input to the OR-SAGE tool and employing the accompanying critical assumptions, independent results for the various power generation sources studied were calculated.
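
    The cell-screening core of such a tool can be sketched with boolean raster masks applied successively to gridded layers. The grid size, layers, and thresholds below are toy assumptions, not the study's SSEC values or data.

```python
import numpy as np

rng = np.random.default_rng(7)
shape = (200, 200)  # toy grid of 1-hectare cells (the study uses ~700 million)

# Hypothetical per-cell layers standing in for the GIS inputs.
stream_flow = rng.gamma(2.0, 25.0, shape)     # cooling-water availability proxy
slope_pct = rng.uniform(0.0, 30.0, shape)     # terrain slope, %
protected = rng.random(shape) < 0.10          # exclusion: protected land
pop_density = rng.lognormal(3.0, 1.0, shape)  # persons per km^2

# Successive site selection and evaluation criteria (SSEC), each a mask;
# a cell survives only if it passes every criterion.
candidate = (
    (stream_flow > 30.0)
    & (slope_pct < 12.0)
    & ~protected
    & (pop_density < 100.0)
)

candidate_fraction = candidate.mean()  # share of cells deemed candidate areas
```

    Because each criterion is an independent mask, alternative scenarios amount to recomputing the conjunction with different thresholds, which is what makes the approach fast to re-run.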

  2. Single-event burnout hardening of planar power MOSFET with partially widened trench source

    NASA Astrophysics Data System (ADS)

    Lu, Jiang; Liu, Hainan; Cai, Xiaowu; Luo, Jiajun; Li, Bo; Li, Binhong; Wang, Lixin; Han, Zhengsheng

    2018-03-01

    We present a single-event burnout (SEB) hardened planar power MOSFET with partially widened trench sources by three-dimensional (3D) numerical simulation. The advantage of the proposed structure is that the action of the parasitic bipolar transistor inherent in the power MOSFET is suppressed effectively due to the elimination of the most sensitive region (P-well region below the N+ source). The simulation result shows that the proposed structure can enhance the SEB survivability significantly. The critical value of linear energy transfer (LET), which indicates the maximum deposited energy on the device without SEB behavior, increases from 0.06 to 0.7 pC/μm. The SEB threshold voltage increases to 120 V, which is 80% of the rated breakdown voltage. Meanwhile, the main parameter characteristics of the proposed structure remain similar to those of the conventional planar structure. Therefore, this structure offers a potential optimization path to planar power MOSFET with high SEB survivability for space and atmospheric applications. Project supported by the National Natural Science Foundation of China (Nos. 61404161, 61404068, 61404169).

  3. Bridgman-type apparatus for the study of growth-property relationships - Arsenic vapor pressure-GaAs property relationship

    NASA Technical Reports Server (NTRS)

    Parsey, J. M.; Nanishi, Y.; Lagowski, J.; Gatos, H. C.

    1982-01-01

    A precision Bridgman-type apparatus is described which was designed and constructed for the investigation of relationships between crystal growth parameters and the properties of GaAs crystals. Several key features of the system are highlighted, such as the use of a heat pipe for precise arsenic vapor pressure control and seeding without the presence of a viewing window. Pertinent growth parameters, such as arsenic source temperature, thermal gradients in the growing crystal and in the melt, and the macroscopic growth velocity can be independently controlled. During operation, thermal stability better than ±0.02 °C is realized; thermal gradients can be varied up to 30 °C/cm in the crystal region, and up to 20 °C/cm in the melt region; the macroscopic growth velocity can be varied from 50 microns/hr to 6.0 cm/hr. It was found that the density of dislocations depends critically on As partial pressure; essentially dislocation-free, undoped crystals were grown under As pressure precisely controlled by an As source maintained at 617 °C. The free carrier concentration varied with As pressure variations. This variation in free carrier concentration was found to be associated with variations in the compensation ratio rather than with standard segregation phenomena.

  4. Simple luminescence detectors using a light-emitting diode or a Xe lamp, optical fiber and charge-coupled device, or photomultiplier for determining proteins in capillary electrophoresis: a critical comparison.

    PubMed

    Casado-Terrones, Silvia; Fernández-Sánchez, Jorge F; Segura-Carretero, Antonio; Fernández-Gutiérrez, Alberto

    2007-06-01

    The performance of two homemade fluorescence-induced capillary electrophoresis detectors, one based on a light-emitting diode (LED) as the excitation source and a charge-coupled device (CCD) photodetector, and the other based on a commercial luminescence spectrometer (Xe lamp) as the excitation source and a photomultiplier tube as a detector, was compared for the determination of the fluorescent proteins R-phycoerythrin and B-phycoerythrin. Both devices use commercially available, reasonably priced optical components that can be used by nonexperts. After fine optimization of several optical and separation parameters in both devices, a zone capillary electrophoresis methodology was achieved with 50 mM borate buffer (pH 8.4) and 10 mM phytic acid for the determination of two phycobiliproteins. Detection limits of 0.50 and 0.64 μg/ml for R-phycoerythrin and B-phycoerythrin, respectively, were achieved by using the LED-induced fluorescence capillary electrophoresis (LED-IF-CE) system, and corresponding detection limits of 2.73 and 2.16 μg/ml were achieved by using the Xe lamp-IF-CE system. Analytical performance and other parameters, such as cost and potential for miniaturization, are compared for both devices.
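
    Detection limits like those quoted are conventionally estimated from a calibration slope and blank noise via the 3σ criterion, which can be sketched as follows. The calibration points and blank replicates are illustrative numbers, not the paper's measurements.

```python
import numpy as np

# Hypothetical calibration: fluorescence signal vs. phycoerythrin
# concentration (ug/ml); numbers are illustrative.
conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
signal = np.array([2.1, 12.0, 22.3, 42.0, 81.9])

slope, intercept = np.polyfit(conc, signal, 1)  # calibration sensitivity

# Limit of detection from the 3*sigma criterion using replicate blanks.
blanks = np.array([2.0, 2.3, 1.9, 2.2, 2.1])
lod = 3.0 * np.std(blanks, ddof=1) / slope  # ug/ml
```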

  5. P-V criticality of conformal gravity holography in four dimensions

    NASA Astrophysics Data System (ADS)

    Pradhan, Parthapratim

    2018-02-01

    We examine the critical behavior, i.e. P-V criticality of conformal gravity (CG) in an extended phase space in which the cosmological constant should be interpreted as a thermodynamic pressure and the corresponding conjugate quantity as a thermodynamic volume. The main point of interest in CG is that there exists a nontrivial Rindler parameter (a) in the spacetime geometry. This geometric parameter has an important role to construct a model for gravity at large distances where the parameter “a” actually originates. We also investigate the effect of the said parameter on the black hole (BH) thermodynamic equation of state, critical constants, Reverse Isoperimetric Inequality, first law of thermodynamics, Hawking-Page phase transition and Gibbs free energy for this BH. We speculate that due to the presence of the said parameter, there has been a deformation in the shape of the isotherms in the P-V diagram in comparison with the charged-anti de Sitter (AdS) BH and the chargeless-AdS BH. Interestingly, we find that the critical ratio for this BH is ρc = Pcvc/Tc ≈ 0.67, which is greater than that of the charged AdS BH and the Schwarzschild-AdS BH, i.e. ρc(CG) : ρc(Sch-AdS) : ρc(RN-AdS) = 0.67 : 0.50 : 0.37. The symbols are defined in the main work. Moreover, we observe that the critical ratio has a constant value and it is independent of the nontrivial Rindler parameter (a). Finally, we derive the reduced equation of state in terms of the reduced temperature, the reduced volume and the reduced pressure, respectively.
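
    The construction of the critical constants Pc, vc, Tc and the universal critical ratio from an equation of state can be illustrated with the van der Waals fluid, whose ratio is the analogous constant 3/8; this is only an analogy sketch, since the black-hole equation of state in the paper differs.

```python
import numpy as np

# Van der Waals fluid with illustrative constants (R set to 1).
a, b, R = 1.0, 0.1, 1.0

def pressure(v, T):
    """vdW equation of state P(v, T)."""
    return R * T / (v - b) - a / v**2

# Analytic critical point: the critical isotherm has a saddle (inflection
# with zero slope) at v_c, i.e. dP/dv = d2P/dv2 = 0 there.
v_c = 3.0 * b
T_c = 8.0 * a / (27.0 * R * b)
P_c = a / (27.0 * b**2)

# Numerical check that dP/dv vanishes at the critical point.
h = 1e-5
dP = (pressure(v_c + h, T_c) - pressure(v_c - h, T_c)) / (2.0 * h)

critical_ratio = P_c * v_c / (R * T_c)  # universal 3/8 for any vdW fluid
```

    As in the abstract's CG case, the ratio is independent of the model constants (here a and b), which is what makes it a useful diagnostic of the equation of state.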

  6. A hierarchical modeling approach to estimate regional acute health effects of particulate matter sources

    PubMed Central

    Krall, J. R.; Hackstadt, A. J.; Peng, R. D.

    2017-01-01

    Exposure to particulate matter (PM) air pollution has been associated with a range of adverse health outcomes, including cardiovascular disease (CVD) hospitalizations and other clinical parameters. Determining which sources of PM, such as traffic or industry, are most associated with adverse health outcomes could help guide future recommendations aimed at reducing harmful pollution exposure for susceptible individuals. Information obtained from multisite studies, which is generally more precise than information from a single location, is critical to understanding how PM impacts health and to informing local strategies for reducing individual-level PM exposure. However, few methods exist to perform multisite studies of PM sources, which are not generally directly observed, and adverse health outcomes. We developed SHARE, a hierarchical modeling approach that facilitates reproducible, multisite epidemiologic studies of PM sources. SHARE is a two-stage approach that first summarizes information about PM sources across multiple sites. Then, this information is used to determine how community-level (i.e. county- or city-level) health effects of PM sources should be pooled to estimate regional-level health effects. SHARE is a type of population value decomposition that aims to separate out regional-level features from site-level data. Unlike previous approaches for multisite epidemiologic studies of PM sources, the SHARE approach allows the specific PM sources identified to vary by site. Using data from 2000–2010 for 63 northeastern US counties, we estimated regional-level health effects associated with short-term exposure to major types of PM sources. We found PM from secondary sulfate, traffic, and metals sources was most associated with CVD hospitalizations. PMID:28098412
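
    The second stage of such a two-stage multisite analysis, pooling community-level health effects into a regional estimate, can be sketched with inverse-variance (fixed-effect) weighting; SHARE itself is more elaborate, and the county-level estimates below are hypothetical.

```python
import numpy as np

# Hypothetical county-level log-relative-risk estimates for one PM source
# (e.g. traffic) and their standard errors.
beta = np.array([0.012, 0.020, 0.005, 0.017, 0.009])
se = np.array([0.006, 0.010, 0.004, 0.008, 0.005])

# Inverse-variance pooling: precise counties get more weight, and the
# pooled estimate is more precise than any single site.
w = 1.0 / se**2
beta_region = np.sum(w * beta) / np.sum(w)
se_region = np.sqrt(1.0 / np.sum(w))
```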

  7. A design of experiments test to define critical spray cleaning parameters for Brulin 815 GD and Jettacin cleaners

    NASA Technical Reports Server (NTRS)

    Keen, Jill M.; Evans, Kurt B.; Schiffman, Robert L.; Deweese, C. Darrell; Prince, Michael E.

    1995-01-01

    Experimental design testing was conducted to identify critical parameters of an aqueous spray process intended for cleaning solid rocket motor metal components (steel and aluminum). A two-level, six-parameter fractional factorial matrix was constructed and executed for two cleaners, Brulin 815 GD and Diversey Jettacin. The matrix parameters included cleaner temperature and concentration, wash density, wash pressure, rinse pressure, and dishwasher type. Other spray parameters (nozzle stand-off, rinse water temperature, wash and rinse times, drying conditions, and type of rinse water, which was deionized) were held constant. Matrix response testing utilized discriminating bond specimens (fracture energy and tensile adhesion strength) representing critical production bond lines. Overall, Jettacin spray cleaning was insensitive to the range of conditions tested for all parameters and exhibited bond strengths significantly above the TCA test baseline for all bond lines tested. Brulin 815 GD was sensitive to cleaning temperature, but produced bond strengths above the TCA test baseline even at the lower temperatures. Ultimately, the experimental design database was used to recommend process parameter settings for future aqueous spray cleaning characterization work.
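A two-level, six-factor fractional factorial like the one described can be generated mechanically. The sketch below builds a standard 16-run 2^(6-2) design (base factors A-D, generated columns E = ABC, F = BCD); this is a conventional construction, not necessarily the exact matrix the authors ran, and the mapping of columns to spray parameters is hypothetical:

```python
from itertools import product

def fractional_factorial_2_6_2():
    """16-run, two-level design for six factors: a 2^(6-2) fraction.

    Factors A-D form a full 2^4 factorial; E and F are generated
    columns (E = ABC, F = BCD), a standard construction giving a
    resolution-IV design.  Levels are coded -1 / +1."""
    runs = []
    for a, b, c, d in product((-1, 1), repeat=4):
        runs.append((a, b, c, d, a * b * c, b * c * d))
    return runs

design = fractional_factorial_2_6_2()
# Hypothetical column assignment: A = cleaner temperature,
# B = concentration, C = wash density, D = wash pressure,
# E = rinse pressure, F = dishwasher type.
```

Each column is balanced (equal numbers of -1 and +1 runs) and all six main-effect columns are mutually orthogonal, which is what lets 16 runs screen six parameters.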

  8. A fluidized bed technique for estimating soil critical shear stress

    USDA-ARS?s Scientific Manuscript database

    Soil erosion models, depending on how they are formulated, always have erodibility parameters in the erosion equations. For a process-based model like the Water Erosion Prediction Project (WEPP) model, the erodibility parameters include rill and interrill erodibility and critical shear stress. Thes...

  9. Determination of Critical Parameters Based on the Intensity of Transmitted Light Around Gas-Liquid Interface: Critical Parameters of CO

    NASA Astrophysics Data System (ADS)

    Nakayama, Masaki; Katano, Hiroaki; Sato, Haruki

    2014-05-01

    A precise determination of the critical temperature and density of technically important fluids is possible from digital images of the phase boundary in the vicinity of the critical point, since the sensitivity and resolution of a digital image exceed those of the naked eye. In addition, a digital image avoids the personal uncertainty of a human observer. A strong density gradient occurs in the sample cell at the critical point due to gravity; this gradient, observable as the luminance profile of a digital image, was carefully assessed to determine the critical density. The density-gradient profile becomes symmetric at the critical point. Carbon dioxide is arguably the fluid whose thermodynamic properties have been measured with the highest reliability among technically important fluids, so to confirm the reliability of the proposed method, the critical temperature and density of carbon dioxide were determined from digital images. The determined values agree with the existing best values within the estimated uncertainties, confirming the reliability of the method. The critical pressure, 7.3795 MPa, corresponding to the determined critical temperature of 304.143 K, is also proposed, along with a new set of parameters for the vapor-pressure equation.
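The symmetry criterion described above can be given a rough numerical form: score a vertical density profile by how far its deviation from the mean departs from antisymmetry about the cell's mid-height. This is only an illustrative sketch of the idea, not the authors' actual image-processing pipeline, and the profiles are synthetic:

```python
import numpy as np

def profile_asymmetry(density_profile):
    """Score how far a vertical density profile departs from the
    symmetric (about mid-height) shape expected at the critical
    density.  Returns ~0 when the deviation from the mean density is
    antisymmetric about the cell's mid-height."""
    p = np.asarray(density_profile, dtype=float)
    dev = p - p.mean()
    return float(np.linalg.norm(dev + dev[::-1]) / np.linalg.norm(dev))

# Toy profiles sampled top-to-bottom along the cell height
z = np.linspace(-1.0, 1.0, 101)
critical_like = 1.0 - 0.3 * np.tanh(3 * z)         # symmetric gradient
off_critical = 1.0 - 0.3 * np.tanh(3 * (z - 0.4))  # meniscus off-center
```

Scanning the cell filling (mean density) for the minimum of such a measure would locate the critical density in this toy picture.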

  10. Threshold-Switchable Particles (TSP) to Control Internal Hemorrhage

    DTIC Science & Technology

    2013-12-01

    and morphology and divided into three regimes: a 3-D gel, 2-D mat, and a 1-D thin film. They determined that the critical parameters determining...of critical physical parameters / dimensionless groups (through both simulation and experiment) such as pre-shear/mixing rate, the Weber and Ohnesorge...Capillary Pinch-Off Phase Diagram. This plot was constructed to aid in the identification of important physical parameters in blood plasma pinch-off

  11. A New Energy-Critical Plane Damage Parameter for Multiaxial Fatigue Life Prediction of Turbine Blades.

    PubMed

    Yu, Zheng-Yong; Zhu, Shun-Peng; Liu, Qiang; Liu, Yunhan

    2017-05-08

    As turbine blades are among the fracture-critical components of an aircraft engine, accurate life prediction of the blade-to-disk attachment is essential for ensuring engine structural integrity and reliability. Fatigue failure of a turbine blade often occurs under multiaxial cyclic loading at high temperature. In this paper, considering different failure types, a new energy-critical plane damage parameter is proposed for multiaxial fatigue life prediction; no extra fitted material constants are needed for practical applications. Moreover, three multiaxial models with maximum damage parameters on the critical plane are evaluated under tension-compression and tension-torsion loadings. Experimental data for GH4169 under proportional and non-proportional fatigue loadings and a case study of a turbine disk-blade contact system are introduced for model validation. Results show that predictions by the Wang-Brown (WB) and Fatemi-Socie (FS) models with maximum damage parameters are conservative and acceptable. For the turbine disk-blade contact system, both the proposed damage parameter and the Smith-Watson-Topper (SWT) model show reasonably acceptable correlations with the field-recorded number of flight cycles. However, life estimations of the turbine blade reveal that the maximum-damage-parameter definition is not reasonable for the WB model but is effective for both the FS and SWT models.
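Of the models compared above, SWT is the simplest to state: the damage parameter is the product of the maximum stress and the strain amplitude on the critical plane, equated to a strain-life curve. The sketch below solves the standard SWT equation for life by bisection; the material constants are generic steel-like placeholders, not GH4169 properties:

```python
def swt_life(sigma_max, eps_a, E=200e3, sf=1000.0, ef=0.26, b=-0.09, c=-0.56):
    """Solve the standard SWT equation for fatigue life N_f (cycles):

        sigma_max * eps_a = (sf**2 / E) * (2N)**(2b) + sf * ef * (2N)**(b + c)

    Stresses in MPa.  The defaults are placeholder constants, not
    GH4169 properties.  Because both exponents are negative, the
    right-hand side decreases monotonically with N, so bisection in
    log space converges to the unique root."""
    target = sigma_max * eps_a

    def rhs(two_n):
        return (sf ** 2 / E) * two_n ** (2 * b) + sf * ef * two_n ** (b + c)

    lo, hi = 1.0, 1e12                 # bracket on reversals 2N
    for _ in range(200):
        mid = (lo * hi) ** 0.5         # geometric midpoint
        if rhs(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo * hi) ** 0.5      # N_f = 2N / 2

life_mild = swt_life(sigma_max=800.0, eps_a=0.005)
life_severe = swt_life(sigma_max=900.0, eps_a=0.007)
```

A larger damage parameter (more severe loading) yields a shorter predicted life, which is the monotonic behavior any such critical-plane parameter must have.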

  12. A New Energy-Critical Plane Damage Parameter for Multiaxial Fatigue Life Prediction of Turbine Blades

    PubMed Central

    Yu, Zheng-Yong; Zhu, Shun-Peng; Liu, Qiang; Liu, Yunhan

    2017-01-01

    As turbine blades are among the fracture-critical components of an aircraft engine, accurate life prediction of the blade-to-disk attachment is essential for ensuring engine structural integrity and reliability. Fatigue failure of a turbine blade often occurs under multiaxial cyclic loading at high temperature. In this paper, considering different failure types, a new energy-critical plane damage parameter is proposed for multiaxial fatigue life prediction; no extra fitted material constants are needed for practical applications. Moreover, three multiaxial models with maximum damage parameters on the critical plane are evaluated under tension-compression and tension-torsion loadings. Experimental data for GH4169 under proportional and non-proportional fatigue loadings and a case study of a turbine disk-blade contact system are introduced for model validation. Results show that predictions by the Wang-Brown (WB) and Fatemi-Socie (FS) models with maximum damage parameters are conservative and acceptable. For the turbine disk-blade contact system, both the proposed damage parameter and the Smith-Watson-Topper (SWT) model show reasonably acceptable correlations with the field-recorded number of flight cycles. However, life estimations of the turbine blade reveal that the maximum-damage-parameter definition is not reasonable for the WB model but is effective for both the FS and SWT models. PMID:28772873

  13. Black hole complementarity in gravity's rainbow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gim, Yongwan; Kim, Wontae, E-mail: yongwan89@sogang.ac.kr, E-mail: wtkim@sogang.ac.kr

    2015-05-01

    To see how gravity's rainbow works for black hole complementarity, we evaluate the energy required for duplication of information in the context of black hole complementarity by calculating the critical value of the rainbow parameter in a certain class of rainbow Schwarzschild black holes. The resultant energy has a well-defined limit as the rainbow parameter, which characterizes the deformation of the relativistic dispersion relation in the freely falling frame, vanishes. It shows that the duplication of information in quantum mechanics is not allowed below a certain critical value of the rainbow parameter; however, it might be possible above that critical value, so that a consistent formulation of our model requires additional constraints or some other resolution in the latter case.

  14. Wireless remote monitoring of critical facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsai, Hanchung; Anderson, John T.; Liu, Yung Y.

    A method, apparatus, and system are provided for monitoring environment parameters of critical facilities. A Remote Area Modular Monitoring (RAMM) apparatus is provided for this purpose. The RAMM apparatus includes a battery power supply, a central processor, a plurality of sensors monitoring the associated environment parameters, and at least one communication module for transmitting one or more monitored environment parameters. When facility conditions are disrupted, the RAMM apparatus is powered by the battery supply and controlled by the central processor operating a wireless sensor network (WSN) platform. The RAMM apparatus includes a housing prepositioned at a strategic location, for example, where a dangerous build-up of contamination and radiation may preclude subsequent manned entry and surveillance.

  15. Soundscape elaboration from anthrophonic adaptation of community noise

    NASA Astrophysics Data System (ADS)

    Teddy Badai Samodra, FX

    2018-03-01

    In urban environments, noise is a critical issue affecting the indoor environment, and a reliable approach is required for evaluating community noise as one anthrophonic factor. This research investigates the level of noise exposure from different community noise sources and explores how these noise disadvantages can be turned to advantage in soundscape innovation. Integrated building-element design, as a protector for noise control and for speech-intelligibility compliance, is also carried out using field experiments together with MATLAB programming and modeling. For simulation analysis and building-acoustic optimization, sound reduction, speech intelligibility, and reverberation time are the main parameters for evaluating a tropical building model as the case-study object. The results show that noise control should be integrated with the other critical issue in urban environments, thermal control. With a reverberation time of 1.1 s for speech activities and a noise reduction of more than 28.66 dBA at the critical frequency (20 Hz), the speech intelligibility index can exceed a fair assessment (0.45). Furthermore, the environmental-psychology adaptation identifies “Close The Opening” as the best method under high-noise conditions and personal adjustment as the easiest and most adaptable approach.

  16. Quantitative analysis of bloggers' collective behavior powered by emotions

    NASA Astrophysics Data System (ADS)

    Mitrović, Marija; Paltoglou, Georgios; Tadić, Bosiljka

    2011-02-01

    Large-scale data resulting from users' online interactions provide the ultimate source of information to study emergent social phenomena on the Web. From individual actions of users to observable collective behaviors, different mechanisms involving emotions expressed in the posted text play a role. Here we combine approaches of statistical physics with machine-learning methods of text analysis to study the emergence of emotional behavior among Web users. Mapping the high-resolution data from digg.com onto bipartite networks of users and their comments onto posted stories, we identify user communities centered around certain popular posts and determine emotional contents of the related comments by the emotion classifier developed for this type of text. Applied over different time periods, this framework reveals strong correlations between the excess of negative emotions and the evolution of communities. We observe avalanches of emotional comments exhibiting significant self-organized critical behavior and temporal correlations. To explore the robustness of these critical states, we design a network-automaton model on realistic network connections and several control parameters, which can be inferred from the dataset. Dissemination of emotions by a small fraction of very active users appears to critically tune the collective states.

  17. Determinants of the accuracy of nursing diagnoses: influence of ready knowledge, knowledge sources, disposition toward critical thinking, and reasoning skills.

    PubMed

    Paans, Wolter; Sermeus, Walter; Nieweg, Roos; van der Schans, Cees

    2010-01-01

    The purpose of this study was to determine how knowledge sources, ready knowledge, and disposition toward critical thinking and reasoning skills influence the accuracy of student nurses' diagnoses. A randomized controlled trial was conducted to determine the influence of knowledge sources. We used the following questionnaires: (a) knowledge inventory, (b) California Critical Thinking Disposition Inventory, and (c) Health Science Reasoning Test (HSRT). The use of knowledge sources had very little influence on the accuracy of nursing diagnoses. Accuracy was significantly related to the analysis domain of the HSRT. Students were unable to operationalize knowledge sources to derive accurate diagnoses and did not effectively use reasoning skills. Copyright 2010 Elsevier Inc. All rights reserved.

  18. Effects of critical medium components on the production of antifungal lipopeptides from Bacillus amyloliquefaciens Q-426 exhibiting excellent biosurfactant properties.

    PubMed

    Zhao, Pengchao; Quan, Chunshan; Jin, Liming; Wang, Lina; Wang, Jianhua; Fan, Shengdi

    2013-03-01

    In this study, the influence of three critical parameters (nitrogen sources, initial pH, and metal ions) on the production of antifungal lipopeptides from Bacillus amyloliquefaciens Q-426 was investigated. The results revealed that lipopeptide biosynthesis may be related to the population density of strain Q-426 and to certain amino acids. The alkali-resistant strain Q-426 grew well in the presence of Fe(2+) ions below 0.8 M l(-1) and maintained its competitive advantage below 0.2 M l(-1). Moreover, the lipopeptides exhibited significant inhibitory activity against Curvularia lunata (Walk) Boed even at extreme temperature, pH, and salinity. Finally, the biosurfactant properties of the lipopeptide mixture were evaluated using six different methods: bacterial adhesion to hydrocarbons assay, lipase activity, hemolytic activity, emulsification activity, oil displacement test, and surface tension measurement. This research suggests that B. amyloliquefaciens Q-426 may have great potential in agricultural and environmental applications.

  19. Assessing FPAR Source and Parameter Optimization Scheme in Application of a Diagnostic Carbon Flux Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, D P; Ritts, W D; Wharton, S

    2009-02-26

    The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional to global scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors, or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single-site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization than with parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.
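The single-site versus cross-site contrast above can be illustrated with a toy one-parameter light-use-efficiency model, GPP ≈ ε · FPAR · PAR, fitted per site and across all sites. This is far simpler than the CFLUX model itself, and all data below are synthetic with hypothetical per-site efficiencies:

```python
import numpy as np

def fit_lue(fpar, par, gpp):
    """Least-squares fit of GPP ~ eps * FPAR * PAR (one parameter).
    Closed form: eps = sum(x*y) / sum(x*x) with x = FPAR * PAR."""
    x = np.asarray(fpar) * np.asarray(par)
    y = np.asarray(gpp)
    return float(np.sum(x * y) / np.sum(x * x))

rng = np.random.default_rng(0)
true_eps = (1.1, 1.3, 1.5)             # hypothetical per-site efficiencies
sites = []
for eps in true_eps:
    fpar = rng.uniform(0.2, 0.9, 50)   # synthetic FPAR time series
    par = rng.uniform(5.0, 12.0, 50)   # synthetic PAR time series
    gpp = eps * fpar * par + rng.normal(0.0, 0.3, 50)
    sites.append((fpar, par, gpp))

# Single-site parameters vs. one cross-site parameter
site_eps = [fit_lue(f, p, g) for f, p, g in sites]
cross_eps = fit_lue(
    np.concatenate([s[0] for s in sites]),
    np.concatenate([s[1] for s in sites]),
    np.concatenate([s[2] for s in sites]),
)
```

The cross-site fit is a compromise between the site-level values; whether that compromise predicts a new site better than any single site's parameters is exactly the question the study addresses.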

  20. An Innovative Software Tool Suite for Power Plant Model Validation and Parameter Calibration using PMU Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yuanyuan; Diao, Ruisheng; Huang, Renke

    Maintaining good quality of power plant stability models is of critical importance for the secure and economic operation and planning of today’s power grid, with its increasingly stochastic and dynamic behavior. According to North American Electric Reliability Corporation (NERC) standards, all generators in North America with capacities larger than 10 MVA are required to validate their models every five years. Validation is quite costly and can significantly affect the revenue of generator owners, because traditional staged testing requires generators to be taken offline. Over the past few years, validating and calibrating parameters using online measurements, including phasor measurement units (PMUs) and digital fault recorders (DFRs), has proven to be a cost-effective approach. In this paper, an innovative open-source tool suite is presented for validating power plant models using the PPMV tool, identifying bad parameters with trajectory sensitivity analysis, and finally calibrating parameters using an ensemble Kalman filter (EnKF) based algorithm. The architectural design and the detailed procedures to run the tool suite are presented, along with test results for a realistic hydro power plant using PMU measurements for 12 different events. The calibrated parameters of the machine, exciter, governor, and PSS models perform much better than the original models for all the events and show the robustness of the proposed calibration algorithm.
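The EnKF analysis step at the heart of such calibration can be sketched on a scalar toy problem. This is a minimal perturbed-observation EnKF update for one parameter, not the paper's tool suite; the toy system and numbers are hypothetical:

```python
import numpy as np

def enkf_parameter_update(params, predictions, observation, obs_var, seed=1):
    """One EnKF analysis step for a scalar model parameter.

    params      : (n_ens,) ensemble of parameter values
    predictions : (n_ens,) model outputs simulated with each member
    observation : measured value (e.g. a PMU-recorded quantity)
    obs_var     : observation-error variance

    The Kalman gain is built from ensemble statistics; each member is
    nudged toward parameter values whose predictions match the data."""
    cov_py = np.cov(params, predictions)[0, 1]   # param/output covariance
    gain = cov_py / (np.var(predictions, ddof=1) + obs_var)
    # Perturbed-observation variant: each member sees a noisy obs copy
    rng = np.random.default_rng(seed)
    perturbed = observation + rng.normal(0.0, np.sqrt(obs_var), len(params))
    return params + gain * (perturbed - predictions)

# Toy system y = k * u with true k = 2.0 and input u = 3.0
rng = np.random.default_rng(0)
k_prior = rng.normal(1.0, 0.5, 100)    # biased-low prior ensemble
y_pred = k_prior * 3.0                 # forecast for each member
k_post = enkf_parameter_update(k_prior, y_pred, observation=6.0, obs_var=0.01)
```

After one update the ensemble mean moves toward the true parameter and the ensemble spread contracts, which is the behavior the calibration loop exploits event by event.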

  1. Applications of MICP source for next-generation photomask process

    NASA Astrophysics Data System (ADS)

    Kwon, Hyuk-Joo; Chang, Byung-Soo; Choi, Boo-Yeon; Park, Kyung H.; Jeong, Soo-Hong

    2000-07-01

    As photomask critical dimensions extend into the submicron range, requirements on critical dimension uniformity, edge roughness, macro loading effect, and pattern slope become tighter than before. Fabrication of photomasks relies on the ability to pattern features with anisotropic profiles. Dry etching is one route to improved critical dimension uniformity, and inductively coupled plasma (ICP) sources have become among the most promising high-density plasma sources for dry etchers. In this paper, we have used a dry-etch system with a multi-pole ICP source for Cr and MoSi etching and have investigated critical dimension uniformity, slope, and defects. We present dry-etch process data obtained by process optimization of the newly designed dry-etch system. The designed pattern area is 132 by 132 mm2 with a 23 by 23 matrix of test patterns. The 3-sigma critical dimension uniformity is below 12 nm at 0.8 - 3.0 micrometers. In most cases, we obtain zero-defect masks, which are processed with face-down loading.

  2. Quantum criticality of a spin-1 XY model with easy-plane single-ion anisotropy via a two-time Green function approach avoiding the Anderson-Callen decoupling

    NASA Astrophysics Data System (ADS)

    Mercaldo, M. T.; Rabuffo, I.; De Cesare, L.; Caramico D'Auria, A.

    2016-04-01

    In this work we study the quantum phase transition, the phase diagram and the quantum criticality induced by the easy-plane single-ion anisotropy in a d-dimensional quantum spin-1 XY model in the absence of an external longitudinal magnetic field. We employ the two-time Green function method while avoiding the Anderson-Callen decoupling of spin operators at the same sites, which is of doubtful accuracy. Following the original Devlin procedure we treat exactly the higher order single-site anisotropy Green functions and use Tyablikov-like decouplings for the exchange higher order ones. The related self-consistent equations appear suitable for an analysis of the thermodynamic properties at and around second order phase transition points. Remarkably, the equivalence between the microscopic spin model and the continuous O(2)-vector model with transverse-Ising model (TIM)-like dynamics, characterized by a dynamic critical exponent z=1, emerges at low temperatures close to the quantum critical point with the single-ion anisotropy parameter D as the non-thermal control parameter. The zero-temperature critical anisotropy parameter Dc is obtained for dimensionalities d > 1 as a function of the microscopic exchange coupling parameter, and the related numerical data for different lattices are found to be in reasonable agreement with those obtained by means of alternative analytical and numerical methods. For d > 2, and in particular for d=3, we determine the finite-temperature critical line ending in the quantum critical point and the related TIM-like shift exponent, consistently with recent renormalization group predictions. The main crossover lines between different asymptotic regimes around the quantum critical point are also estimated, providing a global phase diagram and a quantum criticality very similar to the conventional ones.

  3. Continuous excitation chlorophyll fluorescence parameters: a review for practitioners.

    PubMed

    Banks, Jonathan M

    2017-08-01

    This review introduces, defines and critically reviews a number of chlorophyll fluorescence parameters with specific reference to those derived from continuous excitation chlorophyll fluorescence. A number of common issues and criticisms are addressed. The parameters fluorescence origin (F0) and the performance indices (PI) are discussed as examples. This review attempts to unify definitions for the wide range of parameters available for measuring plant vitality, facilitating their calculation and use. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. Preliminary Result of Earthquake Source Parameters the Mw 3.4 at 23:22:47 IWST, August 21, 2004, Centre Java, Indonesia Based on MERAMEX Project

    NASA Astrophysics Data System (ADS)

    Laksono, Y. A.; Brotopuspito, K. S.; Suryanto, W.; Widodo; Wardah, R. A.; Rudianto, I.

    2018-03-01

    To study the subsurface structure at the Merapi Lawu anomaly (MLA) using forward modelling or full-waveform inversion, good earthquake source parameters are needed. The best source parameters come from seismograms with a high signal-to-noise ratio (SNR). In addition, the source must be near the MLA location, while the stations used to derive the parameters must lie outside the MLA to avoid the anomaly. The seismograms were first processed with the software SEISAN v10 using a few stations from the MERAMEX project. After finding a hypocentre that matched the criteria, we fine-tuned the source parameters using more stations. Based on seismograms from 21 stations, the source parameters are as follows: the event occurred on August 21, 2004, at 23:22:47 Indonesia western standard time (IWST), with epicentre coordinates 7.80°S, 101.34°E, hypocentre depth 47.3 km, dominant frequency f0 = 3.0 Hz, and earthquake magnitude Mw = 3.4.

  5. Sperm quality biomarkers complement reproductive and endocrine parameters in investigating environmental contaminants in common carp (Cyprinus carpio) from the Lake Mead National Recreation Area

    USGS Publications Warehouse

    Jenkins, Jill A.; Rosen, Michael R.; Dale, Rassa O.; Echols, Kathy R.; Torres, Leticia; Wieser, Carla M.; Kersten, Constance A.; Goodbred, Steven L.

    2018-01-01

    Lake Mead National Recreational Area (LMNRA) serves as critical habitat for several federally listed species and supplies water for municipal, domestic, and agricultural use in the Southwestern U.S. Contaminant sources and concentrations vary among the sub-basins within LMNRA. To investigate whether exposure to environmental contaminants is associated with alterations in male common carp (Cyprinus carpio) gamete quality and endocrine- and reproductive parameters, data were collected among sub-basins over 7 years (1999–2006). Endpoints included sperm quality parameters of motility, viability, mitochondrial membrane potential, count, morphology, and DNA fragmentation; plasma components were vitellogenin (VTG), 17ß-estradiol, 11-keto-testosterone, triiodothyronine, and thyroxine. Fish condition factor, gonadosomatic index, and gonadal histology parameters were also measured. Diminished biomarker effects were noted in 2006, and sub-basin differences were indicated by the irregular occurrences of contaminants and by several associations between chemicals (e.g., polychlorinated biphenyls, hexachlorobenzene, galaxolide, and methyl triclosan) and biomarkers (e.g., plasma thyroxine, sperm motility and DNA fragmentation). By 2006, sex steroid hormone and VTG levels decreased with subsequent reduced endocrine disrupting effects. The sperm quality bioassays developed and applied with carp complemented endocrine and reproductive data, and can be adapted for use with other species.

  6. Sperm quality biomarkers complement reproductive and endocrine parameters in investigating environmental contaminants in common carp (Cyprinus carpio) from the Lake Mead National Recreation Area.

    PubMed

    Jenkins, Jill A; Rosen, Michael R; Draugelis-Dale, Rassa O; Echols, Kathy R; Torres, Leticia; Wieser, Carla M; Kersten, Constance A; Goodbred, Steven L

    2018-05-01

    Lake Mead National Recreational Area (LMNRA) serves as critical habitat for several federally listed species and supplies water for municipal, domestic, and agricultural use in the Southwestern U.S. Contaminant sources and concentrations vary among the sub-basins within LMNRA. To investigate whether exposure to environmental contaminants is associated with alterations in male common carp (Cyprinus carpio) gamete quality and endocrine- and reproductive parameters, data were collected among sub-basins over 7 years (1999-2006). Endpoints included sperm quality parameters of motility, viability, mitochondrial membrane potential, count, morphology, and DNA fragmentation; plasma components were vitellogenin (VTG), 17ß-estradiol, 11-keto-testosterone, triiodothyronine, and thyroxine. Fish condition factor, gonadosomatic index, and gonadal histology parameters were also measured. Diminished biomarker effects were noted in 2006, and sub-basin differences were indicated by the irregular occurrences of contaminants and by several associations between chemicals (e.g., polychlorinated biphenyls, hexachlorobenzene, galaxolide, and methyl triclosan) and biomarkers (e.g., plasma thyroxine, sperm motility and DNA fragmentation). By 2006, sex steroid hormone and VTG levels decreased with subsequent reduced endocrine disrupting effects. The sperm quality bioassays developed and applied with carp complemented endocrine and reproductive data, and can be adapted for use with other species. Published by Elsevier Inc.

  7. Dynamical structure of magnetized dissipative accretion flow around black holes

    NASA Astrophysics Data System (ADS)

    Sarkar, Biplob; Das, Santabrata

    2016-09-01

    We study the global structure of optically thin, advection-dominated, magnetized accretion flow around black holes. We consider the magnetic field to be turbulent in nature and dominated by its toroidal component. With this, we obtain the complete set of accretion solutions for dissipative flows in which bremsstrahlung is regarded as the dominant cooling mechanism. We show that rotating magnetized accretion flow experiences a virtual barrier around the black hole due to centrifugal repulsion, which can trigger a discontinuous transition of the flow variables in the form of shock waves. We examine the properties of the shock waves and find that the dynamics of the post-shock corona (PSC) is controlled by the flow parameters, namely the viscosity, the cooling rate, and the strength of the magnetic field. We delineate the effective region of parameter space for standing shocks and observe that shocks can form for a wide range of flow parameters. We obtain the critical viscosity parameter that allows global accretion solutions including shocks. We estimate the energy dissipation at the PSC, from which a part of the accreting matter can be deflected as outflows and jets. We compare the maximum energy that could be extracted from the PSC with the observed radio luminosities of several supermassive black hole sources and discuss the observational implications of our analysis.

  8. Improving tablet coating robustness by selecting critical process parameters from retrospective data.

    PubMed

    Galí, A; García-Montoya, E; Ascaso, M; Pérez-Lozano, P; Ticó, J R; Miñarro, M; Suñé-Negre, J M

    2016-09-01

    Although tablet coating processes are widely used in the pharmaceutical industry, they often lack adequate robustness. Up-scaling can be challenging as minor changes in parameters can lead to varying quality results. To select critical process parameters (CPP) using retrospective data of a commercial product and to establish a design of experiments (DoE) that would improve the robustness of the coating process. A retrospective analysis of data from 36 commercial batches. Batches were selected based on the quality results generated during batch release, some of which revealed quality deviations concerning the appearance of the coated tablets. The product is already marketed and belongs to the portfolio of a multinational pharmaceutical company. The Statgraphics 5.1 software was used for data processing to determine critical process parameters in order to propose new working ranges. This study confirms that it is possible to determine the critical process parameters and create design spaces based on retrospective data of commercial batches. This type of analysis is thus converted into a tool to optimize the robustness of existing processes. Our results show that a design space can be established with minimum investment in experiments, since current commercial batch data are processed statistically.

  9. White LED compared with other light sources: age-dependent photobiological effects and parameters for evaluation.

    PubMed

    Rebec, Katja Malovrh; Klanjšek-Gunde, Marta; Bizjak, Grega; Kobav, Matej B

    2015-01-01

    Ergonomic science at work and living places should appraise human factors concerning the photobiological effects of lighting. Thorough knowledge on this subject has been gained in the past; however, few attempts have been made to propose suitable evaluation parameters. This paper considers the blue-light hazard and the influence of light on melatonin secretion for observers of different ages, and proposes parameters for their evaluation. The new parameters were applied to analyse the effects of white light-emitting diode (LED) light sources and to compare them with currently used light sources. The photobiological effects of light sources with the same illuminance but different spectral power distributions were determined for healthy observers aged 4-76 years. The suitability of the new parameters is discussed. Correlated colour temperature, the only parameter currently used to assess photobiological effects, is evaluated and compared with the new parameters.

  10. Multivariate analysis of ATR-FTIR spectra for assessment of oil shale organic geochemical properties

    USGS Publications Warehouse

    Washburn, Kathryn E.; Birdwell, Justin E.

    2013-01-01

    In this study, attenuated total reflectance (ATR) Fourier transform infrared spectroscopy (FTIR) was coupled with partial least squares regression (PLSR) analysis to relate spectral data to parameters from total organic carbon (TOC) analysis and programmed pyrolysis to assess the feasibility of developing predictive models to estimate important organic geochemical parameters. The advantage of ATR-FTIR over traditional analytical methods is that source rocks can be analyzed in the laboratory or field in seconds, facilitating more rapid and thorough screening than would be possible using other tools. ATR-FTIR spectra, TOC concentrations and Rock–Eval parameters were measured for a set of oil shales from deposits around the world and several pyrolyzed oil shale samples. PLSR models were developed to predict the measured geochemical parameters from infrared spectra. Application of the resulting models to a set of test spectra excluded from the training set generated accurate predictions of TOC and most Rock–Eval parameters. The critical region of the infrared spectrum for assessing S1, S2, Hydrogen Index and TOC consisted of aliphatic organic moieties (2800–3000 cm−1) and the models generated a better correlation with measured values of TOC and S2 than did integrated aliphatic peak areas. The results suggest that combining ATR-FTIR with PLSR is a reliable approach for estimating useful geochemical parameters of oil shales that is faster and requires less sample preparation than current screening methods.
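    The PLSR step described above can be illustrated with a minimal single-component PLS1 regression. This is a generic sketch, not the authors' code: the toy "spectra", band loadings, and TOC values below are invented for illustration, and a real application would use several latent variables and proper cross-validation.

```python
def pls1_fit(X, y):
    """Fit a one-component PLS1 model (NIPALS-style) on centered data.

    X: list of spectra (rows), y: list of target values (e.g. TOC wt.%).
    Returns (x_mean, y_mean, w, b) for use by pls1_predict.
    """
    n, m = len(X), len(X[0])
    x_mean = [sum(row[j] for row in X) / n for j in range(m)]
    y_mean = sum(y) / n
    Xc = [[row[j] - x_mean[j] for j in range(m)] for row in X]
    yc = [v - y_mean for v in y]
    # Weight vector: covariance direction between spectra and target.
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(m)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    # Scores along that direction, and regression coefficient on the scores.
    t = [sum(Xc[i][j] * w[j] for j in range(m)) for i in range(n)]
    b = sum(ti * yi for ti, yi in zip(t, yc)) / sum(ti * ti for ti in t)
    return x_mean, y_mean, w, b

def pls1_predict(model, x):
    x_mean, y_mean, w, b = model
    score = sum((xj - mj) * wj for xj, mj, wj in zip(x, x_mean, w))
    return y_mean + b * score

# Toy "spectra": one latent factor (aliphatic band intensity) drives TOC.
scores = [1.0, 2.0, 3.0, 4.0]
loading = [0.5, 1.0, 1.5]
X = [[s * p for p in loading] for s in scores]
toc = [2.0 * s for s in scores]          # hypothetical TOC values
model = pls1_fit(X, toc)
print([round(pls1_predict(model, row), 6) for row in X])  # → [2.0, 4.0, 6.0, 8.0]
```

    Because the toy data contain exactly one latent factor, a single component recovers the targets exactly; real ATR-FTIR data would need more components and would not fit perfectly.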

  11. Order parameter fluctuations at a buried quantum critical point

    PubMed Central

    Feng, Yejun; Wang, Jiyang; Jaramillo, R.; van Wezel, Jasper; Haravifard, S.; Srajer, G.; Liu, Y.; Xu, Z.-A.; Littlewood, P. B.; Rosenbaum, T. F.

    2012-01-01

    Quantum criticality is a central concept in condensed matter physics, but the direct observation of quantum critical fluctuations has remained elusive. Here we present an X-ray diffraction study of the charge density wave (CDW) in 2H-NbSe2 at high pressure and low temperature, where we observe a broad regime of order parameter fluctuations that are controlled by proximity to a quantum critical point. X-rays can track the CDW despite the fact that the quantum critical regime is shrouded inside a superconducting phase; and in contrast to transport probes, allow direct measurement of the critical fluctuations of the charge order. Concurrent measurements of the crystal lattice point to a critical transition that is continuous in nature. Our results confirm the long-standing expectations of enhanced quantum fluctuations in low-dimensional systems, and may help to constrain theories of the quantum critical Fermi surface. PMID:22529348

  12. Data on the descriptive overview and the quality assessment details of 12 qualitative research papers.

    PubMed

    Barnabishvili, Maia; Ulrichs, Timo; Waldherr, Ruth

    2016-09-01

    This data article presents the supplementary material for the review paper "Role of acceptability barriers in delayed diagnosis of Tuberculosis: Literature review from high burden countries" (Barnabishvili et al., in press) [1]. A general overview of the 12 qualitative papers, including details about authors, years of publication, data source locations, study objectives, overview of methods, study population characteristics, and the intervention and outcome parameters of the papers, is summarized in the first two tables included in the article. The quality assessment process for the methodological strength of the 12 papers and the results of the critical appraisal are described and summarized in the second part of the article.

  13. Totally Integrated Munitions Enterprise ''Affordable Munitions Production for the 21st Century''

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burleson, R.R.; Poggio, M.E.; Rosenberg, S.J.

    2000-09-13

    The U.S. Army faces several munitions manufacturing issues: downsizing of the organic production base, timely fielding of affordable smart munitions, and munitions replenishment during national emergencies. Totally Integrated Munitions Enterprise (TIME) is addressing these complex issues via the development and demonstration of an integrated enterprise. The enterprise will include the tools, network, and open modular architecture controllers to enable accelerated acquisition, shortened concept to volume production, lower life cycle costs, capture of critical manufacturing processes, and communication of process parameters between remote sites to rapidly spin-off production for replenishment by commercial sources. TIME addresses the enterprise as a system, integrating design, engineering, manufacturing, administration, and logistics.

  14. Totally Integrated Munitions Enterprise ''Affordable Munitions Production for the 21st Century''

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burleson, R.R.; Poggio, M.E.; Rosenberg, S.J.

    2000-07-14

    The U.S. Army faces several munitions manufacturing issues: downsizing of the organic production base, timely fielding of affordable smart munitions, and munitions replenishment during national emergencies. TIME is addressing these complex issues via the development and demonstration of an integrated enterprise. The enterprise will include the tools, network, and open modular architecture controller to enable accelerated acquisition, shortened concept to volume production, lower life cycle costs, capture of critical manufacturing processes, and communication of process parameters between remote sites to rapidly spin-off production for replenishment by commercial sources. TIME addresses the enterprise as a system, integrating design, engineering, manufacturing, administration, and logistics.

  15. Totally Integrated Munitions Enterprise ''Affordable Munitions Production for the 21st Century''

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burleson, R.R.; Poggio, M.E.; Rosenberg, S.J.

    2000-08-18

    The U.S. Army faces several munitions manufacturing issues: downsizing of the organic production base, timely fielding of affordable smart munitions, and munitions replenishment during national emergencies. Totally Integrated Munitions Enterprise (TIME) is addressing these complex issues via the development and demonstration of an integrated enterprise. The enterprise will include the tools, network, and open modular architecture controllers to enable accelerated acquisition, shortened concept to volume production, lower life cycle costs, capture of critical manufacturing processes, and communication of process parameters between remote sites to rapidly spin-off production for replenishment by commercial sources. TIME addresses the enterprise as a system, integrating design, engineering, manufacturing, administration, and logistics.

  16. Practical aspects of modern interferometry for optical manufacturing quality control: Part 2

    NASA Astrophysics Data System (ADS)

    Smythe, Robert

    2012-07-01

    Modern phase shifting interferometers enable the manufacture of optical systems that drive the global economy. Semiconductor chips, solid-state cameras, cell phone cameras, infrared imaging systems, space-based satellite imaging, and DVD and Blu-Ray disks are all enabled by phase-shifting interferometers. Theoretical treatments of data analysis and instrument design advance the technology but often are not helpful towards the practical use of interferometers. An understanding of the parameters that drive system performance is critical to produce useful results. Any interferometer will produce a data map and results; this paper, in three parts, reviews some of the key issues to minimize error sources in that data and provide a valid measurement.

  17. Practical aspects of modern interferometry for optical manufacturing quality control, Part 3

    NASA Astrophysics Data System (ADS)

    Smythe, Robert A.

    2012-09-01

    Modern phase shifting interferometers enable the manufacture of optical systems that drive the global economy. Semiconductor chips, solid-state cameras, cell phone cameras, infrared imaging systems, space-based satellite imaging, and DVD and Blu-Ray disks are all enabled by phase-shifting interferometers. Theoretical treatments of data analysis and instrument design advance the technology but often are not helpful toward the practical use of interferometers. An understanding of the parameters that drive the system performance is critical to produce useful results. Any interferometer will produce a data map and results; this paper, in three parts, reviews some of the key issues to minimize error sources in that data and provide a valid measurement.

  18. Downstream processing from melt granulation towards tablets: In-depth analysis of a continuous twin-screw melt granulation process using polymeric binders.

    PubMed

    Grymonpré, W; Verstraete, G; Vanhoorne, V; Remon, J P; De Beer, T; Vervaet, C

    2018-03-01

    The concept of twin-screw melt granulation (TSMG) has steadily (re)gained interest in pharmaceutical formulation development as an intermediate step during tablet manufacturing. However, to be considered a viable processing option for solid oral dosage forms, all critical sources of variability that could affect this granulation technique need to be understood. The purpose of this study was to provide an in-depth analysis of the continuous TSMG process in order to expose the critical process parameters (CPP) and elucidate the impact of process and formulation parameters on the critical quality attributes (CQA) of granules and tablets during continuous TSMG. The first part of the study dealt with the screening of various amorphous polymers as binders for producing high-dosed melt granules of two model drugs (i.e., acetaminophen and hydrochlorothiazide). The second part described a quality-by-design (QbD) approach for melt granulation of hydrochlorothiazide in order to thoroughly evaluate the TSMG, milling and tableting stages of the continuous TSMG line. Using amorphous polymeric binders resulted in melt granules with high milling efficiency due to their brittle behaviour, without producing excessive amounts of fines, providing high granule yields with low friability and making them highly suitable for further downstream processing. One of the most important CPP during TSMG with polymeric binders was the granulation torque, which, in the case of polymers with high Tg, increased during longer granulation runs to critical levels endangering the continuous process flow. However, by optimizing both screw speed and throughput, or by changing to polymeric binders with lower Tg, it was possible to significantly reduce this risk. This research highlights that TSMG should be considered a viable option during formulation development of solid oral dosage forms, based on the robustness of the CQA of both melt granules and tablets.
Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Engineering applications of strong ground motion simulation

    NASA Astrophysics Data System (ADS)

    Somerville, Paul

    1993-02-01

    The formulation, validation and application of a procedure for simulating strong ground motions for use in engineering practice are described. The procedure uses empirical source functions (derived from near-source strong motion recordings of small earthquakes) to provide a realistic representation of effects such as source radiation that are difficult to model at high frequencies due to their partly stochastic behavior. Wave propagation effects are modeled using simplified Green's functions that are designed to transfer empirical source functions from their recording sites to those required for use in simulations at a specific site. The procedure has been validated against strong motion recordings of both crustal and subduction earthquakes. For the validation process we choose earthquakes whose source models (including a spatially heterogeneous distribution of the slip of the fault) are independently known and which have abundant strong motion recordings. A quantitative measurement of the fit between the simulated and recorded motion in this validation process is used to estimate the modeling and random uncertainty associated with the simulation procedure. This modeling and random uncertainty is one part of the overall uncertainty in estimates of ground motions of future earthquakes at a specific site derived using the simulation procedure. The other contribution to uncertainty is that due to uncertainty in the source parameters of future earthquakes that affect the site, which is estimated from a suite of simulations generated by varying the source parameters over their ranges of uncertainty. In this paper, we describe the validation of the simulation procedure for crustal earthquakes against strong motion recordings of the 1989 Loma Prieta, California, earthquake, and for subduction earthquakes against the 1985 Michoacán, Mexico, and Valparaiso, Chile, earthquakes. 
We then show examples of the application of the simulation procedure to the estimation of the design response spectra for crustal earthquakes at a power plant site in California and for subduction earthquakes in the Seattle-Portland region. We also demonstrate the use of simulation methods for modeling the attenuation of strong ground motion, and show evidence of the effect of critical reflections from the lower crust in causing the observed flattening of the attenuation of strong ground motion from the 1988 Saguenay, Quebec, and 1989 Loma Prieta earthquakes.

  20. Nonequilibrium critical dynamics of the two-dimensional Ashkin-Teller model at the Baxter line

    NASA Astrophysics Data System (ADS)

    Fernandes, H. A.; da Silva, R.; Caparica, A. A.; de Felício, J. R. Drugowich

    2017-04-01

    We investigate the short-time universal behavior of the two-dimensional Ashkin-Teller model at the Baxter line by performing time-dependent Monte Carlo simulations. First, as preparatory results, we obtain the critical parameters by searching the optimal power-law decay of the magnetization. Thus, the dynamic critical exponents θm and θp, related to the magnetic and electric order parameters, as well as the persistence exponent θg, are estimated using heat-bath Monte Carlo simulations. In addition, we estimate the dynamic exponent z and the static critical exponents β and ν for both order parameters. We propose a refined method to estimate the static exponents that considers two different averages: one that combines an internal average using several seeds with another, which is taken over temporal variations in the power laws. Moreover, we also performed the bootstrapping method for a complementary analysis. Our results show that the ratio β /ν exhibits universal behavior along the critical line corroborating the conjecture for both magnetization and polarization.
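    The exponent estimates above rest on fitting a power-law decay M(t) ∝ t^θ to short-time series. A minimal log-log least-squares fit, sketched here on synthetic noise-free data with an illustrative exponent (not the authors' refined two-level averaging scheme), looks like:

```python
import math

def power_law_exponent(ts, ms):
    """Least-squares slope of ln M versus ln t, i.e. theta in M(t) ~ t**theta."""
    xs = [math.log(t) for t in ts]
    ys = [math.log(m) for m in ms]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    return sxy / sxx

# Synthetic magnetization growth with theta = 0.19 (illustrative value only).
ts = list(range(1, 101))
ms = [t ** 0.19 for t in ts]
theta = power_law_exponent(ts, ms)
print(round(theta, 6))  # → 0.19
```

    In practice the fit would be repeated over many seeds and time windows, which is what the averaging procedures in the abstract address.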

  1. Critical Parameters of the Initiation Zone for Spontaneous Dynamic Rupture Propagation

    NASA Astrophysics Data System (ADS)

    Galis, M.; Pelties, C.; Kristek, J.; Moczo, P.; Ampuero, J. P.; Mai, P. M.

    2014-12-01

    Numerical simulations of rupture propagation are used to study both earthquake source physics and earthquake ground motion. Under linear slip-weakening friction, artificial procedures are needed to initiate a self-sustained rupture. The concept of an overstressed asperity is often applied, in which the asperity is characterized by its size, shape and overstress. The physical properties of the initiation zone may have significant impact on the resulting dynamic rupture propagation. A trial-and-error approach is often necessary for successful initiation because 2D and 3D theoretical criteria for estimating the critical size of the initiation zone do not provide general rules for designing 3D numerical simulations. Therefore, it is desirable to define guidelines for efficient initiation with minimal artificial effects on rupture propagation. We perform an extensive parameter study using numerical simulations of 3D dynamic rupture propagation assuming a planar fault to examine the critical size of square, circular and elliptical initiation zones as a function of asperity overstress and background stress. For a fixed overstress, we discover that the area of the initiation zone is more important for the nucleation process than its shape. Comparing our numerical results with published theoretical estimates, we find that the estimates by Uenishi & Rice (2004) are applicable to configurations with low background stress and small overstress. None of the published estimates are consistent with numerical results for configurations with high background stress. We therefore derive new equations to estimate the initiation zone size in environments with high background stress. Our results provide guidelines for defining the size of the initiation zone and overstress with minimal effects on the subsequent spontaneous rupture propagation.

  2. Parameter sensitivity analysis of a 1-D cold region lake model for land-surface schemes

    NASA Astrophysics Data System (ADS)

    Guerrero, José-Luis; Pernica, Patricia; Wheater, Howard; Mackay, Murray; Spence, Chris

    2017-12-01

    Lakes might be sentinels of climate change, but the uncertainty in their main feedback to the atmosphere - heat-exchange fluxes - is often not considered within climate models. Additionally, these fluxes are seldom measured, hindering critical evaluation of model output. Analysis of the Canadian Small Lake Model (CSLM), a one-dimensional integral lake model, was performed to assess its ability to reproduce diurnal and seasonal variations in heat fluxes and the sensitivity of simulated fluxes to changes in model parameters, i.e., turbulent transport parameters and the light extinction coefficient (Kd). A C++ open-source software package, Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), was used to perform sensitivity analysis (SA) and identify the parameters that dominate model behavior. The generalized likelihood uncertainty estimation (GLUE) was applied to quantify the fluxes' uncertainty, comparing daily-averaged eddy-covariance observations to the output of CSLM. Seven qualitative and two quantitative SA methods were tested, and the posterior likelihoods of the modeled parameters, obtained from the GLUE analysis, were used to determine the dominant parameters and the uncertainty in the modeled fluxes. Despite the ubiquity of the equifinality issue - different parameter-value combinations yielding equivalent results - the answer to the question was unequivocal: Kd, a measure of how much light penetrates the lake, dominates sensible and latent heat fluxes, and the uncertainty in their estimates is strongly related to the accuracy with which Kd is determined. This is important since accurate and continuous measurements of Kd could reduce modeling uncertainty.
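    As an illustration of the GLUE step, the sketch below weights Monte Carlo parameter samples by an informal likelihood and forms a weighted parameter estimate. The toy linear "model", the exponential likelihood shape, and the behavioural threshold are all assumptions for illustration; they are not the CSLM or PSUADE setup.

```python
import math
import random

random.seed(42)

def glue_weights(param_sets, simulate, observed, threshold=1e-6):
    """GLUE in minimal form: score each parameter set with an informal
    likelihood L = exp(-SSE / median SSE), keep 'behavioural' sets above
    the threshold, and normalise their weights to sum to 1."""
    sses = []
    for p in param_sets:
        sim = simulate(p)
        sses.append(sum((s - o) ** 2 for s, o in zip(sim, observed)))
    scale = sorted(sses)[len(sses) // 2] or 1.0   # median SSE as scale
    likes = [math.exp(-s / scale) for s in sses]
    kept = [(p, l) for p, l in zip(param_sets, likes) if l > threshold]
    total = sum(l for _, l in kept)
    return [(p, l / total) for p, l in kept]

# Toy model: latent heat flux proportional to Kd times a forcing series
# (purely illustrative, not the lake model's physics).
forcing = [1.0, 2.0, 3.0, 4.0, 5.0]
kd_true = 0.35
observed = [kd_true * f for f in forcing]
samples = [random.uniform(0.1, 0.6) for _ in range(2000)]
weighted = glue_weights(samples, lambda kd: [kd * f for f in forcing], observed)
kd_hat = sum(p * w for p, w in weighted)
print(round(kd_hat, 2))  # → approximately 0.35
```

    The posterior weights concentrate around the value of Kd that best reproduces the observations, which is the mechanism by which GLUE turns behavioural samples into flux uncertainty bounds.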

  3. Comparison of Clustering Techniques for Residential Energy Behavior using Smart Meter Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Ling; Lee, Doris; Sim, Alex

    Current practice in whole time series clustering of residential meter data focuses on aggregated or subsampled load data at the customer level, which ignores day-to-day differences within customers. This information is critical to determine each customer’s suitability to various demand side management strategies that support intelligent power grids and smart energy management. Clustering daily load shapes provides fine-grained information on customer attributes and sources of variation for subsequent models and customer segmentation. In this paper, we apply 11 clustering methods to daily residential meter data. We evaluate their parameter settings and suitability based on 6 generic performance metrics and post-checking of the resulting clusters. Finally, we recommend suitable techniques and parameters based on the goal of discovering diverse daily load patterns among residential customers. To the authors’ knowledge, this paper is the first robust comparative review of clustering techniques applied to daily residential load shape time series in the power systems literature.

  4. Cryptographic robustness of practical quantum cryptography: BB84 key distribution protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molotkov, S. N.

    2008-07-15

    In real fiber-optic quantum cryptography systems, the avalanche photodiodes are not perfect, the source of quantum states is not a single-photon one, and the communication channel is lossy. For these reasons, key distribution is impossible under certain conditions for the system parameters. A simple analysis is performed to find relations between the parameters of real cryptography systems and the length of the quantum channel that guarantee secure quantum key distribution when the eavesdropper's capabilities are limited only by fundamental laws of quantum mechanics while the devices employed by the legitimate users are based on current technologies. Critical values are determined for the rate of secure real-time key generation that can be reached under the current technology level. Calculations show that the upper bound on channel length can be as high as 300 km for imperfect photodetectors (avalanche photodiodes) with present-day quantum efficiency (η ≈ 20%) and dark count probability (p_dark ≈ 10⁻⁷).

  5. Material and energy recovery in integrated waste management system--an Italian case study on the quality of MSW data.

    PubMed

    Bianchini, A; Pellegrini, M; Saccani, C

    2011-01-01

    This paper analyses the way numerical data on Municipal Solid Waste (MSW) quantities are recorded, processed and then reported for six of the most meaningful Italian Districts and shows the difficulties found during the comparison of these Districts, starting from the lack of homogeneity and the fragmentation of the data indispensable to make this critical analysis. These aspects are often ignored, but data certainty is the basis for serious MSW planning. In particular, the paper focuses on the overall Source Separation Level (SSL) definition and on the influence that Special Waste (SW) assimilated to MSW has on it. An investigation was then necessary to identify new parameters in place of the overall SSL. These parameters are not only important for measuring the performance of a waste management system, but are fundamental in order to design and check the management plan and to identify possible actions to improve it. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. An information propagation model considering incomplete reading behavior in microblog

    NASA Astrophysics Data System (ADS)

    Su, Qiang; Huang, Jiajia; Zhao, Xiande

    2015-02-01

    Microblog is one of the most popular communication channels on the Internet, and has already become the third largest source of news and public opinions in China. Although researchers have studied the information propagation in microblog using the epidemic models, previous studies have not considered the incomplete reading behavior among microblog users. Therefore, the model cannot fit the real situations well. In this paper, we proposed an improved model entitled Microblog-Susceptible-Infected-Removed (Mb-SIR) for information propagation by explicitly considering the user's incomplete reading behavior. We also tested the effectiveness of the model using real data from Sina Microblog. We demonstrate that the new proposed model is more accurate in describing the information propagation in microblog. In addition, we also investigate the effects of the critical model parameters, e.g., reading rate, spreading rate, and removed rate through numerical simulations. The simulation results show that, compared with other parameters, reading rate plays the most influential role in the information propagation performance in microblog.
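    The role of the reading rate can be illustrated with a minimal SIR-type simulation in which the effective spreading rate is scaled by the fraction of posts actually read. The equations and parameter values below are a hypothetical sketch of an "incomplete reading" modification, not the exact Mb-SIR formulation from the paper.

```python
def mb_sir(read_rate, spread_rate=0.5, removed_rate=0.1,
           i0=0.01, steps=2000, dt=0.1):
    """Euler integration of an SIR-type propagation model where the
    effective spreading rate is spread_rate * read_rate, so users who
    never read a post cannot be infected by it."""
    s, i, r = 1.0 - i0, i0, 0.0
    beta = spread_rate * read_rate
    for _ in range(steps):
        new_inf = beta * s * i * dt   # susceptibles who read and repost
        rec = removed_rate * i * dt   # infected users losing interest
        s -= new_inf
        i += new_inf - rec
        r += rec
    return s, i, r

for rr in (0.3, 0.6, 1.0):
    s, i, r = mb_sir(rr)
    print(f"read_rate={rr}: fraction ever infected = {i + r:.3f}")
```

    Even in this crude sketch the final reach grows sharply with the reading rate, consistent with the abstract's conclusion that the reading rate is the most influential parameter.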

  7. Cryptographic robustness of practical quantum cryptography: BB84 key distribution protocol

    NASA Astrophysics Data System (ADS)

    Molotkov, S. N.

    2008-07-01

    In real fiber-optic quantum cryptography systems, the avalanche photodiodes are not perfect, the source of quantum states is not a single-photon one, and the communication channel is lossy. For these reasons, key distribution is impossible under certain conditions for the system parameters. A simple analysis is performed to find relations between the parameters of real cryptography systems and the length of the quantum channel that guarantee secure quantum key distribution when the eavesdropper’s capabilities are limited only by fundamental laws of quantum mechanics while the devices employed by the legitimate users are based on current technologies. Critical values are determined for the rate of secure real-time key generation that can be reached under the current technology level. Calculations show that the upper bound on channel length can be as high as 300 km for imperfect photodetectors (avalanche photodiodes) with present-day quantum efficiency (η ≈ 20%) and dark count probability (p_dark ≈ 10⁻⁷).
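    The order of magnitude of that channel-length bound can be reproduced with a crude single-photon estimate: signal detections fall off as 10^(−αL/10) while dark counts stay fixed, and key distribution fails once the dark-count-driven QBER exceeds a security threshold. The loss coefficient, the 11% QBER threshold, and the simplified QBER formula below are illustrative assumptions, not the paper's full analysis.

```python
import math

def max_secure_length(eta=0.20, p_dark=1e-7, alpha_db_per_km=0.2,
                      qber_max=0.11):
    """Largest channel length L (km) keeping QBER below qber_max.

    Transmittance t = 10**(-alpha*L/10); clicks per pulse ~ eta*t + p_dark;
    dark counts give an error half the time, so
    QBER ~ (p_dark/2) / (eta*t + p_dark).
    """
    # Solve (p_dark/2) / (eta*t + p_dark) = qber_max for t, then for L.
    t_min = p_dark * (1.0 / (2.0 * qber_max) - 1.0) / eta
    return -10.0 * math.log10(t_min) / alpha_db_per_km

print(round(max_secure_length()))  # → 288, roughly the ~300 km scale above
```

    With η = 0.2 and p_dark = 10⁻⁷ this toy bound lands near 290 km, close to the ~300 km figure quoted in the abstract.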

  8. Material and energy recovery in integrated waste management system - An Italian case study on the quality of MSW data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bianchini, A.; Pellegrini, M.; Saccani, C., E-mail: cesare.saccani@unibo.it

    2011-09-15

    This paper analyses the way numerical data on Municipal Solid Waste (MSW) quantities are recorded, processed and then reported for six of the most meaningful Italian Districts and shows the difficulties found during the comparison of these Districts, starting from the lack of homogeneity and the fragmentation of the data indispensable to make this critical analysis. These aspects are often ignored, but data certainty is the basis for serious MSW planning. In particular, the paper focuses on the overall Source Separation Level (SSL) definition and on the influence that Special Waste (SW) assimilated to MSW has on it. An investigation was then necessary to identify new parameters in place of the overall SSL. These parameters are not only important for measuring the performance of a waste management system, but are fundamental in order to design and check the management plan and to identify possible actions to improve it.

  9. Stormwater Pollutant Control from Critical Source Areas

    EPA Science Inventory

    Critical source areas include: vehicular maintenance facilities, parking lots and bus terminals, junk and lumber yards, industrial storage facilities, loading docks and refueling areas, manufacturing sites, etc. Addressing pollutant runoff from these areas is an important compon...

  10. Bayesian multiple-source localization in an uncertain ocean environment.

    PubMed

    Dosso, Stan E; Wilmut, Michael J

    2011-06-01

    This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out. © 2011 Acoustical Society of America

  11. Total grain-size distribution of four subplinian-Plinian tephras from Hekla volcano, Iceland: Implications for sedimentation dynamics and eruption source parameters

    NASA Astrophysics Data System (ADS)

    Janebo, Maria H.; Houghton, Bruce F.; Thordarson, Thorvaldur; Bonadonna, Costanza; Carey, Rebecca J.

    2018-05-01

    The size distribution of the population of particles injected into the atmosphere during an explosive volcanic eruption, i.e., the total grain-size distribution (TGSD), can provide important insights into fragmentation efficiency and is a fundamental source parameter for models of tephra dispersal and sedimentation. Recent volcanic crises (e.g., Eyjafjallajökull 2010, Iceland, and Cordón Caulle 2011, Chile) and the ensuing economic losses highlighted the need to better constrain the eruption source parameters used in real-time forecasting of ash dispersal (e.g., mass eruption rate, plume height, particle features), with a special focus on the scarcity of published TGSDs in the scientific literature. Here we present TGSD data for Hekla volcano, which has been very active in the last few thousand years and is located on critical aviation routes. In particular, we have reconstructed the TGSDs of the initial subplinian-Plinian phases of four historical eruptions, covering a range of magma composition (andesite to rhyolite), eruption intensity (VEI 4 to 5), and erupted volume (0.2 to 1 km³). All four eruptions have bimodal TGSDs with mass fractions of fine ash (<63 μm; m63) from 0.11 to 0.25. The two Plinian dacitic-rhyolitic Hekla deposits have higher abundances of fine ash, and hence larger m63 values, than their andesitic subplinian equivalents, probably a function of more intense and efficient primary fragmentation. Due to differences in plume height, this contrast is not seen in samples from individual sites, especially in the near field, where lapilli have a wider spatial coverage in the Plinian deposits. The distribution of pyroclast sizes in Plinian versus subplinian falls reflects the competing influences of more efficient fragmentation (producing larger amounts of fine ash) versus more efficient particle transport related to higher and more vigorous plumes, which displace relatively coarse lapilli farther down the dispersal axis.
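    The m63 metric used above is simply the mass fraction of the TGSD finer than 63 μm. A minimal computation from a distribution binned in Krumbein φ units (φ = −log2 of diameter in mm, so 63 μm ≈ 4 φ) might look like this; the bimodal example distribution is invented for illustration and is not Hekla data.

```python
def m63(phi_bins, mass_fractions):
    """Mass fraction finer than 63 um (phi >= 4) from binned TGSD data.

    phi_bins: lower edge of each 1-phi class (phi = -log2(d_mm));
    mass_fractions: weight fraction in each class, summing to 1.
    """
    assert abs(sum(mass_fractions) - 1.0) < 1e-9
    return sum(f for phi, f in zip(phi_bins, mass_fractions) if phi >= 4)

# Hypothetical bimodal TGSD: a coarse lapilli mode and a fine-ash mode.
phi = [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6]
frac = [0.05, 0.12, 0.18, 0.15, 0.10, 0.06, 0.05, 0.05, 0.06, 0.08, 0.07, 0.03]
print(round(m63(phi, frac), 2))  # → 0.18
```

    A value of 0.18 for this invented distribution would sit inside the 0.11-0.25 range reported for the four Hekla eruptions.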

  12. The 4-parameter Compressible Packing Model (CPM) including a critical cavity size ratio

    NASA Astrophysics Data System (ADS)

    Roquier, Gerard

    2017-06-01

    The 4-parameter Compressible Packing Model (CPM) has been developed to predict the packing density of mixtures constituted by bidisperse spherical particles. The four parameters are: the wall effect and the loosening effect coefficients, the compaction index and a critical cavity size ratio. The two geometrical interactions have been studied theoretically on the basis of a spherical cell centered on a secondary class bead. For the loosening effect, a critical cavity size ratio, below which a fine particle can be inserted into a small cavity created by touching coarser particles, is introduced. This is the only parameter which requires adaptation to extend the model to other types of particles. The 4-parameter CPM demonstrates its efficiency on frictionless glass beads (300 values), spherical particles numerically simulated (20 values), round natural particles (125 values) and crushed particles (335 values) with correlation coefficients equal to respectively 99.0%, 98.7%, 97.8%, 96.4% and mean deviations equal to respectively 0.007, 0.006, 0.007, 0.010.
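    For context, the no-interaction limit of binary packing, the idealized case that the CPM's wall-effect and loosening-effect coefficients correct, has a simple closed form: either the coarse grains form the load-bearing skeleton or they sit as isolated inclusions in a fine matrix, and the packing density is the lower envelope of the two branches. This is a textbook sketch, not the 4-parameter CPM itself.

```python
def ideal_binary_packing(y_fine, beta_coarse, beta_fine):
    """Packing density of a binary mix with no geometric interactions.

    y_fine: solid volume fraction of fines; beta_*: monodisperse packing
    densities. Coarse-dominant branch: the coarse skeleton sets the total
    volume and fines fill its voids. Fine-dominant branch: fines pack
    around isolated coarse inclusions.
    """
    coarse_branch = beta_coarse / (1.0 - y_fine) if y_fine < 1 else float("inf")
    fine_branch = 1.0 / ((1.0 - y_fine) + y_fine / beta_fine)
    return min(coarse_branch, fine_branch)

# With beta = 0.64 for both classes, the optimum sits where the branches
# cross: y_fine ~ 0.26 and density beta_c + (1 - beta_c)*beta_f ~ 0.87.
ys = [i / 1000 for i in range(1001)]
best = max(ys, key=lambda y: ideal_binary_packing(y, 0.64, 0.64))
print(round(best, 3), round(ideal_binary_packing(best, 0.64, 0.64), 3))
```

    Real mixtures fall below this envelope because of the wall and loosening effects, which is precisely what the CPM's interaction coefficients and critical cavity size ratio quantify.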

  13. Optimal pupil design for confocal microscopy

    NASA Astrophysics Data System (ADS)

    Patel, Yogesh G.; Rajadhyaksha, Milind; DiMarzio, Charles A.

    2010-02-01

    Confocal reflectance microscopy may enable screening and diagnosis of skin cancers noninvasively and in real time, as an adjunct to biopsy and pathology. Current instruments are large, complex, and expensive. A simpler confocal line-scanning microscope may accelerate the translation of confocal microscopy into clinical and surgical dermatology. A confocal reflectance microscope may use a beamsplitter, transmitting and detecting through the full pupil, or a divided pupil (theta configuration), with one half used for transmission and the other for detection. The divided pupil may offer better sectioning and contrast. We present a Fourier optics model and compare the on-axis irradiance of a confocal point-scanning microscope in both pupil configurations, optimizing the profile of a Gaussian beam in a circular or semicircular aperture. We repeat both calculations with a cylindrical lens that focuses the source to a line. The variable parameter is the fill factor, h, the ratio of the 1/e² diameter of the Gaussian beam to the diameter of the full aperture. The optimal values of h for point scanning are 0.90 (full aperture) and 0.66 (half aperture). For line scanning, the fill factors are 1.02 (full) and 0.52 (half). Additional parameters to consider are the optimal location of the point-source beam in the divided-pupil configuration, the optimal line width for the line source, and the width of the aperture in the divided-pupil configuration. Additional figures of merit are field of view and sectioning. Use of optimal designs is critical in comparing the experimental performance of the different configurations.
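
    The full-pupil point-scanning optimum can be checked with a short numerical sketch. This is a standard textbook calculation, not the authors' Fourier optics model: for a Gaussian beam of fixed total power truncated by a circular aperture of radius a, with fill factor h = w/a (w the 1/e² intensity radius), the on-axis focal amplitude is proportional to h(1 − exp(−1/h²)). Maximizing it recovers a fill factor close to the reported 0.90:

```python
import numpy as np

# On-axis focal amplitude vs. fill factor h for a centrally truncated
# Gaussian of fixed total power (amplitude ~ h * (1 - exp(-1/h^2))).
h = np.linspace(0.3, 2.0, 20000)
on_axis_amplitude = h * (1.0 - np.exp(-1.0 / h**2))

h_opt = h[np.argmax(on_axis_amplitude)]   # ~0.89, close to the reported 0.90
```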

  14. Identification of land use and other anthropogenic impacts on nitrogen cycling using stable isotopes and distributed hydrologic modeling

    NASA Astrophysics Data System (ADS)

    O'Connell, M. T.; Macko, S. A.

    2017-12-01

    Reactive modeling of the sources and processes affecting the concentration of NO3- and NH4+ in natural and anthropogenically influenced surface water can reveal unexpected characteristics of these systems. A distributed hydrologic model, TREX, is presented that provides opportunities to study multiscale effects of nitrogen inputs, outputs, and changes. The model is adapted to run on parallel computing architecture and includes the geochemical reaction module PhreeqcRM, which enables calculation of δ15N and δ18O from biologically mediated transformation reactions in addition to mixing and equilibration. Management practices intended to attenuate nitrate in surface and subsurface waters, in particular the establishment of riparian buffer zones, are variably effective due to spatial heterogeneity of soils and preferential flow through buffers. Accounting for this heterogeneity in a fully distributed biogeochemical model allows for more efficient planning and management practices. Highly sensitive areas within a watershed can be identified based on a number of spatially variable parameters; by varying those parameters systematically, the conditions under which those areas come under more or less critical stress can be determined. Responses to stimuli ranging from local changes in cropping regimes to global shifts in climate can be predicted at various scales. This work presents simulations of conditions showing low antecedent nitrogen retention versus significant contribution of old nitrate. Nitrogen sources are partitioned using dual isotope ratios and temporally varying concentrations. In these two scenarios, we can evaluate the efficiency of source identification based on spatially explicit information, and model the effects of increasing urban land use on N biogeochemical cycling.

  15. Economic impact of Tegaderm chlorhexidine gluconate (CHG) dressing in critically ill patients.

    PubMed

    Thokala, Praveen; Arrowsmith, Martin; Poku, Edith; Martyn-St James, Marissa; Anderson, Jeff; Foster, Steve; Elliott, Tom; Whitehouse, Tony

    2016-09-01

    To estimate the economic impact of a Tegaderm™ chlorhexidine gluconate (CHG) gel dressing compared with a standard intravenous (i.v.) dressing (defined as a non-antimicrobial transparent film dressing), used for insertion-site care of short-term central venous and arterial catheters (intravascular catheters) in adult critical care patients, using a cost-consequence model populated with data from published sources. A decision-analytic cost-consequence model was developed which assigned to each patient with an indwelling intravascular catheter and a standard dressing a baseline risk of associated dermatitis, local infection at the catheter insertion site, and catheter-related bloodstream infection (CRBSI), estimated from published secondary sources. The risks of these events for patients with Tegaderm CHG were estimated by applying the effectiveness parameters from the clinical review to the baseline risks. Costs comprised the cost of the intervention (i.e., Tegaderm CHG or standard i.v. dressing) and hospital treatment costs, which depended on whether patients had local dermatitis, local infection, or CRBSI. Total costs were estimated as mean values over 10,000 probabilistic sensitivity analysis (PSA) runs. Tegaderm CHG resulted in an average cost saving of £77 per patient in an intensive care unit, and has a 98.5% probability of being cost-saving compared with standard i.v. dressings. The analyses suggest that Tegaderm CHG is a cost-saving strategy to reduce CRBSI, and the results were robust to sensitivity analyses.
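
    A cost-consequence PSA of this kind can be sketched in a few lines. The numbers below are purely illustrative placeholders (the baseline risks, relative risks, and costs are not the published inputs); only the structure mirrors the analysis described: draw parameter values, accrue dressing and treatment costs per arm, and average over 10,000 runs.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000                                   # PSA runs

# Hypothetical inputs (illustrative only, not the published values):
base_crbsi = rng.beta(15, 985, n)            # baseline CRBSI risk, standard dressing
base_local = rng.beta(30, 970, n)            # baseline local-infection risk
rr_crbsi = rng.lognormal(np.log(0.4), 0.2, n)   # relative risk with CHG dressing
rr_local = rng.lognormal(np.log(0.5), 0.2, n)
cost_crbsi, cost_local = 9_000.0, 400.0      # treatment costs (GBP), hypothetical
cost_chg, cost_std = 6.0, 1.5                # dressing costs per patient, hypothetical

# Accrue costs per arm: intervention cost plus expected treatment costs.
cost_std_arm = cost_std + base_crbsi * cost_crbsi + base_local * cost_local
cost_chg_arm = cost_chg + base_crbsi * rr_crbsi * cost_crbsi \
                        + base_local * rr_local * cost_local

saving = cost_std_arm - cost_chg_arm
mean_saving = saving.mean()                  # mean incremental saving per patient
p_cost_saving = (saving > 0).mean()          # probability CHG dressing is cost-saving
```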

  16. Ultra-compact swept-source optical coherence tomography handheld probe with motorized focus adjustment (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    LaRocca, Francesco; Nankivil, Derek; Keller, Brenton; Farsiu, Sina; Izatt, Joseph A.

    2017-02-01

    Handheld optical coherence tomography (OCT) systems facilitate imaging of young children, bedridden subjects, and those with less stable fixation. Smaller and lighter OCT probes allow for more efficient imaging and reduced operator fatigue, which is critical for prolonged use in either the operating room or neonatal intensive care unit. In addition to size and weight, the imaging speed, image quality, field of view, resolution, and focus correction capability are critical parameters that determine the clinical utility of a handheld probe. Here, we describe an ultra-compact swept source (SS) OCT handheld probe weighing only 211 g (half the weight of the next lightest handheld SSOCT probe in the literature) with 20.1 µm lateral resolution, 7 µm axial resolution, 102 dB peak sensitivity, a 27° x 23° field of view, and motorized focus adjustment for refraction correction from −10 to +16 D. A 2D microelectromechanical systems (MEMS) scanner, a converging beam-at-scanner telescope configuration, and an optical design employing 6 different custom optics were used to minimize device size and weight while achieving diffraction-limited performance throughout the system's field of view. Custom graphics processing unit (GPU)-accelerated software was used to provide real-time display of OCT B-scans and volumes. Retinal images were acquired from adult volunteers to demonstrate imaging performance.

  17. Earthquake nucleation on faults with rate-and state-dependent strength

    USGS Publications Warehouse

    Dieterich, J.H.

    1992-01-01

    Dieterich, J.H., 1992. Earthquake nucleation on faults with rate- and state-dependent strength. In: T. Mikumo, K. Aki, M. Ohnaka, L.J. Ruff and P.K.P. Spudich (Editors), Earthquake Source Physics and Earthquake Precursors. Tectonophysics, 211: 115-134. Faults with rate- and state-dependent constitutive properties reproduce a range of observed fault slip phenomena, including spontaneous nucleation of slip instabilities at stresses above some critical stress level and recovery of strength following slip instability. Calculations with a plane-strain fault model with spatially varying properties demonstrate that accelerating slip precedes instability and becomes localized to a fault patch. The dimensions of the fault patch follow scaling relations for the minimum critical length for unstable fault slip. The critical length is a function of normal stress, loading conditions, and constitutive parameters, which include Dc, the characteristic slip distance. If slip starts on a patch that exceeds the critical size, the length of the rapidly accelerating zone tends to shrink to the characteristic size as the time of instability approaches. Solutions have been obtained for a uniform, fixed-patch model that are in good agreement with results from the plane-strain model. Over a wide range of conditions above the steady-state stress, the logarithm of the time to instability decreases linearly as the initial stress increases. Because nucleation patch length and premonitory displacement are both proportional to Dc, the moment of premonitory slip (proportional to patch area times slip) scales as Dc³. The scaling of Dc is currently an open question. Unless Dc for earthquake faults is significantly greater than that observed on laboratory faults, premonitory strain arising from the nucleation process for earthquakes may be too small to detect using current observation methods. If we exclude the possibility that Dc in the nucleation zone controls the magnitude of the subsequent earthquake, then the source dimensions of the smallest earthquakes in a region provide an upper limit for the size of the nucleation patch. © 1992.

  18. THE SDSS-III APOGEE SPECTRAL LINE LIST FOR H-BAND SPECTROSCOPY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shetrone, M.; Bizyaev, D.; Chojnowski, D.

    We present the H-band spectral line lists adopted by the Apache Point Observatory Galactic Evolution Experiment (APOGEE). The APOGEE line lists comprise astrophysical, theoretical, and laboratory sources from the literature, as well as newly evaluated astrophysical oscillator strengths and damping parameters. We discuss the construction of the APOGEE line list, which is one of the critical inputs for the APOGEE Stellar Parameters and Chemical Abundances Pipeline, and present three different versions that have been used at various stages of the project. The methodology for the newly calculated astrophysical line lists is reviewed. The largest of these three line lists contains 134,457 molecular and atomic transitions. In addition to the format adopted to store the data, the line lists are available in MOOG, Synspec, and Turbospectrum formats. The limitations of the line lists, along with guidance for their use on different spectral types, are discussed. We also present a list of H-band spectral features that are either poorly represented or completely missing in our line list. This list is based on the average of a large number of spectral fit residuals for APOGEE observations spanning a wide range of stellar parameters.

  19. Correspondence between discrete and continuous models of excitable media: trigger waves

    NASA Technical Reports Server (NTRS)

    Chernyak, Y. B.; Feldman, A. B.; Cohen, R. J.

    1997-01-01

    We present a theoretical framework for relating continuous partial differential equation (PDE) models of excitable media to discrete cellular automata (CA) models on a randomized lattice. These relations establish a quantitative link between the CA model and the specific physical system under study. We derive expressions for the CA model's plane wave speed, critical curvature, and effective diffusion constant in terms of the model's internal parameters (the interaction radius, excitation threshold, and time step). We then equate these expressions to the corresponding quantities obtained from solution of the PDEs (for a fixed excitability). This yields a set of coupled equations with a unique solution for the required CA parameter values. Here we restrict our analysis to "trigger" wave solutions obtained in the limiting case of a two-dimensional excitable medium with no recovery processes. We tested the correspondence between our CA model and two PDE models (the FitzHugh-Nagumo medium and a medium with a "sawtooth" nonlinear reaction source) and found good agreement with the numerical solutions of the PDEs. Our results suggest that the behavior of trigger waves is actually controlled by a small number of parameters.
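
    A toy version of the no-recovery ("trigger wave") CA on a randomized lattice can illustrate how a plane-wave speed emerges from the interaction radius, excitation threshold, and time step. All parameter names and values below are assumptions for illustration, not the authors' calibrated model:

```python
import numpy as np

rng = np.random.default_rng(0)
L, W, N = 40.0, 10.0, 2500       # domain size and number of random lattice sites
R, K = 1.0, 3                    # interaction radius and excitation threshold

pts = rng.uniform((0.0, 0.0), (L, W), (N, 2))
state = pts[:, 0] < R            # seed: excite a strip at the left edge

# Neighbor matrix: which sites lie within radius R of each other.
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
nbr = (d2 <= R * R).astype(np.int32)

fronts = []
for _ in range(25):
    counts = nbr @ state.astype(np.int32)   # excited neighbors per cell
    state = state | (counts >= K)           # excite; no recovery (trigger wave)
    fronts.append(pts[state, 0].max())      # rightmost excited cell

# Plane-wave speed in lattice units per time step, from the front trajectory.
speed = np.polyfit(np.arange(len(fronts)), fronts, 1)[0]
```

    Varying R, K, and the site density shifts the measured speed, which is the kind of CA-parameter-to-wave-property relation the paper makes quantitative.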

  20. A Novel Series Connected Batteries State of High Voltage Safety Monitor System for Electric Vehicle Application

    PubMed Central

    Jiaxi, Qiang; Lin, Yang; Jianhui, He; Qisheng, Zhou

    2013-01-01

    Batteries, as the main or auxiliary power source of an electric vehicle (EV), are usually connected in series at high voltage to improve drivability and energy efficiency. Today, more and more batteries are connected in series at high voltage; if there is any fault in the high-voltage system (HVS), the consequences are serious and dangerous. It is therefore necessary to monitor the electric parameters of the HVS to ensure high-voltage safety and protect personnel. In this study, a high-voltage safety monitor system is developed to address this critical issue. Four key electric parameters, namely precharge, contact resistance, insulation resistance, and remaining capacity, are monitored and analyzed based on the equivalent models presented in this study. A high-voltage safety controller which integrates the equivalent models and control strategy is developed. With the help of a hardware-in-the-loop system, the equivalent models integrated in the high-voltage safety controller are validated, and the online electric parameter monitoring strategy is analyzed and discussed. The test results indicate that the high-voltage safety monitor system designed in this paper is suitable for EV application. PMID:24194677

  2. Analysis of ground-motion simulation big data

    NASA Astrophysics Data System (ADS)

    Maeda, T.; Fujiwara, H.

    2016-12-01

    We developed a parallel distributed processing system that applies big data analysis to large-scale ground-motion simulation data. The system uses ground-motion index values and earthquake scenario parameters as input. We used the peak ground velocity and velocity response spectra as the ground-motion indices; their values are calculated from our simulation data. We used simulated long-period ground-motion waveforms at about 80,000 meshes, calculated by a three-dimensional finite difference method based on 369 earthquake scenarios for a great earthquake in the Nankai Trough. These scenarios were constructed by considering the uncertainty of source model parameters such as source area, rupture starting point, asperity location, rupture velocity, fmax, and slip function; we used these as the earthquake scenario parameters. The system first carries out clustering of the earthquake scenarios in each mesh by the k-means method. The number of clusters is determined in advance using hierarchical clustering by Ward's method. The scenario clustering results are then converted to a 1-D feature vector whose dimension is the number of scenario combinations: if two scenarios belong to the same cluster, the corresponding component of the feature vector is 1, and otherwise 0. The feature vector thus represents the 'response' of a mesh to the assumed earthquake scenario group. Next, the system performs clustering of the meshes by the k-means method using the feature vector of each mesh obtained previously. Here the number of clusters is given arbitrarily. The clustering of scenarios and meshes is performed by parallel distributed processing with Hadoop and Spark, respectively. In this study, we divided the meshes into 20 clusters. The meshes in each cluster are spatially concentrated, so the system can extract regions in which the meshes have a similar 'response' as clusters. For each cluster, it is possible to determine the particular scenario parameters that characterize the cluster. In other words, by utilizing this system, we can objectively obtain the critical scenario parameters of the ground-motion simulation for each evaluation point. This research was supported by CREST, JST.
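
    The two-stage clustering can be sketched as follows with hypothetical data. A minimal numpy k-means stands in for the Hadoop/Spark implementation, the ground-motion values are synthetic, and the cluster counts are fixed by hand rather than chosen by Ward's method:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

def kmeans(X, k, iters=50):
    """Plain k-means on the rows of X; returns cluster labels."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

# Toy data: a ground-motion index (e.g. PGV) for M meshes x S scenarios.
M, S = 60, 12
pgv = rng.lognormal(mean=rng.normal(0, 1, (M, 1)), sigma=0.3, size=(M, S))

# Stage 1: cluster the scenarios within each mesh, then encode co-membership
# of every scenario pair as a binary feature vector (dimension = number of
# scenario combinations).
pairs = list(combinations(range(S), 2))
feat = np.zeros((M, len(pairs)))
for m in range(M):
    lab = kmeans(pgv[m][:, None], k=3)
    feat[m] = [lab[i] == lab[j] for i, j in pairs]

# Stage 2: cluster the meshes on their 'response' feature vectors.
mesh_cluster = kmeans(feat, k=4)
```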

  3. Adiabatic Coupling Constant of Nitrobenzene- n-Alkane Critical Mixtures. Evidence from Ultrasonic Spectra and Thermodynamic Data

    NASA Astrophysics Data System (ADS)

    Mirzaev, Sirojiddin Z.; Kaatze, Udo

    2016-09-01

    Ultrasonic spectra of mixtures of nitrobenzene with n-alkanes, from n-hexane to n-nonane, are analyzed. They feature up to two Debye-type relaxation terms with discrete relaxation times and, near the critical point, an additional relaxation term due to fluctuations in the local concentration. The latter can be well represented by dynamic scaling theory. Its amplitude parameter reveals the adiabatic coupling constant of the mixtures of critical composition. The dependence of this thermodynamic parameter on the length of the n-alkanes corresponds to that of the slope in the pressure dependence of the critical temperature and is thus taken as another confirmation of the dynamic scaling model. The change in the variation of the coupling constant and of several other mixture parameters with alkane length probably reflects a structural change in the nitrobenzene-n-alkane mixtures when the number of carbon atoms per alkane exceeds eight.

  4. Dark Energy Survey Year 1 Results: redshift distributions of the weak-lensing source galaxies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoyle, B.; Gruen, D.; Bernstein, G. M.

    We describe the derivation and validation of redshift distribution estimates and their uncertainties for the galaxies used as weak lensing sources in the Dark Energy Survey (DES) Year 1 cosmological analyses. The Bayesian Photometric Redshift (BPZ) code is used to assign galaxies to four redshift bins between z = 0.2 and 1.3, and to produce initial estimates of the lensing-weighted redshift distributions $n^i_{PZ}(z)$ for bin i. Accurate determination of cosmological parameters depends critically on knowledge of $n^i$ but is insensitive to bin assignments or redshift errors for individual galaxies. The cosmological analyses allow for shifts $n^i(z) = n^i_{PZ}(z - \Delta z^i)$ to correct the mean redshift of $n^i(z)$ for biases in $n^i_{PZ}$. The $\Delta z^i$ are constrained by comparison of independently estimated 30-band photometric redshifts of galaxies in the COSMOS field to BPZ estimates made from the DES griz fluxes, for a sample matched in fluxes, pre-seeing size, and lensing weight to the DES weak-lensing sources. In companion papers, the $\Delta z^i$ are further constrained by the angular clustering of the source galaxies around red galaxies with secure photometric redshifts at 0.15

  7. Stripline split-ring resonator with integrated optogalvanic sample cell

    NASA Astrophysics Data System (ADS)

    Persson, Anders; Berglund, Martin; Thornell, Greger; Possnert, Göran; Salehpour, Mehran

    2014-04-01

    Intracavity optogalvanic spectroscopy (ICOGS) has been proposed as a method for unambiguous detection of rare isotopes. Of particular interest is 14C, where detection of extremely low concentrations, in the 1:10¹⁵ (14C:12C) range, is of interest in, e.g., radiocarbon dating and the pharmaceutical sciences. However, recent reports show that ICOGS suffers from substantial problems with reproducibility. To qualify ICOGS as an analytical method, more stable and reliable plasma generation and signal detection are needed. In our proposed setup, several critical parameters have been improved. We have utilized a stripline split-ring resonator microwave-induced microplasma source to excite and sustain the plasma. Such a microplasma source offers several advantages over conventional ICOGS plasma sources. For example, the stripline split-ring resonator concept separates plasma generation from signal detection, which enables sensitive detection at stable plasma conditions. The concept also permits in situ observation of the discharge conditions, which was found to improve reproducibility. Unique to the stripline split-ring resonator microplasma source in this study is that the optogalvanic sample cell has been embedded in the device itself. This integration enables improved temperature control and more stable and accurate signal detection. Significant improvements are demonstrated, including reproducibility, signal-to-noise ratio, and precision.

  8. 3-D Simulation of Earthquakes on the Cascadia Megathrust: Key Parameters and Constraints from Offshore Structure and Seismicity

    NASA Astrophysics Data System (ADS)

    Wirth, E. A.; Frankel, A. D.; Vidale, J. E.; Stone, I.; Nasser, M.; Stephenson, W. J.

    2017-12-01

    The Cascadia subduction zone has a long history of M8 to M9 earthquakes, inferred from coastal subsidence, tsunami records, and submarine landslides. These megathrust earthquakes occur mostly offshore, and an improved characterization of the megathrust is critical for accurate seismic hazard assessment in the Pacific Northwest. We run numerical simulations of 50 magnitude 9 earthquake rupture scenarios on the Cascadia megathrust, using a 3-D velocity model based on geologic constraints and regional seismicity, as well as active and passive source seismic studies. We identify key parameters that control the intensity of ground shaking and the resulting seismic hazard. Variations in the down-dip limit of rupture (e.g., extending rupture to the top of the non-volcanic tremor zone, compared to a completely offshore rupture) result in a 2-3x difference in peak ground acceleration (PGA) for the inland city of Seattle, Washington. Comparisons of our simulations to paleoseismic data suggest that rupture extending to the 1 cm/yr locking contour (i.e., mostly offshore) provides the best fit to estimates of coastal subsidence during previous Cascadia earthquakes, but further constraints on the down-dip limit from microseismicity, offshore geodetics, and paleoseismic evidence are needed. Similarly, our simulations demonstrate that coastal communities experience a four-fold increase in PGA depending upon their proximity to strong-motion-generating areas (i.e., high-strength asperities) on the deeper portions of the megathrust. An improved understanding of the structure and rheology of the plate interface and accretionary wedge, and better detection of offshore seismicity, may allow us to forecast locations of these asperities during a future Cascadia earthquake. In addition to these parameters, the seismic velocity and attenuation structure offshore also strongly affects the resulting ground shaking. This work outlines the range of plausible ground motions from an M9 Cascadia earthquake, and highlights the importance of offshore studies for constraining critical parameters and seismic hazard in the Pacific Northwest.

  9. Modeling and analysis of sub-surface leakage current in nano-MOSFET under cutoff regime

    NASA Astrophysics Data System (ADS)

    Swami, Yashu; Rai, Sanjeev

    2017-02-01

    The high leakage current in nanometer regimes is becoming a significant portion of power dissipation in nano-MOSFET circuits as threshold voltage, channel length, and gate oxide thickness are scaled down to the nanometer range. Precise evaluation of the leakage current, and meticulous modeling of it at the nanometer technology scale, is increasingly critical in designing low-power nano-MOSFET circuits. We present a compact model for the sub-threshold leakage current in bulk-driven nano-MOSFETs. The proposed model is implemented in the latest PTM bulk nano-MOSFET model and is found to be in good accord with technology-CAD simulation data. This paper also reviews the intrinsic leakage mechanisms of nano-MOSFETs in weak inversion, such as drain-induced barrier lowering (DIBL), gate-induced drain leakage (GIDL), and gate oxide tunneling (GOT) leakage. The root cause of the sub-surface leakage current is the nanoscale short channel length, which causes source-drain coupling even in the sub-threshold domain and allows carriers to surmount the barrier between the source and drain. The enhanced model accounts for the dependence on the following parameters: drain-to-source bias (VDS), gate-to-source bias (VGS), channel length (LG), source/drain junction depth (Xj), bulk doping concentration (NBULK), and operating temperature (Top).
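
    As a minimal illustration of the DIBL mechanism discussed above, the textbook exponential sub-threshold model below shows how drain bias lowers the effective threshold and raises the off-state current. The parameter values (i0, vth0, eta, n) are assumed for illustration and are not the paper's PTM-fitted compact model:

```python
import numpy as np

VT = 0.0259  # thermal voltage kT/q at ~300 K, in volts

def i_sub(vgs, vds, i0=1e-7, vth0=0.3, eta=0.08, n=1.4):
    """Textbook sub-threshold drain current with DIBL (hypothetical values).

    I_sub = I0 * exp((VGS - Vth) / (n*VT)) * (1 - exp(-VDS/VT)),
    with the threshold lowered by drain bias: Vth = Vth0 - eta*VDS.
    """
    vth = vth0 - eta * vds
    return i0 * np.exp((vgs - vth) / (n * VT)) * (1.0 - np.exp(-vds / VT))

# Off-state leakage (VGS = 0) grows with drain bias through DIBL.
leak_low, leak_high = i_sub(0.0, 0.5), i_sub(0.0, 1.0)
```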

  10. Prognostic value of severity by various visceral proteins in critically ill patients with SIRS during 7 days of stay.

    PubMed

    Bouharras-El Idrissi, Hicham; Molina-López, Jorge; Herrera-Quintana, Lourdes; Domínguez-García, Álvaro; Lobo-Támer, Gabriela; Pérez-Moreno, Irene; Pérez-de la Cruz, Antonio; Planells-Del Pozo, Elena

    2016-11-29

    Critically ill patients typically develop a catabolic stress state as a result of a systemic inflammatory response syndrome (SIRS) that alters clinical-nutritional biomarkers, increasing energy demands and nutritional requirements. The aim was to evaluate the status of albumin, prealbumin and transferrin in critically ill patients, and the association between these clinical-nutritional parameters and severity during a seven-day stay in the intensive care unit (ICU). Multicenter, prospective, observational and analytical follow-up study. A total of 115 critically ill subjects were included in this study. Clinical and nutritional parameters and severity were monitored at admission and on the seventh day of the ICU stay. A significant decrease in APACHE II and SOFA scores (p < 0.05) was observed over the evolution of the critically ill patients in the ICU. In general, patients showed alterations in most of the parameters analyzed. Albumin, prealbumin and transferrin levels were below reference values both at admission and on the 7th day in the ICU. A high percentage of patients presented an altered status of albumin (71.3%), prealbumin (84.3%) and transferrin (69.0%). At admission, 27% to 47% of patients with altered protein parameters had APACHE II scores above 18. The number of patients with altered protein parameters and APACHE II below 18 was significantly higher than the number of severe patients throughout the ICU stay (p < 0.01). In the multivariate analysis, low prealbumin status was the best predictor of critical severity (p < 0.05) both at admission and on the 7th day of the ICU stay. These results support including low prealbumin status as a severity predictor alongside the APACHE II scale, given the association found between severity and poor prealbumin status.

  11. The Possibility of a New Critical Language from the Sources of Jewish Negative Theology

    ERIC Educational Resources Information Center

    Gur-Ze'ev, Ilan

    2010-01-01

    A new critical language is possible yet its becoming is not guaranteed. Its roots and sources should be diverse, universal and Diasporic. Jewish negative theology is ultimately Diasporic and could become one of its edifying sources. Diaspora is not only an intellectual state, not necessarily collective but communal. One of the things that makes…

  12. JAMSS: proteomics mass spectrometry simulation in Java.

    PubMed

    Smith, Rob; Prince, John T

    2015-03-01

    Countless proteomics data processing algorithms have been proposed, yet few have been critically evaluated due to a lack of labeled data (data with known identities and quantities). Although labeling techniques exist, they are limited in terms of confidence and accuracy. In silico simulators have recently been used to create complex data with known identities and quantities. We propose the Java Mass Spectrometry Simulator (JAMSS): a fast, self-contained in silico simulator capable of generating simulated MS and LC-MS runs while providing meta information on the provenance of each generated signal. JAMSS improves upon previous in silico simulators in terms of its ease of installation, minimal parameters, graphical user interface, multithreading capability, retention time shift model, and reproducibility. The simulator outputs mzML 1.1.0. It is open source software licensed under the GPLv3. The software and source are available at https://github.com/optimusmoose/JAMSS.

  13. Development of thermal model to analyze thermal flux distribution in thermally enhanced machining of high chrome white cast iron

    NASA Astrophysics Data System (ADS)

    Ravi, A. M.; Murigendrappa, S. M.

    2018-04-01

    In recent times, thermally enhanced machining (TEM) has slowly been gearing up to cut hard metals such as high chrome white cast iron (HCWCI), which is impossible with conventional procedures. Setting suitable cutting parameters and positioning the heat source against the work appear to be critical for enhancing the machinability characteristics of the work material. In this research work, an oxy-LPG flame was used as the heat source and HCWCI as the workpiece. ANSYS-CFD-Flow software was used to develop a transient thermal model to analyze the thermal flux distribution on the work surface during TEM of HCWCI using cubic boron nitride (CBN) tools. A non-contact infrared thermal sensor was used to measure the surface temperature continuously at different positions, and the measurements were used to validate the thermal model results. The results confirm that the thermal model is a good predictive tool for thermal flux distribution analysis in the TEM process.
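
The physics underlying such a transient thermal model can be sketched with a minimal 1D explicit finite-difference simulation of conduction into a flame-heated surface. This is only an illustration of the method, not the paper's ANSYS model; the material properties and surface heat flux below are invented placeholders, not HCWCI data.

```python
import numpy as np

# Invented placeholder properties (NOT actual HCWCI data)
k = 30.0                        # thermal conductivity, W/(m K)
rho = 7600.0                    # density, kg/m^3
cp = 500.0                      # specific heat, J/(kg K)
alpha = k / (rho * cp)          # thermal diffusivity, m^2/s

L = 0.02                        # depth of the workpiece slice, m
n = 101
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha        # explicit stability: Fourier number 0.4 < 0.5

q_surf = 5e5                    # assumed surface heat flux from the flame, W/m^2
T = np.full(n, 300.0)           # initial temperature, K

for _ in range(2000):
    Tn = T.copy()
    # interior nodes: explicit FTCS update of the 1D heat equation
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    # heated surface node: half-cell energy balance with flux boundary condition
    T[0] = Tn[0] + alpha * dt / dx**2 * 2 * (Tn[1] - Tn[0] + q_surf * dx / k)
    # far boundary held at ambient temperature
    T[-1] = 300.0

print(f"surface temperature after {2000 * dt:.2f} s: {T[0]:.0f} K")
```

A commercial solver adds the 2D/3D geometry, temperature-dependent properties, and a moving heat source, but the time-stepping structure is the same.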

  14. Geolocation and Pointing Accuracy Analysis for the WindSat Sensor

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank J.; Purdy, William E.; Gaiser, Peter W.; Poe, Gene; Uliana, Enzo A.

    2006-01-01

    Geolocation and pointing accuracy analyses of the WindSat flight data are presented. The two topics were intertwined in the flight data analysis and will be addressed together. WindSat has no unusual geolocation requirements relative to other sensors, but its beam pointing knowledge accuracy is especially critical to support accurate polarimetric radiometry. Pointing accuracy was improved and verified using geolocation analysis in conjunction with scan bias analysis. Two methods were needed to properly identify and differentiate between data time tagging and pointing knowledge errors. Matchups comparing coastlines indicated in imagery data with their known geographic locations were used to identify geolocation errors. These coastline matchups showed possible pointing errors with ambiguities as to the true source of the errors. Scan bias analysis of U, the third Stokes parameter, and of vertical and horizontal polarizations provided measurement of pointing offsets resolving ambiguities in the coastline matchup analysis. Several geolocation and pointing bias sources were incrementally eliminated resulting in pointing knowledge and geolocation accuracy that met all design requirements.

  15. A Geophysical Flow Experiment in a Compressible Critical Fluid

    NASA Technical Reports Server (NTRS)

    Hegseth, John; Garcia, Laudelino

    1996-01-01

    The first objective of this experiment is to build an experimental system in which, in analogy to a geophysical system, a compressible fluid in a spherical annulus becomes radially stratified in density through an A.C. electric field. When this density gradient is demonstrated, the system will be augmented so that the fluid can be driven by heating and rotation and tested in preparation for a microgravity experiment. This apparatus consists of a spherical capacitor filled with critical fluid in a temperature controlled environment. To make the fluid critical, the apparatus will be operated near the critical pressure, critical density, and critical temperature of the fluid. This will result in a highly compressible fluid because of the properties of the fluid near its critical point. A high voltage A.C. source applied across the capacitor will create a spherically symmetric central force because of the dielectric properties of the fluid in an electric field gradient. This central force will induce a spherically symmetric density gradient that is analogous to a geophysical fluid system. To generate such a density gradient the system must be small (approx. 1 inch diameter). This small cell will also be capable of driving the critical fluid by heating and rotation. Since a spherically symmetric density gradient can only be made in microgravity, another small cell, of the same geometry, will be built that uses incompressible fluid. The driving of the fluid by rotation and heating in these small cells will be developed. The resulting instabilities from the driving in these two systems will then be studied. The second objective is to study the pattern forming instabilities (bifurcations) resulting from the well controlled experimental conditions in the critical fluid cell. This experiment will come close to producing conditions that are geophysically similar and will be studied as the driving parameters are changed.
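
The central force described above can be estimated on the back of an envelope: in a spherical capacitor the field falls off as 1/r², and a dielectric fluid experiences a body force proportional to the gradient of E², pulling it toward the stronger field at the inner electrode. The sketch below uses this simplified Kelvin-force picture; the electrode radii, voltage and permittivity are invented placeholders, not values from the experiment.

```python
eps0 = 8.854e-12          # vacuum permittivity, F/m
eps_r = 1.3               # assumed relative permittivity of the near-critical fluid
a, b = 0.010, 0.0127      # invented electrode radii, m (cell about 1 inch across)
V = 3000.0                # assumed rms voltage across the capacitor

def E(r):
    """Radial field magnitude in a spherical capacitor, a < r < b."""
    return V * a * b / ((b - a) * r**2)

def force_density(r, dr=1e-6):
    """Approximate dielectric (Kelvin) body force per unit volume,
    f_r = (eps0*(eps_r - 1)/2) * d(E^2)/dr  (negative = directed inward)."""
    return 0.5 * eps0 * (eps_r - 1) * (E(r + dr)**2 - E(r - dr)**2) / (2 * dr)

for r in (0.0105, 0.0115, 0.0125):
    print(f"r = {r*1e3:.2f} mm: E = {E(r):.3e} V/m, f_r = {force_density(r):.3e} N/m^3")
```

The inward-pointing (negative) force density plays the role of an effective radial gravity, which is why a highly compressible near-critical fluid stratifies in density under it.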

  16. Imaging stress redistribution in a high-stress mining environment using induced microseismicity and seismic tomography

    NASA Astrophysics Data System (ADS)

    Baig, A. M.; Urbancic, T.; Bosman, K.; Smith-Boughner, L.; Viegas, G. F.

    2016-12-01

    Underground excavation of ore tends to concentrate stress in the pillars of mines. As mining progresses, the stress concentrates in these pillars, resulting in potentially critical stress conditions that lead to concerns over personnel safety and have implications for efficient and effective extraction criteria. It therefore becomes critical for operations to manage this stress behaviour as extraction activities progress. In this study, we examine seismicity recorded with a full three-dimensional array consisting of single- and three-component accelerometers and geophones around the extraction volumes; these data formed the basis for characterizing stress variations. Specifically, we present an integrated study of the seismological properties of a sill pillar during the blasting of a stope to characterize how the stress evolves in the mine. Our results suggest that the seismicity itself reacts to the stress conditions of the mining and, through investigation of the source parameters, reveals how these events are being activated. Through consideration of both the source parameters and the inter-event times and distances, we arrive at a description of the deformation of the reservoir and are able to assess the role of stress during this process. Further resolution of the stress state in the mine is obtained through inversions of moment tensors on the highest-quality microseismic data, and a descriptive analysis of event clustering in space and time to resolve the dynamics of the stress orientations. To corroborate our inferences based on microseismicity, we use blasts recorded around the extraction volume to understand how stress manifests itself through P-wave velocity anomalies. We confirm the dynamics of the stress field observed from the microseismicity and show the destressing effect of blasting, coupled with stress migration to other parts of the sill pillar.

  17. Critical analysis of industrial electron accelerators

    NASA Astrophysics Data System (ADS)

    Korenev, S.

    2004-09-01

    A critical analysis of electron linacs for industrial applications (degradation of PTFE, curing of composites, modification of materials, sterilization and others) is presented in this report. The main physical requirements for industrial electron accelerators concern the variation of beam parameters, such as kinetic energy and beam power. Approaches to regulating these beam parameters are considered. The level of absorbed dose in the irradiated product and the throughput determine the main parameters of the electron accelerator. The type of an ideal electron linac for industrial applications is discussed.

  18. Impact of various operating modes on performance and emission parameters of small heat source

    NASA Astrophysics Data System (ADS)

    Vician, Peter; Holubčík, Michal; Palacka, Matej; Jandačka, Jozef

    2016-06-01

    This paper deals with the measurement of performance and emission parameters of a small heat source for the combustion of biomass in each of its operating modes. A pellet boiler with an output of 18 kW was used as the heat source. The work includes the design of an experimental device for measuring the impact of changes in air supply, and a method for controlling the power and emission parameters of heat sources for the combustion of woody biomass. The work describes the main factors that affect the combustion process and analyzes the measurements of emissions at the heat source. The results of the experiment give the performance and emission parameter values for the different operating modes of the boiler, which serve as a decisive factor in choosing the appropriate mode.

  19. Critical adsorption profiles around a sphere and a cylinder in a fluid at criticality: Local functional theory

    NASA Astrophysics Data System (ADS)

    Yabunaka, Shunsuke; Onuki, Akira

    2017-09-01

    We study universal critical adsorption on a solid sphere and a solid cylinder in a fluid at bulk criticality, where preferential adsorption occurs. We use a local functional theory proposed by Fisher et al. [M. E. Fisher and P. G. de Gennes, C. R. Acad. Sci. Paris Ser. B 287, 207 (1978); M. E. Fisher and H. Au-Yang, Physica A 101, 255 (1980), 10.1016/0378-4371(80)90112-0]. We calculate the mean order parameter profile ψ(r), where r is the distance from the sphere center and the cylinder axis, respectively. The resultant differential equation for ψ(r) is solved exactly around a sphere and numerically around a cylinder. A strong adsorption regime is realized except for very small surface field h1, where the surface order parameter ψ(a) is determined by h1 and is independent of the radius a. If r considerably exceeds a, ψ(r) decays as r^-(1+η) for a sphere and r^-(1+η)/2 for a cylinder in three dimensions, where η is the critical exponent of the order parameter correlation at bulk criticality.

  20. Turbulent mixing of a critical fluid: The non-perturbative renormalization

    NASA Astrophysics Data System (ADS)

    Hnatič, M.; Kalagov, G.; Nalimov, M.

    2018-01-01

    The non-perturbative renormalization group (NPRG) technique is applied to a stochastic model of a non-conserved scalar order parameter near its critical point, subject to turbulent advection. The compressible advecting flow is modeled by a random Gaussian velocity field with zero mean and correlation function ⟨υ_i υ_j⟩ ∼ (P⊥_ij + α P∥_ij)/k^(d+ζ). Depending on the relations between the parameters ζ, α and the space dimensionality d, the model reveals several types of scaling regimes. Some of them are well known (model A of equilibrium critical dynamics and a linear passive scalar field advected by a random turbulent flow), but there is a new nonequilibrium regime (universality class) associated with new nontrivial fixed points of the renormalization group equations. We have obtained the phase diagram (d, ζ) of possible scaling regimes in the system. The physical point d = 3, ζ = 4/3, corresponding to three-dimensional fully developed Kolmogorov turbulence, where critical fluctuations are irrelevant, is stable for α ≲ 2.26. Otherwise, in the case of "strong compressibility" α ≳ 2.26, the critical fluctuations of the order parameter become relevant for three-dimensional turbulence. Estimates of the critical exponents for each scaling regime are presented.

  1. A study on the seismic source parameters for earthquakes occurring in the southern Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Rhee, H. M.; Sheen, D. H.

    2015-12-01

    We investigated the characteristics of the seismic source parameters of the southern part of the Korean Peninsula for 599 events with ML ≥ 1.7 from 2001 to 2014. The data were carefully selected by visual inspection in the time and frequency domains. The data set consists of 5,093 S-wave trains on three-component seismograms recorded at broadband seismograph stations operated by the Korea Meteorological Administration and the Korea Institute of Geoscience and Mineral Resources. The corner frequency, stress drop, and moment magnitude of each event were measured using the modified method of Jo and Baag (2001), based on the methods of Snoke (1987) and Andrews (1986). We found that this method could improve the stability of the estimation of source parameters from the S-wave displacement spectrum through an iterative process. We then compared the source parameters with those obtained in previous studies and investigated the source scaling relationship and the regional variations of source parameters in the southern Korean Peninsula.

  2. Local tsunamis and earthquake source parameters

    USGS Publications Warehouse

    Geist, Eric L.; Dmowska, Renata; Saltzman, Barry

    1999-01-01

    This chapter establishes the relationship among earthquake source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during the earthquake rupture is modeled using elastic dislocation theory, for which the displacement field depends on the slip distribution, fault geometry, and the elastic response and properties of the medium. Specifically, nonlinear long-wave theory governs the propagation and run-up of tsunamis. A parametric study is devised to examine the relative importance of individual earthquake source parameters on local tsunamis, because the physics that describes tsunamis from generation through run-up is complex. Analysis of the source parameters of various tsunamigenic earthquakes has indicated that the details of the earthquake source, namely, nonuniform distribution of slip along the fault plane, have a significant effect on the local tsunami run-up. Numerical methods have been developed to address realistic bathymetric and shoreline conditions. The accuracy of determining the run-up on shore is directly dependent on the source parameters of the earthquake, which provide the initial conditions used for the hydrodynamic models.
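
The nonlinear long-wave (shallow-water) theory referred to above takes, in one horizontal dimension, the standard form below, with η the free-surface displacement, u the depth-averaged horizontal velocity, h the still-water depth, and g the gravitational acceleration:

```latex
\frac{\partial \eta}{\partial t} + \frac{\partial}{\partial x}\left[(h+\eta)\,u\right] = 0,
\qquad
\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} + g\,\frac{\partial \eta}{\partial x} = 0.
```

The first equation expresses mass conservation over the moving free surface; the second is the momentum balance, whose advective term u ∂u/∂x supplies the nonlinearity that matters near shore during run-up.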

  3. Microstructure based simulations for prediction of flow curves and selection of process parameters for inter-critical annealing in DP steel

    NASA Astrophysics Data System (ADS)

    Deepu, M. J.; Farivar, H.; Prahl, U.; Phanikumar, G.

    2017-04-01

    Dual phase steels are versatile advanced high strength steels used for sheet metal applications in the automotive industry. They also have potential for application in bulk components such as gears. Inter-critical annealing in dual phase steels is one of the crucial steps that determine the mechanical properties of the material. Selection of the process parameters for inter-critical annealing, in particular the inter-critical annealing temperature and time, is important as it plays a major role in determining the volume fractions of ferrite and martensite, which in turn determine the mechanical properties. Selecting these process parameters to obtain a particular required mechanical property requires a large number of experimental trials. Simulation of microstructure evolution and virtual compression/tensile testing can help in reducing the number of such experimental trials. In the present work, phase field modeling implemented in the commercial software Micress® is used to predict the microstructure evolution during inter-critical annealing. Virtual compression tests are performed on the simulated microstructure using the finite element method implemented in commercial software to obtain the effective flow curve of the macroscopic material. The flow curves obtained by simulation are validated experimentally with physical simulation in Gleeble® and compared with those obtained using the linear rule of mixture. The methodology could be used to determine the inter-critical annealing process parameters required for achieving a particular flow curve.
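
The linear rule of mixture mentioned above combines the constituent flow curves weighted by phase volume fraction. A minimal sketch follows; the Hollomon-type hardening constants for ferrite and martensite are invented placeholders, not values from the paper.

```python
# Linear rule of mixture for a dual-phase flow curve:
#   sigma_DP(eps) = V_m * sigma_martensite(eps) + (1 - V_m) * sigma_ferrite(eps)
# Hollomon hardening sigma = K * eps^n with invented placeholder constants.

def hollomon(K, n):
    return lambda eps: K * eps**n

sigma_ferrite = hollomon(K=800.0, n=0.25)      # MPa, assumed
sigma_martensite = hollomon(K=2200.0, n=0.10)  # MPa, assumed

def dual_phase_flow(eps, V_m):
    """Flow stress of the dual-phase aggregate at plastic strain eps, for
    martensite volume fraction V_m (which is what the inter-critical
    annealing temperature and time control)."""
    return V_m * sigma_martensite(eps) + (1 - V_m) * sigma_ferrite(eps)

for V_m in (0.2, 0.4, 0.6):
    print(f"V_m = {V_m:.1f}: sigma(0.05) = {dual_phase_flow(0.05, V_m):.0f} MPa")
```

Sweeping V_m this way is the cheap inverse of the paper's workflow: pick the target flow curve, read off the required martensite fraction, then choose the annealing temperature/time that produces it.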

  4. Systematic parameter estimation and sensitivity analysis using a multidimensional PEMFC model coupled with DAKOTA.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao Yang; Luo, Gang; Jiang, Fangming

    2010-05-01

    Current computational models for proton exchange membrane fuel cells (PEMFCs) include a large number of parameters such as boundary conditions, material properties, and numerous parameters used in sub-models for membrane transport, two-phase flow and electrochemistry. In order to successfully use a computational PEMFC model in design and optimization, it is important to identify critical parameters under a wide variety of operating conditions, such as relative humidity, current load, temperature, etc. Moreover, when experimental data is available in the form of polarization curves or local distribution of current and reactant/product species (e.g., O2, H2O concentrations), critical parameters can be estimated in order to enable the model to better fit the data. Sensitivity analysis and parameter estimation are typically performed using manual adjustment of parameters, which is also common in parameter studies. We present work to demonstrate a systematic approach based on using a widely available toolkit developed at Sandia called DAKOTA that supports many kinds of design studies, such as sensitivity analysis as well as optimization and uncertainty quantification. In the present work, we couple a multidimensional PEMFC model (which is being developed, tested and later validated in a joint effort by a team from Penn State Univ. and Sandia National Laboratories) with DAKOTA through the mapping of model parameters to system responses. Using this interface, we demonstrate the efficiency of performing simple parameter studies as well as identifying critical parameters using sensitivity analysis. Finally, we show examples of optimization and parameter estimation using the automated capability in DAKOTA.
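
The kind of sensitivity study such a toolkit automates can be sketched by hand: perturb each parameter one at a time and record the normalized change in a model response. The toy polarization model and all parameter values below are invented for illustration; they are not the PEMFC model from the paper.

```python
import math

def cell_voltage(p, i=1.0):
    """Toy polarization model (invented): open-circuit voltage minus
    activation, ohmic and concentration losses at current density i (A/cm^2)."""
    return (p["E_oc"]
            - p["b"] * math.log(i / p["i0"])      # activation (Tafel) loss
            - p["R"] * i                          # ohmic loss
            - p["m"] * math.exp(p["n"] * i))      # concentration loss

base = {"E_oc": 1.0, "b": 0.03, "i0": 1e-4, "R": 0.2, "m": 3e-5, "n": 8.0}

# One-at-a-time sensitivity: relative change in voltage per +5% parameter change
v0 = cell_voltage(base)
for name in base:
    p = dict(base)
    p[name] *= 1.05
    s = (cell_voltage(p) - v0) / (v0 * 0.05)   # normalized sensitivity coefficient
    print(f"{name:>5s}: {s:+.3f}")
```

One-at-a-time scans miss parameter interactions; that is exactly the gap that DAKOTA-style global sensitivity analysis and uncertainty quantification are designed to fill.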

  5. Topology versus Anderson localization: Nonperturbative solutions in one dimension

    NASA Astrophysics Data System (ADS)

    Altland, Alexander; Bagrets, Dmitry; Kamenev, Alex

    2015-02-01

    We present an analytic theory of quantum criticality in quasi-one-dimensional topological Anderson insulators. We describe these systems in terms of two parameters (g ,χ ) representing localization and topological properties, respectively. Certain critical values of χ (half-integer for Z classes, or zero for Z2 classes) define phase boundaries between distinct topological sectors. Upon increasing system size, the two parameters exhibit flow similar to the celebrated two-parameter flow of the integer quantum Hall insulator. However, unlike the quantum Hall system, an exact analytical description of the entire phase diagram can be given in terms of the transfer-matrix solution of corresponding supersymmetric nonlinear sigma models. In Z2 classes we uncover a hidden supersymmetry, present at the quantum critical point.

  6. Influence of heat conducting substrates on explosive crystallization in thin layers

    NASA Astrophysics Data System (ADS)

    Schneider, Wilhelm

    2017-09-01

    Crystallization in a thin, initially amorphous layer is considered. The layer is in thermal contact with a substrate of very large dimensions. The energy equation of the layer contains source and sink terms. The source term is due to liberation of latent heat in the crystallization process, while the sink term is due to conduction of heat into the substrate. To determine the latter, the heat diffusion equation for the substrate is solved by applying Duhamel's integral. Thus, the energy equation of the layer becomes a heat diffusion equation with a time integral as an additional term. The latter term indicates that the heat loss due to the substrate depends on the history of the process. To complete the set of equations, the crystallization process is described by a rate equation for the degree of crystallization. The governing equations are then transformed to a moving co-ordinate system in order to analyze crystallization waves that propagate with invariant properties. Dual solutions are found by an asymptotic expansion for large activation energies of molecular diffusion. By introducing suitable variables, the results can be presented in a universal form that comprises the influence of all non-dimensional parameters that govern the process. Of particular interest for applications is the prediction of a critical heat loss parameter for the existence of crystallization waves with invariant properties.

  7. High-speed Particle Image Velocimetry Near Surfaces

    PubMed Central

    Lu, Louise; Sick, Volker

    2013-01-01

    Multi-dimensional and transient flows play a key role in many areas of science, engineering, and health sciences but are often not well understood. The complex nature of these flows may be studied using particle image velocimetry (PIV), a laser-based imaging technique for optically accessible flows. Though many forms of PIV exist that extend the technique beyond the original planar two-component velocity measurement capabilities, the basic PIV system consists of a light source (laser), a camera, tracer particles, and analysis algorithms. The imaging and recording parameters, the light source, and the algorithms are adjusted to optimize the recording for the flow of interest and obtain valid velocity data. Common PIV investigations measure two-component velocities in a plane at a few frames per second. However, recent developments in instrumentation have facilitated high-frame rate (> 1 kHz) measurements capable of resolving transient flows with high temporal resolution. Therefore, high-frame rate measurements have enabled investigations on the evolution of the structure and dynamics of highly transient flows. These investigations play a critical role in understanding the fundamental physics of complex flows. A detailed description for performing high-resolution, high-speed planar PIV to study a transient flow near the surface of a flat plate is presented here. Details for adjusting the parameter constraints such as image and recording properties, the laser sheet properties, and processing algorithms to adapt PIV for any flow of interest are included. PMID:23851899
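
The core analysis step in PIV, cross-correlating an interrogation window between two frames to find the most probable particle displacement, can be sketched in a few lines of NumPy. The synthetic images and the known shift below are constructed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "particle" image and a copy shifted by a known displacement
win = 64
frame_a = rng.random((win, win))
dy, dx = 3, 5                          # known shift in pixels
frame_b = np.roll(frame_a, (dy, dx), axis=(0, 1))

# FFT-based cross-correlation of the two interrogation windows
fa = np.fft.fft2(frame_a - frame_a.mean())
fb = np.fft.fft2(frame_b - frame_b.mean())
corr = np.fft.ifft2(fa.conj() * fb).real

# Correlation-peak location gives the displacement (wrapped into [-win/2, win/2))
peak = np.unravel_index(np.argmax(corr), corr.shape)
shift = [(p + win // 2) % win - win // 2 for p in peak]
print("estimated displacement (dy, dx):", shift)   # -> [3, 5]
```

Real PIV processing adds sub-pixel peak fitting, window overlap, and outlier validation on top of this correlation kernel, and at kHz frame rates simply repeats it for every frame pair.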

  8. Experimental and computational correlation of fracture parameters KIc, JIc, and GIc for unimodular and bimodular graphite components

    NASA Astrophysics Data System (ADS)

    Bhushan, Awani; Panda, S. K.

    2018-05-01

    The influence of bimodularity (different stress-strain behaviour in tension and compression) on the fracture behaviour of graphite specimens has been studied with the fracture toughness (KIc), critical J-integral (JIc) and critical strain energy release rate (GIc) as the characterizing parameters. The bimodularity index (ratio of tensile Young's modulus to compressive Young's modulus) of the graphite specimens has been obtained from the normalized test data of tensile and compression experiments. Single edge notch bend (SENB) testing of pre-cracked specimens from the same lot has been carried out as per ASTM standard D7779-11 to determine the peak load and the critical fracture parameters KIc, GIc and JIc, using digital image correlation measurements of crack opening displacements. Weibull weakest-link theory has been used to evaluate the mean peak load, Weibull modulus and goodness of fit, employing the two-parameter least-squares method (LIN2) and the biased (MLE2-B) and unbiased (MLE2-U) maximum likelihood estimators. The stress-dependent elasticity problem of three-dimensional crack progression for the bimodular graphite components has been solved as an iterative finite element procedure. The crack characterizing parameters, critical stress intensity factor and critical strain energy release rate, have been estimated with the help of the Weibull distribution plot of peak load versus cumulative probability of failure. Experimental and computational fracture parameters have been compared qualitatively to describe the significance of bimodularity. The bimodular influence on the fracture behaviour of SENB graphite is reflected in the experimental evaluation of the GIc values only, which have been found to differ from the calculated JIc values. The numerical evaluation of the bimodular 3D J-integral value is found to be close to the GIc value, whereas the unimodular 3D J-value is nearer to the JIc value. The significant difference between the unimodular JIc and bimodular GIc indicates that GIc should be considered the standard fracture parameter for bimodular brittle specimens.
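
The Weibull analysis above can be sketched with the common linearized least-squares estimator: assign each sorted peak load a median-rank failure probability and fit ln(−ln(1−F)) against ln(load); the slope is the Weibull modulus. The peak-load data below are invented for illustration.

```python
import math

# Invented peak loads (N) from repeated SENB tests
loads = sorted([412, 388, 455, 430, 401, 468, 395, 442, 420, 436])
n = len(loads)

# Median-rank plotting positions F_i = (i - 0.3) / (n + 0.4)
xs = [math.log(p) for p in loads]
ys = [math.log(-math.log(1 - (i - 0.3) / (n + 0.4))) for i in range(1, n + 1)]

# Least-squares line: slope = Weibull modulus m, intercept gives the scale parameter
mx, my = sum(xs) / n, sum(ys) / n
m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx)**2 for x in xs)
scale = math.exp(mx - my / m)

print(f"Weibull modulus m = {m:.1f}, scale parameter = {scale:.0f} N")
```

This is the LIN2-style estimator named in the abstract; the maximum likelihood variants (MLE2-B, MLE2-U) instead maximize the Weibull log-likelihood and generally give a less biased modulus for small samples.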

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Çataltepe, Ö. Aslan, E-mail: ozdenaslan@yahoo.com, E-mail: ozden.aslan@gedik.edu.tr; Özdemir, Z. Güven, E-mail: zguvenozdemir@yahoo.com; Onbaşlı, Ü., E-mail: phonon@doruk.net.tr

    In this work, the effect of oxygen doping on the critical parameters of a mercury based superconducting sample, such as the critical transition temperature T{sub c}, critical magnetic field H{sub c}, and critical current density J{sub c}, has been investigated by magnetic susceptibility versus temperature (χ-T) and magnetization versus applied magnetic field (M-H) measurements and X-Ray Diffraction (XRD) patterns. It has been observed that regardless of the oxygen doping concentration, the mercury cuprate system possesses two intrinsic superconducting phases together, HgBa{sub 2}Ca{sub 2}Cu{sub 3}O{sub 8+x} and HgBa{sub 2}CaCu{sub 2}O{sub 6+x}. However, the highest T{sub c} has been determined for the optimally oxygen doped sample. Moreover, it has been revealed that the superconducting properties, crystal lattice parameters, coherence lengths ξ{sub ab} and ξ{sub c}, the anisotropy factor γ, etc. are very sensitive to the oxygen doping procedure. Hence, the results presented in this work enable one to obtain a mercury based superconductor with the most desirable critical and other parameters for theoretical and technological applications by adjusting the oxygen doping concentration.

  10. An Integrated Risk Management Model for Source Water Protection Areas

    PubMed Central

    Chiueh, Pei-Te; Shang, Wei-Ting; Lo, Shang-Lien

    2012-01-01

    Watersheds are recognized as the most effective management unit for the protection of water resources. For surface water supplies that use water from upstream watersheds, evaluating threats to water quality and implementing a watershed management plan are crucial for maintaining drinking water that is safe for humans. The aim of this article is to establish a risk assessment model that provides basic information for identifying critical pollutants and areas at high risk for degraded water quality. In this study, a quantitative risk model that uses hazard quotients for each water quality parameter was combined with a qualitative risk model that uses the relative risk level of potential pollution events in order to characterize the current condition and potential risk of watersheds providing drinking water. In a case study of the Taipei Source Water Area in northern Taiwan, total coliforms and total phosphorus were the top two pollutants of concern. Intensive tea-growing and recreational activities around the riparian zone may contribute the greatest pollution to the watershed. Our risk assessment tool may be enhanced by developing, recording, and updating information on pollution sources in the water supply watersheds. Moreover, management authorities could use the resultant information to create watershed risk management plans. PMID:23202770
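
The quantitative half of such a risk model, a hazard quotient per water-quality parameter, is simple to express: HQ = observed concentration / benchmark, with HQ > 1 flagging a parameter of concern. The concentrations and benchmarks below are illustrative placeholders, not the Taipei monitoring data.

```python
# Hazard quotient HQ = observed concentration / benchmark value;
# HQ > 1 flags a water-quality parameter of concern.
# All numbers below are invented for illustration.
observations = {                        # (observed, benchmark), mg/L unless noted
    "total_phosphorus": (0.06, 0.02),
    "total_coliforms": (250, 50),       # CFU/100 mL
    "ammonia_N": (0.08, 0.30),
    "turbidity": (1.5, 4.0),            # NTU
}

hq = {k: obs / bench for k, (obs, bench) in observations.items()}
# Parameters of concern, ranked from highest hazard quotient down
concerns = sorted(((v, k) for k, v in hq.items() if v > 1), reverse=True)

for v, k in concerns:
    print(f"{k}: HQ = {v:.1f}")
```

The study then combines this quantitative ranking with a qualitative relative-risk score for potential pollution events to map high-risk areas.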

  11. Automatic source camera identification using the intrinsic lens radial distortion

    NASA Astrophysics Data System (ADS)

    Choi, Kai San; Lam, Edmund Y.; Wong, Kenneth K. Y.

    2006-11-01

    Source camera identification refers to the task of matching digital images with the cameras that are responsible for producing these images. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped with the capability of producing watermarks for this purpose. In this paper, we demonstrate that it is possible to achieve a high rate of accuracy in the identification by noting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortions serve as unique fingerprints in the images. We extract, for each image, parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of a source camera identification with five cameras. The results show that this is a viable approach with high accuracy. Additionally, we also present results on how the error rates may change with images captured using various optical zoom levels, as zooming is commonly available in digital cameras.
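
The paper trains a support vector machine on lens-distortion features; as a self-contained stand-in, the sketch below classifies cameras with a much simpler nearest-centroid rule on the same kind of two-parameter feature vectors (radial distortion coefficients k1, k2). All feature values and camera names are synthetic.

```python
import math

# Synthetic (k1, k2) radial-distortion features for three hypothetical cameras.
training = {
    "camera_A": [(-0.12, 0.031), (-0.11, 0.029), (-0.13, 0.034)],
    "camera_B": [(0.05, -0.010), (0.06, -0.012), (0.04, -0.009)],
    "camera_C": [(-0.02, 0.002), (-0.01, 0.001), (-0.03, 0.004)],
}

def centroid(points):
    return tuple(sum(c) / len(points) for c in zip(*points))

centroids = {cam: centroid(pts) for cam, pts in training.items()}

def identify(feature):
    """Attribute an image to the camera whose distortion centroid is nearest."""
    return min(centroids, key=lambda cam: math.dist(feature, centroids[cam]))

print(identify((-0.115, 0.030)))   # -> camera_A
```

An SVM, as used in the paper, draws maximum-margin boundaries instead of relying on centroid distance, which matters once the per-camera feature clouds overlap (e.g. across optical zoom levels).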

  12. PyCoTools: A Python Toolbox for COPASI.

    PubMed

    Welsh, Ciaran M; Fullard, Nicola; Proctor, Carole J; Martinez-Guimera, Alvaro; Isfort, Robert J; Bascom, Charles C; Tasseff, Ryan; Przyborski, Stefan A; Shanley, Daryl P

    2018-05-22

    COPASI is an open source software package for constructing, simulating and analysing dynamic models of biochemical networks. COPASI is primarily intended to be used with a graphical user interface but often it is desirable to be able to access COPASI features programmatically, with a high level interface. PyCoTools is a Python package aimed at providing a high level interface to COPASI tasks with an emphasis on model calibration. PyCoTools enables the construction of COPASI models and the execution of a subset of COPASI tasks including time courses, parameter scans and parameter estimations. Additional 'composite' tasks which use COPASI tasks as building blocks are available for increasing parameter estimation throughput, performing identifiability analysis and performing model selection. PyCoTools supports exploratory data analysis on parameter estimation data to assist with troubleshooting model calibrations. We demonstrate PyCoTools by posing a model selection problem designed to showcase PyCoTools within a realistic scenario. The aim of the model selection problem is to test the feasibility of three alternative hypotheses in explaining experimental data derived from neonatal dermal fibroblasts in response to TGF-β over time. PyCoTools is used to critically analyse the parameter estimations and propose strategies for model improvement. PyCoTools can be downloaded from the Python Package Index (PyPI) using the command 'pip install pycotools' or directly from GitHub (https://github.com/CiaranWelsh/pycotools). Documentation at http://pycotools.readthedocs.io. Supplementary data are available at Bioinformatics.

  13. A probabilistic approach for the estimation of earthquake source parameters from spectral inversion

    NASA Astrophysics Data System (ADS)

    Supino, M.; Festa, G.; Zollo, A.

    2017-12-01

    The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture, moment, stress and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune's (1970) source model, and direct P- and S-waves propagating in a layered velocity model, characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum depends indeed on three source parameters, the seismic moment (through the low-frequency spectral level), the corner frequency (that is a proxy of the fault length) and the high-frequency decay parameter. These parameters are strongly correlated each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimations.In this work, the uncertainties are characterized adopting a probabilistic approach for the parameter estimation. Assuming an L2-norm based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function and then we explore the cost-function associated joint a-posteriori probability density function around such a minimum, to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining a deterministic minimization with a random exploration of the space (basin-hopping technique). The joint pdf is built from the misfit function using the maximum likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost-function. The numerical integration of the pdf finally provides mean, variance and correlation matrix associated with the set of best-fit parameters describing the model. 
Synthetic tests are performed to investigate the robustness of the method and uncertainty propagation from the data-space to the parameter space. Finally, the method is applied to characterize the source parameters of the earthquakes occurring during the 2016-2017 Central Italy sequence, with the goal of investigating the source parameter scaling with magnitude.
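    A minimal sketch of the global-exploration step described above, using SciPy's basin-hopping routine; the toy Brune-type spectral model, parameter values and noise level here are illustrative assumptions, not the paper's data or implementation:

```python
import numpy as np
from scipy.optimize import basinhopping

rng = np.random.default_rng(0)
f = np.logspace(-1, 2, 200)  # frequency band (Hz)

def log_spectrum(p, f):
    # toy Brune-type displacement spectrum in log10 units:
    # p = (log10 spectral level, corner frequency fc, high-frequency decay gamma)
    log_om0, fc, gamma = p
    return log_om0 - np.log10(1.0 + (f / fc) ** gamma)

# synthetic "observed" spectrum with known parameters plus noise
obs = log_spectrum([2.0, 5.0, 2.0], f) + 0.05 * rng.standard_normal(f.size)

def misfit(p):
    # L2-norm cost; large penalty keeps fc and gamma positive
    if p[1] <= 0 or p[2] <= 0:
        return 1e6
    return np.sum((obs - log_spectrum(p, f)) ** 2)

# basin hopping: random jumps in parameter space + deterministic local minimization
res = basinhopping(misfit, x0=[0.0, 1.0, 1.5], niter=50, seed=1,
                   minimizer_kwargs={"method": "Nelder-Mead"})
print(res.x)  # best-fit parameters, close to (2.0, 5.0, 2.0)
```

    In the full method, a grid around this minimum would then be used to integrate the a posteriori pdf and extract the parameter covariances.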

  14. Negative effects of item repetition on source memory.

    PubMed

    Kim, Kyungmi; Yi, Do-Joon; Raye, Carol L; Johnson, Marcia K

    2012-08-01

    In the present study, we explored how item repetition affects source memory for new item-feature associations (picture-location or picture-color). We presented line drawings varying numbers of times in Phase 1. In Phase 2, each drawing was presented once with a critical new feature. In Phase 3, we tested memory for the new source feature of each item from Phase 2. Experiments 1 and 2 demonstrated and replicated the negative effects of item repetition on incidental source memory. Prior item repetition also had a negative effect on source memory when different source dimensions were used in Phases 1 and 2 (Experiment 3) and when participants were explicitly instructed to learn source information in Phase 2 (Experiments 4 and 5). Importantly, when the order between Phases 1 and 2 was reversed, such that item repetition occurred after the encoding of critical item-source combinations, item repetition no longer affected source memory (Experiment 6). Overall, our findings did not support predictions based on item predifferentiation, within-dimension source interference, or general interference from multiple traces of an item. Rather, the findings were consistent with the idea that prior item repetition reduces attention to subsequent presentations of the item, decreasing the likelihood that critical item-source associations will be encoded.

  15. Parameter optimization in biased decoy-state quantum key distribution with both source errors and statistical fluctuations

    NASA Astrophysics Data System (ADS)

    Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin

    2017-10-01

    The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model of full parameter optimization in biased decoy-state QKD with phase-randomized sources. We then apply this model to simulations of two widely used sources: the weak coherent source (WCS) and the heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. Moreover, when source errors and statistical fluctuations are taken into account, the performance of decoy-state QKD using an HSPS suffers less than that of decoy-state QKD using a WCS.

  16. Joint Inversion of Earthquake Source Parameters with local and teleseismic body waves

    NASA Astrophysics Data System (ADS)

    Chen, W.; Ni, S.; Wang, Z.

    2011-12-01

    In the classical source parameter inversion algorithm of CAP (Cut and Paste method, by Zhao and Helmberger), waveform data at near distances (typically less than 500 km) are partitioned into Pnl and surface waves to account for uncertainties in the crustal models and the different amplitude weights of body and surface waves. The classical CAP algorithm has proven effective for resolving source parameters (focal mechanism, depth and moment) for earthquakes well recorded on a relatively dense seismic network. However, for regions covered by sparse stations, it is challenging to achieve precise source parameters. In this case, a moderate earthquake of ~M6 is usually recorded by only one or two local stations with epicentral distances less than 500 km. Fortunately, an earthquake of ~M6 can be well recorded on global seismic networks. Since the ray paths for teleseismic and local body waves sample different portions of the focal sphere, combining teleseismic and local body wave data helps constrain source parameters better. Here we present a new CAP method (CAPjoint), which exploits both teleseismic body waveforms (P and SH waves) and local waveforms (Pnl, Rayleigh and Love waves) to determine source parameters. For an earthquake in Nevada that is well recorded by a dense local network (USArray stations), we compare the results from CAPjoint with those from the traditional CAP method involving only local waveforms, and explore the efficiency with bootstrapping statistics to show that the results derived by CAPjoint are stable and reliable. Even with only one local station included in the joint inversion, the accuracy of source parameters such as moment and strike can be significantly improved.

  17. Analysis of Artificial Neural Network in Erosion Modeling: A Case Study of Serang Watershed

    NASA Astrophysics Data System (ADS)

    Arif, N.; Danoedoro, P.; Hartono

    2017-12-01

    Erosion modeling is an important measuring tool for both land users and decision makers to evaluate land cultivation, and thus it is necessary to have a model that represents the actual reality. Erosion models are complex because of uncertain data from different sources and processing procedures. Artificial neural networks can be relied on for complex and non-linear data processing such as erosion data. The main difficulty in artificial neural network training is the determination of the value of each network input parameter, i.e. the number of hidden layers, learning rate, momentum, and RMS error threshold. This study tested the capability of artificial neural networks in the prediction of erosion risk with several input parameters through multiple simulations to get good classification results. The model was implemented in the Serang Watershed, Kulonprogo, Yogyakarta, which is one of the critical potential watersheds in Indonesia. The simulation results showed that the number of iterations had a significant effect on the accuracy compared to the other parameters. A small number of iterations can produce good accuracy if the combination of the other parameters is right. In this case, one hidden layer was sufficient to produce good accuracy. The highest training accuracy achieved in this study was 99.32%, which occurred in the ANN 14 simulation with a combination of network input parameters of 1 HL; LR 0.01; M 0.5; RMS 0.0001, and 15000 iterations. The ANN training accuracy was not influenced by the number of channels, namely the input dataset (erosion factors) or the data dimensions; rather, it was determined by changes in the network parameters.
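    The training parameters named above (hidden-layer size, learning rate, momentum, RMS error threshold, iteration count) can be made concrete with a minimal one-hidden-layer perceptron in plain NumPy. The XOR-style toy data and the parameter values in the call are illustrative assumptions, not the study's dataset or exact configuration:

```python
import numpy as np

rng = np.random.default_rng(42)

def train_mlp(X, y, n_hidden=8, lr=0.01, momentum=0.5,
              rms_target=1e-4, max_iter=15000):
    """One-hidden-layer perceptron trained by full-batch gradient descent
    with momentum; stops early when the RMS error drops below rms_target."""
    n_in = X.shape[1]
    W1 = rng.normal(0, 0.5, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.5, (n_hidden, 1));    b2 = np.zeros(1)
    vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
    vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)
    rms = np.inf
    for _ in range(max_iter):
        h = np.tanh(X @ W1 + b1)                   # hidden layer
        out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2))) # sigmoid output
        err = out - y
        rms = np.sqrt(np.mean(err ** 2))
        if rms < rms_target:
            break
        # backpropagation of the squared-error gradient
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * (1 - h ** 2)
        # momentum updates
        vW2 = momentum * vW2 - lr * (h.T @ d_out); W2 += vW2
        vb2 = momentum * vb2 - lr * d_out.sum(0);  b2 += vb2
        vW1 = momentum * vW1 - lr * (X.T @ d_h);   W1 += vW1
        vb1 = momentum * vb1 - lr * d_h.sum(0);    b1 += vb1
    return (W1, b1, W2, b2), rms

# toy non-linear classification stand-in for "erosion risk" classes
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])
# toy-friendly values; the study's best combination was
# 1 hidden layer, LR 0.01, momentum 0.5, RMS 0.0001, 15000 iterations
params, rms = train_mlp(X, y, n_hidden=8, lr=0.1, momentum=0.5,
                        rms_target=0.05, max_iter=15000)
print(rms)
```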

  18. Reduced order modelling in searches for continuous gravitational waves - I. Barycentring time delays

    NASA Astrophysics Data System (ADS)

    Pitkin, M.; Doolan, S.; McMenamin, L.; Wette, K.

    2018-06-01

    The frequencies and phases of emission from extra-solar sources measured by Earth-bound observers are modulated by the motions of the observer with respect to the source, and through relativistic effects. These modulations depend critically on the source's sky-location. Precise knowledge of the modulations is required to coherently track the source's phase over long observations, for example, in pulsar timing, or in searches for continuous gravitational waves. The modulations can be modelled as sky-location- and time-dependent time delays that convert arrival times at the observer to the inertial frame of the source, which can often be the Solar system barycentre. We study the use of reduced order modelling for speeding up the calculation of this time delay for any sky-location. We find that the time delay model can be decomposed into just four basis vectors, with which the delay for any sky-location can be reconstructed to sub-nanosecond accuracy. When compared to standard routines for time delay calculation in gravitational wave searches, using the reduced basis can lead to speed-ups of a factor of 30. We have also studied components of time delays for sources in binary systems. Assuming eccentricities <0.25, we can reconstruct the delays to within hundreds of nanoseconds, with best-case speed-ups of a factor of 10, or factors of two when interpolating the basis for different orbital periods or time stamps. In long-duration phase-coherent searches for sources with sky-position uncertainties, or binary parameter uncertainties, these speed-ups could allow enhancements in their scopes without large additional computational burdens.
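    The reduced-basis idea can be sketched in a toy setting: assume a simplified circular observer ephemeris and only the geometric (Roemer) part of the delay, rather than the full barycentring model with relativistic terms. An SVD of delay curves for many sky positions then yields a tiny basis that reconstructs the delay for an unseen sky position:

```python
import numpy as np

rng = np.random.default_rng(1)
c = 299792458.0           # speed of light (m/s)
AU = 1.495978707e11       # astronomical unit (m)
t = np.linspace(0.0, 365.25 * 86400.0, 500)   # one year of time stamps (s)
w = 2 * np.pi / (365.25 * 86400.0)
# toy circular ephemeris of the observer about the barycentre
r = np.stack([AU * np.cos(w * t), AU * np.sin(w * t), np.zeros_like(t)], axis=1)

def roemer_delay(alpha, delta):
    # geometric delay r.n/c for a source at right ascension alpha, declination delta
    n = np.array([np.cos(delta) * np.cos(alpha),
                  np.cos(delta) * np.sin(alpha),
                  np.sin(delta)])
    return r @ n / c

# training set: delay curves for many random sky positions
train = np.stack([roemer_delay(rng.uniform(0, 2 * np.pi),
                               rng.uniform(-np.pi / 2, np.pi / 2))
                  for _ in range(200)])

# the whole family is spanned by a handful of singular vectors
U, s, Vt = np.linalg.svd(train, full_matrices=False)
basis = Vt[:3]

# reconstruct the delay for an unseen sky position from the reduced basis
d = roemer_delay(1.234, -0.56)
coeff = basis @ d          # projection onto the orthonormal basis
d_rec = coeff @ basis
print(np.max(np.abs(d - d_rec)))   # residual far below a nanosecond
```

    In this toy the delay is exactly a linear combination of the ephemeris components, so the basis is tiny by construction; the paper's four-vector result includes the additional relativistic contributions.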

  19. The Finite-Size Scaling Relation for the Order-Parameter Probability Distribution of the Six-Dimensional Ising Model

    NASA Astrophysics Data System (ADS)

    Merdan, Ziya; Karakuş, Özlem

    2016-11-01

    The six-dimensional Ising model with nearest-neighbor pair interactions has been simulated and verified numerically on the Creutz Cellular Automaton by using five-bit demons near the infinite-lattice critical temperature with the linear dimensions L=4,6,8,10. The order-parameter probability distribution for the six-dimensional Ising model has been calculated at the critical temperature. The constants of the analytical function have been estimated by fitting to the probability function obtained numerically at the finite-size critical point.

  20. Design and Implementation of a Web-Based Reporting and Benchmarking Center for Inpatient Glucometrics

    PubMed Central

    Schnipper, Jeffrey Lawrence; Messler, Jordan; Ramos, Pedro; Kulasa, Kristen; Nolan, Ann; Rogers, Kendall

    2014-01-01

    Background: Insulin is a top source of adverse drug events in the hospital, and glycemic control is a focus of improvement efforts across the country. Yet, the majority of hospitals have no data to gauge their performance on glycemic control, hypoglycemia rates, or hypoglycemic management. Current tools to outsource glucometrics reports are limited in availability or function. Methods: Society of Hospital Medicine (SHM) faculty designed and implemented a web-based data and reporting center that calculates glucometrics on blood glucose data files securely uploaded by users. Unit labels, care type (critical care, non–critical care), and unit type (eg, medical, surgical, mixed, pediatrics) are defined on upload allowing for robust, flexible reporting. Reports for any date range, care type, unit type, or any combination of units are available on demand for review or downloading into a variety of file formats. Four reports with supporting graphics depict glycemic control, hypoglycemia, and hypoglycemia management by patient day or patient stay. Benchmarking and performance ranking reports are generated periodically for all hospitals in the database. Results: In all, 76 hospitals have uploaded at least 12 months of data for non–critical care areas and 67 sites have uploaded critical care data. Critical care benchmarking reveals wide variability in performance. Some hospitals achieve top quartile performance in both glycemic control and hypoglycemia parameters. Conclusions: This new web-based glucometrics data and reporting tool allows hospitals to track their performance with a flexible reporting system, and provides them with external benchmarking. Tools like this help to establish standardized glucometrics and performance standards. PMID:24876426

  1. Design and implementation of a web-based reporting and benchmarking center for inpatient glucometrics.

    PubMed

    Maynard, Greg; Schnipper, Jeffrey Lawrence; Messler, Jordan; Ramos, Pedro; Kulasa, Kristen; Nolan, Ann; Rogers, Kendall

    2014-07-01

    Insulin is a top source of adverse drug events in the hospital, and glycemic control is a focus of improvement efforts across the country. Yet, the majority of hospitals have no data to gauge their performance on glycemic control, hypoglycemia rates, or hypoglycemic management. Current tools to outsource glucometrics reports are limited in availability or function. Society of Hospital Medicine (SHM) faculty designed and implemented a web-based data and reporting center that calculates glucometrics on blood glucose data files securely uploaded by users. Unit labels, care type (critical care, non-critical care), and unit type (eg, medical, surgical, mixed, pediatrics) are defined on upload allowing for robust, flexible reporting. Reports for any date range, care type, unit type, or any combination of units are available on demand for review or downloading into a variety of file formats. Four reports with supporting graphics depict glycemic control, hypoglycemia, and hypoglycemia management by patient day or patient stay. Benchmarking and performance ranking reports are generated periodically for all hospitals in the database. In all, 76 hospitals have uploaded at least 12 months of data for non-critical care areas and 67 sites have uploaded critical care data. Critical care benchmarking reveals wide variability in performance. Some hospitals achieve top quartile performance in both glycemic control and hypoglycemia parameters. This new web-based glucometrics data and reporting tool allows hospitals to track their performance with a flexible reporting system, and provides them with external benchmarking. Tools like this help to establish standardized glucometrics and performance standards. © 2014 Diabetes Technology Society.

  2. The role of plasma/neutral source and loss processes in shaping the giant planet magnetospheres

    NASA Astrophysics Data System (ADS)

    Delamere, P. A.

    2014-12-01

    The giant planet magnetospheres are filled with neutral and ionized gases originating from satellites orbiting deep within the magnetosphere. The complex chemical and physical pathways for the flow of mass and energy in this partially ionized plasma environment are critical for understanding magnetospheric dynamics. The flow of mass at Jupiter and Saturn begins, primarily, with neutral gases emanating from Io (~1000 kg/s) and Enceladus (~200 kg/s). In addition to ionization losses, the neutral gases are absorbed by the planet, its rings, or escape at high speeds from the magnetosphere via charge exchange reactions. The net result is a centrifugally confined torus of plasma that is transported radially outward, distorting the magnetic field into a magnetodisc configuration. Ultimately the plasma is lost to the solar wind. A critical parameter for shaping the magnetodisc and determining its dynamics is the radial plasma mass transport rate (~500 kg/s and ~50 kg/s for Jupiter and Saturn respectively). Given the plasma transport rates, several simple properties of the giant magnetodiscs can be estimated, including the physical scale of the magnetosphere, the magnetic flux transport, and the magnitude of azimuthal magnetic field bendback. We will discuss transport-related magnetic flux conservation and the mystery of plasma heating, two critical issues for shaping the giant planet magnetospheres.

  3. Critical behavior of subcellular density organization during neutrophil activation and migration.

    PubMed

    Baker-Groberg, Sandra M; Phillips, Kevin G; Healy, Laura D; Itakura, Asako; Porter, Juliana E; Newton, Paul K; Nan, Xiaolin; McCarty, Owen J T

    2015-12-01

    Physical theories of active matter continue to provide a quantitative understanding of dynamic cellular phenomena, including cell locomotion. Although various investigations of the rheology of cells have identified important viscoelastic and traction force parameters for use in these theoretical approaches, a key variable has remained elusive both in theoretical and experimental approaches: the spatiotemporal behavior of the subcellular density. The evolution of the subcellular density has been qualitatively observed for decades as it provides the source of image contrast in label-free imaging modalities (e.g., differential interference contrast, phase contrast) used to investigate cellular specimens. While these modalities directly visualize cell structure, they do not provide quantitative access to the structures being visualized. We present an established quantitative imaging approach, non-interferometric quantitative phase microscopy, to elucidate the subcellular density dynamics in neutrophils undergoing chemokinesis following uniform bacterial peptide stimulation. Through this approach, we identify a power law dependence of the neutrophil mean density on time with a critical point, suggesting a critical density is required for motility on 2D substrates. Next we elucidate a continuum law relating mean cell density, area, and total mass that is conserved during neutrophil polarization and migration. Together, our approach and quantitative findings will enable investigators to define the physics coupling cytoskeletal dynamics with subcellular density dynamics during cell migration.

  4. Critical behavior of subcellular density organization during neutrophil activation and migration

    PubMed Central

    Baker-Groberg, Sandra M.; Phillips, Kevin G.; Healy, Laura D.; Itakura, Asako; Porter, Juliana E.; Newton, Paul K.; Nan, Xiaolin; McCarty, Owen J.T.

    2015-01-01

    Physical theories of active matter continue to provide a quantitative understanding of dynamic cellular phenomena, including cell locomotion. Although various investigations of the rheology of cells have identified important viscoelastic and traction force parameters for use in these theoretical approaches, a key variable has remained elusive both in theoretical and experimental approaches: the spatiotemporal behavior of the subcellular density. The evolution of the subcellular density has been qualitatively observed for decades as it provides the source of image contrast in label-free imaging modalities (e.g., differential interference contrast, phase contrast) used to investigate cellular specimens. While these modalities directly visualize cell structure, they do not provide quantitative access to the structures being visualized. We present an established quantitative imaging approach, non-interferometric quantitative phase microscopy, to elucidate the subcellular density dynamics in neutrophils undergoing chemokinesis following uniform bacterial peptide stimulation. Through this approach, we identify a power law dependence of the neutrophil mean density on time with a critical point, suggesting a critical density is required for motility on 2D substrates. Next we elucidate a continuum law relating mean cell density, area, and total mass that is conserved during neutrophil polarization and migration. Together, our approach and quantitative findings will enable investigators to define the physics coupling cytoskeletal dynamics with subcellular density dynamics during cell migration. PMID:26640599

  5. Modelling short channel mosfets for use in VLSI

    NASA Technical Reports Server (NTRS)

    Klafter, Alex; Pilorz, Stuart; Polosa, Rosa Loguercio; Ruddock, Guy; Smith, Andrew

    1986-01-01

    In an investigation of metal oxide semiconductor field effect transistor (MOSFET) devices, a one-dimensional mathematical model of device dynamics was prepared, from which an accurate and computationally efficient drain current expression could be derived for subsequent parameter extraction. While a critical review revealed weaknesses in existing 1-D models (Pao-Sah, Pierret-Shields, Brews, and Van de Wiele), this new model in contrast was found to allow all the charge distributions to be continuous, to retain the inversion layer structure, and to include the contribution of current from the pinched-off part of the device. The model allows the source and drain to operate in different regimes. Numerical algorithms used for the evaluation of surface potentials in the various models are presented.

  6. Critical review of the methods for monitoring of lithium-ion batteries in electric and hybrid vehicles

    NASA Astrophysics Data System (ADS)

    Waag, Wladislaw; Fleischer, Christian; Sauer, Dirk Uwe

    2014-07-01

    Lithium-ion battery packs in hybrid and pure electric vehicles are always equipped with a battery management system (BMS). The BMS consists of hardware and software for battery management including, among others, algorithms determining battery states. The continuous determination of battery states during operation is called battery monitoring. In this paper, the methods for monitoring of the battery state of charge, capacity, impedance parameters, available power, state of health, and remaining useful life are reviewed with the focus on elaboration of their strengths and weaknesses for the use in on-line BMS applications. To this end, more than 350 sources including scientific and technical literature are studied and the respective approaches are classified in various groups.

  7. The preparation of calcium superoxide from calcium peroxide diperoxyhydrate

    NASA Technical Reports Server (NTRS)

    Ballou, E. V.; Wood, P. C.; Spitze, L. A.; Wydeven, T.

    1977-01-01

    There is interest in solid materials containing a high percentage of stored oxygen for use in emergency breathing apparatus for miners and as auxiliary oxygen sources for astronauts. In theory, the amount of available oxygen in calcium superoxide, Ca(O2)2, is higher than in potassium superoxide, KO2, and its availability during use should be unhindered by the formation of a low-melting, hydrous coating. The decomposition of solid calcium peroxide diperoxyhydrate, CaO2·2H2O2, has been studied using an apparatus which allows good control of the critical reaction parameters. Samples have been prepared showing apparent superoxide contents in excess of those previously reported and higher than the theoretical 58.4% expected from a disproportionation reaction.

  8. Phase diagram and universality of the Lennard-Jones gas-liquid system.

    PubMed

    Watanabe, Hiroshi; Ito, Nobuyasu; Hu, Chin-Kun

    2012-05-28

    The gas-liquid phase transition of the three-dimensional Lennard-Jones particles system is studied by molecular dynamics simulations. The gas and liquid densities in the coexisting state are determined with high accuracy. The critical point is determined by the block density analysis of the Binder parameter with the aid of the law of rectilinear diameter. From the critical behavior of the gas-liquid coexisting density, the critical exponent of the order parameter is estimated to be β = 0.3285(7). Surface tension is estimated from interface broadening behavior due to capillary waves. From the critical behavior of the surface tension, the critical exponent of the correlation length is estimated to be ν = 0.63(4). The obtained values of β and ν are consistent with those of the Ising universality class.
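    The determination of critical parameters from coexisting densities, via the law of rectilinear diameter and the order-parameter scaling mentioned above, can be sketched as a joint fit. The synthetic coexistence data and "true" values below are illustrative stand-ins, not the paper's simulation results:

```python
import numpy as np
from scipy.optimize import curve_fit

# synthetic coexistence densities with assumed critical parameters (LJ reduced units)
Tc0, rhoc0, beta0 = 1.312, 0.316, 0.3285
T = np.linspace(0.70, 1.20, 25)
diam = rhoc0 + 0.05 * (Tc0 - T)          # law of rectilinear diameter
half = 0.5 * 0.55 * (Tc0 - T) ** beta0   # half the order parameter rho_l - rho_g
rho_l, rho_g = diam + half, diam - half

def model(T, Tc, rhoc, A, B, beta):
    # joint model for the two branches:
    # (rho_l + rho_g)/2 = rhoc + A (Tc - T),  rho_l - rho_g = B (Tc - T)^beta
    d = rhoc + A * (Tc - T)
    h = 0.5 * B * (Tc - T) ** beta
    return np.concatenate([d + h, d - h])

# bounds keep Tc above the sampled temperatures so (Tc - T)^beta stays real
popt, _ = curve_fit(model, T, np.concatenate([rho_l, rho_g]),
                    p0=[1.35, 0.30, 0.04, 0.50, 0.30],
                    bounds=([1.21, 0.1, 0.0, 0.1, 0.1],
                            [2.00, 0.6, 0.5, 2.0, 1.0]))
print(popt[0], popt[1], popt[4])  # recovered Tc, rho_c and exponent beta
```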

  9. Odd-Parity Superconductivity near an Inversion Breaking Quantum Critical Point in One Dimension

    DOE PAGES

    Ruhman, Jonathan; Kozii, Vladyslav; Fu, Liang

    2017-05-31

    In this work, we study how an inversion-breaking quantum critical point affects the ground state of a one-dimensional electronic liquid with repulsive interaction and spin-orbit coupling. We find that regardless of the interaction strength, the critical fluctuations always lead to a gap in the electronic spin sector. The origin of the gap is a two-particle backscattering process, which becomes relevant due to renormalization of the Luttinger parameter near the critical point. The resulting spin-gapped state is topological and can be considered as a one-dimensional version of a spin-triplet superconductor. Interestingly, in the case of a ferromagnetic critical point, the Luttinger parameter is renormalized in the opposite manner, such that the system remains nonsuperconducting.

  10. Coupled petrological-geodynamical modeling of a compositionally heterogeneous mantle plume

    NASA Astrophysics Data System (ADS)

    Rummel, Lisa; Kaus, Boris J. P.; White, Richard W.; Mertz, Dieter F.; Yang, Jianfeng; Baumann, Tobias S.

    2018-01-01

    Self-consistent geodynamic modeling that includes melting is challenging as the chemistry of the source rocks continuously changes as a result of melt extraction. Here, we describe a new method to study the interaction between physical and chemical processes in an uprising heterogeneous mantle plume by combining a geodynamic code with a thermodynamic modeling approach for magma generation and evolution. We pre-computed hundreds of phase diagrams, each of them for a different chemical system. After melt is extracted, the phase diagram with the closest bulk rock chemistry to the depleted source rock is updated locally. The petrological evolution of rocks is tracked via evolving chemical compositions of source rocks and extracted melts using twelve oxide compositional parameters. As a result, a wide variety of newly generated magmatic rocks can in principle be produced from mantle rocks with different degrees of depletion. The results show that a variable geothermal gradient, the amount of extracted melt and plume excess temperature affect the magma production and chemistry by influencing decompression melting and the depletion of rocks. Decompression melting is facilitated by a shallower lithosphere-asthenosphere boundary, and an increase in the amount of extracted magma is induced by a lower critical melt fraction for melt extraction and/or higher plume temperatures. Increasing the critical melt fraction delays the extraction of melts triggered by decompression and slows down the depletion of the metasomatized mantle. Melt compositional trends are used to determine melting-related processes by focusing on the K2O/Na2O ratio as an indicator of the rock type that has melted. Thus, a step-like profile in K2O/Na2O might be explained by a transition between melting metasomatized and pyrolitic mantle components, reproducible through numerical modeling of a heterogeneous asthenospheric mantle source.
A potential application of the developed method is shown for the West Eifel volcanic field.

  11. Very Large Array OH Zeeman Observations of the Star-forming Region S88B

    NASA Astrophysics Data System (ADS)

    Sarma, A. P.; Brogan, C. L.; Bourke, T. L.; Eftimova, M.; Troland, T. H.

    2013-04-01

    We present observations of the Zeeman effect in OH thermal absorption main lines at 1665 and 1667 MHz taken with the Very Large Array toward the star-forming region S88B. The OH absorption profiles toward this source are complicated, and contain several blended components toward a number of positions. Almost all of the OH absorbing gas is located in the eastern parts of S88B, toward the compact continuum source S88B-2 and the eastern parts of the extended continuum source S88B-1. The ratio of 1665/1667 MHz OH line intensities indicates the gas is likely highly clumped, in agreement with other molecular emission line observations in the literature. S88-B appears to present a similar geometry to the well-known star-forming region M17, in that there is an edge-on eastward progression from ionized to molecular gas. The detected magnetic fields appear to mirror this eastward transition; we detected line-of-sight magnetic fields ranging from 90 to 400 μG, with the lowest values of the field to the southwest of the S88B-1 continuum peak, and the highest values to its northeast. We used the detected fields to assess the importance of the magnetic field in S88B by a number of methods; we calculated the ratio of thermal to magnetic pressures, we calculated the critical field necessary to completely support the cloud against self-gravity and compared it to the observed field, and we calculated the ratio of mass to magnetic flux in terms of the critical value of this parameter. All these methods indicated that the magnetic field in S88B is dynamically significant, and should provide an important source of support against gravity. Moreover, the magnetic energy density is in approximate equipartition with the turbulent energy density, again pointing to the importance of the magnetic field in this region.

  12. New Comment on Gibbs Density Surface of Fluid Argon: Revised Critical Parameters, L. V. Woodcock, Int. J. Thermophys. (2014) 35, 1770-1784

    NASA Astrophysics Data System (ADS)

    Umirzakov, I. H.

    2018-01-01

    The author comments on an article by Woodcock (Int J Thermophys 35:1770-1784, 2014), who investigates the idea of a critical line instead of a single critical point using the example of argon. In the introduction, Woodcock states that "The Van der Waals critical point does not comply with the Gibbs phase rule. Its existence is based upon a hypothesis rather than a thermodynamic definition". The present comment is a response to that statement. The comment demonstrates mathematically that the critical point is not merely based on a hypothesis used to define the values of the two parameters of the Van der Waals equation of state. Instead, the author argues that a critical point is a direct consequence of the thermodynamic phase equilibrium conditions, which result in a single critical point. It is shown that these conditions require the first and second partial derivatives of pressure with respect to volume at constant temperature to vanish at the critical point, which are the usual conditions for the existence of a critical point.
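    The conditions in question, (dP/dV)_T = 0 and (d²P/dV²)_T = 0, can be checked symbolically for the Van der Waals equation of state; this SymPy sketch is an independent illustration of that standard textbook derivation, not the comment's own calculation:

```python
import sympy as sp

# Van der Waals equation of state; the single critical point follows from
# requiring (dP/dV)_T = 0 and (d2P/dV2)_T = 0 simultaneously
V, T, a, b, R = sp.symbols('V T a b R', positive=True)
P = R * T / (V - b) - a / V**2

sol = sp.solve([sp.diff(P, V), sp.diff(P, V, 2)], [V, T], dict=True)[0]
Vc, Tc = sol[V], sol[T]
Pc = P.subs({V: Vc, T: Tc})

print(sp.simplify(Vc))  # 3*b
print(sp.simplify(Tc))  # 8*a/(27*R*b)
print(sp.simplify(Pc))  # a/(27*b**2)
```

    The system of two conditions has a single solution in (V, T), so the two Van der Waals parameters a and b fix one critical point, consistent with the comment's argument.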

  13. A computationally inexpensive model for estimating dimensional measurement uncertainty due to x-ray computed tomography instrument misalignments

    NASA Astrophysics Data System (ADS)

    Ametova, Evelina; Ferrucci, Massimiliano; Chilingaryan, Suren; Dewulf, Wim

    2018-06-01

    The recent emergence of advanced manufacturing techniques such as additive manufacturing and an increased demand on the integrity of components have motivated research on the application of x-ray computed tomography (CT) for dimensional quality control. While CT has shown significant empirical potential for this purpose, there is a need for metrological research to accelerate the acceptance of CT as a measuring instrument. The accuracy in CT-based measurements is vulnerable to the instrument geometrical configuration during data acquisition, namely the relative position and orientation of x-ray source, rotation stage, and detector. Consistency between the actual instrument geometry and the corresponding parameters used in the reconstruction algorithm is critical. Currently available procedures provide users with only estimates of geometrical parameters. Quantification and propagation of uncertainty in the measured geometrical parameters must be considered to provide a complete uncertainty analysis and to establish confidence intervals for CT dimensional measurements. In this paper, we propose a computationally inexpensive model to approximate the influence of errors in CT geometrical parameters on dimensional measurement results. We use surface points extracted from a computer-aided design (CAD) model to model discrepancies in the radiographic image coordinates assigned to the projected edges between an aligned system and a system with misalignments. The efficacy of the proposed method was confirmed on simulated and experimental data in the presence of various geometrical uncertainty contributors.
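    The core idea, comparing the image coordinates of projected CAD surface points between an aligned and a misaligned geometry, can be sketched with a toy cone-beam model. The spherical "CAD" surface, the distances and the misalignment magnitudes below are illustrative assumptions, not the paper's instrument parameters:

```python
import numpy as np

rng = np.random.default_rng(7)

SOD, SDD = 200.0, 800.0   # assumed source-object and source-detector distances (mm)

def project(points, detector_tilt=0.0, detector_shift=(0.0, 0.0)):
    """Cone-beam projection of 3D points (source at origin, beam along +y).
    detector_tilt: in-plane detector rotation (rad); detector_shift: (u, v) offset (mm)."""
    u = SDD * points[:, 0] / points[:, 1]
    v = SDD * points[:, 2] / points[:, 1]
    ct, st = np.cos(detector_tilt), np.sin(detector_tilt)
    return np.stack([ct * u - st * v + detector_shift[0],
                     st * u + ct * v + detector_shift[1]], axis=1)

# surrogate "CAD" surface points: a 10 mm sphere centred on the rotation axis
phi = rng.uniform(0, 2 * np.pi, 500)
cth = rng.uniform(-1, 1, 500)
sph = 10.0 * np.stack([np.sqrt(1 - cth**2) * np.cos(phi),
                       np.sqrt(1 - cth**2) * np.sin(phi), cth], axis=1)
pts = sph + np.array([0.0, SOD, 0.0])

aligned = project(pts)
perturbed = project(pts, detector_tilt=np.deg2rad(0.1), detector_shift=(0.05, 0.0))
disc = np.linalg.norm(perturbed - aligned, axis=1)
print(disc.max())   # worst-case image-coordinate discrepancy (mm)
```

    Propagating such per-point discrepancies through the measurement chain is what allows an inexpensive estimate of the dimensional error without re-running a full reconstruction.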

  14. Hybrid phase transition into an absorbing state: Percolation and avalanches

    NASA Astrophysics Data System (ADS)

    Lee, Deokjae; Choi, S.; Stippinger, M.; Kertész, J.; Kahng, B.

    2016-04-01

    Interdependent networks are more fragile under random attacks than simplex networks, because interlayer dependencies lead to cascading failures and finally to a sudden collapse. This is a hybrid phase transition (HPT), meaning that at the transition point the order parameter has a jump but there are also critical phenomena related to it. Here we study these phenomena on the Erdős-Rényi and the two-dimensional interdependent networks and show that the hybrid percolation transition exhibits two kinds of critical behaviors: divergence of the fluctuations of the order parameter and a power-law size distribution of finite avalanches at the transition point. At the transition point global or "infinite" avalanches occur, while the finite ones have a power-law size distribution; thus the avalanche statistics also has the nature of a HPT. The exponent β_m of the order parameter is 1/2 under general conditions, while the value of the exponent γ_m characterizing the fluctuations of the order parameter depends on the system. The critical behavior of the finite avalanches can be described by another set of exponents, β_a and γ_a. These two critical behaviors are coupled by the scaling law 1 − β_m = γ_a.

  15. Polydisperse sphere packing in high dimensions, a search for an upper critical dimension

    NASA Astrophysics Data System (ADS)

    Morse, Peter; Clusel, Maxime; Corwin, Eric

    2012-02-01

    The recently introduced granocentric model for polydisperse sphere packings has been shown to be in good agreement with experimental and simulational data in two and three dimensions. This model relies on two effective parameters that have to be estimated from experimental/simulational results. The non-trivial values obtained allow the model to take into account the essential effects of correlations in the packing. Once these parameters are set, the model provides a full statistical description of a sphere packing for a given polydispersity. We investigate the evolution of these effective parameters with the spatial dimension to see if, in analogy with the upper critical dimension in critical phenomena, there exists a dimension above which correlations become irrelevant and the model parameters can be fixed a priori as a function of polydispersity. This would turn the model into a proper theory of polydisperse sphere packings at that upper critical dimension. We perform infinite temperature quench simulations of frictionless polydisperse sphere packings in dimensions 2-8 using a parallel algorithm implemented on a GPGPU. We analyze the resulting packings by implementing an algorithm to calculate the additively weighted Voronoi diagram in arbitrary dimension.

  16. Weyl holographic superconductor in the Lifshitz black hole background

    NASA Astrophysics Data System (ADS)

    Mansoori, S. A. Hosseini; Mirza, B.; Mokhtari, A.; Dezaki, F. Lalehgani; Sherkatghanad, Z.

    2016-07-01

    We investigate analytically the properties of the Weyl holographic superconductor in the Lifshitz black hole background. We find that the critical temperature of the Weyl superconductor decreases with increasing Lifshitz dynamical exponent, z, indicating that condensation becomes more difficult. In addition, it is found that the critical temperature and condensation operator can be affected by applying the Weyl coupling, γ. Moreover, we compute the critical magnetic field and investigate its dependence on the parameters γ and z. Finally, we show numerically that the Weyl coupling parameter γ and the Lifshitz dynamical exponent z together control the size and strength of the conductivity peak and the ratio of the gap frequency over the critical temperature, ωg/Tc.

  17. Theory of Metastable State Relaxation for Non-Critical Binary Systems with Non-Conserved Order Parameter

    NASA Technical Reports Server (NTRS)

    Izmailov, Alexander; Myerson, Allan S.

    1993-01-01

    A new mathematical ansatz for a solution of the time-dependent Ginzburg-Landau non-linear partial differential equation is developed for non-critical systems such as non-critical binary solutions (solute + solvent) described by the non-conserved scalar order parameter. It is demonstrated that in such systems metastability initiates heterogeneous solute redistribution, which results in the formation of a non-equilibrium singly-periodic spatial solute structure. The time evolution of this structure's period is determined. In addition, the critical radius r(sub c) for a solute embryo of the new solute-rich phase, together with the metastable state lifetime t(sub c), are determined analytically and analyzed.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruhman, Jonathan; Kozii, Vladyslav; Fu, Liang

    In this work, we study how an inversion-breaking quantum critical point affects the ground state of a one-dimensional electronic liquid with repulsive interaction and spin-orbit coupling. We find that regardless of the interaction strength, the critical fluctuations always lead to a gap in the electronic spin sector. The origin of the gap is a two-particle backscattering process, which becomes relevant due to renormalization of the Luttinger parameter near the critical point. The resulting spin-gapped state is topological and can be considered as a one-dimensional version of a spin-triplet superconductor. Interestingly, in the case of a ferromagnetic critical point, the Luttinger parameter is renormalized in the opposite manner, such that the system remains nonsuperconducting.

  19. On the identification of cohesive parameters for printed metal-polymer interfaces

    NASA Astrophysics Data System (ADS)

    Heinrich, Felix; Langner, Hauke H.; Lammering, Rolf

    2017-05-01

    The mechanical behavior of printed electronics on fiber reinforced composites is investigated. A methodology based on cohesive zone models is employed, considering interfacial strengths, stiffnesses and critical strain energy release rates. A double cantilever beam test and an end notched flexure test are carried out to experimentally determine critical strain energy release rates under fracture modes I and II. Numerical simulations are performed in Abaqus 6.13 to model both tests. Using these simulations, an inverse parameter identification is performed to determine the full set of cohesive parameters.

  20. A new software for deformation source optimization, the Bayesian Earthquake Analysis Tool (BEAT)

    NASA Astrophysics Data System (ADS)

    Vasyura-Bathke, H.; Dutta, R.; Jonsson, S.; Mai, P. M.

    2017-12-01

    Modern studies of crustal deformation and the related source estimation, including magmatic and tectonic sources, increasingly use non-linear optimization strategies to estimate geometric and/or kinematic source parameters and often jointly consider geodetic and seismic data. Bayesian inference is increasingly being used for estimating posterior distributions of deformation source model parameters, given measured/estimated/assumed data and model uncertainties. For instance, some studies consider uncertainties of a layered medium and propagate these into source parameter uncertainties, while others use informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed to efficiently explore the high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational burden of these methods is high and estimation codes are rarely made available along with the published results. Even if the codes are accessible, it is usually challenging to assemble them into a single optimization framework, as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, and reproducibility and validation of results have become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in deformation source estimation, we undertook the effort of developing BEAT, a Python package that comprises all the above-mentioned features in one single programming environment. The package builds on the pyrocko seismological toolbox (www.pyrocko.org), and uses the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat), and we encourage and solicit contributions to the project. 
Here, we present our strategy for developing BEAT and show application examples, in particular the effect of including the prediction uncertainty of the velocity model in subsequent source optimizations: full moment tensor, Mogi source, and a moderate strike-slip earthquake.
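    As an illustration of the kind of forward model such a package optimizes against, the vertical surface displacement of a Mogi (point pressure) source can be sketched as follows. This is a generic textbook form of the solution, not BEAT's implementation, and the parameter values are arbitrary:

```python
import numpy as np

def mogi_uz(r, depth, dV, nu=0.25):
    """Vertical surface displacement of a Mogi point-pressure source with
    volume change dV (m^3) at the given depth (m), evaluated at radial
    distance r (m).  One common form of the half-space solution:
        u_z = (1 - nu) * dV / pi * depth / (depth**2 + r**2)**1.5
    """
    return (1.0 - nu) * dV / np.pi * depth / (depth**2 + r**2) ** 1.5

r = np.linspace(0.0, 10e3, 5)        # radial distances from the source axis
uz = mogi_uz(r, depth=4e3, dV=1e6)   # 10^6 m^3 volume increase
```

In a Bayesian framework this forward model would be embedded in a likelihood and sampled over (depth, dV, location) given InSAR or GPS observations.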

  1. How to improve a critical performance for an ExoMars 2020 Scientific Instrument (RLS). Raman Laser Spectrometer Signal to Noise Ratio (SNR) Optimization

    NASA Astrophysics Data System (ADS)

    Canora, C. P.; Moral, A. G.; Rull, F.; Maurice, S.; Hutchinson, I.; Ramos, G.; López-Reyes, G.; Belenguer, T.; Canchal, R.; Prieto, J. A. R.; Rodriguez, P.; Santamaria, P.; Berrocal, A.; Colombo, M.; Gallago, P.; Seoane, L.; Quintana, C.; Ibarmia, S.; Zafra, J.; Saiz, J.; Santiago, A.; Marin, A.; Gordillo, C.; Escribano, D.; Sanz-Palominoa, M.

    2017-09-01

    The Raman Laser Spectrometer (RLS) is one of the Pasteur Payload instruments of the ExoMars mission, within ESA's Aurora Exploration Programme. Raman spectroscopy is based on the analysis of spectral fingerprints due to the inelastic scattering of light when interacting with matter. RLS is composed of three units: the SPU (Spectrometer Unit), the iOH (Internal Optical Head), and the ICEU (Instrument Control and Excitation Unit), plus the harnesses (EH and OH). The iOH focuses the excitation laser on the sample and collects the Raman emission, which is delivered to the SPU; the analog video data from the SPU's CCD are then digitized and transmitted to the processor module in the ICEU. The main sources of noise arise from the sample, the background, and the instrument (laser, CCD, focus, acquisition parameters, operation control). In this last case the sources are mainly perturbations from the optics, dark signal, and readout noise. Flicker noise arising from laser emission fluctuations can also be considered instrument noise. In order to evaluate the SNR of a Raman instrument in a practical manner it is useful to perform end-to-end measurements on given standard samples. These measurements have to be compared with radiometric simulations using Raman efficiency values from the literature and taking into account the different instrumental contributions to the SNR. The performance and functionality of the RLS EQM instrument have been demonstrated in accordance with the science expectations. The SNR performance obtained with the RLS EQM will be compared, experimentally and via analysis, with the Instrument Radiometric Model tool. The characterization process for SNR optimization is still ongoing. The operational parameters and RLS algorithms (fluorescence removal and acquisition parameters estimation) will be improved in future models (EQM-2) until FM model delivery.
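    A first-order SNR budget of the kind used in such radiometric comparisons can be sketched as follows (a generic CCD noise model with illustrative numbers, not the RLS radiometric tool itself): the noise variance sums the Poisson contributions of signal, background, and dark current with the Gaussian readout noise in quadrature.

```python
import math

def raman_snr(signal_e, background_e, dark_rate, read_noise, t_exp, n_pix=1):
    """Shot-noise-limited SNR estimate for a CCD-based measurement.
    All quantities in electrons: signal and background counts, dark-current
    rate (e-/pix/s) over exposure t_exp across n_pix binned pixels, and
    per-pixel readout noise added in quadrature."""
    noise_var = (signal_e + background_e
                 + dark_rate * t_exp * n_pix
                 + n_pix * read_noise ** 2)
    return signal_e / math.sqrt(noise_var)

snr = raman_snr(1e4, 2e3, 0.5, 10.0, 60.0, n_pix=20)
```

In the shot-noise limit (no background, dark, or readout noise) this reduces to SNR = sqrt(signal), the usual sanity check for such a budget.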

  2. fissioncore: A desktop-computer simulation of a fission-bomb core

    NASA Astrophysics Data System (ADS)

    Cameron Reed, B.; Rohe, Klaus

    2014-10-01

    A computer program, fissioncore, has been developed to deterministically simulate the growth of the number of neutrons within an exploding fission-bomb core. The program allows users to explore the dependence of criticality conditions on parameters such as nuclear cross-sections, core radius, number of secondary neutrons liberated per fission, and the distance between nuclei. Simulations clearly illustrate the existence of a critical radius given a particular set of parameter values, as well as how the exponential growth of the neutron population (the condition that characterizes criticality) depends on these parameters. No understanding of neutron diffusion theory is necessary to appreciate the logic of the program or the results. The code is freely available in FORTRAN, C, and Java and is configured so that modifications to accommodate more refined physical conditions are possible.
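    The flavor of such a simulation can be conveyed with a much cruder one-group toy model (this is not the fissioncore algorithm; the mean free path and ν values below are illustrative only), in which each neutron induces another fission with a probability that grows with core radius:

```python
import math

def growth_factor(radius, mfp=16.9, nu=2.64):
    """Toy one-group model: a neutron born in the core induces another
    fission with probability p ~ 1 - exp(-radius/mfp) (otherwise it
    escapes), and each fission releases nu secondaries.  The population
    multiplies by k = nu * p per generation; k = 1 defines criticality."""
    p_fission = 1.0 - math.exp(-radius / mfp)
    return nu * p_fission

def critical_radius(mfp=16.9, nu=2.64):
    # invert k(R) = 1:  R_c = -mfp * ln(1 - 1/nu)
    return -mfp * math.log(1.0 - 1.0 / nu)

def neutron_population(radius, generations, n0=1.0):
    # exponential generation-by-generation growth (or decay) of neutrons
    return n0 * growth_factor(radius) ** generations
```

Above the critical radius the population grows exponentially per generation; below it, the population dies out, mirroring the behavior the program lets students explore.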

  3. Estimation of bipolar jets from accretion discs around Kerr black holes

    NASA Astrophysics Data System (ADS)

    Kumar, Rajiv; Chattopadhyay, Indranil

    2017-08-01

    We analyse flows around a rotating black hole and obtain self-consistent accretion-ejection solutions in a full general relativistic prescription. The entire energy-angular momentum parameter space is investigated in the advective regime to obtain shocked and shock-free accretion solutions. Jet equations of motion are solved along the von Zeipel surfaces computed from the post-shock disc, simultaneously with the equations of the accretion disc along the equatorial plane. For a given spin parameter, the mass outflow rate increases as the shock moves closer to the black hole, but eventually decreases, maximizing at some intermediate value of the shock location. Interestingly, we obtain all types of possible jet solutions, for example, steady shock solutions with multiple critical points, bound solutions with two critical points and smooth solutions with a single critical point. Multiple critical points may exist in the jet solution for spin parameter a_s ≥ 0.5. The jet terminal speed generally increases if the accretion shock forms closer to the horizon and is higher for a corotating black hole than for the counter-rotating and the non-rotating ones. Quantitatively speaking, shocks in the jet may form for a_s > 0.6 and jet shocks range between 6 r_g and 130 r_g above the equatorial plane, while the jet terminal speed v_j∞ > 0.35c if the Bernoulli parameter E ≥ 1.01 for a_s > 0.99.

  4. The Role of the Cooling Prescription for Disk Fragmentation: Numerical Convergence and Critical Cooling Parameter in Self-gravitating Disks

    NASA Astrophysics Data System (ADS)

    Baehr, Hans; Klahr, Hubert

    2015-12-01

    Protoplanetary disks fragment due to gravitational instability when there is enough mass for self-gravitation, described by the Toomre parameter, and when heat can be lost at a rate comparable to the local dynamical timescale, described by t_c = βΩ^(-1). Simulations of self-gravitating disks show that the cooling parameter has a rough critical value at β_crit = 3. Below β_crit, gas overdensities will contract under their own gravity and fragment into bound objects, while otherwise maintaining a steady state of gravitoturbulence. However, previous studies of the critical cooling parameter have found dependences on simulation resolution, indicating that the simulation of self-gravitating protoplanetary disks is not so straightforward. In particular, the simplicity of the cooling timescale t_c prevents fragments from being disrupted by pressure support as temperatures rise. We alter the cooling law so that the cooling timescale is dependent on local surface density fluctuations, which is a means of incorporating optical depth effects into the local cooling of an object. For lower resolution simulations, this results in a lower critical cooling parameter and a disk that is more stable to gravitational stresses, suggesting that the formation of large gas giant planets in large, cool disks is generally suppressed by more realistic cooling. At our highest resolution, however, the model becomes unstable to fragmentation for cooling timescales up to β = 10.

  5. Dynamic behavior of the interaction between epidemics and cascades on heterogeneous networks

    NASA Astrophysics Data System (ADS)

    Jiang, Lurong; Jin, Xinyu; Xia, Yongxiang; Ouyang, Bo; Wu, Duanpo

    2014-12-01

    Epidemic spreading and cascading failure are two important dynamical processes on complex networks. They have long been investigated separately. But in the real world, these two dynamics sometimes interact with each other. In this paper, we explore a model combining the SIR epidemic spreading model and a local load-sharing cascading failure model. In this model there exists a critical value of the tolerance parameter above which an epidemic with high infection probability can spread out and infect a fraction of the network. When the tolerance parameter is smaller than the critical value, the cascading failure cuts off the abundance of paths and blocks the spreading of the epidemic locally. When the tolerance parameter is larger than the critical value, the epidemic spreads out and infects a fraction of the network. A method for estimating the critical value is proposed. In simulations, we verify the effectiveness of this method on uncorrelated configuration model (UCM) scale-free networks.
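    The load-sharing half of such a model can be sketched on its own (a minimal toy cascade, not the paper's combined SIR-cascade model; the graph and tolerance values are illustrative): each node carries a load equal to its degree, has capacity (1 + α) times that load, and a failed node's load is split among its surviving neighbors.

```python
def cascade(adj, seed, alpha):
    """Local load-sharing cascade on a graph given as {node: [neighbors]}.
    alpha is the tolerance parameter; returns the set of failed nodes."""
    load = {v: len(nbrs) for v, nbrs in adj.items()}
    cap = {v: (1.0 + alpha) * load[v] for v in adj}
    failed, queue = set(), [seed]
    while queue:
        v = queue.pop()
        if v in failed:
            continue
        failed.add(v)
        alive = [u for u in adj[v] if u not in failed]
        for u in alive:
            load[u] += load[v] / len(alive)   # share the failed node's load
            if load[u] > cap[u]:              # overloaded neighbor fails too
                queue.append(u)
    return failed

# a small ring graph: every node has degree 2
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
big = cascade(ring, seed=0, alpha=0.1)    # low tolerance: cascade spreads
small = cascade(ring, seed=0, alpha=2.0)  # high tolerance: failure stays local
```

Coupling this with SIR dynamics (infected nodes trigger failures, failed nodes no longer transmit) reproduces the qualitative competition the abstract describes.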

  6. A Systematic Approach of Employing Quality by Design Principles: Risk Assessment and Design of Experiments to Demonstrate Process Understanding and Identify the Critical Process Parameters for Coating of the Ethylcellulose Pseudolatex Dispersion Using Non-Conventional Fluid Bed Process.

    PubMed

    Kothari, Bhaveshkumar H; Fahmy, Raafat; Claycamp, H Gregg; Moore, Christine M V; Chatterjee, Sharmista; Hoag, Stephen W

    2017-05-01

    The goal of this study was to utilize risk assessment techniques and statistical design of experiments (DoE) to gain process understanding and to identify critical process parameters for the manufacture of controlled release multiparticulate beads using a novel disk-jet fluid bed technology. The material attributes and process parameters were systematically assessed using the Ishikawa fish bone diagram and failure mode and effect analysis (FMEA) risk assessment methods. The high-risk attributes identified by the FMEA analysis were further explored in a resolution V fractional factorial study to gain an understanding of the processing parameters. Using knowledge gained from the resolution V study, a resolution IV fractional factorial study was then conducted to identify the critical process parameters (CPPs) that impact the critical quality attributes and to understand the influence of these parameters on film formation. For both studies, the microclimate, atomization pressure, inlet air volume, product temperature (during spraying and curing), curing time, and percent solids in the coating solutions were studied. The responses evaluated were percent agglomeration, percent fines, percent yield, bead aspect ratio, median particle size diameter (d50), assay, and drug release rate. Pyrobuttons® were used to record real-time temperature and humidity changes in the fluid bed. The risk assessment methods and process analytical tools helped to understand the novel disk-jet technology and to systematically model coating process attributes such as process efficiency and the extent of curing during the coating process.
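    As a generic illustration of the fractional factorial designs mentioned above (not the study's actual factors or generators), a two-level 2^(5-1) resolution V design can be built from a single generator E = ABCD:

```python
from itertools import product

def frac_factorial_2_5_1():
    """Half-fraction two-level design for five factors with generator
    E = ABCD, giving a resolution V design: main effects and two-factor
    interactions are not aliased with each other.  Levels coded -1/+1."""
    runs = []
    for a, b, c, d in product((-1, 1), repeat=4):
        runs.append((a, b, c, d, a * b * c * d))   # derived column E
    return runs

design = frac_factorial_2_5_1()   # 16 runs instead of the full 32
```

Each run prescribes one combination of factor settings; responses measured at these 16 runs allow all main effects and two-factor interactions to be estimated at half the experimental cost.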

  7. Aseismic Slip of a Thin Slab Due to a Fluid Source

    NASA Astrophysics Data System (ADS)

    Aubin, P. W.; Viesca, R. C.

    2017-12-01

    We explore the effects of an increase of pore pressure on the frictional interface along the base of a thin slab. The thin slab approximation corresponds to a layer overriding a substrate in which variations along the layer's length occur over distances much greater than the layer thickness. We consider deformation that may be in-plane or anti-plane, but approximately uniform in depth, such that spatial variations of displacement (and hence, slip) occur only along one direction parallel to the interface. Such a thin-sheet model may well represent the deformation of landslides and glacial ice streams, and also serves as a first pass for fault systems, which, while better represented by elastic half-spaces in frictional contact, nonetheless show qualitatively similar behavior. We consider that the friction coefficient at the layer's interface remains (approximately) constant, and that aseismic slip is initiated by a (line) source of fluid at constant pressure, with one-dimensional diffusion parallel to the interface. As posed, the problem yields a self-similar expansion of slip, whose extent grows proportionally to (αt)^(1/2) (where α is the hydraulic diffusivity) and can either lag behind or outpace the fluid diffusion front. The problem is controlled by a single parameter, accounting for the friction coefficient and the initial (pre-injection) states of stress and pore pressure. The problem solution consists of the self-similar slip profile and the coefficient of proportionality for the crack-front motion. Within the problem parameter range, two end-member scenarios result: one in which the initial level of shear stress on the interface is close to the value of the pre-injection strength (critically stressed) or another in which fluid pressure is just enough to induce slip (marginally pressurized). For the critically stressed and marginally pressurized cases, the aseismic slip front lies far ahead or far behind, respectively, the fluid diffusion front. 
We find closed-form solutions for both end-members, and in the former case, via matched asymptotics. These solutions provide a basis to solve the general problem, which we also solve numerically for comparison. The solutions also provide a starting point for examining the progression of slip and locking following the shutoff of the fluid source.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowder, Jeff; Cornish, Neil J.; Reddinger, J. Lucas

    This work presents the first application of the method of genetic algorithms (GAs) to data analysis for the Laser Interferometer Space Antenna (LISA). In the low frequency regime of the LISA band there are expected to be tens of thousands of galactic binary systems that will be emitting gravitational waves detectable by LISA. The challenge of parameter extraction of such a large number of sources in the LISA data stream requires a search method that can efficiently explore the large parameter spaces involved. As signals of many of these sources will overlap, a global search method is desired. GAs representmore » such a global search method for parameter extraction of multiple overlapping sources in the LISA data stream. We find that GAs are able to correctly extract source parameters for overlapping sources. Several optimizations of a basic GA are presented with results derived from applications of the GA searches to simulated LISA data.« less

  9. A Global Sensitivity Analysis Method on Maximum Tsunami Wave Heights to Potential Seismic Source Parameters

    NASA Astrophysics Data System (ADS)

    Ren, Luchuan

    2015-04-01

    Luchuan Ren, Jianwei Tian, Mingli Hong (Institute of Disaster Prevention, Sanhe, Hebei Province, 065201, P.R. China). It is obvious that the uncertainties of the maximum tsunami wave heights in offshore areas are partly due to uncertainties in the potential seismic tsunami source parameters. A global sensitivity analysis method for the maximum tsunami wave heights with respect to the potential seismic source parameters is put forward in this paper. The tsunami wave heights are calculated by COMCOT (the Cornell Multi-grid Coupled Tsunami Model), on the assumption that an earthquake with magnitude Mw 8.0 occurred at the northern fault segment along the Manila Trench and triggered a tsunami in the South China Sea. We select the simulated maximum tsunami wave heights at specific offshore sites to verify the validity of the proposed method. To rank the importance of the uncertainties of the potential seismic source parameters (the earthquake magnitude, focal depth, strike angle, dip angle, slip angle, etc.) in generating uncertainties of the maximum tsunami wave heights, we chose the Morris method to analyze the sensitivity of the maximum tsunami wave heights to the aforementioned parameters, and give several qualitative descriptions of their nonlinear or linear effects on the maximum tsunami wave heights. We then quantitatively analyze the sensitivity of the maximum tsunami wave heights to these parameters, and the interaction effects among these parameters, by means of the extended FAST method. 
The results show that the maximum tsunami wave heights are very sensitive to the earthquake magnitude, followed successively by the epicenter location, the strike angle and the dip angle; the interaction effects between the sensitive parameters are very pronounced at specific offshore sites; and the importance ordering of the same group of parameters in generating uncertainties of the maximum tsunami wave heights differs between offshore sites. These results are helpful for a deeper understanding of the relationship between the tsunami wave heights and the seismic tsunami source parameters. Keywords: Global sensitivity analysis; Tsunami wave height; Potential seismic tsunami source parameter; Morris method; Extended FAST method
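    The Morris screening step can be sketched generically as follows (a minimal elementary-effects implementation on a toy function, not the COMCOT-coupled analysis of the paper): each input is perturbed one at a time along random trajectories, and the mean absolute elementary effect ranks input importance while its spread flags nonlinearity or interactions.

```python
import random

def morris_ee(f, k, delta=0.1, trajectories=30, seed=42):
    """Morris one-at-a-time screening on the unit hypercube.  Returns
    (mu_star, sigma): per-input mean absolute elementary effect
    (importance) and its standard deviation (nonlinearity/interaction)."""
    rng = random.Random(seed)
    effects = [[] for _ in range(k)]
    for _ in range(trajectories):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(k)]
        fx = f(x)
        for i in range(k):
            xp = list(x)
            xp[i] += delta                       # perturb one input at a time
            effects[i].append((f(xp) - fx) / delta)
    mu_star = [sum(abs(e) for e in es) / len(es) for es in effects]
    sigma = []
    for es in effects:
        m = sum(es) / len(es)
        sigma.append((sum((e - m) ** 2 for e in es) / len(es)) ** 0.5)
    return mu_star, sigma

# toy model: input 0 dominates linearly, input 1 is weakly nonlinear,
# input 2 is inert
mu, sg = morris_ee(lambda x: 5 * x[0] + x[1] ** 2 + 0 * x[2], k=3)
```

In the tsunami application, f would be the COMCOT-simulated maximum wave height at a site and the inputs the source parameters (magnitude, depth, strike, dip, slip).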

  10. Posture and Loading in the Pathomechanics of Carpal Tunnel Syndrome: A Review.

    PubMed

    Vignais, Nicolas; Weresch, Justin; Keir, Peter J

    2016-01-01

    Carpal tunnel syndrome is a neuropathy of the median nerve at the wrist, and represents the most common peripheral neuropathy. It has long been an issue in the workplace because of healthcare costs and loss of productivity. The two main pathomechanisms of carpal tunnel syndrome include increased hydrostatic pressure within the carpal tunnel (carpal tunnel pressure) and contact stress (or impingement). As most cases of carpal tunnel syndrome in the workplace are labelled "idiopathic", a clear understanding of the physical parameters that may act as pathomechanisms is critical for its prevention. The aim of this review is to examine and quantify the influence of posture and loading factors on the increase of carpal tunnel pressure and median nerve contact stress. Forearm, wrist, and finger postures, as well as fingertip force have significant effects on carpal tunnel pressure. Contact stress on the median nerve is mainly a product of wrist posture and musculotendinous loading. Anatomical and musculoskeletal sources have been proposed to explain these effects. This critical review provides an improved understanding of pathomechanisms and etiology underlying carpal tunnel syndrome.

  11. Investigation on the pinch point position in heat exchangers

    NASA Astrophysics Data System (ADS)

    Pan, Lisheng; Shi, Weixiu

    2016-06-01

    The pinch point is important for analyzing heat transfer in thermodynamic cycles. With the aim of revealing the importance of determining the accurate pinch point, research on the pinch point position is carried out by a theoretical method. The results show that the pinch point position depends on the parameters of the heat transfer fluids and the major fluid properties. In most cases, the pinch point is located at the bubble point for the evaporator and the dew point for the condenser. However, the pinch point shifts to the supercooled liquid state in near-critical conditions for the evaporator. Similarly, it shifts to the superheated vapor state as the condensing temperature approaches the critical temperature for the condenser. It can even shift to the working fluid entrance of the evaporator or the supercritical heater when the heat source fluid temperature is very high compared with the heat absorption temperature. A wrong position for the pinch point may lead to serious errors. In brief, the pinch point should be found by an iterative method in all conditions rather than taken for granted.
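    The recommended scanning/iterative approach can be sketched for a simplified counter-flow evaporator (a linear hot-side profile and a piecewise cold-side profile; all temperatures below are illustrative): sweep the normalized heat duty and take the minimum temperature difference, rather than assuming the pinch sits at the bubble point.

```python
def pinch_point(th_in, th_out, tc_in, t_bub, h_frac_bub, n=1001):
    """Scan normalized heat duty q in [0, 1] through a counter-flow
    evaporator and return (min dT, q at which it occurs).  The hot stream
    cools linearly from th_in to th_out; the cold stream heats linearly
    from tc_in to the bubble point t_bub over the first h_frac_bub of the
    duty, then boils isothermally (a simplified subcritical profile)."""
    best = (float("inf"), None)
    for j in range(n):
        q = j / (n - 1)
        th = th_out + q * (th_in - th_out)       # hot-side temperature at duty q
        if q <= h_frac_bub:                      # sensible heating of the liquid
            tc = tc_in + (t_bub - tc_in) * q / h_frac_bub
        else:                                    # isothermal boiling
            tc = t_bub
        best = min(best, (th - tc, q))
    return best

dt_min, q_pinch = pinch_point(th_in=150.0, th_out=90.0,
                              tc_in=30.0, t_bub=100.0, h_frac_bub=0.4)
```

For this subcritical example the scan recovers the textbook result (pinch at the bubble point), but the same sweep locates the shifted pinch in near-critical or supercritical cases where that assumption fails.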

  12. Anisotropic Weyl symmetry and cosmology

    NASA Astrophysics Data System (ADS)

    Moon, Taeyoon; Oh, Phillial; Sohn, Jongsu

    2010-11-01

    We construct an anisotropic Weyl invariant theory in the ADM formalism and discuss its cosmological consequences. It extends the original anisotropic Weyl invariance of Hořava-Lifshitz gravity using an extra scalar field. The action is invariant under the anisotropic transformations of the space and time metric components with an arbitrary value of the critical exponent z. One of the interesting features is that the cosmological constant term maintains the anisotropic symmetry for z = -3. We also include the cosmological fluid and show that it can preserve the anisotropic Weyl invariance if the equation of state satisfies P = zρ/3. Then, we study the cosmology of the Einstein-Hilbert-anisotropic Weyl (EHaW) action including the cosmological fluid, both with and without anisotropic Weyl invariance. The correlation of the critical exponent z and the equation-of-state parameter ω̄ provides a new perspective on the cosmology. It is also shown that the EHaW action admits a late-time accelerating universe for an arbitrary value of z when the anisotropic conformal invariance is broken, and the anisotropic conformal scalar field is interpreted as a possible source of dark energy.

  13. Enhancing scatterometry CD signal-to-noise ratio for 1x logic and memory challenges

    NASA Astrophysics Data System (ADS)

    Shaughnessy, Derrick; Krishnan, Shankar; Wei, Lanhua; Shchegrov, Andrei V.

    2013-04-01

    The ongoing transition from 2D to 3D structures in logic and memory has led to an increased adoption of scatterometry CD (SCD) for inline metrology. However, shrinking device dimensions in logic and high aspect ratios in memory represent primary challenges for SCD and require a significant breakthrough in improving signal-to-noise performance. We present a report on the new generation of SCD technology, enabled by a new laser-driven plasma source. The developed light source provides several key advantages over the conventional arc lamps typically used in SCD applications. The plasma color temperature of the laser-driven source is considerably higher than that available with arc lamps, resulting in a >5X increase in radiance in the visible and a >10X increase in radiance in the DUV when compared to the sources on previous-generation SCD tools, while maintaining or improving source intensity noise. This high radiance across such a broad spectrum allows for the use of a single light source from 190-1700 nm. When combined with other optical design changes, the higher source radiance enables reduction of the measurement box size of our spectroscopic ellipsometer from a 45×45 µm box to a 25×25 µm box without compromising signal-to-noise ratio. The benefits of the additional photons across the DUV-to-IR spectrum for 1x-nm SCD metrology have been found to be greater than the increase in source signal-to-noise ratio alone would suggest. Better light penetration in Si and poly-Si has resulted in improved sensitivity and correlation breaking for critical parameters in 1x-nm FinFET and HAR flash memory structures.

  14. Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.

    PubMed

    Schimpf, Paul H

    2017-09-15

    This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.
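    The global-search recommendation can be illustrated with a toy inverse problem (an infinite homogeneous conductor in place of the four-sphere or realistic head models, and a fixed dipole orientation for simplicity; all values are illustrative):

```python
import numpy as np

def potential(dip_pos, dip_mom, electrodes, sigma=0.33):
    """Potential of a current dipole in an infinite homogeneous conductor:
    V = (p . r) / (4 pi sigma |r|^3), with r from dipole to electrode."""
    r = electrodes - dip_pos
    d = np.linalg.norm(r, axis=1)
    return (r @ dip_mom) / (4 * np.pi * sigma * d ** 3)

# electrodes scattered over the upper half of a unit sphere
rng = np.random.default_rng(0)
theta = rng.uniform(0, np.pi / 2, 32)
phi = rng.uniform(0, 2 * np.pi, 32)
elec = np.stack([np.sin(theta) * np.cos(phi),
                 np.sin(theta) * np.sin(phi),
                 np.cos(theta)], axis=1)

true_pos = np.array([0.2, -0.1, 0.5])
mom = np.array([0.0, 0.0, 1.0])
v_meas = potential(true_pos, mom, elec)

# global grid search over candidate locations: since the residual surface
# can have local minima, a global scan is safer than a single local descent
grid = np.linspace(-0.6, 0.6, 13)
best, best_err = None, np.inf
for x in grid:
    for y in grid:
        for z in grid[grid > 0]:
            cand = np.array([x, y, z])
            err = np.sum((potential(cand, mom, elec) - v_meas) ** 2)
            if err < best_err:
                best, best_err = cand, err
```

A brain-computer interface built on this idea would feed the recovered location (and, with a realistic head model, all three coordinates) into the classifier.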

  15. Ultra-precision fabrication of 500 mm long and laterally graded Ru/C multilayer mirrors for X-ray light sources.

    PubMed

    Störmer, M; Gabrisch, H; Horstmann, C; Heidorn, U; Hertlein, F; Wiesmann, J; Siewert, F; Rack, A

    2016-05-01

    X-ray mirrors are needed for beam shaping and monochromatization at advanced research light sources, for instance, free-electron lasers and synchrotron sources. Such mirrors consist of a substrate and a coating. The shape accuracy of the substrate and the layer precision of the coating are the crucial parameters that determine the beam properties required for various applications. In principle, the selection of the layer materials determines the mirror reflectivity. A single-layer mirror offers high reflectivity in the range of total external reflection, whereas the reflectivity is reduced considerably above the critical angle. A periodic multilayer can enhance the reflectivity at higher angles due to Bragg reflection. Here, the selection of a suitable combination of layer materials is essential to achieve a high flux at distinct photon energies, which is often required for applications such as microtomography, diffraction, or protein crystallography. This contribution presents the current development of a Ru/C multilayer mirror prepared by magnetron sputtering with a sputtering facility that was designed in-house at the Helmholtz-Zentrum Geesthacht. The deposition conditions were optimized in order to achieve ultra-high precision and high flux in future mirrors. Input for the improved deposition parameters came from investigations by transmission electron microscopy. The X-ray optical properties were investigated by means of X-ray reflectometry using Cu- and Mo-radiation. The change of the multilayer d-spacing over the mirror dimensions and the variation of the Bragg angles were determined. The results demonstrate the ability to precisely control the variation in thickness over the whole mirror length of 500 mm, thus achieving picometer precision in the meter range.
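    The link between the d-spacing and the measured Bragg angle used in such reflectometry analyses can be sketched with simple Bragg's law, ignoring refraction corrections (the 3.0 nm period below is illustrative, not the mirror's actual design value):

```python
import math

def bragg_angle(d, wavelength=0.154, order=1):
    """Bragg angle (degrees) for multilayer period d (nm) at Cu-K-alpha
    (0.154 nm), from m * lambda = 2 d sin(theta), ignoring refraction."""
    return math.degrees(math.asin(order * wavelength / (2.0 * d)))

def d_from_angle(theta_deg, wavelength=0.154, order=1):
    """Invert Bragg's law to recover the multilayer d-spacing from a
    measured reflection angle."""
    return order * wavelength / (2.0 * math.sin(math.radians(theta_deg)))
```

Because the Bragg angle shifts measurably with picometer-scale changes in d, mapping the reflection angle along the 500 mm mirror length tracks the d-spacing variation directly.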

  16. Sequential assimilation of volcanic monitoring data to quantify eruption potential: Application to Kerinci volcano

    NASA Astrophysics Data System (ADS)

    Zhan, Yan; Gregg, Patricia M.; Chaussard, Estelle; Aoki, Yosuke

    2017-12-01

    Quantifying the eruption potential of a restless volcano requires the ability to model parameters such as overpressure and calculate the host rock stress state as the system evolves. A critical challenge is developing a model-data fusion framework to take advantage of observational data and provide updates of the volcanic system through time. The Ensemble Kalman Filter (EnKF) uses a Monte Carlo approach to assimilate volcanic monitoring data and update models of volcanic unrest, providing time-varying estimates of overpressure and stress. Although the EnKF has proven effective in forecasting volcanic deformation using synthetic InSAR and GPS data, until now, it has not been applied to assimilate data from an active volcanic system. In this investigation, the EnKF is used to provide a “hindcast” of the 2009 explosive eruption of Kerinci volcano, Indonesia. A two-source analytical model is used to simulate the surface deformation of Kerinci volcano observed by InSAR time-series data and to predict the system evolution. A deep, deflating dike-like source reproduces the subsiding signal on the flanks of the volcano, and a shallow spherical McTigue source reproduces the central uplift. EnKF-predicted parameters are used in finite element models to calculate the host-rock stress state prior to the 2009 eruption. Mohr-Coulomb failure models reveal that the shallow magma reservoir was trending towards tensile failure prior to 2009, which may have been the catalyst for the 2009 eruption. Our results illustrate that the EnKF shows significant promise for future applications to forecasting the eruption potential of restless volcanoes and to hindcasting the triggering mechanisms of observed eruptions.
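
    A minimal sketch of the EnKF analysis step described above, with a stand-in linear "uplift proportional to overpressure" forward model rather than the actual two-source McTigue/dike model (the station geometry factors and all numerical values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, observe, obs, obs_err):
    """One EnKF analysis step for an ensemble of model parameters.
    ensemble: (n_ens, n_param); observe: params -> predicted obs;
    obs: observed data vector; obs_err: observation standard deviation."""
    pred = np.array([observe(m) for m in ensemble])          # (n_ens, n_obs)
    dm = ensemble - ensemble.mean(axis=0)
    dp = pred - pred.mean(axis=0)
    n = len(ensemble) - 1
    cov_mp = dm.T @ dp / n                                   # param-obs covariance
    cov_pp = dp.T @ dp / n + obs_err**2 * np.eye(len(obs))   # predicted-obs covariance
    gain = cov_mp @ np.linalg.inv(cov_pp)                    # Kalman gain
    perturbed = obs + rng.normal(0.0, obs_err, (len(ensemble), len(obs)))
    return ensemble + (perturbed - pred) @ gain.T

# Toy forward model standing in for a McTigue-type source: uplift at
# three stations proportional to overpressure (geometry factors made up).
geom = np.array([0.8, 0.5, 0.2])
observe = lambda params: geom * params[0]
truth = 5.0
obs = geom * truth
ens = rng.normal(1.0, 2.0, size=(200, 1))   # prior overpressure ensemble
post = enkf_update(ens, observe, obs, obs_err=0.05)
```

    Each new InSAR epoch triggers another such update, so the ensemble mean and spread track the evolving overpressure estimate and its uncertainty.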

  17. Monitoring of potentially toxic cyanobacteria using an online multi-probe in drinking water sources.

    PubMed

    Zamyadi, A; McQuaid, N; Prévost, M; Dorner, S

    2012-02-01

    Toxic cyanobacteria threaten the water quality of drinking water sources across the globe. Two such water bodies in Canada (a reservoir on the Yamaska River and a bay of Lake Champlain in Québec) were monitored using a YSI 6600 V2-4 (YSI, Yellow Springs, Ohio, USA) submersible multi-probe measuring in vivo phycocyanin (PC) and chlorophyll-a (Chl-a) fluorescence, pH, dissolved oxygen, conductivity, temperature, and turbidity in parallel. The linearity of the in vivo fluorescence PC and Chl-a probe measurements was validated in the laboratory with Microcystis aeruginosa (r² = 0.96 and r² = 0.82, respectively). Under environmental conditions, in vivo PC fluorescence was strongly correlated with extracted PC (r = 0.79) while in vivo Chl-a fluorescence had a weaker relationship with extracted Chl-a (r = 0.23). Multiple regression analysis revealed significant correlations between extracted Chl-a, extracted PC and cyanobacterial biovolume and in vivo fluorescence parameters measured by the sensors (i.e. turbidity and pH). This information will help water authorities select the in vivo parameters that are the most useful indicators for monitoring cyanobacteria. Despite highly toxic cyanobacterial bloom development 10 m from the drinking water treatment plant's (DWTP) intake on several sampling dates, low in vivo PC fluorescence, cyanobacterial biovolume, and microcystin concentrations were detected in the plant's untreated water. The reservoir's hydrodynamics appear to have prevented the transport of toxins and cells into the DWTP, which would have deteriorated the water quality. The multi-probe readings and toxin analyses provided critical evidence that the DWTP's untreated water was unaffected by the toxic cyanobacterial blooms present in its source water.

  18. Influence of Gridded Standoff Measurement Resolution on Numerical Bathymetric Inversion

    NASA Astrophysics Data System (ADS)

    Hesser, T.; Farthing, M. W.; Brodie, K.

    2016-02-01

    The bathymetry from the surfzone to the shoreline changes frequently and actively as wave energy interacts with the seafloor. Methodologies to measure bathymetry range from point-source in-situ instruments, vessel-mounted single-beam or multi-beam sonar surveys, and airborne bathymetric lidar to inversion techniques applied to standoff measurements of wave processes from video or radar imagery. Each measurement type has its own error sources, spatial and temporal resolution, and availability. Numerical bathymetry estimation frameworks can use these disparate data types in combination with model-based inversion techniques to produce a "best-estimate of bathymetry" at a given time. Understanding how the error sources and the varying spatial or temporal resolution of each data type affect the end result is critical for determining best practices and, in turn, increasing the accuracy of bathymetry estimation techniques. In this work, we consider an initial step in the development of a complete framework for estimating bathymetry in the nearshore by focusing on gridded standoff measurements and in-situ point observations in model-based inversion at the U.S. Army Corps of Engineers Field Research Facility in Duck, NC. The standoff measurement methods return wave parameters computed from the direct measurements using linear wave theory. These gridded datasets can have temporal and spatial resolutions that do not match the desired model parameters and therefore can reduce the accuracy of these methods. Specifically, we investigate the effect of numerical resolution on the accuracy of an Ensemble Kalman Filter bathymetric inversion technique in relation to the spatial and temporal resolution of the gridded standoff measurements. The accuracies of the bathymetric estimates are compared with both high-resolution Real Time Kinematic (RTK) single-beam surveys and alternative direct in-situ measurements from sonic altimeters.
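
    As a toy illustration of the standoff-inversion idea (not the paper's EnKF framework), water depth can be recovered from a remotely observed wave frequency and wavenumber via the linear dispersion relation; all numbers below are invented:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def depth_from_dispersion(omega, k):
    """Invert water depth h from linear wave theory,
    omega^2 = g * k * tanh(k * h), given the angular frequency omega
    (rad/s) and wavenumber k (rad/m) of a remotely observed wave."""
    ratio = omega ** 2 / (G * k)
    if not 0.0 < ratio < 1.0:
        raise ValueError("observation inconsistent with linear theory")
    return math.atanh(ratio) / k

# Round trip: an 8 s wave in 5 m of water. Solve the dispersion
# relation forward for k by bisection, then invert for depth.
h_true, period = 5.0, 8.0
omega = 2.0 * math.pi / period
lo, hi = 1e-6, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if G * mid * math.tanh(mid * h_true) < omega ** 2:
        lo = mid
    else:
        hi = mid
h_est = depth_from_dispersion(omega, 0.5 * (lo + hi))
```

    Errors in the gridded wave parameters (omega, k) propagate directly into the depth estimate, which is one reason the resolution of the standoff products matters.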

  19. Ultra-precision fabrication of 500 mm long and laterally graded Ru/C multilayer mirrors for X-ray light sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Störmer, M., E-mail: michael.stoermer@hzg.de; Gabrisch, H.; Horstmann, C.

    2016-05-15

    X-ray mirrors are needed for beam shaping and monochromatization at advanced research light sources, for instance, free-electron lasers and synchrotron sources. Such mirrors consist of a substrate and a coating. The shape accuracy of the substrate and the layer precision of the coating are the crucial parameters that determine the beam properties required for various applications. In principle, the selection of the layer materials determines the mirror reflectivity. A single layer mirror offers high reflectivity in the range of total external reflection, whereas the reflectivity is reduced considerably above the critical angle. A periodic multilayer can enhance the reflectivity at higher angles due to Bragg reflection. Here, the selection of a suitable combination of layer materials is essential to achieve a high flux at distinct photon energies, which is often required for applications such as microtomography, diffraction, or protein crystallography. This contribution presents the current development of a Ru/C multilayer mirror prepared by magnetron sputtering with a sputtering facility that was designed in-house at the Helmholtz-Zentrum Geesthacht. The deposition conditions were optimized in order to achieve ultra-high precision and high flux in future mirrors. Input for the improved deposition parameters came from investigations by transmission electron microscopy. The X-ray optical properties were investigated by means of X-ray reflectometry using Cu- and Mo-radiation. The change of the multilayer d-spacing over the mirror dimensions and the variation of the Bragg angles were determined. The results demonstrate the ability to precisely control the variation in thickness over the whole mirror length of 500 mm, thus achieving picometer-precision in the meter-range.

  20. The Role of Parents' Critical Thinking About Media in Shaping Expectancies, Efficacy and Nutrition Behaviors for Families.

    PubMed

    Austin, Erica Weintraub; Pinkleton, Bruce E; Radanielina-Hita, Marie Louise; Ran, Weina

    2015-01-01

    A convenience survey completed online by 137 4-H parents in Washington state explored their orientation toward critical thinking regarding media sources and content and its implications for family dietary behaviors. Parents' critical thinking toward media sources predicted their information efficacy about content. Critical thinking toward media content predicted information efficacy about sources, expectancies for parental mediation, and expectancies for family receptiveness to lower-fat dietary changes. Expectancies for receptiveness to dietary changes and expectancies for parental mediation predicted efficacy for implementing healthy dietary practices; this strongly predicted healthy dietary practices. Media-related critical thinking, therefore, indirectly but consistently affected self-reported family dietary behaviors through its effects on efficacy for managing media and expectancies for the family's receptiveness to healthy dietary changes. The results suggest parents' media literacy skills affect their family's dietary behavior. Health campaigns that help parents interpret and manage the media environment may benefit all family members.

  1. Source parameters controlling the generation and propagation of potential local tsunamis along the cascadia margin

    USGS Publications Warehouse

    Geist, E.; Yoshioka, S.

    1996-01-01

    The largest uncertainty in assessing hazards from local tsunamis along the Cascadia margin is estimating the possible earthquake source parameters. We investigate which source parameters exert the largest influence on tsunami generation and determine how each parameter affects the amplitude of the local tsunami. The following source parameters were analyzed: (1) type of faulting characteristic of the Cascadia subduction zone, (2) amount of slip during rupture, (3) slip orientation, (4) duration of rupture, (5) physical properties of the accretionary wedge, and (6) influence of secondary faulting. The effect of each of these source parameters on the quasi-static displacement of the ocean floor is determined by using elastic three-dimensional, finite-element models. The propagation of the resulting tsunami is modeled both near the coastline using the two-dimensional (x-t) Peregrine equations, which include the effects of dispersion, and near the source using the three-dimensional (x-y-t) linear long-wave equations. The source parameters that have the largest influence on local tsunami excitation are the shallowness of rupture and the amount of slip. In addition, the orientation of slip has a large effect on the directivity of the tsunami, especially for shallow dipping faults, which consequently has a direct influence on the length of coastline inundated by the tsunami. Duration of rupture, physical properties of the accretionary wedge, and secondary faulting all affect the excitation of tsunamis but to a lesser extent than the shallowness of rupture and the amount and orientation of slip. Assessment of the severity of the local tsunami hazard should take into account that relatively large tsunamis can be generated from anomalous 'tsunami earthquakes' that rupture within the accretionary wedge in comparison to interplate thrust earthquakes of similar magnitude. © 1996 Kluwer Academic Publishers.

  2. About the Modeling of Radio Source Time Series as Linear Splines

    NASA Astrophysics Data System (ADS)

    Karbon, Maria; Heinkelmann, Robert; Mora-Diaz, Julian; Xu, Minghui; Nilsson, Tobias; Schuh, Harald

    2016-12-01

    Many of the time series of radio sources observed in geodetic VLBI show variations, caused mainly by changes in source structure. However, until now it has been common practice to consider source positions as invariant, or to exclude known misbehaving sources from the datum conditions. This may lead to a degradation of the estimated parameters, as unmodeled apparent source position variations can propagate to the other parameters through the least squares adjustment. In this paper we will introduce an automated algorithm capable of parameterizing the radio source coordinates as linear splines.
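
    Parameterizing a coordinate time series as a linear spline amounts to least-squares estimation of values at chosen knot epochs with a hat-function basis; the sketch below invents epochs and offsets and does not reproduce the paper's automated knot-selection algorithm:

```python
import numpy as np

def linear_spline_fit(t, y, knots):
    """Least-squares fit of a continuous piecewise-linear function to a
    time series (t, y). Each design-matrix column is a 'hat' basis
    function, built here with np.interp for brevity."""
    knots = np.asarray(knots, dtype=float)
    hats = np.eye(len(knots))
    A = np.column_stack([np.interp(t, knots, hats[j]) for j in range(len(knots))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, A @ coef

# Recover an exactly piecewise-linear coordinate series (hypothetical
# epochs in years and position offsets):
t = np.linspace(0.0, 10.0, 101)
y = np.interp(t, [0.0, 5.0, 10.0], [1.0, 3.0, 2.0])
coef, fit = linear_spline_fit(t, y, [0.0, 5.0, 10.0])
```

    In the VLBI adjustment the spline coefficients become additional unknowns, so apparent position variations are absorbed instead of leaking into the other estimated parameters.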

  3. High-Resolution Source Parameter and Site Characteristics Using Near-Field Recordings - Decoding the Trade-off Problems Between Site and Source

    NASA Astrophysics Data System (ADS)

    Chen, X.; Abercrombie, R. E.; Pennington, C.

    2017-12-01

    Recorded seismic waveforms include contributions from earthquake source properties and propagation effects, leading to long-standing trade-off problems between site/path effects and source effects. With near-field recordings, the path effect is relatively small, so the trade-off problem can be simplified to one between source and site effects (commonly referred to as the "kappa value"). This problem is especially significant for small earthquakes, whose corner frequencies lie within a similar range to kappa values, so direct spectrum fitting often leads to systematic biases that depend on corner frequency and magnitude. In response to the significantly increased seismicity rate in Oklahoma, several local networks have been deployed following major earthquakes: the Prague, Pawnee and Fairview earthquakes. Each network provides dense observations within 20 km surrounding the fault zone, recording tens of thousands of aftershocks between M1 and M3. Using near-field recordings in the Prague area, we apply a stacking approach to separate path/site and source effects. The resulting source parameters are consistent with parameters derived from ground motion and spectral ratio methods in other studies; they exhibit spatial coherence within the fault zone for different fault patches. We apply these source parameter constraints in an analysis of kappa values for stations within 20 km of the fault zone. The resulting kappa values show significantly reduced variability compared to those from direct spectral fitting without constraints on the source spectrum; they are not biased by earthquake magnitudes. With these improvements, we plan to apply the stacking analysis to other local arrays to analyze source properties and site characteristics. For selected individual earthquakes, we will also use individual-pair empirical Green's function (EGF) analysis to validate the source parameter estimations.
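
    The source-site trade-off can be made concrete with a standard omega-square source spectrum multiplied by a kappa attenuation term (a common parameterization, not necessarily the exact one used in this study). At high frequency, raising kappa mimics lowering the corner frequency, while a spectral ratio of colocated events cancels the site term, which is what stacking and EGF approaches exploit:

```python
import math

def brune_spectrum(f, omega0, fc, kappa):
    """Displacement amplitude spectrum: a Brune omega-square source with
    corner frequency fc, multiplied by a near-site attenuation term
    exp(-pi * kappa * f). All values below are illustrative."""
    return omega0 / (1.0 + (f / fc) ** 2) * math.exp(-math.pi * kappa * f)

# In a spectral ratio of two colocated events recorded at the same
# station, the exp(-pi * kappa * f) site term cancels exactly, leaving
# only the ratio of the two source terms:
num = brune_spectrum(10.0, 1.0, 5.0, 0.04)
den = brune_spectrum(10.0, 1.0, 2.0, 0.04)
ratio = num / den   # equals (1 + (10/2)^2) / (1 + (10/5)^2)
```
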

  4. Near-Fault Broadband Ground Motion Simulations Using Empirical Green's Functions: Application to the Upper Rhine Graben (France-Germany) Case Study

    NASA Astrophysics Data System (ADS)

    Del Gaudio, Sergio; Hok, Sebastien; Festa, Gaetano; Causse, Mathieu; Lancieri, Maria

    2017-09-01

    Seismic hazard estimation relies classically on data-based ground motion prediction equations (GMPEs) giving the expected motion level as a function of several parameters characterizing the source and the sites of interest. However, records of moderate to large earthquakes at short distances from the faults are still rare. For this reason, it is difficult to obtain a reliable ground motion prediction for such a class of events and distances, where the largest amount of damage is also usually observed. A possible strategy to fill this lack of information is to generate synthetic accelerograms based on an accurate modeling of both the extended fault rupture and the wave propagation process. The development of such modeling strategies is essential for estimating seismic hazard close to faults in moderate seismic activity zones, where data are even scarcer. For that reason, we selected a target site in the Upper Rhine Graben (URG), at the French-German border. The URG is a region where faults producing micro-seismic activity are very close to the sites of interest (e.g., critical infrastructures like supply lines, nuclear power plants, etc.), requiring a careful investigation of seismic hazard. In this work, we demonstrate the feasibility of performing near-fault broadband ground motion numerical simulations in a moderate seismic activity region such as the URG and discuss some of the challenges related to such an application. The modeling strategy is to couple the multi-empirical Green's function technique (multi-EGFt) with a k-2 kinematic source model. One of the advantages of the multi-EGFt is that it does not require a detailed knowledge of the propagation medium, since the records of small events are used as the medium transfer function, provided that records of small earthquakes located on the target fault are available at the target site. The selection of suitable events to be used as multi-EGFs is detailed and discussed for our specific situation, where few events are available. We then show the impact of each source parameter characterizing the k-2 model on ground motion amplitude. Finally, we perform ground motion simulations for different probable earthquake scenarios in the URG. The dependence of ground motion and its variability on rupture velocity, the roughness of the slip distribution (stress drop), and hypocenter location is analyzed at different frequencies. In near-source conditions, ground motion variability is shown to be mostly governed by the uncertainty on source parameters. In our specific configuration (magnitude, distance), the directivity effect is only observed in a limited frequency range. Rather, broadband ground motions are shown to be sensitive to both the average rupture velocity and its possible variability, and to slip roughness. Ending with a comparison of simulation results and GMPEs, we conclude that source parameters and their variability should be set carefully to obtain reliable broadband ground motion estimations. In particular, our study shows that slip roughness should be chosen consistently with the target stress drop. This entails the need for a better understanding of the physics of the earthquake source and its incorporation in ground motion modeling.

  5. Local and nonlocal order parameters in the Kitaev chain

    NASA Astrophysics Data System (ADS)

    Chitov, Gennady Y.

    2018-02-01

    We have calculated order parameters for the phases of the Kitaev chain with interaction and dimerization at a special symmetric point applying the Jordan-Wigner and other duality transformations. We use string order parameters (SOPs) defined via the correlation functions of the Majorana string operators. The SOPs are mapped onto the local order parameters of some dual Hamiltonians and easily calculated. We have shown that the phase diagram of the interacting dimerized chain comprises the phases with the conventional local order as well as the phases with nonlocal SOPs. From the results for the critical indices, we infer the two-dimensional Ising universality class of criticality at the particular symmetry point where the model is exactly solvable.

  6. Automated source term and wind parameter estimation for atmospheric transport and dispersion applications

    NASA Astrophysics Data System (ADS)

    Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.

    2015-12-01

    Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and on the meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and are consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard estimate. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, and a variational refinement algorithm that uses both a simple forward AT&D model serving as a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first-guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. It has been shown to improve the source location estimate by several hundred percent (normalized by the distance from the source to the closest sampler) and to improve mass estimates by several orders of magnitude. Furthermore, it is able to operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations, adjusting the wind to provide a better match between the hazard prediction and the observations.
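
    As a heavily simplified sketch of the inversion idea (not VIRSA itself), the example below exploits the fact that concentration is linear in the release rate: a grid search over candidate locations, with a closed-form optimal rate per candidate, plays the role of a "first guess" source estimate. The plume model, dispersion coefficients, and sensor layout are all invented:

```python
import math

def gaussian_plume(dx, dy, q, u=2.0, ay=0.3, az=0.2):
    """Ground-level concentration from a steady surface point source of
    rate q, wind u along +x; ay, az are simple (hypothetical) linear
    dispersion growth rates."""
    if dx <= 0.0:
        return 0.0
    sig_y, sig_z = ay * dx, az * dx
    return q / (math.pi * u * sig_y * sig_z) * math.exp(-0.5 * (dy / sig_y) ** 2)

def estimate_source(sensors, readings, grid):
    """Grid search over candidate source locations; the optimal release
    rate q at each candidate has a closed form because concentration is
    linear in q."""
    best = None
    for (cx, cy) in grid:
        g = [gaussian_plume(px - cx, py - cy, 1.0) for (px, py) in sensors]
        den = sum(gi * gi for gi in g)
        q = sum(gi * ri for gi, ri in zip(g, readings)) / den if den > 0 else 0.0
        err = sum((q * gi - ri) ** 2 for gi, ri in zip(g, readings))
        if best is None or err < best[0]:
            best = (err, (cx, cy), q)
    return best[1], best[2]

sensors = [(50.0, 0.0), (100.0, 10.0), (150.0, -20.0), (200.0, 5.0)]
readings = [gaussian_plume(px, py, 10.0) for (px, py) in sensors]  # truth: origin, q = 10
loc, q_est = estimate_source(sensors, readings, [(0.0, 0.0), (20.0, 0.0),
                                                 (0.0, 15.0), (40.0, -10.0)])
```

    A variational scheme like VIRSA then refines such a first guess, and the wind field itself, by minimizing the misfit with the help of an adjoint model rather than an exhaustive search.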

  7. Mathematical models of Neospora caninum infection in dairy cattle: transmission and options for control.

    PubMed

    French, N P; Clancy, D; Davison, H C; Trees, A J

    1999-10-01

    The transmission and control of Neospora caninum infection in dairy cattle were examined using deterministic and stochastic models. Parameter estimates were derived from recent studies conducted in the UK and from the published literature. Three routes of transmission were considered: maternal vertical transmission with a high probability (0.95), horizontal transmission from infected cattle within the herd, and horizontal transmission from an independent external source. Putative infection via pooled colostrum was used as an example of within-herd horizontal transmission, and the recent finding that the dog is a definitive host of N. caninum supported the inclusion of an external independent source of infection. The predicted amount of horizontal transmission required to maintain infection at levels commonly observed in field studies in the UK and elsewhere was consistent with that observed in studies of post-natal seroconversion (0.85-9.0 per 100 cow-years). A stochastic version of the model was used to simulate the spread of infection in herds of 100 cattle, with a mean infection prevalence similar to that observed in UK studies (around 20%). The distributions of infected and uninfected cattle corresponded closely to Normal distributions, with S.D.s of 6.3 and 7.0, respectively. Control measures were considered by altering birth, death and horizontal transmission parameters. A policy of annual culling of infected cattle very rapidly reduced the prevalence of infection, and was shown to be the most effective method of control in the short term. Not breeding replacements from infected cattle was also effective in the short term, particularly in herds with a higher turnover of cattle. However, the long-term effectiveness of these measures depended on the amount and source of horizontal infection.
If the level of within-herd transmission was above a critical threshold, then a combination of reducing within-herd, and blocking external sources of transmission was required to permanently eliminate infection.
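
    The model structure described above can be sketched as a discrete-time prevalence recursion. Only the vertical-transmission probability (0.95) comes from the record; the horizontal rates, replacement fraction, and initial prevalence are illustrative stand-ins, not the paper's fitted estimates:

```python
import math

def simulate_herd(years=200, herd=100.0, p_vert=0.95,
                  beta=5e-5, external=0.002, repl=0.25):
    """Discrete-time sketch of N. caninum prevalence in a 100-cow herd.
    Uninfected cows acquire infection horizontally at rate
    beta * infected + external per year; each year a fraction repl of
    the herd is replaced, and a replacement is infected only if her dam
    was infected and vertical transmission (p_vert) occurred."""
    infected = 20.0
    for _ in range(years):
        # horizontal infections this year (within-herd + external force)
        force = beta * infected + external
        infected += (herd - infected) * (1.0 - math.exp(-force))
        # annual replacement with vertical transmission to calves
        prev = infected / herd
        infected = infected * (1.0 - repl) + repl * herd * prev * p_vert
    return infected / herd

prev_default = simulate_herd()                 # settles near ~20% prevalence
prev_no_vert = simulate_herd(p_vert=0.0)       # "don't breed from infected dams"
```

    With these illustrative rates the recursion equilibrates near the ~20% field prevalence, and cutting vertical transmission collapses it, mirroring the paper's conclusion that control effectiveness hinges on the residual horizontal sources.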

  8. Exact Critical Exponents for the Antiferromagnetic Quantum Critical Metal in Two Dimensions

    NASA Astrophysics Data System (ADS)

    Schlief, Andres; Lunts, Peter; Lee, Sung-Sik

    2017-04-01

    Unconventional metallic states which do not support well-defined single-particle excitations can arise near quantum phase transitions as strong quantum fluctuations of incipient order parameters prevent electrons from forming coherent quasiparticles. Although antiferromagnetic phase transitions occur commonly in correlated metals, understanding the nature of the strange metal realized at the critical point in layered systems has been hampered by a lack of reliable theoretical methods that take into account strong quantum fluctuations. We present a nonperturbative solution to the low-energy theory for the antiferromagnetic quantum critical metal in two spatial dimensions. Being a strongly coupled theory, it can still be solved reliably in the low-energy limit as quantum fluctuations are organized by a new control parameter that emerges dynamically. We predict the exact critical exponents that govern the universal scaling of physical observables at low temperatures.

  9. AQUATOX Data Sources Documents

    EPA Pesticide Factsheets

    Contains the data sources for parameter values of the AQUATOX model including: a bibliography for the AQUATOX data libraries and the compendia of parameter values for US Army Corps of Engineers models.

  10. Critical points of metal vapors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khomkin, A. L., E-mail: alhomkin@mail.ru; Shumikhin, A. S.

    2015-09-15

    A new method is proposed for calculating the parameters of critical points and binodals for the vapor–liquid (insulator–metal) phase transition in vapors of metals with multielectron valence shells. The method is based on a model developed earlier for the vapors of alkali metals, atomic hydrogen, and exciton gas, proceeding from the assumption that the cohesion determining the basic characteristics of metals under normal conditions is also responsible for their properties in the vicinity of the critical point. It is proposed to calculate the cohesion of multielectron atoms using well-known scaling relations for the binding energy, which are constructed for most metals in the periodic table by processing the results of many numerical calculations. The adopted model allows the parameters of critical points and binodals for the vapor–liquid phase transition in metal vapors to be calculated using published data on the properties of metals under normal conditions. The parameters of critical points have been calculated for a large number of metals and show satisfactory agreement with experimental data for alkali metals and with available estimates for all other metals. Binodals of metals have been calculated for the first time.

  11. Critical mass of public goods and its coevolution with cooperation

    NASA Astrophysics Data System (ADS)

    Shi, Dong-Mei; Wang, Bing-Hong

    2017-07-01

    In this study, the enhancing parameter in the public goods game represented the value of the public goods to the public, and was rescaled to a Fermi-Dirac distribution function of the critical mass. Public goods were divided into two categories, consumable and reusable public goods, and their coevolution with cooperative behavior was studied. We observed that for both types of public goods, cooperation was promoted as the enhancing parameter increased when the value of the critical mass was not very large. An optimal value of the critical mass which led to the best cooperation was identified. We also found that cooperation emerged earlier for reusable public goods, and defection became extinct earlier for consumable public goods. Moreover, we observed that a moderate depreciation rate for public goods resulted in optimal cooperation, and this range became wider as the enhancing parameter increased. The influence of noise on cooperation was studied, and it was shown that cooperation density varied non-monotonically as noise amplitude increased for reusable public goods, whereas it decreased monotonically for consumable public goods. Furthermore, the existence of the optimal critical mass was also identified in three other regular networks. Finally, simulation results were utilized to analyze the provision of public goods in detail.
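
    One plausible reading of the rescaling described above (the record does not give the exact functional form) is an enhancement factor that follows a Fermi-Dirac function of the number of contributors relative to the critical mass; the steepness parameter below is an arbitrary choice:

```python
import math

def effective_enhancement(r_max, n_cooperators, critical_mass, beta=5.0):
    """Enhancement factor rescaled by a Fermi-Dirac function: with few
    contributors the public good is worth little, at the critical mass
    it reaches half value, and it saturates at r_max well above it."""
    return r_max / (1.0 + math.exp(-beta * (n_cooperators - critical_mass)))

half = effective_enhancement(4.0, 3, 3)   # exactly r_max / 2 at the critical mass
```
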

  12. Buckling analysis of planar compression micro-springs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jing; Sui, Li; Shi, Gengchen

    2015-04-15

    Large compression deformation causes micro-springs buckling and loss of load capacity. We analyzed the impact of structural parameters and boundary conditions for planar micro-springs, and obtained the change rules for the two factors that affect buckling. A formula for critical buckling deformation of micro-springs under compressive load was derived based on elastic thin plate theory. Results from this formula were compared with finite element analysis results but these did not always correlate. Therefore, finite element analysis is necessary for micro-spring buckling analysis. We studied the variation of micro-spring critical buckling deformation caused by four structural parameters using ANSYS software under two constraint conditions. The simulation results show that when an x-direction constraint is added, the critical buckling deformation increases by 32.3-297.9%. The critical buckling deformation decreases with increase in micro-spring arc radius or section width and increases with increase in micro-spring thickness or straight beam width. We conducted experiments to confirm the simulation results, and the experimental and simulation trends were found to agree. Buckling analysis of the micro-spring establishes a theoretical foundation for optimizing micro-spring structural parameters and constraint conditions to maximize the critical buckling load.

  13. Bootstrap percolation on spatial networks

    NASA Astrophysics Data System (ADS)

    Gao, Jian; Zhou, Tao; Hu, Yanqing

    2015-10-01

    Bootstrap percolation is a general representation of some networked activation process, which has found applications in explaining many important social phenomena, such as the propagation of information. Inspired by some recent findings on the spatial structure of online social networks, here we study bootstrap percolation on undirected spatial networks, with the probability density function of long-range links’ lengths being a power law with tunable exponent. Setting the size of the giant active component as the order parameter, we find a parameter-dependent critical value for the power-law exponent, above which there is a double phase transition, a mix of a second-order phase transition and a hybrid phase transition with two varying critical points; otherwise there is only a second-order phase transition. We further find a parameter-independent critical value around -1, about which the two critical points for the double phase transition are almost constant. To our surprise, this critical value -1 is equal or very close to the exponent values measured for many real online social networks, including LiveJournal, HP Labs email network, Belgian mobile phone network, etc. This work helps us in better understanding the self-organization of the spatial structure of online social networks, in terms of the effective function for information spreading.
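
    The activation rule of bootstrap percolation can be sketched as follows; this minimal version runs the k-threshold cascade on a given graph and ignores the spatial link-length distribution that is the paper's focus:

```python
def bootstrap_percolation(adj, seeds, k):
    """Bootstrap percolation: seed nodes start active; an inactive node
    activates once at least k of its neighbours are active. Returns the
    final active set. adj maps each node to a list of neighbours."""
    active = set(seeds)
    frontier = list(seeds)
    count = {v: 0 for v in adj}   # active-neighbour counts
    while frontier:
        u = frontier.pop()
        for v in adj[u]:
            if v in active:
                continue
            count[v] += 1
            if count[v] >= k:
                active.add(v)
                frontier.append(v)
    return active

# k = 1 on a path: activation sweeps the whole graph from one seed.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
full = bootstrap_percolation(path, {0}, k=1)
# k = 2 on a 5-cycle with two adjacent seeds: every inactive node has
# at most one active neighbour, so the cascade stalls immediately.
cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
stalled = bootstrap_percolation(cycle, {0, 1}, k=2)
```

    Whether the final active set spans the network or stalls, as the link-length exponent varies, is exactly the order-parameter behaviour the paper analyzes.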

  14. New perspectives on self-similarity for shallow thrust earthquakes

    NASA Astrophysics Data System (ADS)

    Denolle, Marine A.; Shearer, Peter M.

    2016-09-01

    Scaling of dynamic rupture processes from small to large earthquakes is critical to seismic hazard assessment. Large subduction earthquakes are typically remote, and we mostly rely on teleseismic body waves to extract information on their slip rate functions. We estimate the P wave source spectra of 942 thrust earthquakes of magnitude Mw 5.5 and above by carefully removing wave propagation effects (geometrical spreading, attenuation, and free surface effects). The conventional spectral model of a single corner frequency and high-frequency falloff rate does not explain our data, and we instead introduce a double-corner-frequency model, modified from the Haskell propagating source model, with an intermediate falloff of f^-1. The first corner frequency f1 relates closely to the source duration T1; its scaling follows M0 ∝ T1^3 for Mw < 7.5 and changes to M0 ∝ T1^2 for larger earthquakes. An elliptical rupture geometry better explains the observed scaling than circular crack models. The second time scale T2 varies more weakly with moment (M0 ∝ T2^5), varies weakly with depth, and can be interpreted either as an expression of starting and stopping phases, as a pulse-like rupture, or as a dynamic weakening process. Estimated stress drops and scaled energy (the ratio of radiated energy to seismic moment) are both invariant with seismic moment. However, the observed earthquakes are not self-similar because their source geometry and spectral shapes vary with earthquake size. We find and map global variations of these source parameters.
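
    One simple realization of a double-corner-frequency spectrum with an intermediate f^-1 falloff multiplies two single-corner factors; the study's exact parameterization may differ, and the corner frequencies below are arbitrary:

```python
def double_corner_spectrum(f, moment, f1, f2):
    """Source spectrum with two corner frequencies f1 < f2: flat at low
    frequency, falling as f^-1 between f1 and f2, and as f^-2 above f2."""
    return moment / ((1.0 + (f / f1) ** 2) ** 0.5 * (1.0 + (f / f2) ** 2) ** 0.5)

# Check the three asymptotic regimes with f1 = 0.1 Hz, f2 = 1.0 Hz:
plateau = double_corner_spectrum(1e-4, 1.0, 0.1, 1.0)        # ~ moment
mid_slope = (double_corner_spectrum(0.3, 1.0, 0.1, 1.0)
             / double_corner_spectrum(0.6, 1.0, 0.1, 1.0))   # ~ 2 (f^-1)
high_slope = (double_corner_spectrum(10.0, 1.0, 0.1, 1.0)
              / double_corner_spectrum(20.0, 1.0, 0.1, 1.0)) # ~ 4 (f^-2)
```
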

  15. Effect of Electron Seeding on Experimentally Measured Multipactor Discharge Threshold

    NASA Astrophysics Data System (ADS)

    Noland, Jonathan; Graves, Timothy; Lemon, Colby; Looper, Mark; Farkas, Alex

    2012-10-01

    Multipactor is a vacuum phenomenon in which electrons, moving in resonance with an externally applied electric field, impact material surfaces. If the number of secondary electrons created per primary electron impact averages more than unity, the resonant interaction can lead to an electron avalanche. Multipactor is a generally undesirable phenomenon, as it can cause local heating, absorb power, or cause detuning of RF circuits. In order to increase the probability of multipactor initiation, test facilities often employ various seeding sources such as radioactive sources (Cesium 137, Strontium 90), electron guns, or photon sources. Even with these sources, the voltage for multipactor initiation is not certain as parameters such as material type, RF pulse length, and device wall thickness can all affect seed electron flux and energy in critical gap regions, and hence the measured voltage threshold. This study investigates the effects of seed electron source type (e.g., photons versus beta particles), material type, gap size, and RF pulse length variation on multipactor threshold. In addition to the experimental work, GEANT4 simulations will be used to estimate the production rate of low energy electrons (< 5 keV) by high energy electrons and photons. A comparison of the experimental fluxes to the typical energetic photon and particle fluxes experienced by spacecraft in various orbits will also be made. Initial results indicate that for a simple, parallel plate device made of aluminum, there is no threshold variation (with seed electrons versus with no seed electrons) under continuous-wave RF exposure.

  16. The use of source memory to identify one's own episodic confusion errors.

    PubMed

    Smith, S M; Tindell, D R; Pierce, B H; Gilliland, T R; Gerkens, D R

    2001-03-01

    In 4 category cued recall experiments, participants falsely recalled nonlist common members, a semantic confusion error. Errors were more likely if critical nonlist words were presented on an incidental task, causing source memory failures called episodic confusion errors. Participants could better identify the source of falsely recalled words if they had deeply processed the words on the incidental task. For deep but not shallow processing, participants could reliably include or exclude incidentally shown category members in recall. The illusion that critical items actually appeared on categorized lists was diminished but not eradicated when participants identified episodic confusion errors post hoc among their own recalled responses; participants often believed that critical items had been on both the incidental task and the study list. Improved source monitoring can potentially mitigate episodic (but not semantic) confusion errors.

  17. Reconstructing gravitational wave source parameters via direct comparisons to numerical relativity I: Method

    NASA Astrophysics Data System (ADS)

    Lange, Jacob; O'Shaughnessy, Richard; Healy, James; Lousto, Carlos; Shoemaker, Deirdre; Lovelace, Geoffrey; Scheel, Mark; Ossokine, Serguei

    2016-03-01

    In this talk, we describe a procedure to reconstruct the parameters of sufficiently massive coalescing compact binaries via direct comparison with numerical relativity simulations. For sufficiently massive sources, existing numerical relativity simulations are long enough to cover the observationally accessible part of the signal. Due to the signal's brevity, the posterior parameter distribution it implies is broad, simple, and easily reconstructed from information gained by comparing to only the sparse sample of existing numerical relativity simulations. We describe how followup simulations can corroborate and improve our understanding of a detected source. Since our method can include all physics provided by full numerical relativity simulations of coalescing binaries, it provides a valuable complement to alternative techniques which employ approximations to reconstruct source parameters. Supported by NSF Grant PHY-1505629.

  18. Statistics of the work done on a quantum critical system by quenching a control parameter.

    PubMed

    Silva, Alessandro

    2008-09-19

    We study the statistics of the work done on a quantum critical system by quenching a control parameter in the Hamiltonian. We elucidate the relation between the probability distribution of the work and the Loschmidt echo, a quantity emerging usually in the context of dephasing. Using this connection we characterize the statistics of the work done on a quantum Ising chain by quenching locally or globally the transverse field. We show that for local quenches starting at criticality the probability distribution of the work displays an interesting edge singularity.

  19. Consensus statement with recommendations on active surveillance inclusion criteria and definition of progression in men with localized prostate cancer: the critical role of the pathologist.

    PubMed

    Montironi, Rodolfo; Hammond, Elizabeth H; Lin, Daniel W; Gore, John L; Srigley, John R; Samaratunga, Hema; Egevad, Lars; Rubin, Mark A; Nacey, John; Klotz, Laurence; Sandler, Howard; Zietman, Anthony L; Holden, Stuart; Humphrey, Peter A; Evans, Andrew J; Delahunt, Brett; McKenney, Jesse K; Berney, Daniel; Wheeler, Thomas M; Chinnaiyan, Arul; True, Lawrence; Knudsen, Beatrice; Epstein, Jonathan I; Amin, Mahul B

    2014-12-01

    Active surveillance (AS) is an important management option for men with low-risk, clinically localized prostate cancer. The clinical parameters for patient selection and the definition of progression for AS protocols are evolving as data from several large cohorts mature. Vital to this process is the critical role pathologic parameters play in identifying appropriate candidates for AS. These findings need to be reproducible and consistently reported by surgical pathologists. This report highlights the importance of accurate pathology reporting as a critical component of these protocols.

  20. Earthquake source parameters determined by the SAFOD Pilot Hole seismic array

    USGS Publications Warehouse

    Imanishi, K.; Ellsworth, W.L.; Prejean, S.G.

    2004-01-01

    We estimate the source parameters of microearthquakes by jointly analyzing seismograms recorded by the 32-level, 3-component seismic array installed in the SAFOD Pilot Hole. We applied an inversion procedure to displacement amplitude spectra to estimate the spectral parameters of the omega-square model (spectral level and corner frequency) and Q. Because we expect spectral parameters and Q to vary slowly with depth in the well, we impose a smoothness constraint on those parameters as a function of depth using a linear first-difference operator. This method correctly resolves corner frequency and Q, which leads to a more accurate estimation of source parameters than can be obtained from single sensors. The stress drop of one example of the SAFOD target repeating earthquake falls in the range of typical tectonic earthquakes. Copyright 2004 by the American Geophysical Union.
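    The core of the single-spectrum problem, before the array-wide smoothness constraint is added, is fitting an attenuated omega-square model to a displacement spectrum. A minimal sketch with a coarse grid search over corner frequency and Q, on synthetic data with assumed values (all numbers illustrative):

    ```python
    import numpy as np

    def omega_square(f, omega0, fc, Q, t_travel):
        """Omega-square displacement spectrum with whole-path attenuation
        exp(-pi f t / Q); the paper's joint inversion and depth smoothing
        are not reproduced here."""
        return omega0 * np.exp(-np.pi * f * t_travel / Q) / (1 + (f / fc) ** 2)

    # Synthetic spectrum with known parameters, then a coarse grid search.
    f = np.linspace(1, 100, 200)                     # Hz
    true = omega_square(f, omega0=1.0, fc=20.0, Q=200.0, t_travel=0.5)

    best = None
    for fc in np.arange(5, 50, 1.0):
        for Q in np.arange(50, 500, 10.0):
            model = omega_square(f, 1.0, fc, Q, 0.5)
            misfit = np.sum((np.log(model) - np.log(true)) ** 2)
            if best is None or misfit < best[0]:
                best = (misfit, fc, Q)               # recovers fc=20, Q=200
    ```

    On noisy single-sensor data, corner frequency and Q trade off strongly in such fits; that trade-off is exactly what the joint, depth-smoothed array inversion described above is designed to resolve.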

  1. Calculation of the wetting parameter from a cluster model in the framework of nanothermodynamics.

    PubMed

    García-Morales, V; Cervera, J; Pellicer, J

    2003-06-01

    The critical wetting parameter omega(c) determines the strength of interfacial fluctuations in critical wetting transitions. In this Brief Report, we calculate omega(c) from considerations on critical liquid clusters inside a vapor phase. The starting point is a cluster model developed by Hill and Chamberlin in the framework of nanothermodynamics [Proc. Natl. Acad. Sci. USA 95, 12779 (1998)]. Our calculations yield results for omega(c) between 0.52 and 1.00, depending on the degrees of freedom considered. The findings are in agreement with previous experimental results and give an idea of the universal dynamical behavior of the clusters when approaching criticality. We suggest that this behavior is a combination of translation and vortex rotational motion (omega(c)=0.84).

  2. The interprocess NIR sampling as an alternative approach to multivariate statistical process control for identifying sources of product-quality variability.

    PubMed

    Marković, Snežana; Kerč, Janez; Horvat, Matej

    2017-03-01

    We present a new approach to identifying sources of variability within a manufacturing process by NIR measurements of samples of intermediate material after each consecutive unit operation (interprocess NIR sampling technique). In addition, we summarize the development of a multivariate statistical process control (MSPC) model for the production of an enteric-coated pellet product of the proton-pump inhibitor class. By developing provisional NIR calibration models, the identification of critical process points yields results comparable to the established MSPC modeling procedure. Both approaches are shown to lead to the same conclusion, identifying parameters of extrusion/spheronization and characteristics of lactose that have the greatest influence on the end-product's enteric coating performance. The proposed approach enables quicker and easier identification of variability sources during the manufacturing process, especially in cases when historical process data are not readily available. In the presented case, changes in lactose characteristics influenced the performance of the extrusion/spheronization process step. The pellet cores produced using one (less suitable) lactose source were on average larger and more fragile, leading to breakage of the cores during subsequent fluid bed operations. These results were confirmed by additional experimental analyses illuminating the underlying mechanism of fracture of oblong pellets during the pellet coating process, which leads to compromised film coating.
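    A common core of MSPC models is a multivariate control statistic computed against in-control reference batches. The sketch below shows a plain Hotelling T^2 check; the PCA score compression typically used in practice, and anything specific to the paper's NIR calibration models, is omitted, and all data are illustrative.

    ```python
    import numpy as np

    def hotelling_t2(X_ref, x_new):
        """Hotelling T^2 of a new observation against a reference
        (in-control) batch set -- the core statistic behind many
        MSPC control charts."""
        mu = X_ref.mean(axis=0)
        cov = np.cov(X_ref, rowvar=False)
        diff = x_new - mu
        return float(diff @ np.linalg.inv(cov) @ diff)

    rng = np.random.default_rng(0)
    X_ref = rng.normal(size=(50, 3))        # 50 in-control batches, 3 parameters
    t2_in = hotelling_t2(X_ref, X_ref.mean(axis=0))        # at the mean: T^2 = 0
    t2_out = hotelling_t2(X_ref, X_ref.mean(axis=0) + 5.0)  # shifted batch flags high
    ```

    Batches whose T^2 exceeds a control limit are flagged, and the contributing variables (here, hypothetically, extrusion/spheronization parameters or lactose attributes) are then examined.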

  3. Microelectromechanical Systems (MEMS) Broadband Light Source Developed

    NASA Technical Reports Server (NTRS)

    Tuma, Margaret L.

    2003-01-01

    A miniature, low-power broadband light source has been developed for aerospace applications, including calibrating spectrometers and powering miniature optical sensors. The initial motivation for this research was based on flight tests of a Fabry-Perot fiberoptic temperature sensor system used to detect aircraft engine exhaust gas temperature. Although the feasibility of the sensor system was proven, the commercial light source optically powering the device was identified as a critical component requiring improvement. Problems with the light source included a long stabilization time (approximately 1 hr), a large amount of heat generation, and a large input electrical power (6.5 W). Thus, we developed a new light source to enable the use of broadband optical sensors in aerospace applications. Semiconductor chip-based light sources, such as lasers and light-emitting diodes, have a relatively narrow range of emission wavelengths in comparison to incandescent sources. Incandescent light sources emit broadband radiation from visible to infrared wavelengths; the intensity at each wavelength is determined by the filament temperature and the materials chosen for the filament and the lamp window. However, present commercial incandescent light sources are large in size and inefficient, requiring several watts of electrical power to obtain the desired optical power, and they emit a large percentage of the input power as heat that must be dissipated. The miniature light source, developed jointly by the NASA Glenn Research Center, the Jet Propulsion Laboratory, and the Lighting Innovations Institute, requires one-fifth the electrical input power of some commercial light sources, while providing similar output light power that is easily coupled to an optical fiber. Furthermore, it is small, rugged, and lightweight. Microfabrication technology was used to reduce the size, weight, power consumption, and potential cost, parameters critical to future aerospace applications.
This chip-based light source has the potential for monolithic fabrication with on-chip drive electronics. Other uses for these light sources are in systems for vehicle navigation, remote sensing applications such as monitoring bridges for stress, calibration sources for spectrometers, light sources for space sensors, display lighting, addressable arrays, and industrial plant monitoring. Two methods for filament fabrication are being developed: wet-chemical etching and laser ablation. Both yield a 25-μm-thick tungsten spiral filament. The proof-of-concept filament shown was fabricated with the wet etch method. Then it was tested by heating it in a vacuum chamber using about 1.25 W of electrical power; it generated bright, blackbody radiation at approximately 2650 K. The filament was packaged in Glenn's clean-room facilities. This design uses three chips vacuum-sealed with glass tape. The bottom chip consists of a reflective film deposited on silicon, the middle chip contains a tungsten filament bonded to silicon, and the top layer is a transparent window. Lifetime testing on the package will begin shortly. The emitted optical power is expected to be approximately 1.0 W with the spectral peak at 1.1 μm.
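The reported filament numbers can be sanity-checked against standard blackbody relations: Wien's displacement law puts the spectral peak of a 2650 K emitter near 1.1 μm, matching the value quoted above.

```python
# Check the reported filament figures against blackbody relations.
WIEN_B = 2.898e-3      # m*K, Wien displacement constant
SIGMA = 5.670e-8       # W m^-2 K^-4, Stefan-Boltzmann constant

T = 2650.0             # reported filament temperature, K
peak_um = WIEN_B / T * 1e6   # ~1.09 um, consistent with the ~1.1 um peak above
flux = SIGMA * T ** 4        # ~2.8e6 W/m^2 radiated flux; together with the
                             # measured power this constrains the emitting area
```

This kind of one-line consistency check is a useful habit when transcribed units are suspect, as with the μm values restored above.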

  4. Overcoming the sign problem at finite temperature: Quantum tensor network for the orbital eg model on an infinite square lattice

    NASA Astrophysics Data System (ADS)

    Czarnik, Piotr; Dziarmaga, Jacek; Oleś, Andrzej M.

    2017-07-01

    The variational tensor network renormalization approach to two-dimensional (2D) quantum systems at finite temperature is applied to a model suffering the notorious quantum Monte Carlo sign problem—the orbital eg model with spatially highly anisotropic orbital interactions. Coarse graining of the tensor network along the inverse temperature β yields a numerically tractable 2D tensor network representing the Gibbs state. Its bond dimension D—limiting the amount of entanglement—is a natural refinement parameter. Increasing D, we obtain a converged order parameter and its linear susceptibility close to the critical point. They confirm the existence of a finite order parameter below the critical temperature Tc, provide a numerically exact estimate of Tc, and give critical exponents within 1% of those of the 2D Ising universality class.

  5. Numerical Simulation of Several Tectonic Tsunami Sources at the Caribbean Basin

    NASA Astrophysics Data System (ADS)

    Chacon-Barrantes, S. E.; Lopez, A. M.; Macias, J.; Zamora, N.; Moore, C. W.; Llorente Isidro, M.

    2016-12-01

    The Tsunami Hazard Assessment Working Group (WG2) of the Intergovernmental Coordination Group for the Tsunami and Other Coastal Hazards Early Warning System for the Caribbean and Adjacent Regions (ICG/CARIBE-EWS) has been tasked to identify tsunami sources for the Caribbean region and evaluate their effects along Caribbean coasts. A list of tectonic sources was developed and presented at the Fall 2015 AGU meeting, and the WG2 is currently working on a list of non-tectonic sources. In addition, three Experts Meetings were held in 2016 to define worst-case, most credible scenarios for southern Hispaniola and Central America. The WG2 has been tasked to simulate these scenarios to provide an estimate of the resulting effects on coastal areas within the Caribbean. In this study we simulated tsunamis with two leading numerical models (NEOWAVE and Tsunami-HySEA) to compare results among them and report on the consequences for the Caribbean region should a tectonically induced tsunami originate from any of these postulated sources. The considered sources are located offshore Central America, at the North Panamá Deformed Belt (NPDB), at the South Caribbean Deformed Belt (SCDB) and around La Hispaniola Island. Results obtained in this study are critical for developing a catalog of scenarios for future CaribeWave exercises, and can serve ICG/CARIBE-EWS member states as input for modeling tsunami inundation at their coastal locations. The resulting inundation parameters are a further step toward producing tsunami evacuation maps and developing plans and procedures to increase tsunami awareness and preparedness within the Caribbean.

  6. Air drying modelling of Mastocarpus stellatus seaweed a source of hybrid carrageenan

    NASA Astrophysics Data System (ADS)

    Arufe, Santiago; Torres, Maria D.; Chenlo, Francisco; Moreira, Ramon

    2018-01-01

    Water sorption isotherms from 5 up to 65 °C and air drying kinetics at 35, 45 and 55 °C of Mastocarpus stellatus seaweed were determined. Experimental sorption data were modelled using the BET and Oswin models. A four-parameter model, based on the Oswin model, was proposed to estimate equilibrium moisture content as a function of water activity and temperature simultaneously. Drying experiments showed that the water removal rate increased significantly with temperature from 35 to 45 °C, but at higher temperatures the drying rate remained constant. Some chemical modifications of the hybrid carrageenans present in the seaweed may be responsible for this unexpected thermal trend. Experimental drying data were modelled using the two-parameter Page model (n, k). The Page parameter n was constant (1.31 ± 0.10) at the tested temperatures, but k varied significantly with drying temperature (from 18.5 ± 0.2 × 10^-3 min^-n at 35 °C up to 28.4 ± 0.8 × 10^-3 min^-n at 45 and 55 °C). The drying experiments allowed determination of the critical moisture content of the seaweed (0.87 ± 0.06 kg water (kg d.b.)^-1). A diffusional model considering slab geometry was employed to determine the effective diffusion coefficient of water during the falling rate period at different temperatures.
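    The Page model quoted above is simple to evaluate directly. A small sketch using the reported parameter values, with the moisture ratio MR = exp(-k t^n) and t in minutes:

    ```python
    import math

    def page_moisture_ratio(t_min, k, n):
        """Two-parameter Page thin-layer drying model: MR = exp(-k * t^n)."""
        return math.exp(-k * t_min ** n)

    # Parameter values reported above (n dimensionless, k in min^-n).
    n = 1.31
    k35, k45 = 18.5e-3, 28.4e-3
    mr35 = page_moisture_ratio(60.0, k35, n)   # after 1 h at 35 C
    mr45 = page_moisture_ratio(60.0, k45, n)   # after 1 h at 45 C
    ```

    With these values the moisture ratio after one hour is markedly lower at 45 °C than at 35 °C, consistent with the reported increase in drying rate between those temperatures.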

  7. Baseline Computational Fluid Dynamics Methodology for Longitudinal-Mode Liquid-Propellant Rocket Combustion Instability

    NASA Technical Reports Server (NTRS)

    Litchford, R. J.

    2005-01-01

    A computational method for the analysis of longitudinal-mode liquid rocket combustion instability has been developed based on the unsteady, quasi-one-dimensional Euler equations where the combustion process source terms were introduced through the incorporation of a two-zone, linearized representation: (1) A two-parameter collapsed combustion zone at the injector face, and (2) a two-parameter distributed combustion zone based on a Lagrangian treatment of the propellant spray. The unsteady Euler equations in inhomogeneous form retain full hyperbolicity and are integrated implicitly in time using second-order, high-resolution, characteristic-based, flux-differencing spatial discretization with Roe-averaging of the Jacobian matrix. This method was initially validated against an analytical solution for nonreacting, isentropic duct acoustics with specified admittances at the inflow and outflow boundaries. For small amplitude perturbations, numerical predictions for the amplification coefficient and oscillation period were found to compare favorably with predictions from linearized small-disturbance theory as long as the grid exceeded a critical density (100 nodes/wavelength). The numerical methodology was then exercised on a generic combustor configuration using both collapsed and distributed combustion zone models with a short nozzle admittance approximation for the outflow boundary. In these cases, the response parameters were varied to determine stability limits defining resonant coupling onset.

  8. Integration of Mahalanobis-Taguchi system and traditional cost accounting for remanufacturing crankshaft

    NASA Astrophysics Data System (ADS)

    Abu, M. Y.; Norizan, N. S.; Rahman, M. S. Abd

    2018-04-01

    Remanufacturing is a sustainable strategy that transforms end-of-life products to as-new performance, with a warranty equal to or better than that of the original product. To quantify the advantages of this strategy, every process must be optimized to reach the ultimate goal and reduce the waste generated. The aim of this work is to evaluate the criticality of parameters of end-of-life crankshafts based on Taguchi's orthogonal array, and then to estimate the cost using traditional cost accounting with the critical parameters taken into account. By implementing the optimization, the remanufacturer produced at lower cost and with less waste, with higher potential to gain profit. The Mahalanobis-Taguchi System proved to be a powerful optimization method for revealing the criticality of parameters. When the method was applied to the MAN engine model, 5 of the 6 crankpins were found to be critical and to require grinding, while no changes were needed for the Caterpillar engine model. Accordingly, the cost per unit for the MAN engine model changed from MYR1401.29 to MYR1251.29, while the Caterpillar engine model saw no change because no parameters were found to be critical. Therefore, by integrating optimization and costing in the remanufacturing process, a better decision can be reached after observing the potential profit to be gained. The significance of the output is demonstrated through promoting sustainability by reducing the re-melting of damaged parts, ensuring the consistent benefit of returned cores.
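    The Mahalanobis distance at the heart of the Mahalanobis-Taguchi System measures how far an observation lies from a healthy reference group in correlated-variable space. A minimal sketch with illustrative data; the paper's Taguchi orthogonal-array screening of parameters is not reproduced.

    ```python
    import numpy as np

    def mahalanobis_distance(X_ref, x):
        """Scaled Mahalanobis distance of one observation against a healthy
        reference group, as used in the Mahalanobis-Taguchi System (MTS);
        healthy items score near 1, abnormal items much higher."""
        mu = X_ref.mean(axis=0)
        std = X_ref.std(axis=0, ddof=1)
        corr = np.corrcoef(X_ref, rowvar=False)   # correlation of reference group
        z = (x - mu) / std                        # standardize with reference stats
        return float(z @ np.linalg.inv(corr) @ z) / len(z)

    rng = np.random.default_rng(1)
    X_healthy = rng.normal(size=(40, 4))          # e.g. 4 crankpin measurements
    md_typical = mahalanobis_distance(X_healthy, X_healthy[0])
    md_abnormal = mahalanobis_distance(X_healthy, X_healthy.mean(axis=0) + 4.0)
    ```

    In an MTS workflow, items (here, hypothetically, crankpins) with a high distance are flagged as critical, and the orthogonal-array step then identifies which measured parameters drive that distance.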

  9. Critical illumination condenser for x-ray lithography

    DOEpatents

    Cohen, S.J.; Seppala, L.G.

    1998-04-07

    A critical illumination condenser system is disclosed, particularly adapted for use in extreme ultraviolet (EUV) projection lithography based on a ring field imaging system and a laser produced plasma source. The system uses three spherical mirrors and is capable of illuminating the extent of the mask plane by scanning either the primary mirror or the laser plasma source. The angles of radiation incident upon each mirror of the critical illumination condenser vary by less than eight (8) degrees. For example, the imaging system in which the critical illumination condenser is utilized has a 200 μm source and requires a magnification of 26. The three spherical mirror system constitutes a two mirror inverse Cassegrain, or Schwarzschild configuration, with a 25% area obstruction (50% linear obstruction). The third mirror provides the final pupil and image relay. The mirrors include a multilayer reflective coating which is reflective over a narrow bandwidth. 6 figs.

  10. Critical illumination condenser for x-ray lithography

    DOEpatents

    Cohen, Simon J.; Seppala, Lynn G.

    1998-01-01

    A critical illumination condenser system, particularly adapted for use in extreme ultraviolet (EUV) projection lithography based on a ring field imaging system and a laser produced plasma source. The system uses three spherical mirrors and is capable of illuminating the extent of the mask plane by scanning either the primary mirror or the laser plasma source. The angles of radiation incident upon each mirror of the critical illumination condenser vary by less than eight (8) degrees. For example, the imaging system in which the critical illumination condenser is utilized has a 200 μm source and requires a magnification of 26×. The three spherical mirror system constitutes a two mirror inverse Cassegrain, or Schwarzschild configuration, with a 25% area obstruction (50% linear obstruction). The third mirror provides the final pupil and image relay. The mirrors include a multilayer reflective coating which is reflective over a narrow bandwidth.

  11. Pharmaceutical quality by design: product and process development, understanding, and control.

    PubMed

    Yu, Lawrence X

    2008-04-01

    The purpose of this paper is to discuss pharmaceutical Quality by Design (QbD) and describe how it can be used to ensure pharmaceutical quality. The QbD was described and some of its elements identified. Process parameters and quality attributes were identified for each unit operation during manufacture of solid oral dosage forms. The use of QbD was contrasted with the evaluation of product quality by testing alone. QbD is a systematic approach to pharmaceutical development. It means designing and developing formulations and manufacturing processes to ensure predefined product quality. Some of the QbD elements include: defining the target product quality profile; designing the product and manufacturing processes; identifying critical quality attributes, process parameters, and sources of variability; and controlling manufacturing processes to produce consistent quality over time. Using QbD, pharmaceutical quality is assured by understanding and controlling formulation and manufacturing variables. Product testing confirms the product quality. Implementation of QbD will enable transformation of the chemistry, manufacturing, and controls (CMC) review of abbreviated new drug applications (ANDAs) into a science-based pharmaceutical quality assessment.

  12. Assessment of On-site sanitation system on local groundwater regime in an alluvial aquifer

    NASA Astrophysics Data System (ADS)

    Quamar, Rafat; Jangam, C.; Veligeti, J.; Chintalapudi, P.; Janipella, R.

    2017-12-01

    The present study is an attempt to assess the impact of on-site sanitation systems on groundwater sources in their vicinity. The study was undertaken in the Agra city of the Yamuna sub-basin. In this context, three sampling sites, namely Pandav Nagar, Ayodhya Kunj and Laxmi Nagar, were selected. The groundwater samples were analyzed for major cations, anions and faecal coliform. The critical parameters chloride, nitrate and faecal coliform were considered to assess the impact of the on-site sanitation systems. The analytical results showed that, except for chloride, most of the samples exceeded the Bureau of Indian Standards limits for drinking water for the other analyzed parameters, i.e., nitrate and faecal coliform, in the first two sites. In Laxmi Nagar, except for faecal coliform, all the samples were below the BIS limits. In all three sites, faecal coliform was found in the majority of the samples. The present study indicates that contamination of groundwater in an alluvial setting is less than in hard rock settings where on-site sanitation systems have been implemented.
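    A screening of samples against drinking-water limits, as described above, can be expressed in a few lines. The limit values below are the commonly cited BIS IS 10500 acceptable limits (chloride 250 mg/L, nitrate 45 mg/L, faecal coliform absent per 100 mL); they should be confirmed against the current standard, and the sample values are invented for illustration.

    ```python
    # Acceptable limits (mg/L; faecal coliform as count per 100 mL, limit
    # "absent") -- assumed BIS IS 10500 values, verify before real use.
    LIMITS = {"chloride": 250.0, "nitrate": 45.0, "faecal_coliform": 0.0}

    def exceedances(sample):
        """Return the parameters of one sample that exceed their limit."""
        return [p for p, v in sample.items() if v > LIMITS[p]]

    sample = {"chloride": 120.0, "nitrate": 62.0, "faecal_coliform": 4.0}
    flags = exceedances(sample)   # nitrate and faecal coliform exceed limits
    ```

    Tabulating `exceedances` over all samples per site gives exactly the kind of site-wise summary reported in the abstract.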

  13. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng Jinchao; Qin Chenghu; Jia Kebin

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an l2 data fidelity term plus a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach only requires computation of the residual and regularized solution norms. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification.
Simulation experiments were used to illustrate why multispectral data were used rather than monochromatic data. Furthermore, the study conducted using an adaptive regularization parameter demonstrated our ability to accurately localize the bioluminescent source. With the adaptively estimated regularization parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, and the distance to the real source was 0.63 mm. The results of the dual-source experiments further showed that our algorithm could localize the bioluminescent sources accurately. The authors then presented experimental evidence that the proposed algorithm exhibited greater computational efficiency than the heuristic method. The effectiveness of the new algorithm was also confirmed by comparing it with the L-curve method. Furthermore, various initial guesses for the regularization parameter were used to illustrate the convergence of our algorithm. Finally, an in vivo mouse experiment further illustrated the effectiveness of the proposed algorithm. Conclusions: Utilizing numerical, physical phantom and in vivo examples, we demonstrated that the bioluminescent sources could be reconstructed accurately with automatically chosen regularization parameters. The proposed algorithm exhibited superior performance to both the heuristic regularization parameter choice method and the L-curve method in terms of computational speed and localization error.
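The trade-off that any regularization parameter choice rule (model function, L-curve, or heuristic) must navigate can be seen in a generic Tikhonov example: as the parameter grows, the residual norm rises while the solution norm falls. This is an illustration on an arbitrary ill-conditioned system only, not the paper's BLT reconstruction.

```python
import numpy as np

# Tikhonov-regularized least squares: min ||Ax - b||^2 + alpha * ||x||^2
rng = np.random.default_rng(2)
A = np.vander(np.linspace(0, 1, 20), 6, increasing=True)  # ill-conditioned
x_true = np.ones(6)
b = A @ x_true + 0.01 * rng.normal(size=20)               # noisy data

def tikhonov(A, b, alpha):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

alphas = [1e-6, 1e-2, 1e2]
res = [np.linalg.norm(A @ tikhonov(A, b, a) - b) for a in alphas]
sol = [np.linalg.norm(tikhonov(A, b, a)) for a in alphas]
# Residual norm grows and solution norm shrinks as alpha increases;
# parameter choice rules pick alpha by balancing these two quantities.
```

The model function approach of the abstract iterates on exactly these two norms; the L-curve method it is compared against plots them against each other and picks the corner.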

  14. Control over dark current densities and cutoff wavelengths of GaAs/AlGaAs QWIP grown by multi-wafer MBE reactor

    NASA Astrophysics Data System (ADS)

    Roodenko, K.; Choi, K. K.; Clark, K. P.; Fraser, E. D.; Vargason, K. W.; Kuo, J.-M.; Kao, Y.-C.; Pinsukanjana, P. R.

    2016-09-01

    The performance of quantum well infrared photodetector (QWIP) device parameters such as detector cutoff wavelength and dark current density depends strongly on the quality and control of the epitaxial material growth. In this work, we report on a methodology to precisely control these critical material parameters for long wavelength infrared (LWIR) GaAs/AlGaAs QWIP epi wafers grown by multi-wafer production molecular beam epitaxy (MBE). Critical growth parameters such as quantum well (QW) thickness, AlGaAs composition and QW doping level are discussed.

  15. Developing Students' Critical Reasoning About Online Health Information: A Capabilities Approach

    NASA Astrophysics Data System (ADS)

    Wiblom, Jonna; Rundgren, Carl-Johan; Andrée, Maria

    2017-11-01

    The internet has become a main source for health-related information retrieval. In addition to information published by medical experts, individuals share their personal experiences and narratives on blogs and social media platforms. Our increasing need to confront and make meaning of various sources and conflicting health information has changed the ways in which critical reasoning is relevant to science education. This study addresses how opportunities for students to develop and practice their capabilities to critically approach online health information can be created in science education. Together with two upper secondary biology teachers, we carried out a design-based study. The participating students were given an online retrieval task that included a search and evaluation of health-related online sources. After a few lessons, the students were introduced to an evaluation tool designed to support critical evaluation of health information online. Using qualitative content analysis, four themes could be discerned in the audio and video recordings of student interactions when engaging with the task. Each theme illustrates the different ways in which critical reasoning became practiced in the student groups. Without the evaluation tool, the students struggled to overview the vast amount of information and negotiate trustworthiness. Guided by the evaluation tool, critical reasoning was practiced to handle source subjectivity and to sift out scientific information only. Rather than being a generic skill transferable across contexts, students' critical reasoning was conditioned by the multi-dimensional nature of health issues, the blend of various contexts and the shift of purpose constituted by the students.

  16. Desktop Systems for Manufacturing Carbon Nanotube Films by Chemical Vapor Deposition

    DTIC Science & Technology

    2007-06-01

    existing low cost tube furnace designs limit the researcher’s ability to fully separate critical reaction parameters such as temperature and flow...Often heated using an external resistive heater coil, a typical configuration, shown in Figure 4, might place a tube made of a non- reactive ...researcher’s ability to fully separate critical parameters such as temperature and flow profiles. Additionally, the use of heating elements external to

  17. Evaluation of FEM engineering parameters from insitu tests

    DOT National Transportation Integrated Search

    2001-12-01

    The study looked critically at insitu test methods (SPT, CPT, DMT, and PMT) as a means for developing finite element constitutive model input parameters. The first phase of the study examined insitu test derived parameters with laboratory triaxial te...

  18. Critical phenomena at the threshold of immediate merger in binary black hole systems: The extreme mass ratio case

    NASA Astrophysics Data System (ADS)

    Gundlach, Carsten; Akcay, Sarp; Barack, Leor; Nagar, Alessandro

    2012-10-01

    In numerical simulations of black hole binaries, Pretorius and Khurana [Classical Quantum Gravity 24, S83 (2007), doi:10.1088/0264-9381/24/12/S07] have observed critical behavior at the threshold between scattering and immediate merger. The number of orbits scales as n ≃ -γ ln|p - p*| along any one-parameter family of initial data such that the threshold is at p = p*. Hence, they conjecture that in ultrarelativistic collisions almost all the kinetic energy can be converted into gravitational waves if the impact parameter is fine-tuned to the threshold. As a toy model for the binary, they consider the geodesic motion of a test particle in a Kerr black hole spacetime, where the unstable circular geodesics play the role of critical solutions, and calculate the critical exponent γ. Here, we incorporate radiation reaction into this model using the self-force approximation. The critical solution now evolves adiabatically along a sequence of unstable circular geodesic orbits under the effect of the self-force. We confirm that almost all the initial energy and angular momentum are radiated on the critical solution. Our calculation suggests that, even for infinite initial energy, this happens over a finite number of orbits given by n∞ ≃ 0.41/η, where η is the (small) mass ratio. We derive expressions for the time spent on the critical solution, the number of orbits, and the radiated energy as functions of the initial energy and impact parameter.
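    The two scaling relations quoted in the abstract are simple enough to evaluate directly. The sketch below merely encodes them; the values of γ and p* used in the example are illustrative, not those of the paper.

```python
import math

def orbits_near_threshold(p, p_star, gamma):
    """Number of orbits n ~ -gamma * ln|p - p*| near the scattering/merger threshold."""
    return -gamma * math.log(abs(p - p_star))

def orbits_infinite_energy(eta):
    """Orbits spent on the critical solution at infinite initial energy, n_inf ~ 0.41/eta."""
    return 0.41 / eta

# Each extra decade of fine-tuning in |p - p*| buys gamma*ln(10) more orbits:
n_coarse = orbits_near_threshold(1.001, 1.0, gamma=0.4)    # |p - p*| = 1e-3
n_fine = orbits_near_threshold(1.00001, 1.0, gamma=0.4)    # |p - p*| = 1e-5
n_inf = orbits_infinite_energy(eta=0.01)                   # mass ratio 1:100
```

The logarithmic scaling means that whipping the binary around even a handful of extra orbits requires tuning the impact parameter by orders of magnitude.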

  19. An almost-parameter-free harmony search algorithm for groundwater pollution source identification.

    PubMed

    Jiang, Simin; Zhang, Yali; Wang, Pei; Zheng, Maohui

    2013-01-01

    The spatiotemporal characterization of unknown sources of groundwater pollution is frequently encountered in environmental problems. This study adopts a simulation-optimization approach that combines a contaminant transport simulation model with a heuristic harmony search algorithm to identify unknown pollution sources. In the proposed methodology, an almost-parameter-free harmony search algorithm is developed. The performance of this methodology is evaluated on an illustrative groundwater pollution source identification problem, and the results indicate that the proposed almost-parameter-free harmony search optimization model can give satisfactory estimates even in the presence of irregular geometry, erroneous monitoring data, and limited prior information on potential source locations.
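    The abstract gives no implementation details, so the following is a generic, minimal harmony search sketch, not the authors' almost-parameter-free variant; the control parameters (hms, hmcr, par, bw) and the toy misfit function are our own. It shows the memory-consideration / pitch-adjustment / random-consideration structure that such algorithms build on.

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=3000, seed=1):
    """Minimal harmony search: minimize `objective` over the box `bounds`.
    hms: harmony memory size; hmcr: memory-consideration rate;
    par: pitch-adjustment rate; bw: pitch bandwidth (fraction of range)."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                 # draw component from memory
                x = rng.choice(memory)[j]
                if rng.random() < par:              # pitch adjustment (local move)
                    x += rng.uniform(-1, 1) * bw * (hi - lo)
                x = min(max(x, lo), hi)
            else:                                   # random consideration (explore)
                x = rng.uniform(lo, hi)
            new.append(x)
        s = objective(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                       # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

# Toy source-identification stand-in: recover a hidden 2-D source location
truth = (3.0, 7.0)
misfit = lambda p: (p[0] - truth[0]) ** 2 + (p[1] - truth[1]) ** 2
best, err = harmony_search(misfit, [(0.0, 10.0), (0.0, 10.0)])
```

In the simulation-optimization setting, the misfit would instead compare simulated and observed concentrations from the transport model.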

  20. Analysis of temporal decay of diffuse broadband sound fields in enclosures by decomposition in powers of an absorption parameter

    NASA Astrophysics Data System (ADS)

    Bliss, Donald; Franzoni, Linda; Rouse, Jerry; Manning, Ben

    2005-09-01

    An analysis method for time-dependent broadband diffuse sound fields in enclosures is described. Beginning with a formulation utilizing time-dependent broadband intensity boundary sources, the strength of these wall sources is expanded in a series in powers of an absorption parameter, thereby giving a separate boundary integral problem for each power. The temporal behavior is characterized by a Taylor expansion in the delay time for a source to influence an evaluation point. The lowest-order problem has a uniform interior field proportional to the reciprocal of the absorption parameter, as expected, and exhibits relatively slow exponential decay. The next-order problem gives a mean-square pressure distribution that is independent of the absorption parameter and is primarily responsible for the spatial variation of the reverberant field. This problem, which is driven by input sources and the lowest-order reverberant field, depends on source location and the spatial distribution of absorption. Additional problems proceed at integer powers of the absorption parameter, but are essentially higher-order corrections to the spatial variation. Temporal behavior is expressed in terms of an eigenvalue problem, with boundary source strength distributions expressed as eigenmodes. Solutions exhibit rapid short-time spatial redistribution followed by long-time decay of a predominant spatial mode.
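    Schematically, the expansion described above takes the following form; the notation below (α for the absorption parameter, P_k for the coefficient fields) is ours, introduced only to summarize the structure the abstract describes.

```latex
% Mean-square pressure expanded in powers of the absorption parameter \alpha:
\langle p^{2} \rangle \;=\; \frac{1}{\alpha}\,P_{-1}
\;+\; P_{0} \;+\; \alpha\,P_{1} \;+\; \alpha^{2}\,P_{2} \;+\; \cdots
```

Here the 1/α term is the spatially uniform, slowly decaying reverberant field, P_0 carries the dominant spatial variation driven by the input sources and the lowest-order field, and the higher powers are successively smaller corrections to that spatial variation.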

  1. Modified Denavit-Hartenberg parameters for better location of joint axis systems in robot arms

    NASA Technical Reports Server (NTRS)

    Barker, L. K.

    1986-01-01

    The Denavit-Hartenberg parameters define the relative location of successive joint axis systems in a robot arm. A recent justifiable criticism is that one of these parameters becomes extremely large when two successive joints have near-parallel rotational axes. Geometrically, this parameter then locates a joint axis system at an excessive distance from the robot arm and, computationally, leads to an ill-conditioned transformation matrix. In this paper, a simple modification (which results from constraining a transverse vector between successive joint rotational axes to be normal to one of the rotational axes, instead of both) overcomes this criticism and favorably locates the joint axis system. An example is given for near-parallel rotational axes of the elbow and shoulder joints in a robot arm. The regular and modified parameters are extracted by an algebraic method with simulated measurement data. Unlike the modified parameters, extracted values of the regular parameters are very sensitive to measurement accuracy.
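    For reference, the standard (unmodified) Denavit-Hartenberg transform between successive joint frames can be written out directly. This is the textbook convention, not the paper's modified parameterization:

```python
import math

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between successive joint frames from the four
    Denavit-Hartenberg parameters: rotation theta about z, offset d along z,
    link length a along x, twist alpha about x."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

# Near-parallel successive rotational axes correspond to twist alpha -> 0,
# which is the regime where the classical common-normal construction pushes
# one DH parameter toward infinity -- the criticism the modification addresses.
T = dh_transform(theta=0.3, d=0.1, a=0.5, alpha=1e-9)
```

The matrix itself stays well behaved; it is the extraction of the parameters from measured axis geometry that becomes ill-conditioned as the axes approach parallel.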

  2. Critical Source Area Delineation: The representation of hydrology in effective erosion modeling.

    NASA Astrophysics Data System (ADS)

    Fowler, A.; Boll, J.; Brooks, E. S.; Boylan, R. D.

    2017-12-01

    Despite decades of conservation and millions of conservation dollars, nonpoint source sediment loading associated with agricultural disturbance continues to be a significant problem in many parts of the world. Local and national conservation organizations are interested in targeting critical source areas for control strategy implementation. Currently, conservation practices are selected and located based on Revised Universal Soil Loss Equation (RUSLE) hillslope erosion modeling, and the Natural Resources Conservation Service will soon be transitioning to the Water Erosion Prediction Project (WEPP) model for the same purpose. We present an assessment of critical source areas targeted with RUSLE, WEPP, and a regionally validated hydrology model, the Soil Moisture Routing (SMR) model, to compare the location of critical areas for sediment loading and the effectiveness of control strategies. The three models are compared for the Palouse dryland cropping region of the inland Northwest, with uncalibrated analyses of the Kamiache watershed using publicly available soils, land-use, and long-term simulated climate data. Critical source areas were mapped, and the side-by-side comparison exposes the differences in the location and timing of runoff and erosion predictions. RUSLE results appear most sensitive to slope-driven processes associated with infiltration excess. SMR captured saturation-excess runoff events located at the toe-slope position, while WEPP was able to capture both infiltration-excess and saturation-excess processes depending on soil type and management. A methodology is presented for down-scaling basin-level screening to the hillslope management scale for local control strategies. Information on the location of runoff and erosion, resolved by runoff mechanism, is critical for effective treatment and conservation.
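    RUSLE itself is a simple multiplicative factor model, which is part of why it is cheap to apply for screening. A minimal sketch follows; the factor values in the example are hypothetical, not taken from the study.

```python
def rusle_soil_loss(R, K, LS, C, P):
    """Average annual soil loss A = R * K * LS * C * P (RUSLE):
    R rainfall-runoff erosivity, K soil erodibility, LS slope
    length-steepness, C cover-management, P support-practice factor."""
    return R * K * LS * C * P

# Hypothetical dryland-cropping hillslope values (illustration only)
A = rusle_soil_loss(R=30.0, K=0.32, LS=2.5, C=0.2, P=1.0)
```

Because the factors multiply, RUSLE carries no explicit runoff-generation mechanism, which is exactly why process-based models such as WEPP and SMR locate critical source areas differently.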

  3. Photutils: Photometry tools

    NASA Astrophysics Data System (ADS)

    Bradley, Larry; Sipocz, Brigitta; Robitaille, Thomas; Tollerud, Erik; Deil, Christoph; Vinícius, Zè; Barbary, Kyle; Günther, Hans Moritz; Bostroem, Azalee; Droettboom, Michael; Bray, Erik; Bratholm, Lars Andersen; Pickering, T. E.; Craig, Matt; Pascual, Sergio; Greco, Johnny; Donath, Axel; Kerzendorf, Wolfgang; Littlefair, Stuart; Barentsen, Geert; D'Eugenio, Francesco; Weaver, Benjamin Alan

    2016-09-01

    Photutils provides tools for detecting and performing photometry of astronomical sources. It can estimate the background and background rms in astronomical images, detect sources in astronomical images, estimate morphological parameters of those sources (e.g., centroid and shape parameters), and perform aperture and PSF photometry. Written in Python, it is an affiliated package of Astropy (ascl:1304.002).
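    At its core, aperture photometry is a sum of pixel values around a source position. The sketch below is a deliberately simplified stand-in for what Photutils does; the real package additionally handles partial-pixel overlap, background estimation and subtraction, source detection, and PSF fitting.

```python
def aperture_sum(image, cx, cy, r):
    """Sum pixel values whose centers fall within radius r of (cx, cy):
    the simplest whole-pixel form of circular-aperture photometry."""
    total = 0.0
    for y, row in enumerate(image):
        for x, val in enumerate(row):
            if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2:
                total += val
    return total

# 5x5 image with a single bright "source" at its center
img = [[0.0] * 5 for _ in range(5)]
img[2][2] = 10.0
flux = aperture_sum(img, cx=2, cy=2, r=1.5)
```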

  4. Application of the Approximate Bayesian Computation methods in the stochastic estimation of atmospheric contamination parameters for mobile sources

    NASA Astrophysics Data System (ADS)

    Kopka, Piotr; Wawrzynczak, Anna; Borysiewicz, Mieczyslaw

    2016-11-01

    In this paper the Bayesian methodology known as Approximate Bayesian Computation (ABC) is applied to the problem of atmospheric contamination source identification. The algorithm input data are concentrations of the released substance arriving on-line from a distributed sensor network. This paper presents the Sequential ABC algorithm in detail and tests its efficiency in estimating the probability distributions of the atmospheric release parameters of a mobile contamination source. The developed algorithms are tested using data from the Over-Land Atmospheric Diffusion (OLAD) field tracer experiment. The paper demonstrates estimation of seven parameters characterizing the contamination source, i.e.: the source starting position (x,y), the direction of motion of the source (d), its velocity (v), the release rate (q), the start time of release (ts), and its duration (td). Newly arriving concentrations dynamically update the probability distributions of the search parameters. The atmospheric dispersion Second-order Closure Integrated PUFF (SCIPUFF) model is used as the forward model to predict the concentrations at the sensor locations.
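    Sequential ABC refines a simple rejection scheme. The rejection version below shows the core accept-if-close-to-observations idea; the one-parameter toy forward model and all names are our own, not the OLAD/SCIPUFF setup of the paper.

```python
import math
import random

def forward(q, sensors):
    """Toy forward model: concentration ~ q / (1 + r^2) at each sensor distance r."""
    return [q / (1.0 + r * r) for r in sensors]

def abc_rejection(observed, sensors, prior, n_draws, eps, seed=0):
    """ABC by rejection: keep parameter draws whose simulated concentrations
    lie within distance eps of the observations."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        q = rng.uniform(*prior)                     # draw from the prior
        sim = forward(q, sensors)
        dist = math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, observed)))
        if dist < eps:                              # accept if close to data
            accepted.append(q)
    return accepted

sensors = [1.0, 2.0, 3.0]
observed = forward(5.0, sensors)          # synthetic data, true q = 5
post = abc_rejection(observed, sensors, prior=(0.0, 10.0), n_draws=5000, eps=0.2)
q_hat = sum(post) / len(post)             # posterior mean estimate of q
```

The sequential variant tightens eps over successive populations and reuses accepted draws as proposals, which is what makes it practical for seven-parameter problems like the one in the paper.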

  5. Intense steady state electron beam generator

    DOEpatents

    Hershcovitch, A.; Kovarik, V.J.; Prelec, K.

    1990-07-17

    An intense, steady state, low emittance electron beam generator is formed by operating a hollow cathode discharge plasma source at critical levels in combination with an extraction electrode and a target electrode that are operable to extract a beam of fast primary electrons from the plasma source through a negatively biased grid that is critically operated to repel bulk electrons toward the plasma source while allowing the fast primary electrons to move toward the target in the desired beam that can be successfully transported for relatively large distances, such as one or more meters away from the plasma source. 2 figs.

  6. Intense steady state electron beam generator

    DOEpatents

    Hershcovitch, Ady; Kovarik, Vincent J.; Prelec, Krsto

    1990-01-01

    An intense, steady state, low emittance electron beam generator is formed by operating a hollow cathode discharge plasma source at critical levels in combination with an extraction electrode and a target electrode that are operable to extract a beam of fast primary electrons from the plasma source through a negatively biased grid that is critically operated to repel bulk electrons toward the plasma source while allowing the fast primary electrons to move toward the target in the desired beam that can be successfully transported for relatively large distances, such as one or more meters away from the plasma source.

  7. Source parameter inversion of compound earthquakes on GPU/CPU hybrid platform

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Ni, S.; Chen, W.

    2012-12-01

    Source parameter determination is an essential problem in seismology. Accurate and timely determination of earthquake parameters (such as moment, depth, and the strike, dip, and rake of fault planes) is significant for both rupture dynamics and ground motion prediction or simulation. Rupture process studies, especially for moderate and large earthquakes, are likewise essential, as detailed kinematic analysis has become routine work for seismologists. Among these events, however, some behave unusually and intrigue seismologists: they consist of two sub-events of similar size separated by a very short time interval, such as the mb 4.5 event of Dec. 9, 2003 in Virginia. Studying these special events, including determining the source parameters of each sub-event, is helpful for understanding earthquake dynamics. However, the seismic signals of the two distinct sources are mixed together, which complicates the inversion. For ordinary events, the Cut and Paste (CAP) method has proven effective for resolving source parameters; it jointly uses body waves and surface waves with independent time shifts and weights, and resolves fault orientation and focal depth with a grid search algorithm. Based on this method, we developed an algorithm (MUL_CAP) to simultaneously acquire the parameters of two distinct events. Because the simultaneous inversion of both sub-events is very time consuming, we also developed a hybrid GPU/CPU version of CAP (HYBRID_CAP) to improve computational efficiency.
Thanks to the advantages of multi-dimensional storage and processing on the GPU, we obtain excellent performance of the revised code on the combined GPU-CPU architecture, with speedup factors as high as 40x-90x compared to the classical CAP on a traditional CPU architecture. As a benchmark, we take synthetics as observations and invert the source parameters of two given sub-events; the inversion results are very consistent with the true parameters. For the event in Virginia, USA on 9 Dec. 2003, we re-invert the source parameters, and detailed analysis of regional waveforms indicates that the Virginia earthquake included two sub-events of Mw 4.05 and Mw 4.25 at the same depth of 10 km with focal mechanism strike 65/dip 32/rake 135, consistent with previous studies. Moreover, compared to the traditional two-source modeling approach, MUL_CAP is more automatic, with no need for human intervention.

  8. A mesostate-space model for EEG and MEG.

    PubMed

    Daunizeau, Jean; Friston, Karl J

    2007-10-15

    We present a multi-scale generative model for EEG that entails a minimum number of assumptions about evoked brain responses, namely: (1) bioelectric activity is generated by a set of distributed sources, (2) the dynamics of these sources can be modelled as random fluctuations about a small number of mesostates, (3) mesostates evolve in a temporally structured way and are functionally connected (i.e. influence each other), and (4) the number of mesostates engaged by a cognitive task is small (e.g. between one and a few). A Variational Bayesian learning scheme is described that furnishes the posterior density on the model's parameters and its evidence. Since the number of meso-sources specifies the model, the model evidence can be used to compare models and find the optimum number of meso-sources. In addition to estimating the dynamics at each cortical dipole, the mesostate-space model and its inversion provide a description of brain activity at the level of the mesostates (i.e. in terms of the dynamics of meso-sources that are distributed over dipoles). The inclusion of a mesostate level allows one to compute posterior probability maps of each dipole being active (i.e. belonging to an active mesostate). Critically, this model accommodates constraints on the number of meso-sources, while retaining the flexibility of distributed source models in explaining data. In short, it bridges the gap between standard distributed and equivalent current dipole models. Furthermore, because it is explicitly spatiotemporal, the model can embed any stochastic dynamical causal model (e.g. a neural mass model) as a Markov process prior on the mesostate dynamics. The approach is evaluated and compared to standard inverse EEG techniques, using synthetic and real data. The results demonstrate the added value of the mesostate-space model and its variational inversion.

  9. Bottled SAFT: A Web App Providing SAFT-γ Mie Force Field Parameters for Thousands of Molecular Fluids.

    PubMed

    Ervik, Åsmund; Mejía, Andrés; Müller, Erich A

    2016-09-26

    Coarse-grained molecular simulation has become a popular tool for modeling simple and complex fluids alike. The defining aspects of a coarse grained model are the force field parameters, which must be determined for each particular fluid. Because the number of molecular fluids of interest in nature and in engineering processes is immense, constructing force field parameter tables by individually fitting to experimental data is a futile task. A step toward solving this challenge was taken recently by Mejía et al., who proposed a correlation that provides SAFT-γ Mie force field parameters for a fluid provided one knows the critical temperature, the acentric factor and a liquid density, all relatively accessible properties. Building on this, we have applied the correlation to more than 6000 fluids, and constructed a web application, called "Bottled SAFT", which makes this data set easily searchable by CAS number, name or chemical formula. Alternatively, the application allows the user to calculate parameters for components not present in the database. Once the intermolecular potential has been found through Bottled SAFT, code snippets are provided for simulating the desired substance using the "raaSAFT" framework, which leverages established molecular dynamics codes to run the simulations. The code underlying the web application is written in Python using the Flask microframework; this allows us to provide a modern high-performance web app while also making use of the scientific libraries available in Python. Bottled SAFT aims at taking the complexity out of obtaining force field parameters for a wide range of molecular fluids, and facilitates setting up and running coarse-grained molecular simulations. The web application is freely available at http://www.bottledsaft.org . The underlying source code is available on Bitbucket under a permissive license.
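    The Mie pair potential underlying SAFT-γ Mie is standard and can be written down directly. The function below implements the textbook form; the per-fluid parameters (ε, σ and the exponents) are exactly what Bottled SAFT supplies, and the values in the example are generic reduced units, not any particular fluid's.

```python
def mie_potential(r, eps, sigma, lr=12.0, la=6.0):
    """Mie pair potential:
    U(r) = C * eps * [(sigma/r)**lr - (sigma/r)**la],
    with prefactor C = (lr/(lr-la)) * (lr/la)**(la/(lr-la)).
    For lr=12, la=6, C = 4 and this reduces to the Lennard-Jones 12-6 form."""
    C = (lr / (lr - la)) * (lr / la) ** (la / (lr - la))
    return C * eps * ((sigma / r) ** lr - (sigma / r) ** la)

# Well-depth check in reduced units: the 12-6 minimum sits at
# r = 2**(1/6) * sigma with U = -eps.
u_min = mie_potential(2 ** (1 / 6), eps=1.0, sigma=1.0)
```

Varying the repulsive exponent lr away from 12 is the extra flexibility the SAFT-γ Mie coarse-grained force fields exploit.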

  10. How robust are the natural history parameters used in chlamydia transmission dynamic models? A systematic review.

    PubMed

    Davies, Bethan; Anderson, Sarah-Jane; Turner, Katy M E; Ward, Helen

    2014-01-30

    Transmission dynamic models linked to economic analyses often form part of the decision making process when introducing new chlamydia screening interventions. Outputs from these transmission dynamic models can vary depending on the values of the parameters used to describe the infection. Therefore these values can have an important influence on policy and resource allocation. The risk of progression from infection to pelvic inflammatory disease has been extensively studied but the parameters which govern the transmission dynamics are frequently neglected. We conducted a systematic review of transmission dynamic models linked to economic analyses of chlamydia screening interventions to critically assess the source and variability of the proportion of infections that are asymptomatic, the duration of infection and the transmission probability. We identified nine relevant studies in Pubmed, Embase and the Cochrane database. We found that there is a wide variation in their natural history parameters, including an absolute difference in the proportion of asymptomatic infections of 25% in women and 75% in men, a six-fold difference in the duration of asymptomatic infection and a four-fold difference in the per act transmission probability. We consider that much of this variation can be explained by a lack of consensus in the literature. We found that a significant proportion of parameter values were referenced back to the early chlamydia literature, before the introduction of nucleic acid modes of diagnosis and the widespread testing of asymptomatic individuals. In conclusion, authors should use high quality contemporary evidence to inform their parameter values, clearly document their assumptions and make appropriate use of sensitivity analysis. This will help to make models more transparent and increase their utility to policy makers.

  11. Research on Matching Method of Power Supply Parameters for Dual Energy Source Electric Vehicles

    NASA Astrophysics Data System (ADS)

    Jiang, Q.; Luo, M. J.; Zhang, S. K.; Liao, M. W.

    2018-03-01

    A new type of power supply is proposed, based on a traffic-signal matching method for a dual energy source power supply composed of batteries and supercapacitors. First, the power characteristics required to meet the excellent dynamic performance of the EV are analyzed, the energy characteristics required to meet the mileage requirements are studied, and the physical boundary characteristics required to satisfy the physical constraints of the power supply are examined. Secondly, a parameter matching design targeting the highest energy efficiency is adopted to select the optimal parameter group using the method of matching deviation. Finally, a simulation analysis of the vehicle is carried out in MATLAB/Simulink; the mileage and energy efficiency of the dual energy sources are analyzed for different parameter models, and the rationality of the matching method is verified.

  12. Meta-Analysis of the Effect of Overexpression of Dehydration-Responsive Element Binding Family Genes on Temperature Stress Tolerance and Related Responses

    PubMed Central

    Dong, Chao; Ma, Yuanchun; Zheng, Dan; Wisniewski, Michael; Cheng, Zong-Ming

    2018-01-01

    Dehydration-responsive element binding (DREB) proteins are transcription factors that play a critical role in plant response to temperature stress. Overexpression of DREB genes has been demonstrated to enhance temperature stress tolerance. A series of physiological and biochemical modifications occur in a complex and integrated way when plants respond to temperature stress, which makes it difficult to assess the mechanism underlying the DREB enhancement of stress tolerance. A meta-analysis was conducted of the effect of DREB overexpression on temperature stress tolerance and the various parameters modulated by overexpression that were statistically quantified in 75 published articles. The meta-analysis was conducted to identify the overall influence of DREB on stress-related parameters in transgenic plants, and to determine how different experimental variables affect the impact of DREB overexpression. Viewed across all the examined studies, 7 of the 8 measured plant parameters were significantly (p ≤ 0.05) modulated in DREB-transgenic plants when they were subjected to temperature stress, while 2 of the 8 parameters were significantly affected in non-stressed control plants. The measured parameters were modulated by 32% or more by various experimental variables. The modulating variables included acclimation state (acclimated or non-acclimated), type of promoter, stress time and severity, source of the donor gene, and whether the donor and recipient were of the same genus. These variables all had a significant effect on the observed impact of DREB overexpression. Further studies should be conducted under field conditions to better understand the role of DREB transcription factors in enhancing plant tolerance to temperature stress. PMID:29896212

  13. Comparative physiology of mice and rats: radiometric measurement of vascular parameters in rodent tissues.

    PubMed

    Boswell, C Andrew; Mundo, Eduardo E; Ulufatu, Sheila; Bumbaca, Daniela; Cahaya, Hendry S; Majidy, Nicholas; Van Hoy, Marjie; Schweiger, Michelle G; Fielder, Paul J; Prabhu, Saileta; Khawli, Leslie A

    2014-05-05

    A solid understanding of physiology is beneficial in optimizing drug delivery and in the development of clinically predictive models of drug disposition kinetics. Although an abundance of data exists in the literature, it is often confounded by the use of various experimental methods and a lack of consensus in values from different sources. To help address this deficiency, we sought to directly compare three important vascular parameters at the tissue level using the same experimental approach in both mice and rats. Interstitial volume, vascular volume, and blood flow were radiometrically measured in selected harvested tissues of both species by extracellular marker infusion, red blood cell labeling, and rubidium chloride bolus distribution, respectively. The latter two parameters were further compared by whole-body autoradiographic imaging. An overall good interspecies agreement was observed for interstitial volume and blood flow on a weight-normalized basis in most tissues. In contrast, the measured vascular volumes of most rat tissues were higher than for mouse. Mice and rats, the two most commonly utilized rodent species in translational drug development, should not be considered as interchangeable in terms of vascular volume per gram of tissue. This will be particularly critical in biodistribution studies of drugs, as the amount of drug in the residual blood of tissues is often not negligible, especially for biologic drugs (e.g., antibodies) having long circulation half-lives. Physiologically based models of drug pharmacokinetics and/or pharmacodynamics also rely on accurate knowledge of biological parameters in tissues. For tissue parameters with poor interspecies agreement, the significance and possible drivers are discussed.

  14. An open, object-based modeling approach for simulating subsurface heterogeneity

    NASA Astrophysics Data System (ADS)

    Bennett, J.; Ross, M.; Haslauer, C. P.; Cirpka, O. A.

    2017-12-01

    Characterization of subsurface heterogeneity with respect to hydraulic and geochemical properties is critical in hydrogeology as their spatial distribution controls groundwater flow and solute transport. Many approaches of characterizing subsurface heterogeneity do not account for well-established geological concepts about the deposition of the aquifer materials; those that do (i.e. process-based methods) often require forcing parameters that are difficult to derive from site observations. We have developed a new method for simulating subsurface heterogeneity that honors concepts of sequence stratigraphy, resolves fine-scale heterogeneity and anisotropy of distributed parameters, and resembles observed sedimentary deposits. The method implements a multi-scale hierarchical facies modeling framework based on architectural element analysis, with larger features composed of smaller sub-units. The Hydrogeological Virtual Reality simulator (HYVR) simulates distributed parameter models using an object-based approach. Input parameters are derived from observations of stratigraphic morphology in sequence type-sections. Simulation outputs can be used for generic simulations of groundwater flow and solute transport, and for the generation of three-dimensional training images needed in applications of multiple-point geostatistics. The HYVR algorithm is flexible and easy to customize. The algorithm was written in the open-source programming language Python, and is intended to form a code base for hydrogeological researchers, as well as a platform that can be further developed to suit investigators' individual needs. This presentation will encompass the conceptual background and computational methods of the HYVR algorithm, the derivation of input parameters from site characterization, and the results of groundwater flow and solute transport simulations in different depositional settings.
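    In the spirit of the object-based approach described above, a toy 2-D version can be sketched in a few lines; this is purely illustrative and not the HYVR API, with all names and the uniform size distributions being our own simplifications.

```python
import random

def place_objects(nx, ny, n_objects, max_radius, seed=42):
    """Toy object-based facies simulation: stamp elliptical 'architectural
    elements' onto a 2-D background grid, with later objects overprinting
    earlier ones -- a highly simplified analogue of hierarchical 3-D
    object-based simulators such as HYVR."""
    rng = random.Random(seed)
    grid = [[0 for _ in range(nx)] for _ in range(ny)]   # facies 0 = background
    for facies in range(1, n_objects + 1):
        cx, cy = rng.uniform(0, nx), rng.uniform(0, ny)  # element center
        rx, ry = rng.uniform(1, max_radius), rng.uniform(1, max_radius)
        for y in range(ny):
            for x in range(nx):
                if ((x - cx) / rx) ** 2 + ((y - cy) / ry) ** 2 <= 1.0:
                    grid[y][x] = facies
    return grid

grid = place_objects(nx=40, ny=20, n_objects=5, max_radius=6)
```

A hierarchical simulator nests this idea: large-scale strata are placed first, then smaller elements are placed inside them, and distributed hydraulic parameters (with anisotropy) are assigned per facies.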

  15. Significance of settling model structures and parameter subsets in modelling WWTPs under wet-weather flow and filamentous bulking conditions.

    PubMed

    Ramin, Elham; Sin, Gürkan; Mikkelsen, Peter Steen; Plósz, Benedek Gy

    2014-10-15

    Current research focuses on predicting and mitigating the impacts of high hydraulic loadings on centralized wastewater treatment plants (WWTPs) under wet-weather conditions. The maximum permissible inflow to WWTPs depends not only on the settleability of activated sludge in secondary settling tanks (SSTs) but also on the hydraulic behaviour of SSTs. The present study investigates the impacts of ideal and non-ideal flow (dry and wet weather) and settling (good settling and bulking) boundary conditions on the sensitivity of WWTP model outputs to uncertainties intrinsic to the one-dimensional (1-D) SST model structures and parameters. We identify the critical sources of uncertainty in WWTP models through global sensitivity analysis (GSA) using the Benchmark Simulation Model No. 1 in combination with first- and second-order 1-D SST models. The results obtained illustrate that the contribution of settling parameters to the total variance of the key WWTP process outputs significantly depends on the influent flow and settling conditions. The magnitude of the impact is found to vary depending on which type of 1-D SST model is used. Therefore, we identify and recommend potential parameter subsets for WWTP model calibration, and propose an optimal choice of 1-D SST models under different flow and settling boundary conditions. Additionally, the hydraulic parameters in the second-order SST model are found significant under dynamic wet-weather flow conditions. These results highlight the importance of developing a more mechanistic, flow-dependent hydraulic sub-model for second-order 1-D SST models in the future. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Influence of source batch S{sub K} dispersion on dosimetry for prostate cancer treatment with permanent implants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuñez-Cumplido, E., E-mail: ejnc-mccg@hotmail.com; Hernandez-Armas, J.; Perez-Calatayud, J.

    2015-08-15

    Purpose: In clinical practice, specific air kerma strength (S{sub K}) value is used in treatment planning system (TPS) permanent brachytherapy implant calculations with {sup 125}I and {sup 103}Pd sources; in fact, commercial TPS provide only one S{sub K} input value for all implanted sources and the certified shipment average is typically used. However, the value for S{sub K} is dispersed: this dispersion is not only due to the manufacturing process and variation between different source batches but also due to the classification of sources into different classes according to their S{sub K} values. The purpose of this work is tomore » examine the impact of S{sub K} dispersion on typical implant parameters that are used to evaluate the dose volume histogram (DVH) for both planning target volume (PTV) and organs at risk (OARs). Methods: The authors have developed a new algorithm to compute dose distributions with different S{sub K} values for each source. Three different prostate volumes (20, 30, and 40 cm{sup 3}) were considered and two typical commercial sources of different radionuclides were used. Using a conventional TPS, clinically accepted calculations were made for {sup 125}I sources; for the palladium, typical implants were simulated. To assess the many different possible S{sub K} values for each source belonging to a class, the authors assigned an S{sub K} value to each source in a randomized process 1000 times for each source and volume. All the dose distributions generated for each set of simulations were assessed through the DVH distributions comparing with dose distributions obtained using a uniform S{sub K} value for all the implanted sources. The authors analyzed several dose coverage (V{sub 100} and D{sub 90}) and overdosage parameters for prostate and PTV and also the limiting and overdosage parameters for OARs, urethra and rectum. Results: The parameters analyzed followed a Gaussian distribution for the entire set of computed dosimetries. 
PTV and prostate V{sub 100} and D{sub 90} variations ranged between 0.2% and 1.78% for both sources. Larger variations were observed for the overdosage parameters V{sub 150} and V{sub 200} than for the dose coverage parameters and, in general, variations were larger for {sup 125}I sources than for {sup 103}Pd sources. For OAR dosimetry, larger variations with respect to the reference D{sub 0.1cm{sup 3}} were observed for rectum values, ranging from 2% to 3%, than for urethra values, which ranged from 1% to 2%. Conclusions: Dose coverage for the prostate and PTV was practically unaffected by S{sub K} dispersion, as was the maximum dose deposited in the urethra, owing to the implant technique geometry. However, the authors observed larger variations for the PTV V{sub 150}, rectum V{sub 100}, and rectum D{sub 0.1cm{sup 3}} values. The variations in rectum parameters were caused by the specific location of nearby sources whose S{sub K} values differed from the average. Finally, on comparing the two sources, variations were larger for {sup 125}I than for {sup 103}Pd. This is because for {sup 103}Pd a greater number of sources were used to obtain a valid dose distribution than for {sup 125}I, so the variation in each individual source's S{sub K} value averages out statistically.
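
    The randomized S{sub K} assignment described in the Methods can be sketched as a simple Monte Carlo loop. The sketch below is not the authors' algorithm: the source count, mean S{sub K}, and 4% relative spread are made-up illustrative numbers, and the summed strength stands in for a real dose metric such as D{sub 90}. It only illustrates why per-source dispersion largely averages out in an aggregate quantity.

    ```python
    import random
    import statistics

    def simulate_sk_dispersion(n_sources=60, sk_mean=0.5, sk_sd=0.02,
                               n_trials=1000, seed=1):
        """Toy Monte Carlo: each trial assigns every implanted source a
        random air kerma strength (S_K) and records the total strength,
        a crude stand-in for an aggregate dose metric. All numbers are
        hypothetical, not taken from the paper."""
        rng = random.Random(seed)
        totals = []
        for _ in range(n_trials):
            sks = [rng.gauss(sk_mean, sk_sd) for _ in range(n_sources)]
            totals.append(sum(sks))
        reference = n_sources * sk_mean  # the uniform-S_K calculation
        spread_pct = statistics.stdev(totals) / reference * 100
        return reference, spread_pct
    ```

    With a 4% per-source spread, the relative spread of the aggregate shrinks by roughly a factor of sqrt(n_sources), which mirrors the abstract's observation that dose coverage is practically unaffected by S{sub K} dispersion.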

  17. The large earthquake on 29 June 1170 (Syria, Lebanon, and central southern Turkey)

    NASA Astrophysics Data System (ADS)

    Guidoboni, Emanuela; Bernardini, Filippo; Comastri, Alberto; Boschi, Enzo

    2004-07-01

    On 29 June 1170 a large earthquake hit a vast area in the Near Eastern Mediterranean, comprising the present-day territories of western Syria, central southern Turkey, and Lebanon. Although this was one of the strongest seismic events ever to hit Syria, no in-depth or specific studies have been available so far. Furthermore, the seismological literature (from 1979 to 2000) has offered only a partial summary of it, based mainly on Arabic sources alone. The resulting area of major effects was very incomplete, making the derived seismic parameters unreliable. In fact, this earthquake is one of the best documented events of the medieval Mediterranean, owing both to the particular historical period in which it occurred (between the second and third Crusades) and to the presence of the Latin states in the territory of Syria. Some 50 historical sources, written in eight different languages, have been analyzed: Latin (the major contributions), Arabic, Syriac, Armenian, Greek, Hebrew, vernacular French, and Italian. A critical analysis of this extraordinary body of historical information has allowed us to obtain data on the effects of the earthquake at 29 locations, 16 of which were unknown in the previous scientific literature. As regards the seismic dynamics, this study has addressed the question of whether there was one strong earthquake or more than one. In the former case, the parameters (Me 7.7 ± 0.22, epicenter, and fault length 126.2 km) were calculated. Some hypotheses are outlined concerning the seismogenic zones involved.

  18. Evaluation of Stem Cell-Derived Red Blood Cells as a Transfusion Product Using a Novel Animal Model.

    PubMed

    Shah, Sandeep N; Gelderman, Monique P; Lewis, Emily M A; Farrel, John; Wood, Francine; Strader, Michael Brad; Alayash, Abdu I; Vostal, Jaroslav G

    2016-01-01

    Reliance on volunteer blood donors can lead to transfusion product shortages, and current liquid storage of red blood cells (RBCs) is associated with biochemical changes over time, known as 'the storage lesion'. Thus, there is a need for alternative sources of transfusable RBCs to supplement conventional blood donations. Extracorporeal production of stem cell-derived RBCs (stemRBCs) is a potential and yet untapped source of fresh, transfusable RBCs. A number of groups have attempted RBC differentiation from CD34+ cells. However, it is still unclear whether these stemRBCs could eventually be effective substitutes for traditional RBCs due to potential differences in oxygen carrying capacity, viability, deformability, and other critical parameters. We have generated ex vivo stemRBCs from primary human cord blood CD34+ cells and compared them to donor-derived RBCs based on a number of in vitro parameters. In vivo, we assessed stemRBC circulation kinetics in an animal model of transfusion and oxygen delivery in a mouse model of exercise performance. Our novel, chronically anemic, SCID mouse model can evaluate the potential of stemRBCs to deliver oxygen to tissues (muscle) under resting and exercise-induced hypoxic conditions. Based on our data, stem cell-derived RBCs have a similar biochemical profile compared to donor-derived RBCs. While certain key differences remain between donor-derived RBCs and stemRBCs, the ability of stemRBCs to deliver oxygen in a living organism provides support for further development as a transfusion product.

  19. ON THE MAGNETIC FIELD OF PULSARS WITH REALISTIC NEUTRON STAR CONFIGURATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belvedere, R.; Rueda, Jorge A.; Ruffini, R., E-mail: riccardo.belvedere@icra.it, E-mail: jorge.rueda@icra.it, E-mail: ruffini@icra.it

    2015-01-20

    We have recently developed a neutron star model fulfilling global rather than local charge neutrality, in both the static and the uniformly rotating cases. The model is described by the coupled Einstein-Maxwell-Thomas-Fermi equations, in which all fundamental interactions are accounted for in the framework of general relativity and relativistic mean field theory. Uniform rotation is introduced following Hartle's formalism. We show that the use of realistic parameters of rotating neutron stars, obtained from numerical integration of the self-consistent axisymmetric general relativistic equations of equilibrium, leads to values of the magnetic field and radiation efficiency of pulsars that are very different from estimates based on fiducial parameters that assume a neutron star mass M = 1.4 M {sub ☉}, radius R = 10 km, and moment of inertia I = 10{sup 45} g cm{sup 2}. In addition, we compare and contrast the magnetic field inferred from the traditional Newtonian rotating magnetic dipole model with the one obtained from its general relativistic analog, which takes into account the effect of the finite size of the source. We apply these considerations to the specific class of high-magnetic-field pulsars and show that, indeed, all of these sources can be described as canonical pulsars driven by the rotational energy of the neutron star, with magnetic fields lower than the quantum critical field for any value of the neutron star mass.
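
    The traditional Newtonian rotating-magnetic-dipole estimate the abstract refers to can be written down explicitly. The sketch below uses the standard textbook formula for an orthogonal rotator in CGS units, with the fiducial parameters quoted in the abstract (I = 10{sup 45} g cm{sup 2}, R = 10 km) as defaults; the paper's point is precisely that realistic configurations shift these numbers, so this is the baseline being corrected, not the paper's model. The Crab-like spin values in the usage line are illustrative.

    ```python
    import math

    def dipole_field(P, Pdot, I=1e45, R=1e6):
        """Newtonian rotating-dipole surface field estimate (CGS, orthogonal
        rotator): B = sqrt(3 c^3 I P Pdot / (8 pi^2 R^6)), in gauss.
        Defaults are the fiducial I = 1e45 g cm^2 and R = 10 km = 1e6 cm."""
        c = 2.998e10  # speed of light in cm/s
        return math.sqrt(3 * c**3 * I * P * Pdot / (8 * math.pi**2 * R**6))

    # Crab-like spin parameters (P = 0.033 s, Pdot = 4.2e-13) give a field
    # of order 10^12 G; a larger realistic radius lowers the inferred B.
    B_fiducial = dipole_field(0.033, 4.2e-13)
    B_larger_R = dipole_field(0.033, 4.2e-13, R=1.2e6)
    ```
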

  20. A pilot study evaluating the prognostic utility of platelet indices in dogs with septic peritonitis.

    PubMed

    Llewellyn, Efa A; Todd, Jeffrey M; Sharkey, Leslie C; Rendahl, Aaron

    2017-09-01

    To characterize platelet indices at time of diagnosis of septic peritonitis in dogs and to assess the relationship between platelet parameter data and survival to discharge in dogs treated surgically. Retrospective, observational, descriptive pilot study from 2009 to 2014. University teaching hospital. Forty-eight dogs diagnosed with septic peritonitis were included in this study. Thirty-six dogs had surgical source control. Blood samples from 46 healthy control dogs were used for reference interval (RI) generation. None. Dogs with septic peritonitis had significantly increased mean values for mean platelet volume (MPV), plateletcrit (PCT), and platelet distribution width (PDW) with increased proportions of dogs having values above the RI compared to healthy dogs. A significantly increased proportion of dogs with septic peritonitis had platelet counts above (12.5%) and below (8.3%) the RI, with no significant difference in mean platelet count compared to healthy dogs. No significant differences in the mean platelet count, MPV, PCT, or PDW were found between survivors and nonsurvivors in dogs with surgical source control; however, dogs with MPV values above the RI had significantly increased mortality compared to dogs within the RI (P = 0.025). Values outside the RI for other platelet parameters were not associated with significant differences in mortality. Dogs with septic peritonitis have increased frequency of thrombocytosis and thrombocytopenia with increased MPV, PCT, and PDW. An increased MPV may be a useful indicator of increased risk of mortality in dogs treated surgically. © Veterinary Emergency and Critical Care Society 2017.

  1. Designer Diamonds: Applications in Iron-based Superconductors and Lanthanides

    NASA Astrophysics Data System (ADS)

    Vohra, Yogesh

    2013-06-01

    This talk will focus on recent progress in the fabrication of designer diamond anvils, as well as scientific applications of these diamonds in static high pressure research. The two critical parameters that have emerged in the microwave plasma chemical vapor deposition of designer diamond anvils are (1) the precise [100] alignment of the starting diamond substrate and (2) balancing the competing roles of parts-per-million levels of nitrogen and oxygen in the diamond growth plasma. Controlling these parameters results in the fabrication of high quality designer diamonds with culet sizes in excess of 300 microns in diameter. Three applications of designer diamond anvils will be discussed: (1) simultaneous electrical resistance and crystal structure measurements using a synchrotron source on iron-based superconductors, with data on both electron- and hole-doped BaFe2As2 materials and other novel superconducting materials; (2) high-pressure high-temperature melting studies on metals using eight-probe Ohmic heating designer diamonds; and (3) high pressure low temperature studies of the magnetic behavior of 4f-lanthanide metals using four-probe electrical resistance measurements and complementary neutron diffraction studies at a spallation neutron source. Future opportunities in boron-doped conducting designer diamond anvils, as well as the fabrication of two-stage designer diamonds for ultrahigh pressure experiments, will also be presented. This work was supported by the Department of Energy (DOE) - National Nuclear Security Administration (NNSA) under Grant No. DE-FG52-10NA29660.

  2. Liquid-vapor phase relations in the Si-O system: A calorically constrained van der Waals-type model

    NASA Astrophysics Data System (ADS)

    Connolly, James A. D.

    2016-09-01

    This work explores the use of several van der Waals (vW)-type equations of state (EoS) for predicting vaporous phase relations and speciation in the Si-O system, with emphasis on the azeotropic boiling curve of SiO2-rich liquid. Comparison with the observed Rb and Hg boiling curves demonstrates that prediction accuracy is improved if the a-parameter of the EoS, which characterizes vW forces, is constrained by ambient-pressure heat capacities. All EoS considered accurately reproduce metal boiling curve trajectories, but absent knowledge of the true critical compressibility factor, critical temperatures remain uncertain by ~500 K. The EoS plausibly represent the termination of the azeotropic boiling curve of silica-rich liquid by a critical point across which the dominant Si oxidation state changes abruptly from the tetravalent state characteristic of the liquid to the divalent state characteristic of the vapor. The azeotropic composition diverges from silica toward metal-rich compositions with increasing temperature. Consequently, silica boiling is divariant, and atmospheric loss after a giant impact would enrich residual silicate liquids in reduced silicon. Two major sources of uncertainty in the boiling curve prediction are the heat capacity of silica liquid, which may decay during depolymerization from the near-Dulong-Petit-limit heat capacity of the ionic liquid to a value characteristic of the molecular liquid, and the unknown liquid affinity of silicon monoxide. Extremal scenarios for these uncertainties yield critical temperatures and compositions of 5200-6200 K and Si1.1O2-Si1.4O2. The lowest critical temperatures are marginally consistent with shock experiments and are therefore considered more probable.
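
    For the classic van der Waals EoS, the link between the a-parameter and the predicted critical point is explicit, which is why constraining a changes the critical temperature directly. The sketch below shows the textbook vdW relations only, not the modified EoS of this work; the numerical inputs in the test (water's tabulated vdW constants) are illustrative, not Si-O values.

    ```python
    def vdw_critical_constants(a, b, R=8.314):
        """Critical constants of the classic van der Waals EoS,
        P = RT/(V - b) - a/V^2, in SI units
        (a in Pa m^6 mol^-2, b in m^3 mol^-1):
            Tc = 8a / (27 R b),  Pc = a / (27 b^2),  Vc = 3b.
        Note Zc = Pc Vc / (R Tc) = 3/8 for any a and b, which is the
        'critical compressibility factor' fixed by this EoS family."""
        Tc = 8 * a / (27 * R * b)
        Pc = a / (27 * b**2)
        Vc = 3 * b
        Zc = Pc * Vc / (R * Tc)
        return Tc, Pc, Vc, Zc
    ```

    Because Tc scales linearly with a, a calorically constrained a-parameter tightens the critical temperature, while the unknown true Zc (real fluids sit well below 3/8) is what leaves the ~500 K uncertainty noted above.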

  3. Verification of a SEU model for advanced 1-micron CMOS structures using heavy ions

    NASA Technical Reports Server (NTRS)

    Cable, J. S.; Carter, J. R.; Witteles, A. A.

    1986-01-01

    Modeling and test results are reported for 1 micron CMOS circuits. Analytical predictions are correlated with experimental data, and sensitivities to process and design variations are discussed. Unique features involved in predicting the SEU performance of these devices are described. The results show that the critical charge for upset exhibits a strong dependence on pulse width for very fast devices, and upset predictions must factor in the pulse shape. Acceptable SEU error rates can be achieved for a 1 micron bulk CMOS process. A thin retrograde well provides complete SEU immunity for N-channel hits at normal incidence angle. Source interconnect resistance can be an important parameter in determining upset rates, and Cf-252 testing can be a valuable tool for cost-effective SEU testing.

  4. Gyroharmonic converter as a multi-megawatt RF driver for NLC: Beam source considerations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, C.; Hirshfield, J.L.

    1995-06-01

    A multi-megawatt 14.28 GHz gyroharmonic converter under construction at Yale University depends critically on the parameters of an electron beam prepared using a cyclotron autoresonance accelerator (CARA). This paper extends prior analysis of CARA to find an approximate constant of the motion, and to give limits to the beam energy from CARA that can be utilized in a harmonic converter. It is also shown that particles are strongly phase trapped during acceleration in CARA, and thus are insensitive to deviations from exact autoresonance. This fact greatly simplifies construction of the up-tapered guide magnetic field in the device, and augurs well for the production of high-quality multi-megawatt beams using CARA. © 1995 American Institute of Physics.

  5. Food production and gas exchange system using blue-green alga (spirulina) for CELSS

    NASA Technical Reports Server (NTRS)

    Oguchi, Mitsuo; Otsubo, Koji; Nitta, Keiji; Hatayama, Shigeki

    1987-01-01

    In order to reduce the cultivation area required for the growth of higher plants in space, the adoption of algae, which have a higher photosynthetic ability, seems very suitable for obtaining oxygen and food as a useful source of high quality protein. A preliminary cultivation experiment was conducted to determine the optimum cultivation conditions and to obtain the critical design parameters of the cultivator itself. Spirulina was cultivated in a 6 liter medium containing a sodium hydrogen carbonate solution, with the cultivation temperature controlled using a thermostat. The generated oxygen gas was separated using a polypropylene porous hollow fiber membrane module. Through this experiment, oxygen gas (at a concentration of more than 46 percent) could be obtained at a rate of 100 to approximately 150 ml per minute.

  6. Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on chip

    NASA Astrophysics Data System (ADS)

    Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang

    2016-09-01

    Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate a system-on-chip's reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, parameters used to evaluate the system's reliability and safety, such as failure rate, unavailability, and mean time to failure (MTTF), were calculated using Isograph Reliability Workbench 11.0. Based on the fault tree analysis of the system-on-chip, the critical blocks and system reliability were evaluated through qualitative and quantitative analysis.

  7. Intuitive model for the scintillations of a partially coherent beam

    DOE PAGES

    Efimov, Anatoly

    2014-12-23

    We developed an intuitive model for the scintillation index of a partially coherent beam in which essentially the only critical parameter is a properly defined Fresnel number, equal to the ratio of the "working" aperture area to the area of the Fresnel zone. The model emerged from and is supported by numerical simulations using the Rytov method for the weak-fluctuations regime and the Tatarskii turbulence spectrum with inner scale. The ratio of the scintillation index of a partially coherent beam to that of a plane wave displays a characteristic minimum, the magnitude of which and its distance from the transmitter are easily explained using the intuitive model. Furthermore, a theoretical asymptotic is found for the scintillation index of a source with decreasing coherence at this minimum.
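
    The defining ratio quoted in the abstract reduces to the familiar form N{sub F} = a{sup 2}/(λL) once both areas are written out, since the first Fresnel zone has radius sqrt(λL). The sketch below just evaluates that ratio; the link parameters (a 5 cm aperture radius, 1.55 μm wavelength, 1 km path) are hypothetical illustrative values, not taken from the paper.

    ```python
    import math

    def fresnel_number(aperture_radius, wavelength, distance):
        """Fresnel number as the ratio of the aperture area to the area
        of the first Fresnel zone (radius sqrt(wavelength * distance)).
        The pi factors cancel, leaving a^2 / (lambda * L)."""
        aperture_area = math.pi * aperture_radius**2
        zone_area = math.pi * wavelength * distance
        return aperture_area / zone_area

    # A 5 cm aperture at 1.55 um over 1 km gives N_F of order unity,
    # the regime where diffraction effects on scintillation matter most.
    nf = fresnel_number(0.05, 1.55e-6, 1000.0)
    ```
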

  8. A mathematical model of diffusion from a steady source of short duration in a finite mixing layer

    NASA Astrophysics Data System (ADS)

    Bianconi, Roberto; Tamponi, Matteo

    This paper presents an analytical unsteady-state solution to the atmospheric dispersion equation for substances subject to chemical-physical decay in a finite mixing layer, for releases of short duration. This solution is suitable for describing critical events involving the accidental release of toxic, flammable, or explosive substances. To implement the solution, the Modello per Rilasci a Breve Termine (MRBT) code has been developed, and the results of a sensitivity analysis for some of its characteristic parameters are presented. Moreover, some examples of its application to the calculation of exposure to toxic substances and to the determination of the ignition field of flammable substances are described. Finally, the mathematical model described can be used to interpret the phenomenon of pollutant accumulation.

  9. Volcanic eruption source parameters from active and passive microwave sensors

    NASA Astrophysics Data System (ADS)

    Montopoli, Mario; Marzano, Frank S.; Cimini, Domenico; Mereu, Luigi

    2016-04-01

    It is well known in the volcanology community that precise information on the source parameters characterising an eruption is of predominant interest for the initialization of Volcanic Transport and Dispersion Models (VTDM). The source parameters of main interest are the top altitude of the volcanic plume; the flux of the mass ejected at the emission source, which is strictly related to the cloud top altitude; the distribution of volcanic mass concentration along the vertical column; and the duration of the eruption and the erupted volume. Usually, the combination of a-posteriori field and numerical studies allows constraining the eruption source parameters for a given volcanic event, thus making possible the forecast of ash dispersion and deposition from future volcanic eruptions. So far, remote sensors working at visible and infrared channels (cameras and radiometers) have mainly been used to detect, track, and provide estimates of the concentration content and the prevailing size of the particles propagating within ash clouds up to several thousand kilometres from the source, as well as to check, a posteriori, the accuracy of the VTDM outputs, thus testing the initial choice made for the source parameters. Acoustic waves (infrasound) and microwave fixed-scan radar (VOLDORAD) have also been used to infer source parameters. In this work we focus on the role of sensors operating at microwave wavelengths as complementary tools for the real-time estimation of source parameters. Microwaves benefit from operability during night and day and a relatively negligible sensitivity to the presence of (non-precipitating) weather clouds, at the cost of limited coverage and larger spatial resolution compared with infrared sensors. 
Thanks to the aforementioned advantages, the products from microwave sensors are expected to be sensitive mostly to the whole path traversed along the tephra cloud, making microwaves particularly appealing for estimates close to the volcanic emission source. Near the source, the cloud optical thickness is expected to be large enough to induce saturation effects at the infrared sensor receiver, thus defeating the brightness temperature difference methods for ash cloud identification. In light of the above, some case studies at Eyjafjallajökull (Iceland), Etna (Italy), and Calbuco (Chile), on 5-10 May 2010, 23 November 2013, and 23 April 2015, respectively, are analysed in terms of source parameter estimates (mainly the cloud top altitude and mass flux rate) from ground-based microwave weather radar (9.6 GHz) and satellite low Earth orbit microwave radiometers (50-183 GHz). A special highlight will be given to the advantages and limitations of microwave-related products with respect to more conventional tools.

  10. Phase transition with trivial quantum criticality in an anisotropic Weyl semimetal

    NASA Astrophysics Data System (ADS)

    Li, Xin; Wang, Jing-Rong; Liu, Guo-Zhu

    2018-05-01

    When a metal undergoes continuous quantum phase transition, the correlation length diverges at the critical point and the quantum fluctuation of order parameter behaves as a gapless bosonic mode. Generically, the coupling of this boson to fermions induces a variety of unusual quantum critical phenomena, such as non-Fermi liquid behavior and various emergent symmetries. Here, we perform a renormalization group analysis of the semimetal-superconductor quantum criticality in a three-dimensional anisotropic Weyl semimetal. Surprisingly, distinct from previously studied quantum critical systems, the anomalous dimension of anisotropic Weyl fermions flows to zero very quickly with decreasing energy, and the quasiparticle residue takes a nonzero value. These results indicate that the quantum fluctuation of superconducting order parameter is irrelevant at low energies, and a simple mean-field calculation suffices to capture the essential physics of the superconducting transition. We thus obtain a phase transition that exhibits trivial quantum criticality, which is unique comparing to other invariably nontrivial quantum critical systems. Our theoretical prediction can be experimentally verified by measuring the fermion spectral function and specific heat.

  11. Future research needs involving pathogens in groundwater

    NASA Astrophysics Data System (ADS)

    Bradford, Scott A.; Harvey, Ronald W.

    2017-06-01

    Contamination of groundwater by enteric pathogens has commonly been associated with disease outbreaks. Proper management and treatment of pathogen sources are important prerequisites for preventing groundwater contamination. However, non-point sources of pathogen contamination are frequently difficult to identify, and existing approaches for pathogen detection are costly and only provide semi-quantitative information. Microbial indicators that are readily quantified often do not correlate with the presence of pathogens. Pathogens of emerging concern and increasing detections of antibiotic resistance among bacterial pathogens in groundwater are topics of growing concern. Adequate removal of pathogens during soil passage is therefore critical for safe groundwater extraction. Processes that enhance pathogen transport (e.g., high velocity zones and preferential flow) and diminish pathogen removal (e.g., reversible retention and enhanced survival) are of special concern because they increase the risk of groundwater contamination, but are still incompletely understood. Improved theory and modeling tools are needed to analyze experimental data, test hypotheses, understand coupled processes and controlling mechanisms, predict spatial and/or temporal variability in model parameters and uncertainty in pathogen concentrations, assess risk, and develop mitigation and best management approaches to protect groundwater.

  12. Metals in wine--impact on wine quality and health outcomes.

    PubMed

    Tariba, Blanka

    2011-12-01

    Metals in wine can originate from both natural and anthropogenic sources, and their concentrations can be a significant parameter affecting the consumption and conservation of wine. Since metallic ions play an important role in the oxidation-reduction reactions that result in wine browning, turbidity, cloudiness, and astringency, wine quality depends greatly on its metal composition. Moreover, metals in wine may affect human health. Consumption of wine may contribute to the daily dietary intake of essential metals (i.e., copper, iron, and zinc) but can also have potentially toxic effects if metal concentrations are not kept under allowable limits. Therefore, strict analytical control of metal concentrations is required throughout the process of wine production. This article presents a critical review of the existing literature regarding measured metal concentrations in wine, the methods applied for their determination, and their possible sources, as well as their impact on wine quality and human health. The main focus is on aluminum, arsenic, cadmium, chromium, copper, iron, manganese, nickel, lead, and zinc, as these elements most often affect wine quality and human health.

  13. HIGH POWER BEAM DUMP AND TARGET / ACCELERATOR INTERFACE PROCEDURES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blokland, Willem; Plum, Michael A; Peters, Charles C

    Satisfying operational procedures and limits for the beam-target interface is a critical concern for high power operation at spallation neutron sources. At the Oak Ridge Spallation Neutron Source (SNS), a number of protective measures are instituted to ensure that the beam position, beam size, and peak intensity are within acceptable limits at the target and at the high power Ring Injection Dump (RID). The high power beam dump typically handles up to 50-100 kW of beam power, and its setup is complicated by the fact that two separate beam components are simultaneously directed to the dump. The beam on target is typically at the 800-1000 kW average power level, delivered in sub-microsecond 60 Hz pulses. Setup techniques using beam measurements to quantify the beam parameters at the target and dump will be described. However, not all the instrumentation used for the setup and initial qualification is available during high power operation. Additional techniques are used to monitor the beam during high power operation to ensure the setup conditions are maintained, and these are also described.

  14. Temperature field for radiative tomato peeling

    NASA Astrophysics Data System (ADS)

    Cuccurullo, G.; Giordano, L.

    2017-01-01

    Nowadays, the peeling of tomatoes is performed using steam or lye, which are expensive and polluting techniques, so sustainable dry-peeling alternatives are being sought; among them, radiative heating seems to be a fairly promising method. This paper aims to speed up the prediction of the surface temperatures needed to realize dry peeling, so a 1D analytical model for the unsteady temperature field in a rotating tomato exposed to a radiative heating source is presented. Since only short times are of interest for the problem at hand, the model involves a semi-infinite slab cooled by convective heat transfer while heated by a pulsating heat source. The model being linear, the solution is derived following the Laplace transform method. A 3D finite element model of the rotating tomato is introduced as well in order to validate the analytical solution. A satisfactory agreement is attained. Therefore, two different ways to predict the onset of the peeling conditions are available, which can be of help for the proper design of peeling plants. Particular attention is paid to the study of surface temperature uniformity, as this is a critical parameter for realizing easy tomato peeling.

  15. Time-resolved imaging of the microbunching instability and energy spread at the Linac Coherent Light Source

    DOE PAGES

    Ratner, D.; Behrens, C.; Ding, Y.; ...

    2015-03-09

    The microbunching instability (MBI) is a well known problem for high brightness electron beams and has been observed at accelerator facilities around the world. Free-electron lasers (FELs) are particularly susceptible to MBI, which can distort the longitudinal phase space and increase the beam's slice energy spread (SES). Past studies of MBI at the Linac Coherent Light Source (LCLS) relied on optical transition radiation to infer the existence of microbunching. With the development of the X-band transverse deflecting cavity (XTCAV), we can for the first time directly image the longitudinal phase space at the end of the accelerator and complete a comprehensive study of MBI, revealing both detailed MBI behavior and insights into mitigation schemes. The fine time resolution of the XTCAV also provides the first LCLS measurements of the final SES, a critical parameter for many advanced FEL schemes. As a result, detailed MBI and SES measurements can aid in understanding MBI mechanisms, benchmarking simulation codes, and designing future high-brightness accelerators.

  16. Future research needs involving pathogens in groundwater

    USGS Publications Warehouse

    Bradford, Scott A.; Harvey, Ronald W.

    2017-01-01

    Contamination of groundwater by enteric pathogens has commonly been associated with disease outbreaks. Proper management and treatment of pathogen sources are important prerequisites for preventing groundwater contamination. However, non-point sources of pathogen contamination are frequently difficult to identify, and existing approaches for pathogen detection are costly and only provide semi-quantitative information. Microbial indicators that are readily quantified often do not correlate with the presence of pathogens. Pathogens of emerging concern and increasing detections of antibiotic resistance among bacterial pathogens in groundwater are topics of growing concern. Adequate removal of pathogens during soil passage is therefore critical for safe groundwater extraction. Processes that enhance pathogen transport (e.g., high velocity zones and preferential flow) and diminish pathogen removal (e.g., reversible retention and enhanced survival) are of special concern because they increase the risk of groundwater contamination, but are still incompletely understood. Improved theory and modeling tools are needed to analyze experimental data, test hypotheses, understand coupled processes and controlling mechanisms, predict spatial and/or temporal variability in model parameters and uncertainty in pathogen concentrations, assess risk, and develop mitigation and best management approaches to protect groundwater.

  17. Imaging strategies for the study of gas turbine spark ignition

    NASA Astrophysics Data System (ADS)

    Gord, James R.; Tyler, Charles; Grinstead, Keith D., Jr.; Fiechtner, Gregory J.; Cochran, Michael J.; Frus, John R.

    1999-10-01

    Spark-ignition systems play a critical role in the performance of essentially all gas turbine engines. These devices are responsible for initiating the combustion process that sustains engine operation. Demanding applications such as cold start and high-altitude relight require continued enhancement of ignition systems. To characterize advanced ignition systems, we have developed a number of laser-based diagnostic techniques configured for ultrafast imaging of spark parameters including emission, density, temperature, and species concentration. These diagnostics have been designed to exploit an ultrafast-framing charge-coupled-device (CCD) camera and high-repetition-rate laser sources including mode-locked Ti:sapphire oscillators and regenerative amplifiers. Spontaneous-emission and laser-schlieren measurements have been accomplished with this instrumentation, and the results applied to the study of a novel Unison Industries spark igniter that shows great promise for improved cold-start and high-altitude-relight capability compared with that of igniters currently in use throughout military and commercial fleets. Phase-locked and ultrafast real-time imaging strategies are explored, and details of the imaging instrumentation, particularly the CCD camera and laser sources, are discussed.

  18. Utility of transthoracic echocardiography (TTE) in assessing fluid responsiveness in critically ill patients - a challenge for the bedside sonographer.

    PubMed

    Mielnicki, Wojciech; Dyla, Agnieszka; Zawada, Tomasz

    2016-12-05

    Transthoracic echocardiography (TTE) has become one of the most important diagnostic tools in the treatment of critically ill patients. It allows clinicians to recognise potentially reversible life-threatening situations and is also very effective in the monitoring of the fluid status of patients, gradually supplanting invasive methods in the intensive care unit. Hemodynamic assessment is based on a few static and dynamic parameters. Dynamic parameters change during the respiratory cycle in mechanical ventilation, and the magnitude of this change directly corresponds to fluid responsiveness. Most of the parameters cannot be used in spontaneously breathing patients. For these patients the most important test is passive leg raising, which is a good substitute for a fluid bolus. Although TTE is very useful in the critical care setting, we should not forget the important limitations, not only technical ones but also those caused by the critical illness itself. Unfortunately, this method does not allow continuous monitoring, and every change in the patient's condition requires repeated examination.
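
    As a concrete illustration of one such dynamic parameter, the sketch below computes the respiratory variation in aortic peak velocity from its extreme values over one ventilation cycle. The function name, example velocities, and the 12% cutoff are illustrative assumptions, not values taken from this abstract.

```python
def delta_v_peak(v_max, v_min):
    """Respiratory variation in aortic peak velocity (percent):
    the max-min difference over one respiratory cycle, normalized
    by the mean of the two extremes."""
    mean = (v_max + v_min) / 2.0
    return 100.0 * (v_max - v_min) / mean

# Hypothetical peak velocities of 1.10 m/s and 0.95 m/s across one
# ventilator cycle.
variation = delta_v_peak(1.10, 0.95)

# A variation above roughly 12% is often quoted as suggesting fluid
# responsiveness in mechanically ventilated patients (assumed cutoff).
responsive = variation > 12.0
```

    The same max-min-over-mean form underlies several of the dynamic indices mentioned above; only the measured quantity changes.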

  19. Flight parameter estimation using instantaneous frequency and direction of arrival measurements from a single acoustic sensor node.

    PubMed

    Lo, Kam W

    2017-03-01

    When an airborne sound source travels past a stationary ground-based acoustic sensor node in a straight line at constant altitude and constant speed that is not much less than the speed of sound in air, the movement of the source during the propagation of the signal from the source to the sensor node (commonly referred to as the "retardation effect") enables the full set of flight parameters of the source to be estimated by measuring the direction of arrival (DOA) of the signal at the sensor node over a sufficiently long period of time. This paper studies the possibility of using instantaneous frequency (IF) measurements from the sensor node to improve the precision of the flight parameter estimates when the source spectrum contains a harmonic line of constant frequency. A simplified Cramer-Rao lower bound analysis shows that the standard deviations in the estimates of the flight parameters can be reduced when IF measurements are used together with DOA measurements. Two flight parameter estimation algorithms that utilize both IF and DOA measurements are described and their performances are evaluated using both simulated data and real data.
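
    The retardation effect described above can be made concrete with a short sketch: the instantaneous frequency received at the sensor is obtained by solving the retarded-time equation and applying the Doppler relation at the emission time. The level-flight geometry follows the abstract; the function names and parameter values are illustrative assumptions.

```python
import math

C = 343.0  # assumed speed of sound in air (m/s)

def received_frequency(t, f0, v, h, d, t_cpa):
    """Instantaneous frequency at a ground sensor (origin) from a
    harmonic source of constant frequency f0, flying level at altitude h
    with horizontal offset d at closest approach, speed v, passing the
    closest point of approach at time t_cpa.

    Solves the retarded-time equation t = tau + r(tau)/C by fixed-point
    iteration (a contraction for v < C), then applies the Doppler
    relation f = f0 / (1 + rdot/C) at the emission time tau.
    """
    def range_at(tau):
        x = v * (tau - t_cpa)           # along-track position at emission
        return math.sqrt(x * x + d * d + h * h)

    tau = t
    for _ in range(50):
        tau = t - range_at(tau) / C

    x = v * (tau - t_cpa)
    r = range_at(tau)
    rdot = v * x / r                    # range rate at the emission time
    return f0 / (1.0 + rdot / C)
```

    Before the closest point of approach the measured frequency sits above the source frequency, and after it below, which is the structure the IF-based estimators exploit.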

  20. The critical crossover at the n-hexane-water interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tikhonov, A. M., E-mail: tikhonov@kapitza.ras.r

    According to estimates of the parameters of the critical crossover in monolayers of long-chain alcohol molecules adsorbed at the n-hexane-water interface, all systems in which this phenomenon is observed are characterized by the same value of the critical exponent ν ≈ 1.8.

  1. Correlation of Electric Field and Critical Design Parameters for Ferroelectric Tunable Microwave Filters

    NASA Technical Reports Server (NTRS)

    Subramanyam, Guru; VanKeuls, Fred W.; Miranda, Felix A.; Canedy, Chadwick L.; Aggarwal, Sanjeev; Venkatesan, Thirumalai; Ramesh, Ramamoorthy

    2000-01-01

    The correlation of electric field and critical design parameters such as the insertion loss, frequency tunability, return loss, and bandwidth of conductor/ferroelectric/dielectric microstrip tunable K-band microwave filters is discussed in this work. This work is based primarily on barium strontium titanate (BSTO) ferroelectric thin film based tunable microstrip filters for room temperature applications. Two new parameters, which we believe will simplify the evaluation of ferroelectric thin films for tunable microwave filters, are defined. The first of these, called the sensitivity parameter, is defined as the incremental change in center frequency with incremental change in maximum applied electric field (EPEAK) in the filter. The other, the loss parameter, is defined as the incremental or decremental change in insertion loss of the filter with incremental change in maximum applied electric field. At room temperature, the Au/BSTO/LAO microstrip filters exhibited a sensitivity parameter value between 5 and 15 MHz/cm/kV. The loss parameter varied for different bias configurations used for electrically tuning the filter. The loss parameter varied from 0.05 to 0.01 dB/cm/kV at room temperature.
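
    Both figures of merit defined above are finite-difference ratios over a bias sweep, which the following sketch computes. The sweep values are hypothetical placeholders, chosen only to land in the ranges quoted in the abstract.

```python
def incremental_ratio(y, x):
    """Finite-difference ratio dy/dx between successive bias points."""
    return [(y2 - y1) / (x2 - x1)
            for y1, y2, x1, x2 in zip(y, y[1:], x, x[1:])]

# Hypothetical bias sweep: peak field in kV/cm, center frequency in MHz,
# insertion loss in dB (illustrative values, not measured data).
e_peak = [0.0, 20.0, 40.0, 60.0]
f_center = [19000.0, 19240.0, 19440.0, 19600.0]
ins_loss = [5.0, 4.4, 3.9, 3.5]

sensitivity = incremental_ratio(f_center, e_peak)  # MHz per unit field
loss_param = incremental_ratio(ins_loss, e_peak)   # dB per unit field
```

    With these placeholder numbers the sensitivity parameter comes out between 8 and 12 MHz per kV/cm and the loss parameter is negative (insertion loss improves with bias), mirroring the trends reported above.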

  2. The HelCat dual-source plasma device.

    PubMed

    Lynn, Alan G; Gilmore, Mark; Watts, Christopher; Herrea, Janis; Kelly, Ralph; Will, Steve; Xie, Shuangwei; Yan, Lincan; Zhang, Yue

    2009-10-01

    The HelCat (Helicon-Cathode) device has been constructed to support a broad range of basic plasma science experiments relevant to the areas of solar physics, laboratory astrophysics, plasma nonlinear dynamics, and turbulence. These research topics require a relatively large plasma source capable of operating over a broad region of parameter space with a plasma duration up to at least several milliseconds. To achieve these parameters a novel dual-source system was developed utilizing both helicon and thermionic cathode sources. Plasma parameters of nₑ ≈ 0.5–50 × 10¹⁸ m⁻³ and Tₑ ≈ 3–12 eV allow access to a wide range of collisionalities important to the research. The HelCat device and initial characterization of plasma behavior during dual-source operation are described.

  3. Sensitivity study of experimental measures for the nuclear liquid-gas phase transition in the statistical multifragmentation model

    NASA Astrophysics Data System (ADS)

    Lin, W.; Ren, P.; Zheng, H.; Liu, X.; Huang, M.; Wada, R.; Qu, G.

    2018-05-01

    The experimental measures of the multiplicity derivatives—the moment parameters, the bimodal parameter, the fluctuation of maximum fragment charge number (normalized variance of Zmax, or NVZ), the Fisher exponent (τ ), and the Zipf law parameter (ξ )—are examined to search for the liquid-gas phase transition in nuclear multifragmentation processes within the framework of the statistical multifragmentation model (SMM). The sensitivities of these measures are studied. All these measures predict a critical signature at or near the critical point for both the primary and secondary fragments. Among these measures, the total multiplicity derivative and the NVZ provide accurate measures for the critical point from the final cold fragments as well as the primary fragments. The present study will provide a guide for future experiments and analyses in the study of the nuclear liquid-gas phase transition.
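
    One of the measures above, the normalized variance of Zmax (NVZ), has a compact definition that can be sketched directly. The Var/mean form used here is the commonly quoted one and is assumed, and the event sample is synthetic.

```python
def nvz(zmax_per_event):
    """Normalized variance of the largest fragment charge:
    NVZ = Var(Zmax) / <Zmax>, taken over an ensemble of events."""
    n = len(zmax_per_event)
    mean = sum(zmax_per_event) / n
    var = sum((z - mean) ** 2 for z in zmax_per_event) / n
    return var / mean

# Synthetic sample: mixing "liquid-like" events (one large residue)
# with "gas-like" events (small largest fragment) inflates NVZ, which
# is why the measure peaks near the transition region.
mixed = [40, 38, 42, 5, 6, 4]
liquid_only = [40, 38, 42]
assert nvz(mixed) > nvz(liquid_only)
```

    The peak of NVZ as a function of excitation energy is what serves as the critical-point signature in the analysis described above.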

  4. Process analytical technologies (PAT) in freeze-drying of parenteral products.

    PubMed

    Patel, Sajal Manubhai; Pikal, Michael

    2009-01-01

    Quality by Design (QbD) aims at assuring quality by proper design and control, utilizing appropriate Process Analytical Technologies (PAT) to monitor critical process parameters during processing to ensure that the product meets the desired quality attributes. This review provides a comprehensive list of process monitoring devices that can be used to monitor critical process parameters and focuses on a critical review of the viability of the PAT schemes proposed. R&D needs in PAT for freeze-drying have also been addressed, with particular emphasis on batch techniques that can be used on all dryers independent of the dryer scale.

  5. Estimated critical conditions for UO₂F₂–H₂O systems in fully water-reflected spherical geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jordan, W.C.; Turner, J.C.

    1992-12-01

    The purpose of this report is to document reference calculations performed using the SCALE-4.0 code system to determine the critical parameters of UO₂F₂–H₂O spheres. The calculations are an extension of those documented in ORNL/CSD/TM-284. Specifically, the data for low-enriched UO₂F₂–H₂O spheres have been extended to highly enriched uranium. These calculations, together with those reported in ORNL/CSD/TM-284, provide a consistent set of critical parameters (k∞

  6. Optimized Energy Harvesting, Cluster-Head Selection and Channel Allocation for IoTs in Smart Cities

    PubMed Central

    Aslam, Saleem; Hasan, Najam Ul; Jang, Ju Wook; Lee, Kyung-Geun

    2016-01-01

    This paper highlights three critical aspects of the internet of things (IoTs), namely (1) energy efficiency, (2) energy balancing and (3) quality of service (QoS) and presents three novel schemes for addressing these aspects. For energy efficiency, a novel radio frequency (RF) energy-harvesting scheme is presented in which each IoT device is associated with the best possible RF source in order to maximize the overall energy that the IoT devices harvest. For energy balancing, the IoT devices in close proximity are clustered together and then an IoT device with the highest residual energy is selected as a cluster head (CH) on a rotational basis. Once the CH is selected, it assigns channels to the IoT devices to report their data using a novel integer linear program (ILP)-based channel allocation scheme by satisfying their desired QoS. To evaluate the presented schemes, exhaustive simulations are carried out by varying different parameters, including the number of IoT devices, the number of harvesting sources, the distance between RF sources and IoT devices and the primary user (PU) activity of different channels. The simulation results demonstrate that our proposed schemes perform better than the existing ones. PMID:27918424
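
    A minimal sketch of the rotational cluster-head rule described above. Device names, energies, and channel scores are hypothetical, and the paper's ILP channel allocation is replaced here by a simple greedy stand-in, not the authors' formulation.

```python
def select_cluster_head(residual_energy):
    """Pick the device with the highest residual energy as cluster head
    (CH). Rotation emerges over rounds: serving as CH drains energy, so
    a different device tends to win the next election."""
    return max(residual_energy, key=residual_energy.get)

def greedy_channel_allocation(devices, channel_quality):
    """Greedy stand-in for the ILP: each device in turn takes the best
    remaining channel (channel_quality maps channel id -> score)."""
    free = dict(channel_quality)
    allocation = {}
    for dev in devices:
        best = max(free, key=free.get)
        allocation[dev] = best
        del free[best]
    return allocation

cluster = {"dev1": 0.8, "dev2": 1.2, "dev3": 0.9}
ch = select_cluster_head(cluster)               # "dev2" wins this round
members = [d for d in cluster if d != ch]
alloc = greedy_channel_allocation(members, {"ch1": 0.6, "ch2": 0.9, "ch3": 0.4})
```

    An actual ILP would trade off QoS constraints across all devices jointly; the greedy loop only approximates that behaviour.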

  7. Protein source in a high-protein diet modulates reductions in insulin resistance and hepatic steatosis in fa/fa Zucker rats.

    PubMed

    Wojcik, Jennifer L; Devassy, Jessay G; Wu, Yinghong; Zahradka, Peter; Taylor, Carla G; Aukema, Harold M

    2016-01-01

    High-protein diets are being promoted to reduce insulin resistance and hepatic steatosis in metabolic syndrome. Therefore, the effect of protein source in high-protein diets on reducing insulin resistance and hepatic steatosis was examined. Fa/fa Zucker rats were provided normal-protein (15% of energy) casein, high-protein (35% of energy) casein, high-protein soy, or high-protein mixed diets with animal and plant proteins. The high-protein mixed diet reduced area under the curve for insulin during glucose tolerance testing, fasting serum insulin and free fatty acid concentrations, homeostatic model assessment index, insulin to glucose ratio, and pancreatic islet cell area. The high-protein mixed and the high-protein soy diets reduced hepatic lipid concentrations, liver to body weight ratio, and hepatic steatosis rating. These improvements were observed despite no differences in body weight, feed intake, or adiposity among high-protein diet groups. The high-protein casein diet had minimal benefits. A high-protein mixed diet was the most effective for modulating reductions in insulin resistance and hepatic steatosis independent of weight loss, indicating that the source of protein within a high-protein diet is critical for the management of these metabolic syndrome parameters. © 2015 The Obesity Society.

  8. Optimized Energy Harvesting, Cluster-Head Selection and Channel Allocation for IoTs in Smart Cities.

    PubMed

    Aslam, Saleem; Hasan, Najam Ul; Jang, Ju Wook; Lee, Kyung-Geun

    2016-12-02

    This paper highlights three critical aspects of the internet of things (IoTs), namely (1) energy efficiency, (2) energy balancing and (3) quality of service (QoS) and presents three novel schemes for addressing these aspects. For energy efficiency, a novel radio frequency (RF) energy-harvesting scheme is presented in which each IoT device is associated with the best possible RF source in order to maximize the overall energy that the IoT devices harvest. For energy balancing, the IoT devices in close proximity are clustered together and then an IoT device with the highest residual energy is selected as a cluster head (CH) on a rotational basis. Once the CH is selected, it assigns channels to the IoT devices to report their data using a novel integer linear program (ILP)-based channel allocation scheme by satisfying their desired QoS. To evaluate the presented schemes, exhaustive simulations are carried out by varying different parameters, including the number of IoT devices, the number of harvesting sources, the distance between RF sources and IoT devices and the primary user (PU) activity of different channels. The simulation results demonstrate that our proposed schemes perform better than the existing ones.

  9. A perspective on the proliferation risks of plutonium mines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyman, E.S.

    1996-05-01

    The program of geologic disposal of spent fuel and other plutonium-containing materials is increasingly becoming the target of criticism by individuals who argue that in the future, repositories may become low-cost sources of fissile material for nuclear weapons. This paper attempts to outline a consistent framework for analyzing the proliferation risks of these so-called "plutonium mines" and putting them into perspective. First, it is emphasized that the attractiveness of plutonium in a repository as a source of weapons material depends on its accessibility relative to other sources of fissile material. Then, the notion of a "material production standard" (MPS) is proposed: namely, that the proliferation risks posed by geologic disposal will be acceptable if one can demonstrate, under a number of reasonable scenarios, that the recovery of plutonium from a repository is likely to be as difficult as new production of fissile material. A preliminary analysis suggests that the range of circumstances under which current mined repository concepts would fail to meet this standard is fairly narrow. Nevertheless, a broad application of the MPS may impose severe restrictions on repository design. In this context, the relationship of repository design parameters to ease of recovery is discussed.

  10. Evaluating the climate benefits of CO2-enhanced oil recovery using life cycle analysis.

    PubMed

    Cooney, Gregory; Littlefield, James; Marriott, Joe; Skone, Timothy J

    2015-06-16

    This study uses life cycle analysis (LCA) to evaluate the greenhouse gas (GHG) performance of carbon dioxide (CO2) enhanced oil recovery (EOR) systems. A detailed gate-to-gate LCA model of EOR was developed and incorporated into a cradle-to-grave boundary with a functional unit of 1 MJ of combusted gasoline. The cradle-to-grave model includes two sources of CO2: natural domes and anthropogenic (fossil power equipped with carbon capture). A critical parameter is the crude recovery ratio, which describes how much crude is recovered for a fixed amount of purchased CO2. When CO2 is sourced from a natural dome, increasing the crude recovery ratio decreases emissions; the opposite is true for anthropogenic CO2. When the CO2 is sourced from a power plant, the electricity coproduct is assumed to displace existing power. With anthropogenic CO2, increasing the crude recovery ratio reduces the amount of CO2 required, thereby reducing the amount of power displaced and the corresponding credit. Only the anthropogenic EOR cases result in emissions lower than conventionally produced crude. This is not specific to EOR; rather, it reflects the fact that carbon-intensive electricity is being displaced by power from the capture-equipped plant, and the fuel produced from that system receives a credit for this displacement.
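
    The opposing directional effects of the crude recovery ratio can be sketched with a toy balance. Every coefficient below (combustion intensity, energy per barrel, upstream and credit factors) is a hypothetical placeholder chosen only to reproduce the directions described in the abstract, not a result of the study.

```python
def eor_emissions_per_mj(recovery_ratio, co2_source):
    """Illustrative GHG balance (g CO2e per MJ of combusted gasoline).

    recovery_ratio: barrels of crude recovered per tonne of purchased
    CO2. All coefficients are hypothetical placeholders.
    """
    COMBUSTION = 73.0      # combustion emissions, g/MJ (placeholder)
    MJ_PER_BBL = 5800.0    # energy yield per barrel (placeholder)
    # Grams of purchased CO2 attributed to each MJ of fuel:
    co2_per_mj = 1e6 / (recovery_ratio * MJ_PER_BBL)
    if co2_source == "dome":
        # Dome CO2: extraction burden scales with purchased CO2, so a
        # higher recovery ratio amortizes it over more fuel.
        return COMBUSTION + 0.1 * co2_per_mj
    else:
        # Anthropogenic CO2: each gram purchased carries a displacement
        # credit, so a higher recovery ratio shrinks the credit.
        return COMBUSTION - 0.3 * co2_per_mj
```

    The sketch reproduces the qualitative finding: only the anthropogenic cases dip below the combustion-only baseline, and they do so because of the displacement credit, not the EOR step itself.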

  11. Monte Carlo Determination of Dosimetric Parameters of a New (125)I Brachytherapy Source According to AAPM TG-43 (U1) Protocol.

    PubMed

    Baghani, Hamid Reza; Lohrabian, Vahid; Aghamiri, Mahmoud Reza; Robatjazi, Mostafa

    2016-03-01

    (125)I is one of the important sources frequently used in brachytherapy. Up to now, several different commercial models of this source type have been introduced into clinical radiation oncology applications. Recently, a new source model, IrSeed-125, has been added to this list. The aim of the present study is to determine the dosimetric parameters of this new source model based on the recommendations of the TG-43 (U1) protocol using Monte Carlo simulation. The dosimetric characteristics of IrSeed-125, including the dose rate constant, radial dose function, 2D anisotropy function and 1D anisotropy function, were determined inside liquid water using the MCNPX code and compared to those of other commercially available iodine sources. The dose rate constant of this new source was found to be 0.983 ± 0.015 cGy h⁻¹ U⁻¹, in good agreement with the TLD measured data (0.965 cGy h⁻¹ U⁻¹). The 1D anisotropy function values at 3, 5, and 7 cm radial distances were obtained as 0.954, 0.953 and 0.959, respectively. The results of this study showed that the dosimetric characteristics of this new brachytherapy source are comparable with those of other commercially available sources. Furthermore, the simulated parameters were in accordance with the previously measured ones. Therefore, the Monte Carlo calculated dosimetric parameters could be employed to obtain the dose distribution around this new brachytherapy source based on the TG-43 (U1) protocol.
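
    The TG-43 formalism these parameters feed into can be sketched in its 1D, point-source form. The dose rate constant and the 3/5/7 cm anisotropy values below are those reported in the abstract; the radial dose function table, the air-kerma strength, and the interpolation helper are illustrative assumptions.

```python
LAMBDA = 0.983  # cGy h^-1 U^-1, the Monte Carlo dose rate constant above

def interp(table, r):
    """Piecewise-linear interpolation in a [(r, value), ...] table,
    clamped to the end values outside the tabulated range."""
    for (r1, v1), (r2, v2) in zip(table, table[1:]):
        if r1 <= r <= r2:
            return v1 + (v2 - v1) * (r - r1) / (r2 - r1)
    return table[0][1] if r < table[0][0] else table[-1][1]

# Radial dose function g(r): placeholder values for illustration only.
G_TABLE = [(1.0, 1.0), (3.0, 0.70), (5.0, 0.45), (7.0, 0.28)]
# 1D anisotropy function: the 3, 5 and 7 cm values reported above.
PHI_TABLE = [(3.0, 0.954), (5.0, 0.953), (7.0, 0.959)]

def dose_rate_1d(r_cm, s_k):
    """TG-43 (U1) 1D formalism with a point-source geometry function:
    D(r) = S_k * Lambda * (r0/r)^2 * g(r) * phi_an(r), with r0 = 1 cm."""
    return (s_k * LAMBDA * (1.0 / r_cm) ** 2
            * interp(G_TABLE, r_cm) * interp(PHI_TABLE, r_cm))
```

    A full implementation would use the line-source geometry function and the 2D anisotropy table; the point-source form above is the common 1D simplification.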

  12. Critical Thinking: The Role of Management Education. Developing Managers To Think Critically.

    ERIC Educational Resources Information Center

    Pierce, Gloria

    Emphasizing critical thinking as the source of renewal and survival of organizations, this document begins by analyzing the Exxon Valdez oil spill and the destruction of the space shuttle Challenger as examples of inadequate critical thinking. The role of management education in promoting critical thinking is explored as well as the need for a…

  13. Vehicle response-based track geometry assessment using multi-body simulation

    NASA Astrophysics Data System (ADS)

    Kraft, Sönke; Causse, Julien; Coudert, Frédéric

    2018-02-01

    The assessment of the geometry of railway tracks is an indispensable requirement for safe rail traffic. Defects which represent a risk for the safety of the train have to be identified and the necessary measures taken. According to current standards, amplitude thresholds are applied to the track geometry parameters measured by recording cars. This geometry-based assessment has proved its value but suffers from the low correlation between the geometry parameters and the vehicle reactions. Experience shows that some defects leading to critical vehicle reactions are underestimated by this approach. The use of vehicle responses in the track geometry assessment process allows identifying critical defects and improving the maintenance operations. This work presents a vehicle response-based assessment method using multi-body simulation. The choice of the relevant operation conditions and the estimation of the simulation uncertainty are outlined. The defects are identified from exceedances of track geometry and vehicle response parameters. They are then classified using clustering methods and the correlation with vehicle response is analysed. The use of vehicle responses allows the detection of critical defects which are not identified from geometry parameters.
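
    The exceedance-based defect identification step can be sketched as follows. The signal values, positions, and threshold are illustrative, and the clustering used in the paper is reduced here to simply merging contiguous exceedances into one defect.

```python
def find_defects(signal, positions, threshold):
    """Return (start, end) position intervals where |signal| exceeds the
    amplitude threshold, merging contiguous samples into one defect."""
    defects, start = [], None
    for pos, value in zip(positions, signal):
        if abs(value) > threshold:
            if start is None:
                start = pos
            end = pos
        elif start is not None:
            defects.append((start, end))
            start = None
    if start is not None:
        defects.append((start, end))
    return defects

# Illustrative lateral alignment signal (mm) sampled every metre:
track = [0.0, 5.2, 6.1, 0.3, 7.4, 0.1]
metres = [0, 1, 2, 3, 4, 5]
flagged = find_defects(track, metres, 4.0)  # two defect intervals
```

    In the vehicle-response-based scheme the same exceedance logic would be applied to simulated accelerations or wheel-rail forces rather than to the raw geometry signal.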

  14. Individual phase constitutive properties of a TRIP-assisted QP980 steel from a combined synchrotron X-ray diffraction and crystal plasticity approach

    DOE PAGES

    Hu, Xiao Hua; Sun, X.; Hector, Jr., L. G.; ...

    2017-04-21

    Here, microstructure-based constitutive models for multiphase steels require accurate constitutive properties of the individual phases for component forming and performance simulations. We address this requirement with a combined experimental/theoretical methodology which determines the critical resolved shear stresses and hardening parameters of the constituent phases in QP980, a TRIP-assisted steel subject to a two-step quenching and partitioning heat treatment. High energy X-ray diffraction (HEXRD) from a synchrotron source provided the average lattice strains of the ferrite, martensite, and austenite phases from the measured volume during in situ tensile deformation. The HEXRD data were then input to a computationally efficient, elastic-plastic self-consistent (EPSC) crystal plasticity model which estimated the constitutive parameters of different slip systems for the three phases via a trial-and-error approach. The EPSC-estimated parameters are then input to a finite element crystal plasticity (CPFE) model representing the QP980 tensile sample. The predicted lattice strains and global stress versus strain curves are found to be 8% lower than the EPSC model predictions and the HEXRD measurements, respectively. This discrepancy, which is attributed to the stiff secant assumption in the EPSC formulation, is resolved with a second step in which CPFE is used to iteratively refine the EPSC-estimated parameters. Remarkably close agreement is obtained between the theoretically predicted and experimentally derived flow curves for the QP980 material.
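
    The second-step refinement can be sketched generically: rescale the EPSC-estimated strength parameters until the higher-fidelity model reproduces the target response. The stand-in model below mimics the 8% underprediction noted above with a 0.92 factor; it is not a CPFE solver, and all numbers are illustrative.

```python
def refine_parameters(initial_params, run_model, target, tol=1e-3, max_iter=50):
    """Iteratively rescale strength parameters until run_model (the
    higher-fidelity model) reproduces the target flow stress to within
    a relative tolerance."""
    params = list(initial_params)
    for _ in range(max_iter):
        predicted = run_model(params)
        ratio = target / predicted
        if abs(ratio - 1.0) < tol:
            break
        params = [p * ratio for p in params]
    return params

# Stand-in "CPFE": responds linearly to the parameters but comes out 8%
# low relative to the calibrated target, as in the text above.
cpfe_like = lambda p: 0.92 * sum(p)
target_stress = 600.0
refined = refine_parameters([300.0, 300.0], cpfe_like, target_stress)
```

    A real refinement would adjust the critical resolved shear stresses and hardening parameters per slip system against the full lattice-strain curves, not a single scalar stress.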

  15. Individual phase constitutive properties of a TRIP-assisted QP980 steel from a combined synchrotron X-ray diffraction and crystal plasticity approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, X. H.; Sun, X.; Hector, L. G.

    2017-06-01

    Microstructure-based constitutive models for multiphase steels require accurate constitutive properties of the individual phases for component forming and performance simulations. We address this requirement with a combined experimental/theoretical methodology which determines the critical resolved shear stresses and hardening parameters of the constituent phases in QP980, a TRIP-assisted steel subject to a two-step quenching and partitioning heat treatment. High energy X-ray diffraction (HEXRD) from a synchrotron source provided the average lattice strains of the ferrite, martensite, and austenite phases from the measured volume during in situ tensile deformation. The HEXRD data were then input to a computationally efficient, elastic-plastic self-consistent (EPSC) crystal plasticity model which estimated the constitutive parameters of different slip systems for the three phases via a trial-and-error approach. The EPSC-estimated parameters are then input to a finite element crystal plasticity (CPFE) model representing the QP980 tensile sample. The predicted lattice strains and global stress versus strain curves are found to be 8% lower than the EPSC model predictions and the HEXRD measurements, respectively. This discrepancy, which is attributed to the stiff secant assumption in the EPSC formulation, is resolved with a second step in which CPFE is used to iteratively refine the EPSC-estimated parameters. Remarkably close agreement is obtained between the theoretically predicted and experimentally derived flow curves for the QP980 material.

  16. Radiant energy and insensible water loss in the premature newborn infant nursed under a radiant warmer.

    PubMed

    Baumgart, S

    1982-10-01

    Radiant warmers are a powerful and efficient source of heat serving to warm the cold-stressed infant acutely and to provide uninterrupted maintenance of body temperature despite a multiplicity of nursing, medical, and surgical procedures required to care for the critically ill premature newborn in today's intensive care nursery. A recognized side-effect of radiant warmer beds is the now well-documented increase in insensible water loss through evaporation from an infant's skin. Particularly the very-low-birth-weight, severely premature, and critically ill neonate is subject to this increase in evaporative water loss. The clinician caring for the infant is faced with the difficult problem of fluid and electrolyte balance, which requires vigilant monitoring of all parameters of fluid homeostasis. Compounding these difficulties, other portions of the electromagnetic spectrum (for example, phototherapy) may affect an infant's fluid metabolism by mechanisms that are not well understood. The role of plastic heat shielding in reducing large insensible losses in infants nursed on radiant warmer beds is currently under intense investigation. Apparently, convective air currents and not radiant heat energy may be the cause of the observed increase in insensible water loss in the intensive care nursery. A thin plastic blanket may be effective in reducing evaporative water loss by diminishing an infant's exposure to convective air currents while being nursed on an open radiant warmer bed. A rigid plastic body hood, although effective as a radiant heat shield, is not as effective in preventing exposure to convection in the intensive care nursery and, therefore, is not as effective as the thin plastic blanket in reducing insensible water loss. 
Care should be exercised in determining the effect of heat shielding on all parameters of heat exchange (convection, evaporation, and radiation) before application is made to the critically ill premature infant nursed on an open radiant warmer bed.

  17. Data-Conditioned Distributions of Groundwater Recharge Under Climate Change Scenarios

    NASA Astrophysics Data System (ADS)

    McLaughlin, D.; Ng, G. C.; Entekhabi, D.; Scanlon, B.

    2008-12-01

    Groundwater recharge is likely to be impacted by climate change, with changes in precipitation amounts altering moisture availability and changes in temperature affecting evaporative demand. This could have major implications for sustainable aquifer pumping rates and contaminant transport into groundwater reservoirs in the future, thus making predictions of recharge under climate change very important. Unfortunately, in dry environments where groundwater resources are often most critical, low recharge rates are difficult to resolve due to high sensitivity to modeling and input errors. Some recent studies on climate change and groundwater have considered recharge using a suite of general circulation model (GCM) weather predictions, an obvious and key source of uncertainty. This work extends beyond those efforts by also accounting for uncertainty in other land-surface model inputs in a probabilistic manner. Recharge predictions are made using a range of GCM projections for a rain-fed cotton site in the semi-arid Southern High Plains region of Texas. Results showed that model simulations using a range of unconstrained literature-based parameter values produce highly uncertain and often misleading recharge rates. Thus, distributional recharge predictions are found using soil and vegetation parameters conditioned on current unsaturated zone soil moisture and chloride concentration observations; assimilation of observations is carried out with an ensemble importance sampling method. Our findings show that the predicted distribution shapes can differ for the various GCM conditions considered, underscoring the importance of probabilistic analysis over deterministic simulations. The recharge predictions indicate that the temporal distribution (over seasons and rain events) of climate change will be particularly critical for groundwater impacts. 
Overall, changes in recharge amounts and intensity were often more pronounced than changes in annual precipitation and temperature, thus suggesting high susceptibility of groundwater systems to future climate change. Our approach provides a probabilistic sensitivity analysis of recharge under potential climate changes, which will be critical for future management of water resources.
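
    The conditioning step can be sketched as ensemble importance sampling: each parameter-ensemble member is weighted by the Gaussian likelihood of its predicted observation, and predictions are then averaged under those weights. All ensemble values below are hypothetical.

```python
import math

def importance_weights(predicted_obs, measured_obs, sigma):
    """Normalized importance weights: each ensemble member is weighted
    by the Gaussian likelihood of its predicted observation given the
    measurement and its error standard deviation."""
    logs = [-0.5 * ((p - measured_obs) / sigma) ** 2 for p in predicted_obs]
    m = max(logs)                       # subtract max for stability
    ws = [math.exp(l - m) for l in logs]
    total = sum(ws)
    return [w / total for w in ws]

# Hypothetical ensemble of recharge-model runs: predicted unsaturated
# zone chloride (mg/L) vs. a measured value of 100 mg/L.
pred_chloride = [120.0, 95.0, 100.0, 140.0]
w = importance_weights(pred_chloride, 100.0, 10.0)

# Conditioned (posterior-mean) recharge from per-member rates (mm/yr):
recharge = [4.0, 9.0, 8.0, 2.0]
posterior_mean = sum(wi * ri for wi, ri in zip(w, recharge))
```

    Members whose predictions match the chloride observation dominate the weighted average, which is how the observations constrain the otherwise wide literature-based parameter ranges.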

  18. Transition to a Source with Modified Physical Parameters by Energy Supply or Using an External Force

    NASA Astrophysics Data System (ADS)

    Kucherov, A. N.

    2017-11-01

    A study has been made of the possibility for the physical parameters of a source/sink, i.e., for the enthalpy, temperature, total pressure, maximum velocity, and minimum dimension, at a constant radial Mach number to be changed by energy or force action on the gas in a bounded zone. It has been shown that the parameters can be controlled at a subsonic, supersonic, and transonic (sonic in the limit) radial Mach number. In the updated source/sink, all versions of a vortex-source combination can be implemented: into a vacuum, out of a vacuum, into a submerged space, and out of a submerged space, partially or fully.

  19. Comparative Study of Light Sources for Household

    NASA Astrophysics Data System (ADS)

    Pawlak, Andrzej; Zalesińska, Małgorzata

    2017-03-01

    The article describes test results that provide the basis for defining and evaluating the basic photometric, colorimetric and electric parameters of selected, widely available light sources which are equivalent to a traditional incandescent 60-Watt light bulb. Overall, one halogen light bulb, three compact fluorescent lamps and eleven LED light sources were tested. In general, it was concluded that in most cases (branded products, in particular) the measured and calculated parameters differ from the values declared by manufacturers only to a small degree. LED sources prove to be the most beneficial substitute for traditional light bulbs, considering both their operational parameters and their price, which is comparable with the price of compact fluorescent lamps or, in some instances, even lower.

  20. Critical and subcritical damage monitoring of bonded composite repairs using innovative non-destructive techniques

    NASA Astrophysics Data System (ADS)

    Grammatikos, S. A.; Kordatos, E. Z.; Aggelis, D. G.; Matikas, T. E.; Paipetis, A. S.

    2012-04-01

    Infrared Thermography (IrT) has been shown to be capable of detecting and monitoring service-induced damage of repaired composite structures. Full-field imaging, along with portability, are the primary benefits of the thermographic technique. On-line lock-in thermography has been reported to successfully monitor damage propagation and/or stress concentration in composite coupons, as mechanical stresses in structures induce heat concentration phenomena around flaws. During mechanical fatigue, cyclic loading plays the role of the heating source, and this allows for critical and subcritical damage identification and monitoring using thermography. The Electrical Potential Change Technique (EPCT) is a new method for damage identification and monitoring during loading. The measurement of electrical potential changes at specific points of Carbon Fiber Reinforced Polymers (CFRPs) under load is reported to enable the monitoring of strain and/or damage accumulation. Finally, along with the aforementioned techniques, the Acoustic Emission (AE) method is well known to provide information about the location and type of damage. Damage accumulation due to cyclic loading alters certain AE parameters such as duration and energy. Within the scope of this study, infrared thermography is employed along with the AE and EPCT methods in order to assess the integrity of bonded repair patches on composite substrates and to monitor critical and subcritical damage induced by the mechanical loading. The combined methodologies were effective in identifying damage initiation and propagation in bonded composite repairs.

  1. Surface and Atmospheric Parameter Retrieval From AVIRIS Data: The Importance of Non-Linear Effects

    NASA Technical Reports Server (NTRS)

    Green, Robert O.; Moreno, Jose F.

    1996-01-01

    AVIRIS data represent a new and important approach for the retrieval of atmospheric and surface parameters from optical remote sensing data. Not only as a test for future space systems, but also as an operational airborne remote sensing system, the development of algorithms to retrieve information from AVIRIS data is an important step to these new approaches and capabilities. Many things have been learned since AVIRIS became operational, and the successive technical improvements in the hardware and the more sophisticated calibration techniques employed have increased the quality of the data to the point of almost meeting optimum user requirements. However, the potential capabilities of imaging spectrometry over the standard multispectral techniques have still not been fully demonstrated. Reasons for this are the technical difficulties in handling the data, the critical aspect of calibration for advanced retrieval methods, and the lack of proper models with which to invert the measured AVIRIS radiances in all the spectral channels. To achieve the potential of imaging spectrometry, these issues must be addressed. In this paper, an algorithm to retrieve information about both atmospheric and surface parameters from AVIRIS data, by using model inversion techniques, is described. Emphasis is put on the derivation of the model itself as well as proper inversion techniques, robust to noise in the data and an inadequate ability of the model to describe natural variability in the data. The problem of non-linear effects is addressed, as it has been demonstrated to be a major source of error in the numerical values retrieved by more simple, linear-based approaches. Non-linear effects are especially critical for the retrieval of surface parameters where both scattering and absorption effects are coupled, as well as in the cases of significant multiple-scattering contributions. 
However, sophisticated modeling approaches can handle such non-linear effects, which are especially important over vegetated surfaces. All the data used in this study were acquired during the 1991 Multisensor Airborne Campaign (MAC-Europe), as part of the European Field Experiment on a Desertification-threatened Area (EFEDA), carried out in Spain in June-July 1991.
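
    A minimal sketch of the model-inversion idea described above, assuming a hypothetical two-channel nonlinear forward model (the coefficients and channel extinction factors are invented for illustration, not AVIRIS quantities): the measured channels are inverted for aerosol optical depth and surface reflectance by brute-force misfit minimisation.

```python
import math

# Hypothetical forward model: two spectral channels as a nonlinear
# function of aerosol optical depth (aod) and surface reflectance (refl).
def forward(aod, refl):
    channels = []
    for k in (1.0, 0.5):              # per-channel extinction factors (assumed)
        t = math.exp(-k * aod)        # atmospheric transmittance
        channels.append(0.1 * (1.0 - t) + t * refl)
    return channels

def invert(measured, grid=50):
    """Brute-force inversion: minimise the squared misfit between measured
    and modelled channels over a coarse parameter grid."""
    best = None
    for i in range(grid + 1):
        for j in range(grid + 1):
            aod, refl = i / grid, j / grid
            misfit = sum((m - d) ** 2
                         for m, d in zip(forward(aod, refl), measured))
            if best is None or misfit < best[0]:
                best = (misfit, aod, refl)
    return best[1], best[2]

aod, refl = invert(forward(0.3, 0.4))   # recovers the true parameters
```

    Real retrievals replace the grid with gradient-based or stochastic optimisers and a full radiative transfer model, and must contend with the noise and model-adequacy issues the abstract highlights.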

  2. Loss of stability of a railway wheel-set, subcritical or supercritical

    NASA Astrophysics Data System (ADS)

    Zhang, Tingting; Dai, Huanyun

    2017-11-01

    Most research on railway vehicle stability analysis focuses on codimension 1 (for short, codim 1) bifurcations, such as the subcritical and supercritical Hopf bifurcations. The analysis of a codim 1 bifurcation can be completed based on one bifurcation parameter. However, two bifurcation parameters should be considered to give a general view of the motion of the system when it undergoes a degenerate Hopf bifurcation. This kind of bifurcation, named the generalised Hopf bifurcation, belongs to the codimension 2 (for short, codim 2) bifurcations, where two bifurcation parameters need to be taken into consideration. In this paper, we give a numerical analysis of the codim 2 bifurcations of a nonlinear railway wheel-set, using the QR algorithm to calculate the eigenvalues of the linearised system, together with the Golden Cut method and the shooting method to calculate the limit cycles around the Hopf bifurcation points. We find the existence of a generalised Hopf bifurcation, where a subcritical Hopf bifurcation turns into a supercritical one as the bifurcation parameters increase, in a nonlinear railway wheel-set model. Only the nonlinear wheel/rail interaction has been taken into consideration in the lateral model formulated in this paper. The motion of the wheel-set has been investigated when the bifurcation parameters are perturbed in the neighbourhood of their critical values, and the influences of different parameters on the critical values of the bifurcation parameters are also given. From the results, it can be seen that the bifurcation type of the wheel-set changes with a variation of the bifurcation parameters in the neighbourhood of their critical values.
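
    The critical parameter value at a Hopf point can be bracketed numerically by tracking where the real part of the leading eigenvalue of the linearised system crosses zero. The sketch below does this by bisection for a hypothetical 2x2 linearisation whose trace depends on a speed-like parameter v; the coefficients are invented and are not the paper's wheel-set model.

```python
import math

# Hypothetical 2x2 linearisation: the trace (net damping) depends on v.
def eigen_real_part(v):
    trace = 0.5 * v - 1.0     # crosses zero at v = 2 with these toy numbers
    det = 4.0                 # kept positive so a complex pair exists
    disc = trace * trace - 4.0 * det
    if disc < 0.0:            # complex-conjugate pair: real part is trace/2
        return trace / 2.0
    return (trace + math.sqrt(disc)) / 2.0   # larger real eigenvalue

def find_hopf(lo, hi, tol=1e-10):
    """Bisect on the parameter for the zero crossing of the real part."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if eigen_real_part(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

v_crit = find_hopf(0.0, 10.0)   # critical parameter of the toy system
```

    Whether the bifurcation at v_crit is sub- or supercritical is then decided by following the limit-cycle branch (e.g. with a shooting method), which is where the codim 2 analysis of the paper comes in.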

  3. Identification of Dust Source Regions at High-Resolution and Dynamics of Dust Source Mask over Southwest United States Using Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Sprigg, W. A.; Sahoo, S.; Prasad, A. K.; Venkatesh, A. S.; Vukovic, A.; Nickovic, S.

    2015-12-01

    Identification and evaluation of sources of aeolian mineral dust is a critical task in the simulation of dust. Recently, time series of space-based multi-sensor satellite images have been used to identify and monitor changes in land surface characteristics. Modeling of windblown dust requires precise delineation of the mineral dust source and its strength, which varies over a region as well as seasonally and inter-annually due to changes in land use and land cover. The southwest USA is one of the major dust-emission-prone zones in the North American continent, where dust is generated from low-lying, dried-up areas with bare ground surface; these areas may be scattered or appear as point sources on high-resolution satellite images. In the current research, various satellite-derived variables have been integrated to produce a high-resolution dust source mask, at a grid size of 250 m, using data such as a digital elevation model, surface reflectance, vegetation cover, land cover class, and surface wetness. Previous dust source models have been adapted to produce a multi-parameter dust source mask using data from satellites such as Terra (Moderate Resolution Imaging Spectroradiometer - MODIS) and Landsat. The dust source mask model captures the topographically low regions with bare soil surface, dried-up river plains, and lakes, which form important sources of dust in the southwest USA. The study region is also one of the hottest regions of the USA, where surface dryness, land use (agricultural use), and vegetation cover change significantly, leading to major changes in the areal coverage of potential dust source regions. A dynamic high-resolution dust source mask has been produced to address intra-annual change in the areal extent of bare dry surfaces. Time series of satellite-derived data have been used to create dynamic dust source masks.
A new dust source mask at 16-day intervals allows enhanced detection of potential dust source regions and can be employed in dust emission and transport models for better estimation of dust emission during dust storms, and in particulate air pollution studies, public health risk assessment tools, and decision support systems.
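
    A multi-parameter mask of the kind described can be sketched as a per-pixel conjunction of thresholded layers; the layer names, thresholds, and toy pixel values below are assumptions for illustration, not the study's calibrated criteria.

```python
# Hypothetical per-pixel layers: elevation (m), NDVI vegetation index,
# surface wetness, and a bare-ground land-cover flag.
pixels = [
    {"elev": 350.0, "ndvi": 0.08, "wetness": 0.05, "bare": True},
    {"elev": 900.0, "ndvi": 0.45, "wetness": 0.30, "bare": False},  # upland, vegetated
    {"elev": 310.0, "ndvi": 0.12, "wetness": 0.40, "bare": True},   # too wet
    {"elev": 340.0, "ndvi": 0.10, "wetness": 0.08, "bare": True},
]

def dust_source_mask(pixels, max_elev=500.0, max_ndvi=0.15, max_wet=0.1):
    """Flag a pixel as a potential dust source only when every layer
    agrees: low-lying, sparsely vegetated, dry, and bare ground."""
    return [
        p["elev"] <= max_elev and p["ndvi"] <= max_ndvi
        and p["wetness"] <= max_wet and p["bare"]
        for p in pixels
    ]

mask = dust_source_mask(pixels)   # [True, False, False, True]
```

    Recomputing the mask from each 16-day composite of the input layers yields the dynamic mask time series described above.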

  4. A new Bayesian Earthquake Analysis Tool (BEAT)

    NASA Astrophysics Data System (ADS)

    Vasyura-Bathke, Hannes; Dutta, Rishabh; Jónsson, Sigurjón; Mai, Martin

    2017-04-01

    Modern earthquake source estimation studies increasingly use non-linear optimization strategies to estimate kinematic rupture parameters, often considering geodetic and seismic data jointly. However, the optimization process is complex and consists of several steps that need to be followed in the earthquake parameter estimation procedure. These include pre-describing or modeling the fault geometry, calculating the Green's Functions (often assuming a layered elastic half-space), and estimating the distributed final slip and possibly other kinematic source parameters. Recently, Bayesian inference has become popular for estimating posterior distributions of earthquake source model parameters given measured/estimated/assumed data and model uncertainties. For instance, some research groups consider uncertainties of the layered medium and propagate these to the source parameter uncertainties. Other groups make use of informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed that efficiently explore the often high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational demands of these methods are high and estimation codes are rarely distributed along with the published results. Even if codes are made available, it is often difficult to assemble them into a single optimization framework, as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results has become essentially impossible.
In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in earthquake source estimations, we undertook the effort of producing BEAT, a Python package that comprises all the above-mentioned features in one single programming environment. The package is built on top of the pyrocko seismological toolbox (www.pyrocko.org) and makes use of the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat) and we encourage and solicit contributions to the project. In this contribution, we present our strategy for developing BEAT, show application examples, and discuss future developments.
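
    BEAT delegates sampling to pymc3; as a self-contained illustration of the Bayesian machinery involved, the sketch below runs a random-walk Metropolis chain over a single slip parameter with a toy linear forward model. The Green's-function values, noise level, and tuning constants are all invented.

```python
import math
import random

random.seed(0)

greens = [0.5, 1.0, 1.5]                 # hypothetical Green's function values
true_slip, sigma = 2.0, 0.1
data = [g * true_slip for g in greens]   # noise-free synthetic observations

def log_likelihood(slip):
    return -0.5 * sum(((g * slip - d) / sigma) ** 2
                      for g, d in zip(greens, data))

def metropolis(n_steps, step=0.1):
    """Random-walk Metropolis over slip with a flat prior."""
    slip, ll = 1.0, log_likelihood(1.0)
    samples = []
    for _ in range(n_steps):
        prop = slip + random.gauss(0.0, step)
        ll_prop = log_likelihood(prop)
        # Accept with probability min(1, exp(ll_prop - ll)).
        if random.random() < math.exp(min(0.0, ll_prop - ll)):
            slip, ll = prop, ll_prop
        samples.append(slip)
    return samples

samples = metropolis(5000)
posterior_mean = sum(samples[1000:]) / len(samples[1000:])   # discard burn-in
```

    Production codes use far more efficient samplers over high-dimensional parameter spaces, but the accept/reject logic above is the core idea they share.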

  5. Fracture Characterization in Reactive Fluid-Fractured Rock Systems Using Tracer Transport Data

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, S.

    2014-12-01

    Fractures, whether natural or engineered, exert significant controls over resource exploitation from contemporary energy sources including enhanced geothermal systems and unconventional oil and gas reserves. Consequently, fracture characterization, i.e., estimating the permeability, connectivity, and spacing of the fractures, is of critical importance for determining the viability of any energy recovery program. While some progress has recently been made towards estimating these critical fracture parameters, significant uncertainties still remain. A review of tracer technology, which has a long history in fracture characterization, reveals that uncertainties exist in the estimated parameters not only because of paucity of scale-specific data but also because of knowledge gaps in the interpretation methods, particularly in interpretation of tracer data in reactive fluid-rock systems. We have recently demonstrated that the transient tracer evolution signatures in reactive fluid-rock systems are significantly different from those in non-reactive systems (Mukhopadhyay et al., 2013, 2014). For example, the tracer breakthrough curves in reactive fluid-fractured rock systems are expected to exhibit a long pseudo-steady state condition, during which tracer concentration does not change by any appreciable amount with passage of time. Such a pseudo-steady state condition is not observed in a non-reactive system. In this paper, we show that the presence of this pseudo-steady state condition in tracer breakthrough patterns in reactive fluid-rock systems can have important implications for fracture characterization. We show that the time of onset of the pseudo-steady state condition and the value of tracer concentration in the pseudo-steady state condition can be used to reliably estimate fracture spacing and fracture-matrix interface areas.
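
    The onset detection implied above can be sketched on a synthetic breakthrough curve: scan for the first window of samples whose successive changes all fall below a tolerance. The curve shape, tolerance, and window length below are illustrative assumptions, not the cited simulations.

```python
import math

dt = 1.0
times = [i * dt for i in range(200)]
# Synthetic breakthrough curve levelling off near 0.6 (illustrative only).
conc = [0.6 * (1.0 - math.exp(-t / 5.0)) for t in times]

def pseudo_steady_onset(times, conc, tol=1e-3, window=10):
    """Return (onset time, concentration) at the first stretch of `window`
    consecutive samples whose changes all stay below `tol`."""
    for i in range(len(conc) - window):
        if all(abs(conc[j + 1] - conc[j]) < tol for j in range(i, i + window)):
            return times[i], conc[i]
    return None

onset, level = pseudo_steady_onset(times, conc)
```

    In the characterization workflow, the onset time and plateau concentration would then be matched against reactive-transport model predictions to back out fracture spacing and fracture-matrix interface area.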

  6. Assessing critical source areas in watersheds for conservation buffer planning and riparian restoration.

    PubMed

    Qiu, Zeyuan

    2009-11-01

    A science-based geographic information system (GIS) approach is presented to target critical source areas in watersheds for conservation buffer placement. Critical source areas are the intersection of hydrologically sensitive areas and pollutant source areas in watersheds. Hydrologically sensitive areas are areas that actively generate runoff in the watershed and are derived using a modified topographic index approach based on variable source area hydrology. Pollutant source areas are the areas in watersheds that are actively and intensively used for such activities as agricultural production. The method is applied to the Neshanic River watershed in Hunterdon County, New Jersey. The capacity of the topographic index in predicting the spatial pattern of runoff generation and the runoff contribution to stream flow in the watershed is evaluated. A simple cost-effectiveness assessment is conducted to compare the conservation buffer placement scenario based on this GIS method to conventional riparian buffer scenarios for placing conservation buffers in agricultural lands in the watershed. The results show that the topographic index reasonably predicts the runoff generation in the watershed. The GIS-based conservation buffer scenario appears to be more cost-effective than the conventional riparian buffer scenarios.
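
    The intersection logic above can be illustrated with the standard topographic wetness index ln(a / tan beta) (the study uses a modified topographic index): cells above an index threshold approximate the hydrologically sensitive areas, and their intersection with actively used land gives candidate critical source areas. The cell values and threshold below are invented.

```python
import math

# Hypothetical cells: upslope contributing area per unit contour length a
# (m), local slope beta (radians), and an agricultural land-use flag.
cells = [
    {"a": 500.0, "slope": 0.02, "agr": True},
    {"a": 50.0,  "slope": 0.30, "agr": True},    # steep: sheds runoff
    {"a": 800.0, "slope": 0.01, "agr": False},   # wet but not farmed
    {"a": 300.0, "slope": 0.03, "agr": True},
]

def topographic_index(cell):
    """Topographic wetness index ln(a / tan(beta)); high values mark
    areas prone to saturation-excess runoff."""
    return math.log(cell["a"] / math.tan(cell["slope"]))

def critical_source_areas(cells, threshold=9.0):
    """Critical source areas = hydrologically sensitive AND actively used."""
    return [topographic_index(c) >= threshold and c["agr"] for c in cells]

csa = critical_source_areas(cells)   # [True, False, False, True]
```

    Only the flagged cells would then be considered for conservation buffer placement, which is what makes the approach cheaper than buffering every riparian strip.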

  7. Towards a critical transition theory under different temporal scales and noise strengths

    NASA Astrophysics Data System (ADS)

    Shi, Jifan; Li, Tiejun; Chen, Luonan

    2016-03-01

    The mechanism of critical phenomena or critical transitions has been recently studied from various aspects, in particular considering slow parameter change and small noise. In this article, we systematically classify critical transitions into three types based on temporal scales and noise strengths of dynamical systems. Specifically, the classification is made by comparing three important time scales τλ, τtran, and τergo, where τλ is the time scale of parameter change (e.g., the change of environment), τtran is the time scale when a particle or state transits from a metastable state into another, and τergo is the time scale when the system becomes ergodic. According to the time scales, we classify the critical transition behaviors as three types, i.e., state transition, basin transition, and distribution transition. Moreover, for each type of transition, there are two cases, i.e., single-trajectory transition and multitrajectory ensemble transition, which correspond to the transition of individual behavior and population behavior, respectively. We also define the critical point for each type of critical transition, derive several properties, and further propose the indicators for predicting critical transitions with numerical simulations. In addition, we show that the noise-to-signal ratio is effective to make the classification of critical transitions for real systems.

  8. A General Approach for Specifying Informative Prior Distributions for PBPK Model Parameters

    EPA Science Inventory

    Characterization of uncertainty in model predictions is receiving more interest as more models are being used in applications that are critical to human health. For models in which parameters reflect biological characteristics, it is often possible to provide estimates of paramet...

  9. Updating national standards for drinking-water: a Philippine experience.

    PubMed

    Lomboy, M; Riego de Dios, J; Magtibay, B; Quizon, R; Molina, V; Fadrilan-Camacho, V; See, J; Enoveso, A; Barbosa, L; Agravante, A

    2017-04-01

    The latest version of the Philippine National Standards for Drinking-Water (PNSDW) was issued in 2007 by the Department of Health (DOH). Due to several issues and concerns, the DOH decided to make an update which is relevant and necessary to meet the needs of the stakeholders. As an output, the water quality parameters are now categorized into mandatory, primary, and secondary. The ten mandatory parameters are core parameters which all water service providers nationwide are obligated to test. These include thermotolerant coliforms or Escherichia coli, arsenic, cadmium, lead, nitrate, color, turbidity, pH, total dissolved solids, and disinfectant residual. The 55 primary parameters are site-specific and can be adopted as enforceable parameters when developing new water sources or when the existing source is at high risk of contamination. The 11 secondary parameters include operational parameters and those that affect the esthetic quality of drinking-water. In addition, the updated PNSDW include new sections: (1) reporting and interpretation of results and corrective actions; (2) emergency drinking-water parameters; (3) proposed Sustainable Development Goal parameters; and (4) standards for other drinking-water sources. The lessons learned and insights gained from the updating of standards are likewise incorporated in this paper.

  10. Sensitivity of a Bayesian atmospheric-transport inversion model to spatio-temporal sensor resolution applied to the 2006 North Korean nuclear test

    NASA Astrophysics Data System (ADS)

    Lundquist, K. A.; Jensen, D. D.; Lucas, D. D.

    2017-12-01

    Atmospheric source reconstruction allows for the probabilistic estimate of source characteristics of an atmospheric release using observations of the release. Performance of the inversion depends partially on the temporal frequency and spatial scale of the observations. The objective of this study is to quantify the sensitivity of the source reconstruction method to sparse spatial and temporal observations. To this end, simulations of atmospheric transport of noble gases are created for the 2006 nuclear test at the Punggye-ri nuclear test site. Synthetic observations are collected from the simulation, and are taken as "ground truth". Data denial techniques are used to progressively coarsen the temporal and spatial resolution of the synthetic observations, while the source reconstruction model seeks to recover the true input parameters from the synthetic observations. Reconstructed parameters considered here are source location, source timing and source quantity. Reconstruction is achieved by running an ensemble of thousands of dispersion model runs that sample from a uniform distribution of the input parameters. Machine learning is used to train a computationally-efficient surrogate model from the ensemble simulations. Monte Carlo sampling and Bayesian inversion are then used in conjunction with the surrogate model to quantify the posterior probability density functions of source input parameters. This research seeks to inform decision makers of the tradeoffs between more expensive, high frequency observations and less expensive, low frequency observations.
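
    The ensemble-plus-surrogate workflow can be miniaturised as follows: run a toy "dispersion model" over prior samples of the release quantity q, replace it with a cheap piecewise-linear surrogate, and weight a dense prior sample by the Gaussian likelihood of a synthetic observation. The model, noise level, and prior range are invented stand-ins (the study trains its surrogate with machine learning on full dispersion runs).

```python
import math

def dispersion_model(q):
    """Stand-in for an expensive simulation: concentration vs. release q."""
    return 3.0 * q + 0.5 * q * q

# 1) Ensemble: model runs at samples from a uniform prior on q in [0, 5].
ensemble_q = [i * 0.1 for i in range(51)]
ensemble_c = [dispersion_model(q) for q in ensemble_q]

def surrogate(q):
    """Cheap piecewise-linear interpolation of the ensemble runs."""
    for i in range(len(ensemble_q) - 1):
        if ensemble_q[i] <= q <= ensemble_q[i + 1]:
            w = (q - ensemble_q[i]) / (ensemble_q[i + 1] - ensemble_q[i])
            return (1.0 - w) * ensemble_c[i] + w * ensemble_c[i + 1]
    raise ValueError("q outside prior range")

# 2) Bayesian inversion: Gaussian likelihood of one synthetic observation,
# evaluated with the surrogate instead of the expensive model.
observed, sigma = dispersion_model(2.0), 0.5
qs = [i * 0.001 for i in range(5000)]
weights = [math.exp(-0.5 * ((surrogate(q) - observed) / sigma) ** 2)
           for q in qs]
posterior_mean = sum(q * w for q, w in zip(qs, weights)) / sum(weights)
```

    Data denial then amounts to dropping observations from the likelihood and watching how the posterior widens, which is the tradeoff the study quantifies.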

  11. Quality-by-design III: application of near-infrared spectroscopy to monitor roller compaction in-process and product quality attributes of immediate release tablets.

    PubMed

    Kona, Ravikanth; Fahmy, Raafat M; Claycamp, Gregg; Polli, James E; Martinez, Marilyn; Hoag, Stephen W

    2015-02-01

    The objective of this study is to use near-infrared spectroscopy (NIRS) coupled with multivariate chemometric models to monitor granule and tablet quality attributes in the formulation development and manufacturing of ciprofloxacin hydrochloride (CIP) immediate release tablets. Critical roller compaction process parameters, compression force (CFt), and formulation variables identified from our earlier studies were evaluated in more detail. Multivariate principal component analysis (PCA) and partial least square (PLS) models were developed during the development stage and used as a control tool to predict the quality of granules and tablets. Validated models were used to monitor and control batches manufactured at different sites to assess their robustness to change. The results showed that roll pressure (RP) and CFt played a critical role in the quality of the granules and the finished product within the range tested. Replacing the binder source did not statistically influence the quality attributes of the granules and tablets. However, lubricant type significantly impacted granule size. Blend uniformity, crushing force, and disintegration time during manufacturing were predicted using validated PLS regression models with acceptable standard error of prediction (SEP) values, whereas the models resulted in higher SEP for batches obtained from a different manufacturing site. From this study, we were able to identify critical factors which could impact the quality attributes of the CIP IR tablets. In summary, we demonstrated the ability of near-infrared spectroscopy coupled with chemometrics as a powerful tool to monitor critical quality attributes (CQA) identified during formulation development.
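
    The SEP figure of merit used to judge the PLS models can be computed as the bias-corrected RMS difference between predictions and reference assays; the validation values below are invented for illustration.

```python
import math

# Hypothetical NIR-predicted vs. reference (lab) crushing-force values.
reference = [98.0, 102.0, 100.0, 96.0, 104.0]
predicted = [97.2, 103.1, 99.5, 96.8, 104.6]

def bias(ref, pred):
    """Mean signed difference between predictions and reference values."""
    return sum(p - r for r, p in zip(ref, pred)) / len(ref)

def sep(ref, pred):
    """Standard error of prediction: bias-corrected RMS difference."""
    b = bias(ref, pred)
    n = len(ref)
    return math.sqrt(sum((p - r - b) ** 2 for r, p in zip(ref, pred)) / (n - 1))
```

    A model transferred to a new manufacturing site typically shows a larger SEP, as reported above, which is why site-to-site robustness has to be checked separately.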

  12. Eruption mass estimation using infrasound waveform inversion and ash and gas measurements: Evaluation at Sakurajima Volcano, Japan

    NASA Astrophysics Data System (ADS)

    Fee, David; Izbekov, Pavel; Kim, Keehoon; Yokoo, Akihiko; Lopez, Taryn; Prata, Fred; Kazahaya, Ryunosuke; Nakamichi, Haruhisa; Iguchi, Masato

    2017-12-01

    Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method, and use it to calculate eruption flow rates and masses from 49 explosions from Sakurajima Volcano, Japan. Six infrasound stations deployed from 12-20 February 2015 recorded the explosions. We compute numerical Green's functions using 3-D Finite Difference Time Domain modeling and a high-resolution digital elevation model. The inversion, assuming a simple acoustic monopole source, provides realistic eruption masses and excellent fit to the data for the majority of the explosions. The inversion results are compared to independent eruption masses derived from ground-based ash collection and volcanic gas measurements. Assuming realistic flow densities, our infrasound-derived eruption masses for ash-rich eruptions compare favorably to the ground-based estimates, with agreement ranging from within a factor of two to one order of magnitude. Uncertainties in the time-dependent flow density and acoustic propagation likely contribute to the mismatch between the methods. Our results suggest that realistic and accurate infrasound-based eruption mass and mass flow rate estimates can be computed using the method employed here. 
If accurate volcanic flow parameters are known, this technique could be broadly applied to enable near real-time calculation of eruption mass flow rates and total masses, critical input parameters for volcanic eruption modeling and monitoring that are not currently available.
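
    As a much-simplified illustration of how a pressure record maps to erupted mass, the sketch below uses the free-field acoustic monopole relation p(r, t) = rho_air * Vddot(t - r/c) / (4 * pi * r), i.e. no topography, whereas the study inverts full 3-D numerical Green's functions. The synthetic pulse, distance, and densities are assumptions.

```python
import math

rho_air = 1.2      # kg/m^3, air density
rho_flow = 3.0     # kg/m^3, assumed bulk density of the ash-gas mixture
r = 3000.0         # m, source-receiver distance
dt = 0.01          # s, sample interval

# Synthetic excess-pressure pulse (Pa) standing in for a recorded waveform.
t = [i * dt for i in range(1000)]
pressure = [5.0 * math.exp(-((ti - 2.0) / 0.5) ** 2) for ti in t]

def cumulative_trapezoid(y, dt):
    out, acc = [0.0], 0.0
    for i in range(1, len(y)):
        acc += 0.5 * (y[i - 1] + y[i]) * dt
        out.append(acc)
    return out

# Monopole relation: volume acceleration Vddot = 4*pi*r*p / rho_air; two
# time integrations give erupted volume, and an assumed flow density
# converts that to erupted mass.
vddot = [4.0 * math.pi * r * p / rho_air for p in pressure]
vdot = cumulative_trapezoid(vddot, dt)        # volume flow rate, m^3/s
volume = cumulative_trapezoid(vdot, dt)[-1]   # erupted volume, m^3
mass = rho_flow * volume                      # erupted mass, kg
```

    The factor-of-two-to-ten spread against ground-based estimates reported above comes largely from uncertainty in rho_flow and in propagation effects that this free-field sketch ignores.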

  13. Magnetic properties of the rivers feeding the South China Sea: a critical step for understanding the paleo-marine records.

    NASA Astrophysics Data System (ADS)

    Kissel, Catherine; Liu, Zhifei; Wandres, Camille; Liu, Qingsong

    2014-05-01

    In order to use the magnetic properties of marine sediments as a tracer for past changes in precipitation rate and in oceanic water-mass transport and exchanges, it is critical to identify and characterize the different sources of the detrital fraction, among which are the magnetic particles. This is of particular importance in marginal seas such as the South China Sea, which extends from about 25°N to the equator. Thanks to the Westpac project, we had access to a number of sediments collected in the deltas of the main rivers feeding the South China Sea: on the Asian continent, the Pearl, Red, and Mekong rivers; the minor rivers of Malaysia, Sumatra, and Borneo, which also contribute to the South China Sea; and finally Luzon and Taiwan. The geological formations contributing to the river sediment discharges differ from one catchment basin to another, as do the present climatic conditions. The magnetic analyses conducted on the samples are the low-field magnetic susceptibility, ARM acquisition and decay, IRM acquisition and decay, back-field acquisition, thermal demagnetization of 3-axis IRM, hysteresis parameters, and FORC diagrams. Taken together, the obtained parameters allow us to define the nature of the magnetic grains and their grain-size distribution when magnetite is dominant. Some degree of variability is observed at the river mouths, illustrating different geological sources at the local/regional scale. On average, it appears that the southern basin of the South China Sea is surrounded by regions richer in high-coercivity magnetic minerals than the northern basin. This mineral is identified as hematite, while magnetite is more abundant in the north. These results are complementary to the clay mineral assemblages previously determined on the same samples. We give examples of how this knowledge allows us to interpret the paleo-marine records from the South China Sea in terms of paleoclimate and paleoceanographic changes. This work is presently conducted in the framework of the Franco-Chinese LIA-MONOCL

  14. Transverse fields to tune an Ising-nematic quantum phase transition [Transverse fields to tune an Ising-nematic quantum critical transition]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maharaj, Akash V.; Rosenberg, Elliott W.; Hristov, Alexander T.

    Here, the paradigmatic example of a continuous quantum phase transition is the transverse field Ising ferromagnet. In contrast to classical critical systems, whose properties depend only on symmetry and the dimension of space, the nature of a quantum phase transition also depends on the dynamics. In the transverse field Ising model, the order parameter is not conserved, and increasing the transverse field enhances quantum fluctuations until they become strong enough to restore the symmetry of the ground state. Ising pseudospins can represent the order parameter of any system with a twofold degenerate broken-symmetry phase, including electronic nematic order associated with spontaneous point-group symmetry breaking. Here, we show for the representative example of orbital-nematic ordering of a non-Kramers doublet that an orthogonal strain or a perpendicular magnetic field plays the role of the transverse field, thereby providing a practical route for tuning appropriate materials to a quantum critical point. While the transverse fields are conjugate to seemingly unrelated order parameters, their nontrivial commutation relations with the nematic order parameter, which can be represented by a Berry-phase term in an effective field theory, intrinsically intertwine the different order parameters.

  15. Transverse fields to tune an Ising-nematic quantum phase transition [Transverse fields to tune an Ising-nematic quantum critical transition]

    DOE PAGES

    Maharaj, Akash V.; Rosenberg, Elliott W.; Hristov, Alexander T.; ...

    2017-12-05

    Here, the paradigmatic example of a continuous quantum phase transition is the transverse field Ising ferromagnet. In contrast to classical critical systems, whose properties depend only on symmetry and the dimension of space, the nature of a quantum phase transition also depends on the dynamics. In the transverse field Ising model, the order parameter is not conserved, and increasing the transverse field enhances quantum fluctuations until they become strong enough to restore the symmetry of the ground state. Ising pseudospins can represent the order parameter of any system with a twofold degenerate broken-symmetry phase, including electronic nematic order associated with spontaneous point-group symmetry breaking. Here, we show for the representative example of orbital-nematic ordering of a non-Kramers doublet that an orthogonal strain or a perpendicular magnetic field plays the role of the transverse field, thereby providing a practical route for tuning appropriate materials to a quantum critical point. While the transverse fields are conjugate to seemingly unrelated order parameters, their nontrivial commutation relations with the nematic order parameter, which can be represented by a Berry-phase term in an effective field theory, intrinsically intertwine the different order parameters.

  16. Repetitive patterns in rapid optical variations in the nearby black-hole binary V404 Cygni.

    PubMed

    Kimura, Mariko; Isogai, Keisuke; Kato, Taichi; Ueda, Yoshihiro; Nakahira, Satoshi; Shidatsu, Megumi; Enoto, Teruaki; Hori, Takafumi; Nogami, Daisaku; Littlefield, Colin; Ishioka, Ryoko; Chen, Ying-Tung; King, Sun-Kun; Wen, Chih-Yi; Wang, Shiang-Yu; Lehner, Matthew J; Schwamb, Megan E; Wang, Jen-Hung; Zhang, Zhi-Wei; Alcock, Charles; Axelrod, Tim; Bianco, Federica B; Byun, Yong-Ik; Chen, Wen-Ping; Cook, Kem H; Kim, Dae-Won; Lee, Typhoon; Marshall, Stuart L; Pavlenko, Elena P; Antonyuk, Oksana I; Antonyuk, Kirill A; Pit, Nikolai V; Sosnovskij, Aleksei A; Babina, Julia V; Baklanov, Aleksei V; Pozanenko, Alexei S; Mazaeva, Elena D; Schmalz, Sergei E; Reva, Inna V; Belan, Sergei P; Inasaridze, Raguli Ya; Tungalag, Namkhai; Volnova, Alina A; Molotov, Igor E; de Miguel, Enrique; Kasai, Kiyoshi; Stein, William L; Dubovsky, Pavol A; Kiyota, Seiichiro; Miller, Ian; Richmond, Michael; Goff, William; Andreev, Maksim V; Takahashi, Hiromitsu; Kojiguchi, Naoto; Sugiura, Yuki; Takeda, Nao; Yamada, Eiji; Matsumoto, Katsura; James, Nick; Pickard, Roger D; Tordai, Tamás; Maeda, Yutaka; Ruiz, Javier; Miyashita, Atsushi; Cook, Lewis M; Imada, Akira; Uemura, Makoto

    2016-01-07

    How black holes accrete surrounding matter is a fundamental yet unsolved question in astrophysics. It is generally believed that matter is absorbed into black holes via accretion disks, the state of which depends primarily on the mass-accretion rate. When this rate approaches the critical rate (the Eddington limit), thermal instability is supposed to occur in the inner disk, causing repetitive patterns of large-amplitude X-ray variability (oscillations) on timescales of minutes to hours. In fact, such oscillations have been observed only in sources with a high mass-accretion rate, such as GRS 1915+105 (refs 2, 3). These large-amplitude, relatively slow-timescale phenomena are thought to have physical origins distinct from those of X-ray or optical variations with small amplitudes and fast timescales (less than about 10 seconds) often observed in other black-hole binaries, for example XTE J1118+480 (ref. 4) and GX 339-4 (ref. 5). Here we report an extensive multi-colour optical photometric data set of V404 Cygni, an X-ray transient source containing a black hole of nine solar masses (and a companion star) at a distance of 2.4 kiloparsecs (ref. 8). Our data show that optical oscillations on timescales of 100 seconds to 2.5 hours can occur at mass-accretion rates more than ten times lower than previously thought. This suggests that the accretion rate is not the critical parameter for inducing inner-disk instabilities. Instead, we propose that a long orbital period is a key condition for these large-amplitude oscillations, because the outer part of the large disk in binaries with long orbital periods will have surface densities too low to maintain sustained mass accretion to the inner part of the disk. The lack of sustained accretion, not the actual rate, would then be the critical factor causing large-amplitude oscillations in long-period systems.

  17. Examining How Media Literacy and Personality Factors Predict Skepticism Toward Alcohol Advertising.

    PubMed

    Austin, Erica Weintraub; Muldrow, Adrienne; Austin, Bruce W

    2016-05-01

    To examine the potential effectiveness of media literacy education in the context of well-established personality factors, a survey of 472 young adults, focused on the issue of alcohol marketing messages, examined how individual differences in personality associate with constructs representing aspects of media literacy. The results showed that need for cognition predicted social expectancies and wishful identification with media portrayals in alcohol advertising only through critical thinking about media sources and media content, which are foci of media literacy education. Need for affect did not associate with increased or diminished levels of critical thinking. Critical thinking about sources and messages affected skepticism, represented by expectancies through wishful identification, consistent with the message interpretation process model. The results support the view that critical thinking about media sources is an important precursor to critical thinking about media messages. The results also suggest that critical thinking about media (i.e., media literacy) reflects more than personality characteristics and can affect wishful identification with role models observed in media, which appears to be a key influence on decision making. This adds support to the view that media literacy education can improve decision making across personality types regarding alcohol use by decreasing the potential influence of alcohol marketing messages.

  18. Certification of COTS Software in NASA Human Rated Flight Systems

    NASA Technical Reports Server (NTRS)

    Goforth, Andre

    2012-01-01

    Adoption of commercial off-the-shelf (COTS) products in safety critical systems has been seen as a promising acquisition strategy to improve mission affordability and, yet, has come with significant barriers and challenges. Attempts to integrate COTS software components into NASA human rated flight systems have been, for the most part, complicated by verification and validation (V&V) requirements necessary for flight certification per NASA's own standards. For software that is from COTS sources and, in general, from 3rd party sources, whether commercial, government, modified, or open source, the expectation is that it meets the same certification criteria as those used for in-house software and that it does so as if it were built in-house. The latter is a critical and hidden issue. This paper examines the longstanding barriers and challenges in the use of 3rd party software in safety critical systems and covers recent efforts to use COTS software in NASA's Multi-Purpose Crew Vehicle (MPCV) project. It identifies core artifacts without which the use of COTS and 3rd party software is, for all practical purposes, a nonstarter for affordable and timely insertion into flight critical systems. The paper covers the first use in a flight critical system by NASA of COTS software with prior FAA certification heritage, which was shown to meet the RTCA-DO-178B standard, and how this certification may, in some cases, be leveraged to allow the use of analysis in lieu of testing. Finally, the paper proposes the establishment of an open source forum for development of safety critical 3rd party software.

  19. Grid-search Moment Tensor Estimation: Implementation and CTBT-related Application

    NASA Astrophysics Data System (ADS)

    Stachnik, J. C.; Baker, B. I.; Rozhkov, M.; Friberg, P. A.; Leifer, J. M.

    2017-12-01

    This abstract presents review work related to moment tensor estimation for Expert Technical Analysis at the Comprehensive Test Ban Treaty Organization. In this context of event characterization, estimation of key source parameters provides important insights into the nature of failure in the earth. For example, if the recovered source parameters are indicative of a shallow source with a large isotropic component, then one conclusion is that it is a human-triggered explosive event. However, an important follow-up question in this application is: does an alternative hypothesis, such as a deeper source with a large double-couple component, explain the data approximately as well as the best solution? Here we address the issue of both finding a most likely source and assessing its uncertainty. Using the uniform moment tensor discretization of Tape and Tape (2015), we exhaustively interrogate and tabulate the source eigenvalue distribution (i.e., the source characterization), tensor orientation, magnitude, and source depth. The benefit of the grid search is that we can quantitatively assess the extent to which model parameters are resolved. This provides a valuable opportunity during the assessment phase to focus interpretation on source parameters that are well resolved. Another benefit of the grid search is that it proves to be a flexible framework in which different pieces of information can be easily incorporated. To this end, this work is particularly interested in fitting teleseismic body waves and regional surface waves, as well as incorporating teleseismic first motions when available. Since the moment tensor search methodology is well established, we focus primarily on the implementation and application. We present a highly scalable strategy for systematically inspecting the entire model parameter space.
We then focus on application to regional and teleseismic data recorded during a handful of natural and anthropogenic events, report on the grid-search optimum, and discuss the resolution of interesting and/or important recovered source properties.
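    The exhaustive grid-search strategy described above can be illustrated with a deliberately simplified sketch: discretize each parameter, evaluate a forward model at every grid node, and retain the full table of misfits so that parameter resolution can be inspected afterwards. The forward model, parameter names, and grids below are toy stand-ins, not the Tape and Tape (2015) moment tensor discretization.

```python
import itertools
import numpy as np

def grid_search(observed, forward, param_grids):
    """Exhaustively evaluate a forward model over a discretized
    parameter space; return every trial's misfit (not just the
    minimum) so parameter resolution can be assessed afterwards."""
    results = []
    for params in itertools.product(*param_grids.values()):
        trial = dict(zip(param_grids.keys(), params))
        misfit = np.sum((observed - forward(**trial)) ** 2)
        results.append((misfit, trial))
    results.sort(key=lambda r: r[0])  # sort on misfit only
    return results  # results[0] is the best-fitting solution

# Toy example: recover the amplitude and decay rate of a synthetic source.
def toy_forward(amplitude, depth):
    t = np.linspace(0.0, 1.0, 50)
    return amplitude * np.exp(-depth * t)

truth = toy_forward(2.0, 3.0)
grids = {"amplitude": np.linspace(1.0, 3.0, 21),
         "depth": np.linspace(1.0, 5.0, 21)}
best_misfit, best_params = grid_search(truth, toy_forward, grids)[0]
```

    Because every trial's misfit is tabulated rather than discarded, one can afterwards ask how flat the misfit surface is along each parameter axis, which is exactly the resolution question the abstract raises.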

  20. Influence of source parameters on the growth of metal nanoparticles by sputter-gas-aggregation

    NASA Astrophysics Data System (ADS)

    Khojasteh, Malak; Kresin, Vitaly V.

    2017-11-01

    We describe the production of size-selected manganese nanoclusters using a magnetron sputtering/aggregation source. Since nanoparticle production is sensitive to a range of overlapping operating parameters (in particular, the sputtering discharge power, the inert gas flow rates, and the aggregation length), we focus on a detailed map of the influence of each parameter on the average nanocluster size. In this way, it is possible to identify the main contribution of each parameter to the physical processes taking place within the source. The discharge power and argon flow supply the metal vapor, and argon also plays a crucial role in the formation of condensation nuclei via three-body collisions. However, the argon flow and the discharge power have a relatively weak effect on the average nanocluster size in the exiting beam. Here the defining role is played by the source residence time, governed by the helium supply (which raises the pressure and density of the gas column inside the source, resulting in more efficient transport of nanoparticles to the exit) and by the aggregation path length.

  1. Bayesian source tracking via focalization and marginalization in an uncertain Mediterranean Sea environment.

    PubMed

    Dosso, Stan E; Wilmut, Michael J; Nielsen, Peter L

    2010-07-01

    This paper applies Bayesian source tracking in an uncertain environment to Mediterranean Sea data, and investigates the resulting tracks and track uncertainties as a function of data information content (number of data time-segments, number of frequencies, and signal-to-noise ratio) and of prior information (environmental uncertainties and source-velocity constraints). To track low-level sources, acoustic data recorded for multiple time segments (corresponding to multiple source positions along the track) are inverted simultaneously. Environmental uncertainty is addressed by including unknown water-column and seabed properties as nuisance parameters in an augmented inversion. Two approaches are considered: Focalization-tracking maximizes the posterior probability density (PPD) over the unknown source and environmental parameters. Marginalization-tracking integrates the PPD over environmental parameters to obtain a sequence of joint marginal probability distributions over source coordinates, from which the most-probable track and track uncertainties can be extracted. Both approaches apply track constraints on the maximum allowable vertical and radial source velocity. The two approaches are applied for towed-source acoustic data recorded at a vertical line array at a shallow-water test site in the Mediterranean Sea where previous geoacoustic studies have been carried out.
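    The distinction between the two approaches can be sketched numerically with a toy two-parameter posterior: focalization maximizes the PPD jointly over source and environmental parameters, whereas marginalization integrates the environment out before locating the most probable source coordinate. The grids, parameter ranges, and Gaussian PPD below are illustrative assumptions only.

```python
import numpy as np

ranges = np.linspace(1.0, 10.0, 91)          # candidate source ranges (km)
sound_speeds = np.linspace(1490, 1510, 41)   # uncertain environment (m/s)

R, C = np.meshgrid(ranges, sound_speeds, indexing="ij")
# Toy unnormalized PPD peaked at range 5 km, sound speed 1500 m/s
ppd = np.exp(-0.5 * ((R - 5.0) / 0.5) ** 2
             - 0.5 * ((C - 1500.0) / 5.0) ** 2)

# Focalization: joint maximum over source AND environmental parameters
i, j = np.unravel_index(np.argmax(ppd), ppd.shape)
focalized_range = ranges[i]

# Marginalization: integrate out the environment, then take the mode
marginal = ppd.sum(axis=1)
marginal /= marginal.sum()
marginalized_range = ranges[np.argmax(marginal)]
```

    For this symmetric toy PPD the two estimates coincide; in the paper's setting they can differ, and the marginal distribution additionally supplies track uncertainties.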

  2. Sourcing in the Air Force: An Optimization Approach

    DTIC Science & Technology

    2009-09-01

    quality supplies and services at the lowest cost (Gabbard, 2004). The commodity sourcing strategy focuses on developing a specific sourcing strategy...Springer Series in Operations Research. New York: Springer-Verlag. Gabbard, E.G. (2004, April). Strategic sourcing: Critical elements and keys to success

  3. HVAC SYSTEMS AS EMISSION SOURCES AFFECTING INDOOR AIR QUALITY: A CRITICAL REVIEW

    EPA Science Inventory

    The study evaluates heating, ventilating, and air-conditioning (HVAC) systems as contaminant emission sources that affect indoor air quality (IAQ). Various literature sources and methods for characterizing HVAC emission sources are reviewed. Available methods include in situ test...

  4. Sensitivity of predicted bioaerosol exposure from open windrow composting facilities to ADMS dispersion model parameters.

    PubMed

    Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H

    2016-12-15

    Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood, and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions. Copyright © 2016. Published by Elsevier Ltd.
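    A minimal way to sketch this kind of sensitivity screen is a one-at-a-time analysis: vary each input over its plausible range while holding the others at baseline values, then rank parameters by the spread they induce in the model output. The toy dispersion function, parameter names, and ranges below are hypothetical and do not reflect the actual ADMS interface.

```python
# Stand-in for a real dispersion-model run (illustrative only).
def toy_dispersion(emission_velocity, source_height, source_radius):
    return emission_velocity * source_radius ** 2 / (1.0 + source_height)

baseline = {"emission_velocity": 5.0, "source_height": 2.0, "source_radius": 10.0}
ranges = {"emission_velocity": (1.0, 10.0),
          "source_height": (0.5, 4.0),
          "source_radius": (5.0, 20.0)}

sensitivity = {}
for name, (lo, hi) in ranges.items():
    outputs = []
    for value in (lo, hi):
        trial = dict(baseline, **{name: value})  # perturb one input
        outputs.append(toy_dispersion(**trial))
    ref = toy_dispersion(**baseline)
    # Normalized spread of the output across the parameter's range
    sensitivity[name] = (max(outputs) - min(outputs)) / ref

ranked = sorted(sensitivity, key=sensitivity.get, reverse=True)
```

    The ranking then prioritizes which inputs deserve careful independent estimation, the calibration aid the abstract describes.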

  5. Ground Truth Events with Source Geometry in Eurasia and the Middle East

    DTIC Science & Technology

    2016-06-02

    source properties, including seismic moment, corner frequency, radiated energy, and stress drop have been obtained using spectra for S waves following...PARAMETERS Other source parameters, including radiated energy, corner frequency, seismic moment, and static stress drop were calculated using a spectral...technique (Richardson & Jordan, 2002; Andrews, 1986). The process entails separating event and station spectra and median-stacking each event's

  6. Effect of Common Cryoprotectants on Critical Warming Rates and Ice Formation in Aqueous Solutions

    PubMed Central

    Hopkins, Jesse B.; Badeau, Ryan; Warkentin, Matthew; Thorne, Robert E.

    2012-01-01

    Ice formation on warming is of comparable or greater importance to ice formation on cooling in determining survival of cryopreserved samples. Critical warming rates required for ice-free warming of vitrified aqueous solutions of glycerol, dimethyl sulfoxide, ethylene glycol, polyethylene glycol 200 and sucrose have been measured for warming rates of order 10 to 10^4 K/s. Critical warming rates are typically one to three orders of magnitude larger than critical cooling rates. Warming rates vary strongly with cooling rates, perhaps due to the presence of small ice fractions in nominally vitrified samples. Critical warming and cooling rate data spanning orders of magnitude in rates provide rigorous tests of ice nucleation and growth models and their assumed input parameters. Current models with current best estimates for input parameters provide a reasonable account of critical warming rates for glycerol solutions at high concentrations/low rates, but overestimate both critical warming and cooling rates by orders of magnitude at lower concentrations and larger rates. In vitrification protocols, minimizing concentrations of potentially damaging cryoprotectants while minimizing ice formation will require ultrafast warming rates, as well as fast cooling rates to minimize the required warming rates. PMID:22728046

  7. Critical laboratory values in hemostasis: toward consensus.

    PubMed

    Lippi, Giuseppe; Adcock, Dorothy; Simundic, Ana-Maria; Tripodi, Armando; Favaloro, Emmanuel J

    2017-09-01

    The term "critical values" can be defined to entail laboratory test results that significantly lie outside the normal (reference) range and necessitate immediate reporting to safeguard patient health, as well as those displaying a highly and clinically significant variation compared to previous data. The identification and effective communication of "highly pathological" values has engaged the minds of many clinicians, health care and laboratory professionals for decades, since these activities are vital to good laboratory practice. This is especially true in hemostasis, where a timely and efficient communication of critical values strongly impacts patient management. Due to the heterogeneity of available data, this paper is hence aimed to analyze the state of the art and provide an expert opinion about the parameters, measurement units and alert limits pertaining to critical values in hemostasis, thus providing a basic document for future consultation that assists laboratory professionals and clinicians alike. KEY MESSAGES Critical values are laboratory test results significantly lying outside the normal (reference) range and necessitating immediate reporting to safeguard patient health. A broad heterogeneity exists about critical values in hemostasis worldwide. We provide here an expert opinion about the parameters, measurement units and alert limits pertaining to critical values in hemostasis.

  8. Synaptic Plasticity Enables Adaptive Self-Tuning Critical Networks

    PubMed Central

    Stepp, Nigel; Plenz, Dietmar; Srinivasa, Narayan

    2015-01-01

    During rest, the mammalian cortex displays spontaneous neural activity. Spiking of single neurons during rest has been described as irregular and asynchronous. In contrast, recent in vivo and in vitro population measures of spontaneous activity, using the LFP, EEG, MEG or fMRI suggest that the default state of the cortex is critical, manifested by spontaneous, scale-invariant, cascades of activity known as neuronal avalanches. Criticality keeps a network poised for optimal information processing, but this view seems to be difficult to reconcile with apparently irregular single neuron spiking. Here, we simulate a 10,000 neuron, deterministic, plastic network of spiking neurons. We show that a combination of short- and long-term synaptic plasticity enables these networks to exhibit criticality in the face of intrinsic, i.e. self-sustained, asynchronous spiking. Brief external perturbations lead to adaptive, long-term modification of intrinsic network connectivity through long-term excitatory plasticity, whereas long-term inhibitory plasticity enables rapid self-tuning of the network back to a critical state. The critical state is characterized by a branching parameter oscillating around unity, a critical exponent close to -3/2 and a long tail distribution of a self-similarity parameter between 0.5 and 1. PMID:25590427
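    The branching parameter mentioned above can be estimated, in its simplest formulation, as the average ratio of activity in consecutive time bins of a cascade; at criticality it hovers around unity. The sketch below is our own minimal construction, not the simulation code used in the paper.

```python
def branching_parameter(activity):
    """Estimate the branching parameter sigma from one cascade.

    activity: spike counts per time bin. sigma is the mean ratio of
    descendants (bin n+1) to ancestors (bin n); sigma ~ 1 indicates
    a critical cascade, sigma < 1 subcritical, sigma > 1 supercritical.
    """
    ratios = [b / a for a, b in zip(activity, activity[1:]) if a > 0]
    return sum(ratios) / len(ratios)

# A cascade that neither grows nor shrinks on average is critical:
sigma = branching_parameter([4, 4, 4, 4])
```

    In practice sigma would be averaged over many avalanches and time bins; this single-cascade estimator only illustrates the quantity being tracked.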

  9. Distributed nitrate transport and reaction routines (NTR) inside the mesoscale Hydrological Model (mHM) framework: Development and Application in the Selke catchment

    NASA Astrophysics Data System (ADS)

    Sinha, Sumit; Rode, Michael; Kumar, Rohini; Yang, Xiaoqiang; Samaniego, Luis; Borchardt, Dietrich

    2016-04-01

    Precise measurement of where, when and how much denitrification occurs remains, on the basis of measurements alone, a vexing and intractable research problem at all spatial and temporal scales. As a result, models have become essential tools for furthering our current understanding of the processes that control denitrification at the catchment scale. Implementation of the Water Framework Directive (WFD) and continued efforts to improve water treatment facilities have alleviated the problems associated with point sources of pollution. However, the problem of eutrophication still persists and is primarily associated with diffuse sources of pollution originating from agricultural areas. In this study, nitrate transport and reaction (NTR) routines are developed inside the mesoscale Hydrological Model (mHM, www.ufz.de/mhm), a fully distributed hydrological model with a novel parameter regionalization scheme (Samaniego et al. 2010; Kumar et al. 2013) that has been applied to the whole of Europe (Rakovec et al. 2016) and to numerous catchments worldwide. The NTR model is applied to a mesoscale river basin, the Selke (463 km2), located in central Germany. The NTR model takes into account critical processes such as transformation in the vadose zone, atmospheric deposition, plant uptake and in-stream denitrification, and also simulates manure and fertilizer application. Both the streamflow routines and the NTR model are run on daily time steps. The split-sample approach was used for model calibration (1994-1999) and validation (2000-2004). Flow dynamics at three gauging stations located inside this catchment are successfully captured by the model, with consistently high Nash-Sutcliffe Efficiency (NSE) of at least 0.8. Regarding nitrate estimates, the NSE values are greater than 0.7 for both calibration and validation periods. 
Finally, the NTR model is used to identify the critical source areas (CSAs) that contribute significantly to nutrient pollution as a result of different local hydrological and topographical conditions. A comprehensive sensitivity analysis and further regionalization of key parameters of the NTR model are also investigated. References: Kumar, R., L. Samaniego, and S. Attinger (2013), Implications of distributed hydrologic model parameterization on water fluxes at multiple scales and locations, Water Resour. Res., 49, 360-379, doi:10.1029/2012WR012195. Samaniego, L., R. Kumar, and S. Attinger (2010), Multiscale parameter regionalization of a grid-based hydrologic model at the mesoscale, Water Resour. Res., 46, W05523, doi:10.1029/2008WR007327. Rakovec, O., Kumar, R., Mai, J., Cuntz, M., Thober, S., Zink, M., Attinger, S., Schäfer, D., Schrön, M., Samaniego, L. (2016): Multiscale and multivariate evaluation of water fluxes and states over European river basins, J. Hydrometeorol., 17, 287-307, doi:10.1175/JHM-D-15-0054.1.
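    The Nash-Sutcliffe Efficiency used as the skill score in this record compares squared model errors against the variance of the observations: NSE = 1 is a perfect fit, NSE = 0 means the model is no better than the observed mean, and values of 0.7-0.8 such as those reported here are generally taken as good. A minimal sketch of the formula:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe Efficiency: 1 - SSE / variance of observations."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

obs = [1.0, 2.0, 3.0, 4.0, 5.0]
assert nse(obs, obs) == 1.0  # perfect simulation scores exactly 1
score = nse(obs, [1.1, 2.1, 2.9, 4.2, 4.8])  # small errors, high score
```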

  10. Using eddy covariance of CO2, 13CO2 and CH4, continuous soil respiration measurements, and PhenoCams to constrain a process-based biogeochemical model for carbon market-funded wetland restoration

    NASA Astrophysics Data System (ADS)

    Oikawa, P. Y.; Baldocchi, D. D.; Knox, S. H.; Sturtevant, C. S.; Verfaillie, J. G.; Dronova, I.; Jenerette, D.; Poindexter, C.; Huang, Y. W.

    2015-12-01

    We use multiple data streams in a model-data fusion approach to reduce uncertainty in predicting CO2 and CH4 exchange in drained and flooded peatlands. Drained peatlands in the Sacramento-San Joaquin River Delta, California are a strong source of CO2 to the atmosphere, and flooded peatlands or wetlands are a strong CO2 sink. However, wetlands are also large sources of CH4 that can offset the greenhouse gas mitigation potential of wetland restoration. Reducing uncertainty in model predictions of annual CO2 and CH4 budgets is critical for including wetland restoration in Cap-and-Trade programs. We have developed and parameterized the Peatland Ecosystem Photosynthesis, Respiration, and Methane Transport model (PEPRMT) in a drained agricultural peatland and a restored wetland. Both ecosystem respiration (Reco) and CH4 production are a function of two soil carbon (C) pools (i.e. recently-fixed C and soil organic C), temperature, and water table height. Photosynthesis is predicted using a light use efficiency model. To estimate parameters we use a Markov Chain Monte Carlo approach with an adaptive Metropolis-Hastings algorithm. Multiple data streams are used to constrain model parameters, including eddy covariance of CO2, 13CO2 and CH4, continuous soil respiration measurements and digital photography. Digital photography is used to estimate leaf area index, an important input variable for the photosynthesis model. Soil respiration and 13CO2 fluxes allow partitioning of eddy covariance data between Reco and photosynthesis. Partitioned fluxes of CO2 with associated uncertainty are used to parameterize the Reco and photosynthesis models within PEPRMT. Overall, PEPRMT model performance is high. For example, we observe close agreement between modeled and observed partitioned Reco (r2 = 0.68; slope = 1; RMSE = 0.59 g C-CO2 m-2 d-1). 
Model validation demonstrated the model's ability to accurately predict annual budgets of CO2 and CH4 in a wetland system (within 14% and 1% of observed annual budgets of CO2 and CH4, respectively). The use of multiple data streams is critical for constraining parameters and reducing uncertainty in model predictions, thereby providing accurate simulation of greenhouse gas exchange in a wetland restoration project with implications for C market-funded wetland restoration worldwide.
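    The parameter-estimation machinery described above can be sketched in miniature: a Metropolis-Hastings sampler whose proposal width is periodically adapted toward a target acceptance rate. The toy likelihood, adaptation rule, and tuning constants below are illustrative assumptions; PEPRMT's actual likelihood and the full adaptive algorithm are not reproduced.

```python
import math
import random

def log_posterior(theta, data):
    # Toy model: data are noisy observations of a single parameter,
    # with a flat prior (so the posterior mean is the data mean).
    return -0.5 * sum((d - theta) ** 2 for d in data)

def adaptive_mh(data, n_steps=5000, theta0=0.0, scale=1.0, seed=1):
    rng = random.Random(seed)
    theta, logp = theta0, log_posterior(theta0, data)
    chain, accepted = [], 0
    for step in range(1, n_steps + 1):
        proposal = theta + rng.gauss(0.0, scale)
        logp_new = log_posterior(proposal, data)
        if math.log(rng.random()) < logp_new - logp:  # accept/reject
            theta, logp = proposal, logp_new
            accepted += 1
        chain.append(theta)
        if step % 500 == 0:  # nudge proposal width toward ~40% acceptance
            scale *= 1.1 if accepted / step > 0.4 else 0.9
    return chain

data = [1.8, 2.1, 2.0, 2.2, 1.9]
chain = adaptive_mh(data)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])  # after burn-in
```

    Discarding the first part of the chain as burn-in and summarizing the remainder is what yields parameter estimates with associated uncertainty, as in the partitioned-flux fitting described above.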

  11. Phase transition in the parametric natural visibility graph.

    PubMed

    Snarskii, A A; Bezsudnov, I V

    2016-10-01

    We investigate time series by mapping them to complex networks using a parametric natural visibility graph (PNVG) algorithm that generates graphs depending on an arbitrary continuous parameter, the angle of view. We study the behavior of the relative number of clusters in the PNVG near the critical value of the angle of view. Artificial and experimental time series of different nature are used for numerical PNVG investigations to find critical exponents above and below the critical point, as well as the exponent in the finite-size scaling regime. Altogether, they allow us to find the critical exponent of the correlation length for the PNVG. The set of calculated critical exponents satisfies the basic Widom relation. The PNVG is found to demonstrate scaling behavior. Our results reveal the similarity between the behavior of the relative number of clusters in the PNVG and the order parameter in the theory of second-order phase transitions. We show that the PNVG is another example of a system (in addition to magnetic, percolation, superconductivity, etc.) with an observed second-order phase transition.
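    The PNVG construction can be sketched as a standard natural visibility graph followed by an angular filter. The angular criterion used below (retain a link only if its slope angle is at most the view-angle parameter alpha) is a simplified stand-in; the exact PNVG criterion of Snarskii and Bezsudnov may differ in detail.

```python
import math

def visible(t, y, i, j):
    """Standard natural-visibility criterion: every intermediate sample
    must lie strictly below the straight line joining points i and j."""
    ti, tj, yi, yj = t[i], t[j], y[i], y[j]
    return all(y[k] < yj + (yi - yj) * (tj - t[k]) / (tj - ti)
               for k in range(i + 1, j))

def pnvg_edges(t, y, alpha):
    """Visibility edges whose slope angle does not exceed alpha."""
    edges = []
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            if visible(t, y, i, j):
                angle = math.atan2(y[j] - y[i], t[j] - t[i])
                if angle <= alpha:
                    edges.append((i, j))
    return edges

t = [0, 1, 2, 3, 4]
y = [1.0, 3.0, 0.5, 2.0, 1.5]
full = pnvg_edges(t, y, math.pi / 2)  # alpha = 90 deg: plain NVG
narrow = pnvg_edges(t, y, 0.0)        # only level or descending links
```

    Sweeping alpha and tracking the relative number of connected clusters is the construction whose critical behavior the abstract analyzes.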

  12. Crack problem in superconducting cylinder with exponential distribution of critical-current density

    NASA Astrophysics Data System (ADS)

    Zhao, Yufeng; Xu, Chi; Shi, Liang

    2018-04-01

    The general problem of a center crack in a long cylindrical superconductor with an inhomogeneous critical-current distribution is studied based on the extended Bean model for zero-field-cooling (ZFC) and field-cooling (FC) magnetization processes, in which an inhomogeneity parameter η is introduced to characterize the critical-current density distribution in the inhomogeneous superconductor. The effect of the inhomogeneity parameter η on both the magnetic field distribution and the variation of the normalized stress intensity factors is also obtained based on the plane-strain approach and J-integral theory. The numerical results indicate that an exponential distribution of critical-current density leads to a larger trapped field inside the inhomogeneous superconductor and causes the center of the cylinder to fracture more easily. In addition, comparison of the magnetization-loop shapes for homogeneous and inhomogeneous critical-current distributions shows that the nonlinear field distribution is unique to the Bean model.

  13. Global Examination of Triggered Tectonic Tremor following the 2017 Mw8.1 Tehuantepec Earthquake in Mexico

    NASA Astrophysics Data System (ADS)

    Chao, K.; Gonzalez-Huizar, H.; Tang, V.; Klaeser, R. D.; Mattia, M.; Van der Lee, S.

    2017-12-01

    Triggered tremor is a type of slow earthquake activated by the teleseismic surface waves of large-magnitude earthquakes. Observations of triggered tremor can help to evaluate the background ambient tremor rate and slow slip events in the surrounding region. The Mw 8.1 Tehuantepec earthquake in Mexico is an ideal tremor-triggering candidate for a global search for triggered tremor. Here, we examine triggered tremor globally following the Mw 8.1 event and model the tremor-triggering potential. We examined 7,000 seismic traces and found widespread triggered tremor along the western coast of North America occurring during the surface waves of the Mw 8.1 event. Triggered tremor appeared on the San Jacinto Fault, the San Andreas Fault around Parkfield, and the Calaveras Fault in California; on Vancouver Island in the Cascadia subduction zone; on the Queen Charlotte Margin and the Eastern Denali Fault in Canada; and in Alaska and the Aleutian Arc. In addition, we observe a newly found triggered tremor source at Mt. Etna in Sicily, Italy. However, we do not find clear evidence of triggered tremor in the tremor-active regions of Japan, Taiwan, and New Zealand. We model the tremor-triggering potential at the triggering earthquake source and at the triggered tremor sources. Our modeling results suggest that the source parameters of the Mw 8.1 triggering event and the stress state at the triggered fault zones are two critical factors controlling the tremor-triggering threshold.

  14. Ultrasonic power transfer from a spherical acoustic wave source to a free-free piezoelectric receiver: Modeling and experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shahab, S.; Gray, M.; Erturk, A., E-mail: alper.erturk@me.gatech.edu

    2015-03-14

    Contactless powering of small electronic components has lately received growing attention for wireless applications in which battery replacement or tethered charging is undesired or simply impossible, and ambient energy harvesting is not a viable solution. As an alternative to well-studied methods of contactless energy transfer, such as the inductive coupling method, the use of ultrasonic waves transmitted and received by piezoelectric devices enables larger power transmission distances, which is critical especially for deep-implanted electronic devices. Moreover, energy transfer by means of acoustic waves is well suited in situations where no electromagnetic fields are allowed. The limited literature of ultrasonic acoustic energy transfer is mainly centered on proof-of-concept experiments demonstrating the feasibility of this method, lacking experimentally validated modeling efforts for the resulting multiphysics problem that couples the source and receiver dynamics with domain acoustics. In this work, we present fully coupled analytical, numerical, and experimental multiphysics investigations for ultrasonic acoustic energy transfer from a spherical wave source to a piezoelectric receiver bar that operates in the 33-mode of piezoelectricity. The fluid-loaded piezoelectric receiver under free-free mechanical boundary conditions is shunted to an electrical load for quantifying the electrical power output for a given acoustic source strength of the transmitter. The analytical acoustic-piezoelectric structure interaction modeling framework is validated experimentally, and the effects of system parameters are reported along with optimal electrical loading and frequency conditions of the receiver.

  15. Critical radiation fluxes and luminosities of black holes and relativistic stars

    NASA Technical Reports Server (NTRS)

    Lamb, Frederick K.; Miller, M. Coleman

    1995-01-01

    The critical luminosity at which the outward force of radiation balances the inward force of gravity plays an important role in many astrophysical systems. We present expressions for the radiation force on particles with arbitrary cross sections and analyze the radiation field produced by radiating matter, such as a disk, ring, boundary layer, or stellar surface, that rotates slowly around a slowly rotating gravitating mass. We then use these results to investigate the critical radiation flux and, where possible, the critical luminosity of such a system in general relativity. We demonstrate that if the radiation source is axisymmetric and emission is back-front symmetric with respect to the local direction of motion of the radiating matter, as seen in the comoving frame, then the radial component of the radiation flux and the diagonal components of the radiation stress-energy tensor outside the source are the same, to first order in the rotation rates, as they would be if the radiation source and gravitating mass were not rotating. We argue that the critical radiation flux for matter at rest in the locally nonrotating frame is often satisfactory as an astrophysical benchmark flux and show that if this benchmark is adopted, many of the complications potentially introduced by rotation of the radiation source and the gravitating mass are avoided. We show that if the radiation field in the absence of rotation would be spherically symmetric and the opacity is independent of frequency and direction, one can define a critical luminosity for the system that is independent of the spectrum and angular size of the radiation source and is unaffected by rotation of the source and mass and orbital motion of the matter, to first order. 
    Finally, we analyze the conditions under which the maximum possible luminosity of a star or black hole powered by steady spherically symmetric radial accretion is the same in general relativity as in the Newtonian limit.

  16. Parameter estimation for compact binary coalescence signals with the first generation gravitational-wave detector network

    NASA Astrophysics Data System (ADS)

    Aasi, J.; Abadie, J.; Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M.; Accadia, T.; Acernese, F.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Ajith, P.; Allen, B.; Allocca, A.; Amador Ceron, E.; Amariutei, D.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Ast, S.; Aston, S. M.; Astone, P.; Atkinson, D.; Aufmuth, P.; Aulbert, C.; Aylott, B. E.; Babak, S.; Baker, P.; Ballardin, G.; Ballmer, S.; Bao, Y.; Barayoga, J. C. B.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Bastarrika, M.; Basti, A.; Batch, J.; Bauchrowitz, J.; Bauer, Th. S.; Bebronne, M.; Beck, D.; Behnke, B.; Bejger, M.; Beker, M. G.; Bell, A. S.; Bell, C.; Belopolski, I.; Benacquista, M.; Berliner, J. M.; Bertolini, A.; Betzwieser, J.; Beveridge, N.; Beyersdorf, P. T.; Bhadbade, T.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Biswas, R.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bland, B.; Blom, M.; Bock, O.; Bodiya, T. P.; Bogan, C.; Bond, C.; Bondarescu, R.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Boschi, V.; Bose, S.; Bosi, L.; Bouhou, B.; Braccini, S.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Breyer, J.; Briant, T.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Burguet–Castell, J.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Calloni, E.; Camp, J. B.; Campsie, P.; Cannon, K.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Carbone, L.; Caride, S.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chalermsongsak, T.; Charlton, P.; Chassande-Mottin, E.; Chen, W.; Chen, X.; Chen, Y.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Chow, J.; Christensen, N.; Chua, S. S. Y.; Chung, C. T. Y.; Chung, S.; Ciani, G.; Clara, F.; Clark, D. 
E.; Clark, J. A.; Clayton, J. H.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colacino, C. N.; Colla, A.; Colombini, M.; Conte, A.; Conte, R.; Cook, D.; Corbitt, T. R.; Cordier, M.; Cornish, N.; Corsi, A.; Costa, C. A.; Coughlin, M.; Coulon, J.-P.; Couvares, P.; Coward, D. M.; Cowart, M.; Coyne, D. C.; Creighton, J. D. E.; Creighton, T. D.; Cruise, A. M.; Cumming, A.; Cunningham, L.; Cuoco, E.; Cutler, R. M.; Dahl, K.; Damjanic, M.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daudert, B.; Daveloza, H.; Davier, M.; Daw, E. J.; Dayanga, T.; De Rosa, R.; DeBra, D.; Debreczeni, G.; Degallaix, J.; Del Pozzo, W.; Dent, T.; Dergachev, V.; DeRosa, R.; Dhurandhar, S.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Di Paolo Emilio, M.; Di Virgilio, A.; Díaz, M.; Dietz, A.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorsher, S.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Dumas, J.-C.; Dwyer, S.; Eberle, T.; Edgar, M.; Edwards, M.; Effler, A.; Ehrens, P.; Endrőczi, G.; Engel, R.; Etzel, T.; Evans, K.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Farr, B. F.; Farr, W. M.; Favata, M.; Fazi, D.; Fehrmann, H.; Feldbaum, D.; Feroz, F.; Ferrante, I.; Ferrini, F.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Fisher, R. P.; Flaminio, R.; Foley, S.; Forsi, E.; Forte, L. A.; Fotopoulos, N.; Fournier, J.-D.; Franc, J.; Franco, S.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M. A.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Friedrich, D.; Fritschel, P.; Frolov, V. V.; Fujimoto, M.-K.; Fulda, P. J.; Fyffe, M.; Gair, J.; Galimberti, M.; Gammaitoni, L.; Garcia, J.; Garufi, F.; Gáspár, M. E.; Gelencser, G.; Gemme, G.; Genin, E.; Gennai, A.; Gergely, L. Á.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Giazotto, A.; Gil-Casanova, S.; Gill, C.; Gleason, J.; Goetz, E.; González, G.; Gorodetsky, M. L.; Goßler, S.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. 
M.; Griffo, C.; Grote, H.; Grover, K.; Grunewald, S.; Guidi, G. M.; Guido, C.; Gupta, R.; Gustafson, E. K.; Gustafson, R.; Hallam, J. M.; Hammer, D.; Hammond, G.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Hayama, K.; Hayau, J.-F.; Heefner, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M. A.; Heng, I. S.; Heptonstall, A. W.; Herrera, V.; Heurs, M.; Hewitson, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Holt, K.; Holtrop, M.; Hong, T.; Hooper, S.; Hough, J.; Howell, E. J.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Izumi, K.; Jacobson, M.; James, E.; Jang, Y. J.; Jaranowski, P.; Jesse, E.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kasprzack, M.; Kasturi, R.; Katsavounidis, E.; Katzman, W.; Kaufer, H.; Kaufman, K.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Keitel, D.; Kelley, D.; Kells, W.; Keppel, D. G.; Keresztes, Z.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, B. K.; Kim, C.; Kim, H.; Kim, K.; Kim, N.; Kim, Y. M.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kline, J.; Kokeyama, K.; Kondrashov, V.; Koranda, S.; Korth, W. Z.; Kowalska, I.; Kozak, D.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kurdyumov, R.; Kwee, P.; Lam, P. K.; Landry, M.; Langley, A.; Lantz, B.; Lastzka, N.; Lawrie, C.; Lazzarini, A.; Le Roux, A.; Leaci, P.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Leong, J. R.; Leonor, I.; Leroy, N.; Letendre, N.; Lhuillier, V.; Li, J.; Li, T. G. F.; Lindquist, P. E.; Litvine, V.; Liu, Y.; Liu, Z.; Lockerbie, N. A.; Lodhia, D.; Logue, J.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J.; Lubinski, M.; Lück, H.; Lundgren, A. P.; Macarthur, J.; Macdonald, E.; Machenschalk, B.; MacInnis, M.; Macleod, D. 
M.; Mageswaran, M.; Mailand, K.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Marx, J. N.; Mason, K.; Masserot, A.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McIver, J.; Meadors, G. D.; Mehmet, M.; Meier, T.; Melatos, A.; Melissinos, A. C.; Mendell, G.; Menéndez, D. F.; Mercer, R. A.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Miao, H.; Michel, C.; Milano, L.; Miller, J.; Minenkov, Y.; Mingarelli, C. M. F.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moe, B.; Mohan, M.; Mohapatra, S. R. P.; Moraru, D.; Moreno, G.; Morgado, N.; Morgia, A.; Mori, T.; Morriss, S. R.; Mosca, S.; Mossavi, K.; Mours, B.; Mow–Lowry, C. M.; Mueller, C. L.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Müller-Ebhardt, H.; Munch, J.; Murphy, D.; Murray, P. G.; Mytidis, A.; Nash, T.; Naticchioni, L.; Necula, V.; Nelson, J.; Neri, I.; Newton, G.; Nguyen, T.; Nishizawa, A.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E.; Nuttall, L.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Oldenberg, R. G.; O'Reilly, B.; O'Shaughnessy, R.; Osthelder, C.; Ott, C. D.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Page, A.; Palladino, L.; Palomba, C.; Pan, Y.; Pankow, C.; Paoletti, F.; Paoletti, R.; Papa, M. A.; Parisi, M.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Pedraza, M.; Penn, S.; Perreca, A.; Persichetti, G.; Phelps, M.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pierro, V.; Pihlaja, M.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Pletsch, H. J.; Plissi, M. V.; Poggiani, R.; Pöld, J.; Postiglione, F.; Poux, C.; Prato, M.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. 
G.; Puncken, O.; Punturo, M.; Puppo, P.; Quetschke, V.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Rácz, I.; Radkins, H.; Raffai, P.; Rakhmanov, M.; Ramet, C.; Rankins, B.; Rapagnani, P.; Raymond, V.; Re, V.; Reed, C. M.; Reed, T.; Regimbau, T.; Reid, S.; Reitze, D. H.; Ricci, F.; Riesen, R.; Riles, K.; Roberts, M.; Robertson, N. A.; Robinet, F.; Robinson, C.; Robinson, E. L.; Rocchi, A.; Roddy, S.; Rodriguez, C.; Rodruck, M.; Rolland, L.; Rollins, J. G.; Romano, R.; Romie, J. H.; Rosińska, D.; Röver, C.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Salemi, F.; Sammut, L.; Sandberg, V.; Sankar, S.; Sannibale, V.; Santamaría, L.; Santiago-Prieto, I.; Santostasi, G.; Saracco, E.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Savage, R. L.; Schilling, R.; Schnabel, R.; Schofield, R. M. S.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Seifert, F.; Sellers, D.; Sentenac, D.; Sergeev, A.; Shaddock, D. A.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sidery, T. L.; Siemens, X.; Sigg, D.; Simakov, D.; Singer, A.; Singer, L.; Sintes, A. M.; Skelton, G. R.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, R. J. E.; Smith-Lefebvre, N. D.; Somiya, K.; Sorazu, B.; Speirits, F. C.; Sperandio, L.; Stefszky, M.; Steinert, E.; Steinlechner, J.; Steinlechner, S.; Steplewski, S.; Stochino, A.; Stone, R.; Strain, K. A.; Strigin, S. E.; Stroeer, A. S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sung, M.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Szeifert, G.; Tacca, M.; Taffarello, L.; Talukder, D.; Tanner, D. B.; Tarabrin, S. P.; Taylor, R.; ter Braack, A. P. M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Thüring, A.; Titsler, C.; Tokmakov, K. V.; Tomlinson, C.; Toncelli, A.; Tonelli, M.; Torre, O.; Torres, C. V.; Torrie, C. I.; Tournefier, E.; Travasso, F.; Traylor, G.; Tse, M.; Ugolini, D.; Vahlbruch, H.; Vajente, G.; van den Brand, J. F. J.; Van Den Broeck, C.; van der Putten, S.; van Veggel, A. 
A.; Vass, S.; Vasuth, M.; Vaulin, R.; Vavoulidis, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Villar, A. E.; Vinet, J.-Y.; Vitale, S.; Vocca, H.; Vorvick, C.; Vyatchanin, S. P.; Wade, A.; Wade, L.; Wade, M.; Waldman, S. J.; Wallace, L.; Wan, Y.; Wang, M.; Wang, X.; Wanner, A.; Ward, R. L.; Was, M.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D. J.; Whiting, B. F.; Wiesner, K.; Wilkinson, C.; Willems, P. A.; Williams, L.; Williams, R.; Willke, B.; Wimmer, M.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Wittel, H.; Woan, G.; Wooley, R.; Worden, J.; Yablon, J.; Yakushin, I.; Yamamoto, H.; Yamamoto, K.; Yancey, C. C.; Yang, H.; Yeaton-Massey, D.; Yoshida, S.; Yvert, M.; Zadrożny, A.; Zanolin, M.; Zendri, J.-P.; Zhang, F.; Zhang, L.; Zhao, C.; Zotov, N.; Zucker, M. E.; Zweizig, J.

    2013-09-01

    Compact binary systems with neutron stars or black holes are among the most promising sources for ground-based gravitational-wave detectors. Gravitational radiation encodes rich information about source physics; thus parameter estimation and model selection are crucial analysis steps for any detection candidate event. Detailed models of the anticipated waveforms enable inference on several parameters, such as component masses, spins, sky location and distance, that are essential for new astrophysical studies of these sources. However, accurate measurements of these parameters and discrimination of models describing the underlying physics are complicated by artifacts in the data, uncertainties in the waveform models and in the calibration of the detectors. Here we report such measurements on a selection of simulated signals added either in hardware or software to the data collected by the two LIGO instruments and the Virgo detector during their most recent joint science run, including a “blind injection” where the signal was not initially revealed to the collaboration. We demonstrate the ability to extract information about the source physics from signals covering the neutron-star and black-hole binary parameter space over the component mass range 1M⊙-25M⊙ and the full range of spin parameters. The cases reported in this study provide a snapshot of the status of parameter estimation in preparation for the operation of advanced detectors.
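    The stochastic-sampling character of such parameter estimation can be illustrated with a minimal Metropolis random-walk sampler. This is a toy sketch only: real gravitational-wave analyses use far more sophisticated samplers, waveform models, and multi-detector likelihoods, and every name below is illustrative.

```python
import numpy as np

def metropolis(loglike, x0, step, n, seed=0):
    """Minimal Metropolis sampler over one parameter: propose a Gaussian
    random-walk step, accept with probability min(1, L'/L), and record the
    chain. A toy stand-in for the samplers used in real analyses."""
    rng = np.random.default_rng(seed)
    x, lx = x0, loglike(x0)
    chain = []
    for _ in range(n):
        xp = x + step * rng.normal()       # random-walk proposal
        lp = loglike(xp)
        if np.log(rng.random()) < lp - lx:  # Metropolis acceptance rule
            x, lx = xp, lp
        chain.append(x)
    return np.array(chain)
```

For a Gaussian log-likelihood centred at some true parameter value, the chain's long-run histogram approximates the posterior, which is the sense in which such samplers "measure" source parameters with uncertainties.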

  17. Discrete Event Simulation Modeling and Analysis of Key Leader Engagements

    DTIC Science & Technology

    2012-06-01

    to offer. GreenPlayer agents require four parameters, pC, pKLK, pTK, and pRK, which give probabilities for being corrupt, having key leader...HandleMessageRequest component. The same parameter constraints apply to these four parameters. The parameter pRK is the same parameter from the CreatePlayers component...whether the local Green player has resource critical knowledge by using the parameter pRK. It schedules an EndResourceKnowledgeRequest event, passing

  18. Automatic parameter selection for feature-based multi-sensor image registration

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan

    2006-05-01

    Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
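    The Yitzhaky-Peli-style selection loop described above (sweep the parameter combinations, vote an estimated ground truth from all the resulting feature maps, score each combination as an ROC point, keep the point nearest the ideal corner) can be sketched roughly as follows. All names are hypothetical; the `detector` callback stands in for whatever feature detector is being tuned.

```python
import numpy as np

def select_parameters(images, detector, param_grid):
    """Sketch of ROC-based parameter selection: pick the parameter set whose
    ROC point lies closest to the ideal corner (FPR=0, TPR=1).
    `detector(img, p)` must return a boolean feature map."""
    # Run the detector on every image for every parameter combination.
    maps = [np.stack([detector(im, p) for im in images]) for p in param_grid]
    # Estimated ground truth: pixel-wise majority vote over all runs.
    votes = np.sum(maps, axis=0)
    truth = votes > (len(param_grid) / 2)
    best, best_d = None, np.inf
    for p, m in zip(param_grid, maps):
        tp = np.sum(m & truth); fp = np.sum(m & ~truth)
        fn = np.sum(~m & truth); tn = np.sum(~m & ~truth)
        tpr = tp / max(tp + fn, 1)
        fpr = fp / max(fp + tn, 1)
        d = np.hypot(fpr, 1.0 - tpr)   # distance to the ideal ROC corner
        if d < best_d:
            best, best_d = p, d
    return best
```

With a simple threshold detector on a bimodal image, the threshold separating the two modes wins, since it alone yields no false positives or negatives against the voted ground truth.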

  19. Laser power conversion system analysis, volume 1

    NASA Technical Reports Server (NTRS)

    Jones, W. S.; Morgan, L. L.; Forsyth, J. B.; Skratt, J. P.

    1979-01-01

    The orbit-to-orbit laser energy conversion system analysis established a mission model of satellites with various orbital parameters and average electrical power requirements ranging from 1 to 300 kW. The system analysis evaluated various conversion techniques, power system deployment parameters, power system electrical supplies, and other critical subsystems relative to various combinations of the mission model. The analysis showed that the laser power system would not be competitive with current satellite power systems from weight, cost, and development risk standpoints.

  20. Multi-scale comparison of source parameter estimation using empirical Green's function approach

    NASA Astrophysics Data System (ADS)

    Chen, X.; Cheng, Y.

    2015-12-01

    Analysis of earthquake source parameters requires correction for path effects, site responses, and instrument responses. The empirical Green's function (EGF) method is one of the most effective methods for removing path effects and station responses, by taking the spectral ratio between a larger and a smaller event. The traditional EGF method requires identifying suitable event pairs and analyzing each event individually. This allows high-quality estimates for strictly selected events; however, the quantity of resolvable source parameters is limited, which challenges the interpretation of spatial-temporal coherency. On the other hand, methods that exploit the redundancy of event-station pairs have been proposed, which utilize stacking to obtain systematic source parameter estimates for a large quantity of events at the same time. This allows us to examine large quantities of events systematically, facilitating analysis of spatial-temporal patterns and scaling relationships. However, it is unclear how much resolution is sacrificed during this process. In addition to the empirical Green's function calculation, the choice of model parameters and fitting methods also leads to biases. Here, using two regional focused arrays, the OBS array in the Mendocino region and the borehole array in the Salton Sea geothermal field, we compare the results from large-scale stacking analysis, small-scale cluster analysis, and single event-pair analysis with different fitting methods, across completely different tectonic environments, in order to quantify the consistency and inconsistency in source parameter estimates and the associated problems.
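    The spectral-ratio step at the heart of the EGF method can be sketched in a few lines: take amplitude spectra of a larger event and a co-located smaller event recorded at the same station and divide, so that shared path and site terms cancel. A minimal sketch under those assumptions (not any group's production code):

```python
import numpy as np

def spectral_ratio(main, egf, dt):
    """Amplitude spectral ratio between a larger event and a co-located
    smaller event (the empirical Green's function) at the same station.
    Shared path and site terms cancel in the ratio; what remains reflects
    the two source spectra."""
    n = min(len(main), len(egf))
    freqs = np.fft.rfftfreq(n, d=dt)
    a_main = np.abs(np.fft.rfft(main[:n]))
    a_egf = np.abs(np.fft.rfft(egf[:n]))
    eps = 1e-12 * a_egf.max()          # guard against division by ~0 bins
    return freqs, a_main / (a_egf + eps)
```

At low frequency the ratio approaches the seismic moment ratio of the two events, which is one of the quantities that single event-pair analyses fit alongside corner frequencies.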

  1. Evolving Relationship Structures in Multi-sourcing Arrangements: The Case of Mission Critical Outsourcing

    NASA Astrophysics Data System (ADS)

    Heitlager, Ilja; Helms, Remko; Brinkkemper, Sjaak

    Information Technology Outsourcing practice and research mainly consider the outsourcing phenomenon as a generic fulfilment of the IT function by external parties. Inspired by the logic of commoditisation, core competencies, and economies of scale, organisations transfer assets, existing departments, and IT functions to external parties. Although the generic approach might work for desktop outsourcing, where standardisation is the dominant factor, it does not work for the management of mission critical applications. Managing mission critical applications requires a different approach, in which building relationships is critical. These relationships involve inter- and intra-organisational parties in a multi-sourcing arrangement, called an IT service chain, consisting of multiple (specialist) parties that have to collaborate closely to deliver high-quality services.

  2. Shielding calculation and criticality safety analysis of spent fuel transportation cask in research reactors.

    PubMed

    Mohammadi, A; Hassanzadeh, M; Gharib, M

    2016-02-01

    In this study, shielding calculations and a criticality safety analysis were carried out for the interim storage and the associated transportation cask of generic material testing reactor (MTR) research reactors. During these processes, three major terms were considered: source term, shielding, and criticality calculations. The Monte Carlo transport code MCNP5 was used for the shielding calculation and criticality safety analysis, and the ORIGEN2.1 code for the source term calculation. According to the results obtained, a cylindrical cask with body, top, and bottom thicknesses of 18, 13, and 13 cm, respectively, was accepted as the dual-purpose cask. Furthermore, it is shown that the total dose rates are below the criteria for normal transport and meet the specified standards. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Monitoring of conditions inside gas aggregation cluster source during production of Ti/TiOx nanoparticles

    NASA Astrophysics Data System (ADS)

    Kousal, J.; Kolpaková, A.; Shelemin, A.; Kudrna, P.; Tichý, M.; Kylián, O.; Hanuš, J.; Choukourov, A.; Biederman, H.

    2017-10-01

    Gas aggregation sources are nowadays rather widely used in the research community for producing nanoparticles. However, direct diagnostics of the conditions inside the source are relatively scarce. In this work, we focused on monitoring the plasma parameters and the composition of the gas during the production of Ti/TiOx nanoparticles. We studied the role of oxygen in the aggregation process and the influence of the presence of the particles on the plasma. The construction of the source allowed us to make a 2D map of the plasma parameters inside the source.

  4. Added-value joint source modelling of seismic and geodetic data

    NASA Astrophysics Data System (ADS)

    Sudhaus, Henriette; Heimann, Sebastian; Walter, Thomas R.; Krueger, Frank

    2013-04-01

    In tectonically active regions, earthquake source studies strongly support the analysis of current faulting processes, as they reveal the location and geometry of active faults, the average slip released, and more. For source modelling of shallow, moderate to large earthquakes, a combination of geodetic (GPS, InSAR) and seismic data is often used. A truly joint use of these data, however, usually takes place only at a higher modelling level, where some of the first-order characteristics (time, centroid location, fault orientation, moment) have already been fixed. These required basis model parameters have to be given, assumed, or inferred in a previous, separate, and highly non-linear modelling step using one of these data sets alone. We present a new earthquake rupture model implementation that realizes a fully combined integration of surface displacement measurements and seismic data in a non-linear optimization of simple but extended planar ruptures. The model implementation allows for fast forward calculations of full seismograms and surface deformation and therefore enables us to use Monte Carlo global search algorithms. Furthermore, we benefit from the complementary character of seismic and geodetic data, e.g. the precise definition of the source location from geodetic data and the sensitivity of seismic data to moment release at greater depth. These increased constraints from the combined dataset make optimizations efficient, even for larger model parameter spaces and with a very limited number of a priori assumptions on the source. A vital part of our approach is rigorous data weighting based on empirically estimated data errors. We construct full data error variance-covariance matrices for geodetic data to account for correlated data noise and also weight the seismic data based on their signal-to-noise ratio.
    The estimation of the data errors and the fast forward modelling open the door for Bayesian inference of the source model parameters. The source model product then features parameter uncertainty estimates and reveals parameter trade-offs that arise from imperfect data coverage and data errors. We applied our new source modelling approach to the 2010 Haiti earthquake, for which a number of apparently different seismic, geodetic, and joint source models have already been reported, mostly without model parameter uncertainty estimates. We show here that the variability of these source models seems to arise from inherent model parameter trade-offs and mostly has little statistical significance; e.g., even using a large dataset comprising seismic and geodetic data, the confidence interval of the fault dip remains as wide as about 20 degrees.
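    The covariance-based data weighting described above amounts to evaluating a generalized least-squares misfit r^T C^-1 r, where C is the data error variance-covariance matrix. A small sketch (not the authors' code) that computes it stably via a Cholesky factorization:

```python
import numpy as np

def weighted_misfit(residuals, cov):
    """Generalized least-squares misfit r^T C^-1 r for correlated data
    errors, evaluated via the Cholesky factor of the covariance matrix
    rather than an explicit inverse."""
    L = np.linalg.cholesky(cov)            # C = L L^T
    w = np.linalg.solve(L, residuals)      # whitened residuals
    return float(w @ w)
```

With C equal to the identity this reduces to an ordinary sum of squares; off-diagonal covariance terms down-weight residuals that share correlated noise. In a Bayesian search this misfit enters the log-likelihood as -0.5 r^T C^-1 r (up to a normalization constant).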

  5. Time evolution and dynamical phase transitions at a critical time in a system of one-dimensional bosons after a quantum quench.

    PubMed

    Mitra, Aditi

    2012-12-28

    A renormalization group approach is used to show that a one-dimensional system of bosons subject to a lattice quench exhibits a finite-time dynamical phase transition where an order parameter within a light cone increases as a nonanalytic function of time after a critical time. Such a transition is also found for a simultaneous lattice and interaction quench where the effective scaling dimension of the lattice becomes time dependent, crucially affecting the time evolution of the system. Explicit results are presented for the time evolution of the boson interaction parameter and the order parameter for the dynamical transition as well as for more general quenches.

  6. Light sources for high-volume manufacturing EUV lithography: technology, performance, and power scaling

    NASA Astrophysics Data System (ADS)

    Fomenkov, Igor; Brandt, David; Ershov, Alex; Schafgans, Alexander; Tao, Yezheng; Vaschenko, Georgiy; Rokitski, Slava; Kats, Michael; Vargas, Michael; Purvis, Michael; Rafac, Rob; La Fontaine, Bruno; De Dea, Silvia; LaForge, Andrew; Stewart, Jayson; Chang, Steven; Graham, Matthew; Riggs, Daniel; Taylor, Ted; Abraham, Mathew; Brown, Daniel

    2017-06-01

    Extreme ultraviolet (EUV) lithography is expected to succeed 193-nm immersion multi-patterning technology for sub-10-nm critical layer patterning. In order to be successful, EUV lithography has to demonstrate that it can satisfy the industry requirements in the following critical areas: power, dose stability, etendue, spectral content, and lifetime. Currently, development of second-generation laser-produced plasma (LPP) light sources for ASML's NXE:3300B EUV scanner is complete, and the first units are installed and operational at chipmaker customers. We describe different aspects and performance characteristics of the sources, dose stability results, power scaling, and availability data for EUV sources, and also report new development results.

  7. A risk-based approach to management of leachables utilizing statistical analysis of extractables.

    PubMed

    Stults, Cheryl L M; Mikl, Jaromir; Whelehan, Oliver; Morrical, Bradley; Duffield, William; Nagao, Lee M

    2015-04-01

    To incorporate quality by design concepts into the management of leachables, an emphasis is often put on understanding the extractable profile for the materials of construction for manufacturing disposables, container-closure, or delivery systems. Component manufacturing processes may also impact the extractable profile. An approach was developed to (1) identify critical components that may be sources of leachables, (2) enable an understanding of manufacturing process factors that affect extractable profiles, (3) determine if quantitative models can be developed that predict the effect of those key factors, and (4) evaluate the practical impact of the key factors on the product. A risk evaluation for an inhalation product identified injection molding as a key process. Designed experiments were performed to evaluate the impact of molding process parameters on the extractable profile from an ABS inhaler component. Statistical analysis of the resulting GC chromatographic profiles identified processing factors that were correlated with peak levels in the extractable profiles. The combination of statistically significant molding process parameters was different for different types of extractable compounds. ANOVA models were used to obtain optimal process settings and predict extractable levels for a selected number of compounds. The proposed paradigm may be applied to evaluate the impact of material composition and processing parameters on extractable profiles and utilized to manage product leachables early in the development process and throughout the product lifecycle.
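    The designed-experiment logic described above (vary molding process parameters over a factorial design, then test which factors move the extractable levels) can be illustrated with a main-effects calculation on a two-level design. The factors and numbers below are invented for illustration, not the study's data:

```python
import numpy as np

def main_effects(design, response):
    """Main effects from a two-level (+1/-1 coded) factorial design:
    mean response at the high level minus mean at the low level,
    computed per factor (one column of the design matrix each)."""
    design = np.asarray(design, float)
    response = np.asarray(response, float)
    return np.array([response[col > 0].mean() - response[col < 0].mean()
                     for col in design.T])
```

For a 2x2 design in, say, melt temperature and hold time, with extractable peak areas as the response, a large main effect flags a molding parameter worth carrying into the ANOVA model used to find optimal process settings.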

  8. Techniques and Protocols for Dispersing Nanoparticle Powders in Aqueous Media-Is there a Rationale for Harmonization?

    PubMed

    Hartmann, Nanna B; Jensen, Keld Alstrup; Baun, Anders; Rasmussen, Kirsten; Rauscher, Hubert; Tantra, Ratna; Cupi, Denisa; Gilliland, Douglas; Pianella, Francesca; Riego Sintes, Juan M

    2015-01-01

    Selecting appropriate ways of bringing engineered nanoparticles (ENP) into aqueous dispersion is a main obstacle for testing, and thus for understanding and evaluating, their potential adverse effects to the environment and human health. Using different methods to prepare (stock) dispersions of the same ENP may be a source of variation in the toxicity measured. Harmonization and standardization of dispersion methods applied in mammalian and ecotoxicity testing are needed to ensure a comparable data quality and to minimize test artifacts produced by modifications of ENP during the dispersion preparation process. Such harmonization and standardization will also enhance comparability among tests, labs, and studies on different types of ENP. The scope of this review was to critically discuss the essential parameters in dispersion protocols for ENP. The parameters are identified from individual scientific studies and from consensus reached in larger scale research projects and international organizations. A step-wise approach is proposed to develop tailored dispersion protocols for ecotoxicological and mammalian toxicological testing of ENP. The recommendations of this analysis may serve as a guide to researchers, companies, and regulators when selecting, developing, and evaluating the appropriateness of dispersion methods applied in mammalian and ecotoxicity testing. However, additional experimentation is needed to further document the protocol parameters and investigate to what extent different stock dispersion methods affect ecotoxicological and mammalian toxicological responses of ENP.

  9. Impedance analysis of GPCR-mediated changes in endothelial barrier function: overview, and fundamental considerations for stable and reproducible measurements

    PubMed Central

    Stolwijk, Judith A.; Matrougui, Khalid; Renken, Christian W.; Trebak, Mohamed

    2014-01-01

    The past 20 years have seen significant growth in using impedance-based assays to understand the molecular underpinnings of endothelial and epithelial barrier function in response to physiological agonists and pharmacological and toxicological compounds. Most studies on barrier function use G protein coupled receptor (GPCR) agonists which couple to fast and transient changes in barrier properties. The power of impedance-based techniques such as Electric Cell-Substrate Impedance Sensing (ECIS) resides in the ability to detect minute changes in cell layer integrity label-free and in real-time, on time scales ranging from seconds to days. We provide a comprehensive overview of the biophysical principles, applications and recent developments in impedance-based methodologies. Despite extensive application of impedance analysis in endothelial barrier research, little attention has been paid to data analysis and critical experimental variables, which are both essential for signal stability and reproducibility. We describe the rationale behind common ECIS data presentation and interpretation and illustrate practical guidelines to improve signal intensity by adapting technical parameters such as electrode layout, monitoring frequency or parameter (resistance versus impedance magnitude). Moreover, we discuss the impact of experimental parameters, including cell source, liquid handling and agonist preparation on signal intensity and kinetics. Our discussions are supported by experimental data obtained from human microvascular endothelial cells challenged with three GPCR agonists, thrombin, histamine and Sphingosine-1-Phosphate. PMID:25537398

  10. Impedance analysis of GPCR-mediated changes in endothelial barrier function: overview and fundamental considerations for stable and reproducible measurements.

    PubMed

    Stolwijk, Judith A; Matrougui, Khalid; Renken, Christian W; Trebak, Mohamed

    2015-10-01

    The past 20 years have seen significant growth in using impedance-based assays to understand the molecular underpinning of endothelial and epithelial barrier function in response to physiological agonists and pharmacological and toxicological compounds. Most studies on barrier function use G protein-coupled receptor (GPCR) agonists which couple to fast and transient changes in barrier properties. The power of impedance-based techniques such as electric cell-substrate impedance sensing (ECIS) resides in its ability to detect minute changes in cell layer integrity label-free and in real-time ranging from seconds to days. We provide a comprehensive overview of the biophysical principles, applications, and recent developments in impedance-based methodologies. Despite extensive application of impedance analysis in endothelial barrier research, little attention has been paid to data analysis and critical experimental variables, which are both essential for signal stability and reproducibility. We describe the rationale behind common ECIS data presentation and interpretation and illustrate practical guidelines to improve signal intensity by adapting technical parameters such as electrode layout, monitoring frequency, or parameter (resistance versus impedance magnitude). Moreover, we discuss the impact of experimental parameters, including cell source, liquid handling, and agonist preparation on signal intensity and kinetics. Our discussions are supported by experimental data obtained from human microvascular endothelial cells challenged with three GPCR agonists, thrombin, histamine, and sphingosine-1-phosphate.

  11. HVAC SYSTEMS AS EMISSION SOURCES AFFECTING INDOOR AIR QUALITY: A CRITICAL REVIEW

    EPA Science Inventory

    The paper discusses results of an evaluation of literature on heating, ventilating, and air-conditioning (HVAC) systems as contaminant emission sources that affect indoor air quality (IAQ). The various literature sources and methods for characterizing HVAC emission sources are re...

  12. The SEASAT-A synthetic aperture radar design and implementation

    NASA Technical Reports Server (NTRS)

    Jordan, R. L.

    1978-01-01

    The SEASAT-A synthetic aperture radar is the first imaging radar system intended for use as a scientific instrument in orbit. The requirement of the radar system is to generate continuous radar imagery over a 100 kilometer swath at 25 meter resolution from an orbital altitude of 800 kilometers. These requirements impose unique system design problems, and a description of the implementation is given. The end-to-end system is described, including interactions of the spacecraft, antenna, sensor, telemetry link, recording subsystem, and data processor. Some of the factors leading to the selection of critical system parameters are listed. The expected error sources leading to degradation of image quality are reported, along with an estimate of the expected performance based on data obtained during ground testing of the completed subsystems.

  13. [Analysis of body mass index in different sector workers for over ten years].

    PubMed

    Perbellini, L; Zonzin, C; Baldo, M

    2010-01-01

    A critical review of the literature on obesity and overweight underlines that a low educational level, a low socio-economic status, certain working conditions, the lack of physical activity in leisure time, together with the availability of food, are the main factors favouring the increased prevalence of obesity. Certain jobs also contribute significantly to this problem. Automation, the use of machines for heavy work, and sedentary activities favour body weight increase. Jobs that are a source of stress, such as night-shift work, can cause metabolic disorders leading to an increased prevalence of obesity. The main aim of this article is to study the trend of body weight in different working sectors over ten years, comparing this parameter with factors such as job, blood pressure, smoking, alcohol and health disorders.

  14. Improving volcanic sulfur dioxide cloud dispersal forecasts by progressive assimilation of satellite observations

    NASA Astrophysics Data System (ADS)

    Boichu, Marie; Clarisse, Lieven; Khvorostyanov, Dmitry; Clerbaux, Cathy

    2014-04-01

    Forecasting the dispersal of volcanic clouds during an eruption is of primary importance, especially for ensuring aviation safety. As volcanic emissions are characterized by rapid variations of emission rate and height, the (generally) high level of uncertainty in the emission parameters represents a critical issue that limits the robustness of volcanic cloud dispersal forecasts. An inverse modeling scheme, combining satellite observations of the volcanic cloud with a regional chemistry-transport model, allows reconstructing this source term at high temporal resolution. We demonstrate here how a progressive assimilation of freshly acquired satellite observations, via such an inverse modeling procedure, allows for delivering robust sulfur dioxide (SO2) cloud dispersal forecasts during the eruption. This approach provides a computationally cheap estimate of the expected location and mass loading of volcanic clouds, including the identification of SO2-rich parts.
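    Because the modelled SO2 columns depend linearly on the emission source term, the inversion can be caricatured as a non-negative least-squares problem over precomputed unit-emission responses. A toy analogue (the matrix G and its construction are assumptions for illustration, not the actual chemistry-transport setup):

```python
import numpy as np
from scipy.optimize import nnls

def invert_source(G, y):
    """Recover a non-negative emission history x from observations y ~ G x,
    where column j of G holds the modelled response (e.g. SO2 columns at the
    observation points and times) to a unit emission in time/height bin j.
    Toy analogue of the satellite-constrained source-term inversion."""
    x, residual_norm = nnls(G, y)   # non-negative least squares
    return x, residual_norm
```

In an operational setting each freshly acquired satellite overpass appends rows to G and y, so the reconstructed source term (and hence the dispersal forecast) is progressively refined during the eruption.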

  15. Missing link in the service profit chain: a meta-analytic review of the antecedents, consequences, and moderators of service climate.

    PubMed

    Hong, Ying; Liao, Hui; Hu, Jia; Jiang, Kaifeng

    2013-03-01

    Service climate captures employees' consensual perceptions of organizations' emphasis on service quality. Although many studies have examined the foundation issues and outcomes of service climate, there is a lack of a comprehensive model explicating the antecedents, outcomes, and moderators of service climate. The current study fills this void in the literature. By conducting a meta-analysis of 58 independent samples (N = 9,363), we found support for service climate as a critical linkage between internal and external service parameters. In addition, we found differential effects of service-oriented versus general human resource practices and leadership on service climate, as well as disparate impacts of service climate contingent on types of service, measures of service climate, and sources of rating. Research and practical implications are discussed.

  16. Tokamak power reactor ignition and time dependent fractional power operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vold, E.L.; Mau, T.K.; Conn, R.W.

    1986-06-01

    A flexible, time-dependent, zero-dimensional plasma burn code with radial profiles was developed and employed to study fractional power operation and thermal burn control options for an INTOR-sized tokamak reactor. The code includes alpha thermalization and a time-dependent transport loss that can be represented by any one of several currently popular scaling laws for energy confinement time. Ignition parameters were found to vary widely in density-temperature (n-T) space for the range of scaling laws examined. Critical ignition issues were found to include the extent of confinement time degradation by alpha heating, the ratio of ion to electron transport power loss, and the effect of auxiliary heating on confinement. Feedback control of the auxiliary power and ion fuel sources is shown to provide thermal stability near the ignition curve.
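    The flavor of such a zero-dimensional burn calculation with feedback-controlled auxiliary heating can be conveyed by a toy model: plasma thermal balance is integrated in time with a crude alpha-heating term, a fixed confinement loss, and proportional feedback on the auxiliary power. All coefficients below are arbitrary illustrative numbers, not INTOR parameters or a real confinement scaling law.

```python
# Toy zero-dimensional burn model with proportional feedback on auxiliary
# heating; units and coefficients are illustrative only.
def simulate(t_end=5.0, dt=1e-3, T_set=10.0):
    T = 5.0                                   # plasma temperature (toy units)
    for _ in range(int(t_end / dt)):
        p_alpha = 0.05 * T**2                 # crude alpha-heating proxy
        p_loss = T / 1.0                      # transport loss, tau_E = 1 (toy)
        p_aux = max(0.0, 2.0 * (T_set - T))   # feedback-controlled aux power
        T += dt * (p_alpha + p_aux - p_loss)  # explicit Euler step
    return T

print(round(simulate(), 2))
```

    The feedback term damps excursions so the temperature settles at a stable operating point below the setpoint, mimicking how auxiliary power can hold the burn near a desired state.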

  17. NaCl nucleation from brine in seeded simulations: Sources of uncertainty in rate estimates.

    PubMed

    Zimmermann, Nils E R; Vorselaars, Bart; Espinosa, Jorge R; Quigley, David; Smith, William R; Sanz, Eduardo; Vega, Carlos; Peters, Baron

    2018-06-14

    This work reexamines seeded simulation results for NaCl nucleation from a supersaturated aqueous solution at 298.15 K and 1 bar pressure. We present a linear regression approach for analyzing seeded simulation data that provides both nucleation rates and uncertainty estimates. Our results show that rates obtained from seeded simulations rely critically on a precise driving force for the model system. The driving force vs. solute concentration curve need not exactly reproduce that of the real system, but it should accurately describe the thermodynamic properties of the model system. We also show that rate estimates depend strongly on the nucleus size metric: estimates systematically increase as more stringent local order parameters are used to count members of a cluster. Finally, we provide tentative suggestions for appropriate clustering criteria.
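    One way a regression analysis of seeded data can work (a generic classical-nucleation-theory sketch, not necessarily the authors' exact scheme) is via the CNT relation n* = B/|Δμ|³, so that regressing measured critical sizes against |Δμ|⁻³ yields B, and the barrier follows as ΔG* = n*|Δμ|/2. The state points below are synthetic.

```python
import numpy as np

# CNT-style regression on synthetic seeded-simulation data: critical
# cluster sizes n_star measured at several driving forces dmu (in kT).
dmu = np.array([0.30, 0.35, 0.40, 0.45])         # driving force per ion (made up)
n_star = np.array([370.0, 233.0, 156.0, 110.0])  # critical sizes (made up)

x = dmu ** -3
B, intercept = np.polyfit(x, n_star, 1)  # slope = B; intercept as a diagnostic
barrier = n_star * dmu / 2.0             # CNT barrier dG* in kT per state point
print(round(B, 1), np.round(barrier, 1))
```

    A near-zero intercept is a consistency check on the CNT form; scatter around the fit feeds directly into the rate uncertainty, since the rate depends exponentially on the barrier.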

  18. Biotic responses buffer warming-induced soil organic carbon loss in Arctic tundra.

    PubMed

    Liang, Junyi; Xia, Jiangyang; Shi, Zheng; Jiang, Lifen; Ma, Shuang; Lu, Xingjie; Mauritz, Marguerite; Natali, Susan M; Pegoraro, Elaine; Penton, C Ryan; Plaza, César; Salmon, Verity G; Celis, Gerardo; Cole, James R; Konstantinidis, Konstantinos T; Tiedje, James M; Zhou, Jizhong; Schuur, Edward A G; Luo, Yiqi

    2018-05-26

    Climate warming can result in both abiotic (e.g., permafrost thaw) and biotic (e.g., microbial functional gene) changes in Arctic tundra. Recent research has incorporated dynamic permafrost thaw in Earth system models (ESMs) and indicates that Arctic tundra could be a significant future carbon (C) source due to the enhanced decomposition of thawed deep soil C. However, warming-induced biotic changes may influence biologically related parameters and the consequent projections in ESMs. How model parameters associated with biotic responses will change under warming, and to what extent these changes affect projected C budgets, have not been carefully examined. In this study, we synthesized six data sets over five years from a soil warming experiment at Eight Mile Lake, Alaska, into the Terrestrial ECOsystem (TECO) model with a probabilistic inversion approach. The TECO model used multiple soil layers to track the dynamics of thawed soil under different treatments. Our results show that warming increased the light use efficiency of vegetation photosynthesis but decreased the baseline (i.e., environment-corrected) turnover rates of soil organic carbon (SOC) in both the fast and slow pools relative to the control. Moreover, the parameter changes generally amplified over time, suggesting gradual physiological acclimation and functional gene shifts of both plants and microbes. The TECO model predicted that field warming from 2009 to 2013 resulted in cumulative C losses of 224 g m-2 without, or 87 g m-2 with, changes in those parameters. Thus, warming-induced parameter changes reduced the predicted soil C loss by 61%. Our study suggests that it is critical to incorporate biotic changes in ESMs to improve model performance in predicting C dynamics in permafrost regions. This article is protected by copyright. All rights reserved.
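    The probabilistic inversion step can be sketched with a toy Metropolis sampler calibrating a single turnover rate k in a one-pool exponential-decay model. The actual TECO inversion involves many parameters and multiple soil layers; the model, data, and priors below are synthetic stand-ins.

```python
import numpy as np

# Minimal Metropolis sketch of probabilistic parameter inversion:
# calibrate a turnover rate k in C(t) = C0 * exp(-k t) against noisy
# synthetic observations. All numbers are illustrative.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 11)
k_true, c0, sigma = 0.3, 100.0, 2.0
obs = c0 * np.exp(-k_true * t) + rng.normal(0.0, sigma, t.size)

def log_like(k):
    resid = obs - c0 * np.exp(-k * t)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

k, chain = 0.5, []                       # start away from the truth
for _ in range(20000):
    prop = k + rng.normal(0.0, 0.02)     # random-walk proposal
    if prop > 0 and np.log(rng.random()) < log_like(prop) - log_like(k):
        k = prop                         # accept
    chain.append(k)
post = np.array(chain[5000:])            # discard burn-in
print(round(post.mean(), 3))
```

    The posterior mean recovers the true rate, and the spread of the chain quantifies parameter uncertainty, which is the ingredient that lets the paper put error bars on projected C loss.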

  19. Historical emissions critical for mapping decarbonization pathways

    NASA Astrophysics Data System (ADS)

    Majkut, J.; Kopp, R. E.; Sarmiento, J. L.; Oppenheimer, M.

    2016-12-01

    Policymakers have set a goal of limiting temperature increase from human influence on the climate. This motivates the identification of decarbonization pathways to stabilize atmospheric concentrations of CO2. In this context, the future behavior of CO2 sources and sinks defines the CO2 emissions necessary to meet warming thresholds with specified probabilities. We adopt a simple model of the atmosphere-land-ocean carbon balance to reflect uncertainty in how natural CO2 sinks will respond to increasing atmospheric CO2 and temperature. Bayesian inversion is used to estimate the probability distributions of selected parameters of the carbon model. Prior probability distributions are chosen to reflect the behavior of CMIP5 models. We then update these prior distributions by running historical simulations of the global carbon cycle and inverting with observationally based inventories and fluxes of anthropogenic carbon in the ocean and atmosphere. The result is a best estimate of historical CO2 sources and sinks and a model, with uncertainty, of how CO2 sources and sinks will vary in the future under various emissions scenarios. By linking the carbon model to a simple climate model, we calculate emissions pathways and carbon budgets consistent with meeting specific temperature thresholds and identify key factors that contribute to remaining uncertainty. In particular, we show how the assumed history of CO2 emissions from land use change (LUC) critically impacts estimates of the strength of the land CO2 sink via CO2 fertilization. Different estimates of historical LUC emissions taken from the literature lead to significantly different parameterizations of the carbon system. High historical CO2 emissions from LUC lead to a more robust CO2 fertilization effect, significantly lower future atmospheric CO2 concentrations, and an increased amount of CO2 that can be emitted while satisfying temperature stabilization targets. Thus, in our model, historical LUC emissions have a significant impact on allowable carbon budgets under temperature targets.
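    As a back-of-envelope illustration of how a carbon budget follows from a temperature target (a drastically simplified stand-in for the paper's coupled carbon-climate calculation), one can assume a linear transient response to cumulative emissions. The TCRE value, warming to date, and target below are round illustrative numbers, not estimates from this model.

```python
# Toy remaining-carbon-budget arithmetic under a linear transient
# climate response to cumulative emissions (TCRE); values illustrative.
tcre = 1.65e-3         # degC of warming per GtCO2 emitted (assumed)
warming_to_date = 1.1  # degC of warming already realized (assumed)
target = 2.0           # degC temperature target (assumed)

budget = (target - warming_to_date) / tcre
print(round(budget), "GtCO2 remaining")
```

    In the paper's framework, uncertainty in the carbon-cycle parameters (and in historical LUC emissions) translates into uncertainty in this budget, which is why the historical emissions assumption matters so much.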

  20. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks.

    PubMed

    Ma, Junjie; Meng, Fansheng; Zhou, Yuexi; Wang, Yeyao; Shi, Ping

    2018-02-16

    Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, pollutant spreading conditions are complicated and pollutant concentrations vary over a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another finds the global solution of the pollution source position, regarding mobile nodes as particles. In addition, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths.
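    The global search stage of such swarm-based localization can be sketched with a textbook PSO on a toy concentration field: mobile nodes act as particles, and the plume maximum marks the source. The Gaussian plume, swarm coefficients, and source location below are all invented for illustration, not the Dual-PSO implementation.

```python
import numpy as np

# Textbook particle swarm optimization seeking the peak of a toy
# Gaussian concentration plume; all parameters are illustrative.
rng = np.random.default_rng(2)
source = np.array([3.0, -1.5])           # hypothetical pollution source

def concentration(p):                    # toy plume, peaks at the source
    return np.exp(-np.sum((p - source) ** 2, axis=-1))

pos = rng.uniform(-10, 10, (20, 2))      # 20 "mobile nodes" as particles
vel = np.zeros_like(pos)
pbest, pval = pos.copy(), concentration(pos)
gbest = pbest[np.argmax(pval)].copy()
for _ in range(200):
    r1, r2 = rng.random((2, 20, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    val = concentration(pos)
    better = val > pval
    pbest[better], pval[better] = pos[better], val[better]
    gbest = pbest[np.argmax(pval)].copy()
print(np.round(gbest, 2))
```

    The swarm contracts around the best-known reading and refines it as particles sample closer to the plume peak; the paper's second PSO layer, which retrieves water quality parameters from UV-visible spectra on each node, is omitted here.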
