Sample records for relevant source parameters

  1. Audio visual speech source separation via improved context dependent association model

    NASA Astrophysics Data System (ADS)

    Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz

    2014-12-01

    In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on the mean square error (MSE) between estimated and target visual parameters. This function is minimized to estimate the de-mixing vector/filters that separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We also propose a hybrid criterion that combines AV coherency with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and the output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to an existing GMM-based model, and the proposed AVSS algorithm improves speech separation quality compared to reference ICA- and AVSS-based methods.
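The hybrid criterion named in this abstract can be illustrated with a minimal sketch (not the authors' implementation): excess kurtosis measures the non-Gaussianity of a candidate separated source, and a weighted sum combines it with an audio-visual coherency score. The weight `alpha` and the `av_coherency` callback are illustrative assumptions.

```python
import numpy as np

def excess_kurtosis(s):
    """Excess kurtosis of a signal: a standard non-Gaussianity measure
    (zero for a Gaussian, positive for super-Gaussian speech-like signals)."""
    s = s - s.mean()
    return np.mean(s**4) / (np.mean(s**2) ** 2) - 3.0

def hybrid_criterion(w, X, av_coherency, alpha=0.5):
    """Hybrid score for a candidate de-mixing vector w applied to the
    mixtures X (channels x samples): weighted sum of non-Gaussianity
    and an audio-visual coherency score supplied by the caller."""
    s = w @ X  # candidate separated source
    return alpha * abs(excess_kurtosis(s)) + (1.0 - alpha) * av_coherency(s)
```

A demixing vector would then be chosen by maximizing this score over candidate `w`, which is the role the AV coherency/kurtosis criterion plays in the separation stage.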

  2. Free Electron coherent sources: From microwave to X-rays

    NASA Astrophysics Data System (ADS)

    Dattoli, Giuseppe; Di Palma, Emanuele; Pagnutti, Simonetta; Sabia, Elio

    2018-04-01

    The term Free Electron Laser (FEL) is used in this paper to indicate a wide collection of devices aimed at providing coherent electromagnetic radiation from a beam of "free" electrons, i.e., electrons not bound in atomic or molecular states. This article reviews the similarities that link different sources of coherent radiation across the electromagnetic spectrum, from microwaves to X-rays, and draws analogies with conventional laser sources. We develop a point of view that allows a unified analytical treatment of these devices through the introduction of appropriate global variables (e.g., gain, saturation intensity, inhomogeneous broadening parameters, longitudinal mode coupling strength), yielding a very effective way to determine the relevant design parameters. The paper also looks at more speculative aspects of FEL physics, which may address the relevance of quantum effects in the lasing process.

  3. The HelCat dual-source plasma device.

    PubMed

    Lynn, Alan G; Gilmore, Mark; Watts, Christopher; Herrea, Janis; Kelly, Ralph; Will, Steve; Xie, Shuangwei; Yan, Lincan; Zhang, Yue

    2009-10-01

    The HelCat (Helicon-Cathode) device has been constructed to support a broad range of basic plasma science experiments relevant to the areas of solar physics, laboratory astrophysics, plasma nonlinear dynamics, and turbulence. These research topics require a relatively large plasma source capable of operating over a broad region of parameter space with a plasma duration up to at least several milliseconds. To achieve these parameters a novel dual-source system was developed utilizing both helicon and thermionic cathode sources. Plasma parameters of n_e ≈ 0.5-50 × 10^18 m^-3 and T_e ≈ 3-12 eV allow access to a wide range of collisionalities important to the research. The HelCat device and initial characterization of plasma behavior during dual-source operation are described.

  4. Theoretical and experimental studies in ultraviolet solar physics

    NASA Technical Reports Server (NTRS)

    Parkinson, W. H.; Reeves, E. M.

    1975-01-01

    The processes and parameters in atomic and molecular physics that are relevant to solar physics are investigated. The areas covered include: (1) measurement of atomic and molecular parameters that contribute to discrete and continuous sources of opacity and abundance determinations in the sun; (2) line broadening and scattering phenomena; and (3) development of an ion beam spectroscopic source which is used for the measurement of electron excitation cross sections of transition region and coronal ions.

  5. Finding Relevant Parameters for the Thin-film Photovoltaic Cells Production Process with the Application of Data Mining Methods.

    PubMed

    Ulaczyk, Jan; Morawiec, Krzysztof; Zabierowski, Paweł; Drobiazg, Tomasz; Barreau, Nicolas

    2017-09-01

    A data mining approach is proposed as a useful tool for analyzing the control parameters of the 3-stage CIGSe photovoltaic cell production process, in order to find the variables that are most relevant for cell electric parameters and efficiency. The analysed data set consists of stage duration times, heater power values, and temperatures for the element sources and the substrate - 14 variables per sample in total. The most relevant variables of the process were found using random forest analysis with the Boruta algorithm. 118 CIGSe samples, prepared at Institut des Matériaux Jean Rouxel, were analysed. The results agree well with experimental knowledge of the CIGSe cell production process and provide new evidence to guide the production parameters of new cells and further research. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
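The Boruta idea referenced in this abstract can be sketched in a single pass: real features compete against "shadow" copies whose rows are independently permuted (destroying any link to the target), and a feature counts as relevant only if its random-forest importance beats the best shadow importance. The full algorithm iterates this with statistical tests; everything below is an illustrative simplification, not the paper's code.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def shadow_feature_relevance(X, y, n_estimators=200, random_state=0):
    """Boruta-style relevance screen: returns a boolean mask marking the
    features whose random-forest importance exceeds the maximum importance
    achieved by any shadow (independently permuted) feature copy."""
    rng = np.random.default_rng(random_state)
    shadows = rng.permuted(X, axis=0)  # shuffle each column independently
    rf = RandomForestRegressor(n_estimators=n_estimators,
                               random_state=random_state)
    rf.fit(np.hstack([X, shadows]), y)
    n = X.shape[1]
    real_imp = rf.feature_importances_[:n]
    shadow_imp = rf.feature_importances_[n:]
    return real_imp > shadow_imp.max()
```

Applied to the 14 process variables per sample, such a mask would flag the parameters most relevant to cell efficiency, which is the role Boruta plays in the study.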

  6. Sensitivity of predicted bioaerosol exposure from open windrow composting facilities to ADMS dispersion model parameters.

    PubMed

    Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H

    2016-12-15

    Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood, and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions. Copyright © 2016. Published by Elsevier Ltd.

  7. Methodical principles of recognition different source types in an acoustic-emission testing of metal objects

    NASA Astrophysics Data System (ADS)

    Bobrov, A. L.

    2017-08-01

    This paper presents issues in the identification of various AE sources in order to increase the information value of the acoustic-emission (AE) method. This task is especially relevant for complex objects, where factors affecting the acoustic path significantly influence the parameters of the signals recorded by the sensor. A correlation criterion sensitive to the type of AE source in metal objects is determined in the article.

  8. Estimating the Relevance of World Disturbances to Explain Savings, Interference and Long-Term Motor Adaptation Effects

    PubMed Central

    Berniker, Max; Kording, Konrad P.

    2011-01-01

    Recent studies suggest that motor adaptation is the result of multiple, perhaps linear processes each with distinct time scales. While these models are consistent with some motor phenomena, they can neither explain the relatively fast re-adaptation after a long washout period, nor savings on a subsequent day. Here we examined if these effects can be explained if we assume that the CNS stores and retrieves movement parameters based on their possible relevance. We formalize this idea with a model that infers not only the sources of potential motor errors, but also their relevance to the current motor circumstances. In our model adaptation is the process of re-estimating parameters that represent the body and the world. The likelihood of a world parameter being relevant is then based on the mismatch between an observed movement and that predicted when not compensating for the estimated world disturbance. As such, adapting to large motor errors in a laboratory setting should alert subjects that disturbances are being imposed on them, even after motor performance has returned to baseline. Estimates of this external disturbance should be relevant both now and in future laboratory settings. Estimated properties of our bodies on the other hand should always be relevant. Our model demonstrates savings, interference, spontaneous rebound and differences between adaptation to sudden and gradual disturbances. We suggest that many issues concerning savings and interference can be understood when adaptation is conditioned on the relevance of parameters. PMID:21998574

  9. Updating national standards for drinking-water: a Philippine experience.

    PubMed

    Lomboy, M; Riego de Dios, J; Magtibay, B; Quizon, R; Molina, V; Fadrilan-Camacho, V; See, J; Enoveso, A; Barbosa, L; Agravante, A

    2017-04-01

    The latest version of the Philippine National Standards for Drinking-Water (PNSDW) was issued in 2007 by the Department of Health (DOH). Due to several issues and concerns, the DOH decided to make an update which is relevant and necessary to meet the needs of the stakeholders. As an output, the water quality parameters are now categorized into mandatory, primary, and secondary. The ten mandatory parameters are core parameters which all water service providers nationwide are obligated to test. These include thermotolerant coliforms or Escherichia coli, arsenic, cadmium, lead, nitrate, color, turbidity, pH, total dissolved solids, and disinfectant residual. The 55 primary parameters are site-specific and can be adopted as enforceable parameters when developing new water sources or when the existing source is at high risk of contamination. The 11 secondary parameters include operational parameters and those that affect the esthetic quality of drinking-water. In addition, the updated PNSDW include new sections: (1) reporting and interpretation of results and corrective actions; (2) emergency drinking-water parameters; (3) proposed Sustainable Development Goal parameters; and (4) standards for other drinking-water sources. The lessons learned and insights gained from the updating of standards are likewise incorporated in this paper.

  10. Systematic Desensitization in Group Counseling Settings: An Overview

    ERIC Educational Resources Information Center

    Mayton, Daniel M., II; Atkinson, Donald R.

    1974-01-01

    This article summarizes research on systematic desensitization applied in group settings. Guidelines for the use of group desensitization by college counselors are developed, and relevant sources regarding procedural variations are cited. The need for research on group desensitization parameters is discussed. (Author)

  11. Getting Astrophysical Information from LISA Data

    NASA Technical Reports Server (NTRS)

    Stebbins, R. T.; Bender, P. L.; Folkner, W. M.

    1997-01-01

    Gravitational wave signals from a large number of astrophysical sources will be present in the LISA data. Information about as many sources as possible must be estimated from time series of strain measurements. Several types of signals are expected to be present: simple periodic signals from relatively stable binary systems, chirped signals from coalescing binary systems, complex waveforms from highly relativistic binary systems, stochastic backgrounds from galactic and extragalactic binary systems, and possibly stochastic backgrounds from the early Universe. The orbital motion of the LISA antenna will modulate the phase and amplitude of all these signals, except the isotropic backgrounds, and thereby give information on the directions of sources. Here we describe a candidate process for disentangling the gravitational wave signals and estimating the relevant astrophysical parameters from one year of LISA data. Nearly all of the sources will be identified by searching with templates based on source parameters and directions.
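The template search mentioned in this abstract can be illustrated by a toy matched-filter sketch, far simpler than an actual LISA pipeline: each candidate template is correlated with the data after normalization, and the best-scoring template identifies the candidate source parameters. The function name and scoring are illustrative assumptions.

```python
import numpy as np

def best_template(data, templates):
    """Pick the template with the highest normalized correlation with the
    data: a toy stand-in for a matched-filter search over a bank of
    templates parameterized by source parameters and sky directions."""
    def ncorr(a, b):
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return float(np.mean(a * b))
    scores = [ncorr(data, t) for t in templates]
    return int(np.argmax(scores)), scores
```

In a real search the template bank would cover chirp masses, phases, and directions, and the statistic would be an overlap weighted by the noise spectrum, but the select-the-best-match logic is the same.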

  12. Secondary School Teachers' Knowledge and Attitudes towards Renewable Energy Sources

    ERIC Educational Resources Information Center

    Liarakou, Georgia; Gavrilakis, Costas; Flouri, Eleni

    2009-01-01

    Investigating knowledge, perceptions as well as attitudes of the public that concern various aspects of environmental issues is of high importance for Environmental Education. An integrated understanding of these parameters can properly support the planning of Environmental Education curriculum and relevant educational materials. In this survey we…

  13. Secondary School Teachers' Knowledge and Attitudes Towards Renewable Energy Sources

    NASA Astrophysics Data System (ADS)

    Liarakou, Georgia; Gavrilakis, Costas; Flouri, Eleni

    2009-04-01

    Investigating the knowledge, perceptions, and attitudes of the public concerning various aspects of environmental issues is of high importance for Environmental Education. An integrated understanding of these parameters can properly support the planning of Environmental Education curricula and relevant educational materials. In this survey we investigated the knowledge and attitudes of secondary school teachers in Greece towards renewable energy sources, particularly wind and solar energy systems. A questionnaire with both open and closed questions was used as the main methodological instrument. Findings revealed that although teachers were informed about renewable energy sources and well disposed towards them, they hardly expressed clear positions regarding several issues about wind and solar energy technologies and farms. Moreover, such themes are integrated into teaching only to a limited extent, either as extracurricular educational programs or through the curriculum. These findings cannot confirm that teachers could influence students' opinions towards renewable energy systems. Thus, authorities should invest more in Environmental Education and relevant teacher education.

  14. Hydrological parameter estimations from a conservative tracer test with variable-density effects at the Boise Hydrogeophysical Research Site

    NASA Astrophysics Data System (ADS)

    Dafflon, B.; Barrash, W.; Cardiff, M.; Johnson, T. C.

    2011-12-01

    Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.

  15. Radiation effects on the flow of Powell-Eyring fluid past an unsteady inclined stretching sheet with non-uniform heat source/sink.

    PubMed

    Hayat, Tasawar; Asad, Sadia; Mustafa, Meraj; Alsaedi, Ahmed

    2014-01-01

    This study investigates the unsteady flow of Powell-Eyring fluid past an inclined stretching sheet. Unsteadiness in the flow is due to the time-dependence of the stretching velocity and wall temperature. Mathematical analysis is performed in the presence of thermal radiation and non-uniform heat source/sink. The relevant boundary layer equations are reduced into self-similar forms by suitable transformations. The analytic solutions are constructed in a series form by homotopy analysis method (HAM). The convergence interval of the auxiliary parameter is obtained. Graphical results displaying the influence of interesting parameters are given. Numerical values of skin friction coefficient and local Nusselt number are computed and analyzed.

  16. Organic contaminants in onsite wastewater treatment systems

    USGS Publications Warehouse

    Conn, K.E.; Siegrist, R.L.; Barber, L.B.; Brown, G.K.

    2007-01-01

    Wastewater from thirty onsite wastewater treatment systems was sampled during a reconnaissance field study to quantify bulk parameters and the occurrence of organic wastewater contaminants, including endocrine disrupting compounds, in treatment systems representing a variety of wastewater sources and treatment processes and their receiving environments. Bulk parameters ranged in concentrations representative of the wide variety of wastewater sources (residential vs. non-residential). Organic contaminants such as sterols, surfactant metabolites, antimicrobial agents, stimulants, metal-chelating agents, and other consumer product chemicals, measured by gas chromatography/mass spectrometry, were detected frequently in onsite system wastewater. Wastewater composition differed between source types, likely due to differences in source water and chemical usage. Removal efficiencies varied by engineered treatment type and the physicochemical properties of the contaminant, resulting in discharge to the soil treatment unit at ecotoxicologically relevant concentrations. Organic wastewater contaminants were detected less frequently and at lower concentrations in onsite system receiving environments. Understanding the occurrence and fate of organic wastewater contaminants in onsite wastewater treatment systems will aid in minimizing risk to ecological and human health.

  17. Phase I Contaminant Transport Parameters for the Groundwater Flow and Contaminant Transport Model of Corrective Action Unit 97: Yucca Flat/Climax Mine, Nevada Test Site, Nye County, Nevada, Revision 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John McCord

    2007-09-01

    This report documents transport data and data analyses for Yucca Flat/Climax Mine CAU 97. The purpose of the data compilation and related analyses is to provide the primary reference to support parameterization of the Yucca Flat/Climax Mine CAU transport model. Specific task objectives were as follows: • Identify and compile currently available transport parameter data and supporting information that may be relevant to the Yucca Flat/Climax Mine CAU. • Assess the level of quality of the data and associated documentation. • Analyze the data to derive expected values and estimates of the associated uncertainty and variability. The scope of this document includes the compilation and assessment of data and information relevant to transport parameters for the Yucca Flat/Climax Mine CAU subsurface within the context of unclassified source-term contamination. Data types of interest include mineralogy, aqueous chemistry, matrix and effective porosity, dispersivity, matrix diffusion, matrix and fracture sorption, and colloid-facilitated transport parameters.

  18. Scaling of hydrologic and erosion parameters derived from rainfall simulation

    NASA Astrophysics Data System (ADS)

    Sheridan, Gary; Lane, Patrick; Noske, Philip; Sherwin, Christopher

    2010-05-01

    Rainfall simulation experiments conducted at the temporal scale of minutes and the spatial scale of meters are often used to derive parameters for erosion and water quality models that operate at much larger temporal and spatial scales. While such parameterization is convenient, there has been little effort to validate this approach via nested experiments across these scales. In this paper we first review the literature relevant to some of these long acknowledged issues. We then present rainfall simulation and erosion plot data from a range of sources, including mining, roading, and forestry, to explore the issues associated with the scaling of parameters such as infiltration properties and erodibility coefficients.

  19. Dependence of the source performance on plasma parameters at the BATMAN test facility

    NASA Astrophysics Data System (ADS)

    Wimmer, C.; Fantz, U.

    2015-04-01

    The investigation of the dependence of the source performance (high j_H-, low j_e) for optimum Cs conditions on the plasma parameters at the BATMAN (Bavarian Test MAchine for Negative hydrogen ions) test facility is desirable in order to find key parameters for the operation of the source as well as to deepen the physical understanding. The most relevant source physics takes place in the extended boundary layer, the plasma layer with a thickness of several cm in front of the plasma grid: the production of H-, its transport through the plasma, and its extraction, inevitably accompanied by the co-extraction of electrons. Hence, a link between the source performance and the plasma parameters in the extended boundary layer is expected. To characterize the electron and negative hydrogen ion fluxes in the extended boundary layer, Cavity Ring-Down Spectroscopy and Langmuir probes have been applied for the measurement of the H- density and the determination of the plasma density, the plasma potential, and the electron temperature, respectively. The plasma potential is of particular importance as it determines the sheath potential profile at the plasma grid: depending on the plasma grid bias relative to the plasma potential, a transition in the plasma sheath from an electron-repelling to an electron-attracting sheath takes place, strongly influencing the electron fraction of the bias current and thus the amount of co-extracted electrons. Dependencies of the source performance on the determined plasma parameters are presented for two source pressures (0.6 Pa and 0.45 Pa) in hydrogen operation. The higher source pressure of 0.6 Pa is a standard point of operation at BATMAN with external magnets, whereas the lower pressure of 0.45 Pa is closer to the ITER requirements (p ≤ 0.3 Pa).

  20. Binaural room simulation

    NASA Technical Reports Server (NTRS)

    Lehnert, H.; Blauert, Jens; Pompetzki, W.

    1991-01-01

    In everyday listening, the auditory event perceived by a listener is determined not only by the sound signal that a sound source emits but also by a variety of environmental parameters. These parameters are the position, orientation, and directional characteristics of the sound source; the listener's position and orientation; the geometrical and acoustical properties of surfaces which affect the sound field; and the sound propagation properties of the surrounding fluid. A complete set of these parameters can be called an Acoustic Environment. If the auditory event perceived by a listener is manipulated in such a way that the listener is shifted acoustically into a different acoustic environment without moving physically, a Virtual Acoustic Environment has been created. Here, we deal with a special technique to set up nearly arbitrary Virtual Acoustic Environments, the Binaural Room Simulation. The purpose of the Binaural Room Simulation is to compute the binaural impulse response related to a virtual acoustic environment, taking into account all parameters mentioned above. One possible way to describe a Virtual Acoustic Environment is the concept of virtual sound sources. Each of the virtual sources emits a certain signal which is correlated, but not necessarily identical, with the signal emitted by the direct sound source. If source and receiver are stationary, the acoustic environment becomes a linear time-invariant system. Then, the Binaural Impulse Response from the source to a listener's eardrums contains all relevant auditory information related to the Virtual Acoustic Environment. Listening into the simulated environment can easily be achieved by convolving the Binaural Impulse Response with dry signals and presenting the results via headphones.
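The final step described in this abstract, convolving the binaural impulse response with a dry (anechoic) signal, can be sketched in a few lines. This is illustrative only; a real binaural renderer also handles block convolution, interpolation for moving sources, and headphone equalization.

```python
import numpy as np
from scipy.signal import fftconvolve

def binauralize(dry, h_left, h_right):
    """Render a dry mono signal into a virtual acoustic environment by
    convolving it with the binaural room impulse response for each ear.
    Returns a (n_samples, 2) stereo array suitable for headphone playback."""
    left = fftconvolve(dry, h_left)    # left-ear channel
    right = fftconvolve(dry, h_right)  # right-ear channel
    return np.stack([left, right], axis=1)
```

Because the source and receiver are assumed stationary, the environment is a linear time-invariant system and this single convolution per ear captures it completely.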

  1. Traceable measurements of the electrical parameters of solid-state lighting products

    NASA Astrophysics Data System (ADS)

    Zhao, D.; Rietveld, G.; Braun, J.-P.; Overney, F.; Lippert, T.; Christensen, A.

    2016-12-01

    In order to perform traceable measurements of the electrical parameters of solid-state lighting (SSL) products, it is necessary to define the measurement procedures in a technically adequate way and to identify the relevant uncertainty sources. The presently published written standard for SSL products specifies test conditions, but it lacks an explanation of how adequate these test conditions are. More specifically, both an identification of uncertainty sources and a quantitative uncertainty analysis are absent. This paper fills the related gap in the present written standard. New uncertainty sources with respect to conventional lighting sources are determined and their effects are quantified. It shows that for power measurements, the main uncertainty sources are temperature deviation, power supply voltage distortion, and instability of the SSL product. For current RMS measurements, the influence of bandwidth, shunt resistor, power supply source impedance, and AC frequency flatness are significant as well. The measurement uncertainty depends not only on the test equipment but is also a function of the characteristics of the device under test (DUT), for example its current harmonics spectrum and input impedance. Therefore, an online calculation tool is provided to help non-electrical experts. Following our procedures, unrealistic uncertainty estimations, unnecessary procedures, and expensive equipment can be avoided.

  2. Modal parameter identification based on combining transmissibility functions and blind source separation techniques

    NASA Astrophysics Data System (ADS)

    Araújo, Iván Gómez; Sánchez, Jesús Antonio García; Andersen, Palle

    2018-05-01

    Transmissibility-based operational modal analysis is a recent and alternative approach used to identify the modal parameters of structures under operational conditions. This approach is advantageous compared with traditional operational modal analysis because it does not make any assumptions about the excitation spectrum (i.e., white noise with a flat spectrum). However, common methodologies do not include a procedure to extract closely spaced modes with low signal-to-noise ratios. This issue is relevant when considering that engineering structures generally have closely spaced modes and that their measured responses present high levels of noise. Therefore, to overcome these problems, a new combined method for modal parameter identification is proposed in this work. The proposed method combines blind source separation (BSS) techniques and transmissibility-based methods. Here, BSS techniques were used to recover source signals, and transmissibility-based methods were applied to estimate modal information from the recovered source signals. To achieve this combination, a new method to define a transmissibility function was proposed. The suggested transmissibility function is based on the relationship between the power spectral density (PSD) of mixed signals and the PSD of signals from a single source. The numerical responses of a truss structure with high levels of added noise and very closely spaced modes were processed using the proposed combined method to evaluate its ability to identify modal parameters in these conditions. Colored and white noise excitations were used for the numerical example. The proposed combined method was also used to evaluate the modal parameters of an experimental test on a structure containing closely spaced modes. The results showed that the proposed combined method is capable of identifying very closely spaced modes in the presence of noise and, thus, may be potentially applied to improve the identification of damping ratios.
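A rough illustration of a PSD-based transmissibility function follows. The paper's actual definition relates the PSDs of mixed signals to the PSDs of signals from a single source; this ratio-of-Welch-PSDs form is an assumption chosen only to show the mechanics.

```python
import numpy as np
from scipy.signal import welch

def psd_transmissibility(x_i, x_j, fs, nperseg=1024):
    """Illustrative transmissibility estimate between two measured
    responses: the ratio of their Welch power spectral densities,
    evaluated on the same frequency grid."""
    f, Pxx_i = welch(x_i, fs=fs, nperseg=nperseg)
    _, Pxx_j = welch(x_j, fs=fs, nperseg=nperseg)
    return f, Pxx_i / Pxx_j
```

In a transmissibility-based identification, poles common to such functions across load cases carry the modal information, which is why no assumption on the excitation spectrum is needed.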

  3. Application of maximum entropy to statistical inference for inversion of data from a single track segment.

    PubMed

    Stotts, Steven A; Koch, Robert A

    2017-08-01

    In this paper an approach is presented to estimate the constraint required to apply maximum entropy (ME) for statistical inference with underwater acoustic data from a single track segment. Previous algorithms for estimating the ME constraint require multiple source track segments to determine the constraint. The approach is relevant for addressing model mismatch effects, i.e., inaccuracies in parameter values determined from inversions because the propagation model does not account for all acoustic processes that contribute to the measured data. One effect of model mismatch is that the lowest cost inversion solution may be well outside a relatively well-known parameter value's uncertainty interval (prior), e.g., source speed from track reconstruction or towed source levels. The approach requires, for some particular parameter value, the ME constraint to produce an inferred uncertainty interval that encompasses the prior. Motivating this approach is the hypothesis that the proposed constraint determination procedure would produce a posterior probability density that accounts for the effect of model mismatch on inferred values of other inversion parameters for which the priors might be quite broad. Applications to both measured and simulated data are presented for model mismatch that produces minimum cost solutions either inside or outside some priors.

  4. Appropriate evidence sources for populating decision analytic models within health technology assessment (HTA): a systematic review of HTA manuals and health economic guidelines.

    PubMed

    Zechmeister-Koss, Ingrid; Schnell-Inderst, Petra; Zauner, Günther

    2014-04-01

    An increasing number of evidence sources are relevant for populating decision analytic models. What is needed is detailed methodological advice on which type of data is to be used for what type of model parameter. We aim to identify standards in health technology assessment manuals and economic (modeling) guidelines on appropriate evidence sources and on the role different types of data play within a model. Documents were identified via a call among members of the International Network of Agencies for Health Technology Assessment and by hand search. We included documents from Europe, the United States, Canada, Australia, and New Zealand as well as transnational guidelines written in English or German. We systematically summarized in a narrative manner information on appropriate evidence sources for model parameters, their advantages and limitations, data identification methods, and data quality issues. A large variety of evidence sources for populating models are mentioned in the 28 documents included. They comprise research- and non-research-based sources. Valid and less appropriate sources are identified for informing different types of model parameters, such as clinical effect size, natural history of disease, resource use, unit costs, and health state utility values. Guidelines do not provide structured and detailed advice on this issue. The article does not include information from guidelines in languages other than English or German, and the information is not tailored to specific modeling techniques. The usability of guidelines and manuals for modeling could be improved by addressing the issue of evidence sources in a more structured and comprehensive format.

  5. Flow Glottogram Characteristics and Perceived Degree of Phonatory Pressedness.

    PubMed

    Millgård, Moa; Fors, Tobias; Sundberg, Johan

    2016-05-01

    Phonatory pressedness is a clinically relevant aspect of voice that is generally assessed by auditory perception. The present investigation aimed at identifying voice source and formant characteristics related to experts' ratings of phonatory pressedness. This was an experimental study of the relations between visual analog scale ratings of phonatory pressedness and voice source parameters in healthy voices. Audio, electroglottogram, and subglottal pressure, estimated from oral pressure during /p/ occlusion, were recorded from five female and six male subjects, each of whom deliberately varied phonation type between neutral, flow, and pressed in the syllable /pae/, produced at three loudness levels and three pitches. Speech-language pathologists rated, along a visual analog scale, the degree of perceived phonatory pressedness in these samples. The samples were analyzed by means of inverse filtering with regard to closed quotient, dominance of the voice source fundamental, normalized amplitude quotient, and peak-to-peak flow amplitude, as well as formant frequencies and the alpha ratio of spectrum energy above and below 1000 Hz. The results were compared with the rating data, which showed that the ratings were closely related to voice source parameters. Approximately 70% of the variance of the ratings could be explained by the voice source parameters. A multiple linear regression analysis suggested that perceived phonatory pressedness is related most closely to subglottal pressure, closed quotient, and the two lowest formants. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
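The reported variance explanation can be illustrated with an ordinary least-squares sketch; the four predictors (standing in for subglottal pressure, closed quotient, F1, F2), their coefficients, and the noise level below are synthetic, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the measured voice source predictors.
n = 99  # e.g. 11 subjects x 3 phonation types x 3 loudness levels (illustrative)
X = rng.normal(size=(n, 4))
ratings = 2.0 * X[:, 0] + 1.5 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(size=n)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, ratings, rcond=None)
pred = A @ coef

# Fraction of rating variance explained by the predictors.
r2 = 1.0 - np.sum((ratings - pred) ** 2) / np.sum((ratings - ratings.mean()) ** 2)
print(f"R^2 = {r2:.2f}")
```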

  6. Predicting Near-Term Water Quality from Satellite Observations of Watershed Conditions

    NASA Astrophysics Data System (ADS)

    Weiss, W. J.; Wang, L.; Hoffman, K.; West, D.; Mehta, A. V.; Lee, C.

    2017-12-01

    Despite the strong influence of watershed conditions on source water quality, most water utilities and water resource agencies do not currently have the capability to monitor watershed sources of contamination with great temporal or spatial detail. Typically, knowledge of source water quality is limited to periodic grab sampling; automated monitoring of a limited number of parameters at a few select locations; and/or monitoring relevant constituents at a treatment plant intake. While important, such observations are not sufficient to inform proactive watershed or source water management at a monthly or seasonal scale. Satellite remote sensing data on the other hand can provide a snapshot of an entire watershed at regular, sub-monthly intervals, helping analysts characterize watershed conditions and identify trends that could signal changes in source water quality. Accordingly, the authors are investigating correlations between satellite remote sensing observations of watersheds and source water quality, at a variety of spatial and temporal scales and lags. While correlations between remote sensing observations and direct in situ measurements of water quality have been well described in the literature, there are few studies that link remote sensing observations across a watershed with near-term predictions of water quality. In this presentation, the authors will describe results of statistical analyses and discuss how these results are being used to inform development of a desktop decision support tool to support predictive application of remote sensing data. Predictor variables under evaluation include parameters that describe vegetative conditions; parameters that describe climate/weather conditions; and non-remote sensing, in situ measurements. Water quality parameters under investigation include nitrogen, phosphorus, organic carbon, chlorophyll-a, and turbidity.
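The lagged-correlation analysis described above can be sketched as follows; the monthly series, the one-month response, and the choice of NDVI and turbidity as example variables are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented monthly series: a vegetation index (e.g. NDVI) and a water
# quality parameter (e.g. turbidity) that responds one month later.
ndvi = rng.normal(size=120)
turbidity = np.roll(ndvi, 1) * 0.8 + rng.normal(scale=0.3, size=120)

def lagged_corr(x, y, lag):
    """Correlation of x[t] with y[t + lag] (lag in months)."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

# Scan a few candidate lags and keep the strongest association.
best = max(range(0, 4), key=lambda k: abs(lagged_corr(ndvi, turbidity, k)))
print("best lag (months):", best)
```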

  7. Simulation of multi-element multispectral UV radiation source for optical-electronic system of minerals luminescence analysis

    NASA Astrophysics Data System (ADS)

    Peretyagin, Vladimir S.; Korolev, Timofey K.; Chertov, Aleksandr N.

    2017-02-01

    The dressability of solid minerals attracts the attention of specialists in regions where the extraction of mineral raw materials is a significant sector of the economy. A considerable number of mineral ore dressability methods exist; at present, radiometric methods are considered the most promising. One such radiometric method is photoluminescence, which is based on analysis of the spectral, amplitude, and kinetic parameters of mineral luminescence under UV radiation, as well as the color parameters of the emitted light. The absence of developed scientific and methodological approaches for analyzing the area irradiated by UV radiation, together with the absence of suitable radiation sources, hinders the development and use of the photoluminescence method. The present work is devoted to the development of a multi-element UV radiation source designed to solve the problem of analyzing and sorting minerals by their selective luminescence. The article presents a method for the theoretical modeling of radiation devices based on UV LEDs. The models consider factors such as the spectral composition and the spatial and energy parameters of the LEDs. The article also presents the results of experimental studies of several mineral samples.

  8. Role of step size and max dwell time in anatomy based inverse optimization for prostate implants

    PubMed Central

    Manikandan, Arjunan; Sarkar, Biplab; Rajendran, Vivek Thirupathur; King, Paul R.; Sresty, N.V. Madhusudhana; Holla, Ragavendra; Kotur, Sachin; Nadendla, Sujatha

    2013-01-01

    In high dose rate (HDR) brachytherapy, the source dwell times and dwell positions are vital parameters in achieving a desirable implant dose distribution. Inverse treatment planning requires an optimal choice of these parameters to achieve the desired target coverage with the lowest achievable dose to the organs at risk (OAR). This study was designed to evaluate the optimum source step size and maximum source dwell time for prostate brachytherapy implants using an Ir-192 source. In total, one hundred inverse treatment plans were generated for the four patients included in this study. Twenty-five treatment plans were created for each patient by varying the step size and maximum source dwell time during anatomy-based, inverse-planned optimization. Other relevant treatment planning parameters were kept constant, including the dose constraints and source dwell positions. Each plan was evaluated for target coverage, urethral and rectal dose sparing, treatment time, relative target dose homogeneity, and nonuniformity ratio. The plans with 0.5 cm step size were seen to have clinically acceptable tumor coverage, minimal normal structure doses, and minimum treatment time as compared with the other step sizes. The target coverage for this step size is 87% of the prescription dose, while the urethral and maximum rectal doses were 107.3 and 68.7%, respectively. No appreciable difference in plan quality was observed with variation in maximum source dwell time. The step size plays a significant role in plan optimization for prostate implants. Our study supports use of a 0.5 cm step size for prostate implants. PMID:24049323

  9. Gravitational wave as probe of superfluid dark matter

    NASA Astrophysics Data System (ADS)

    Cai, Rong-Gen; Liu, Tong-Bo; Wang, Shao-Jiang

    2018-02-01

    In recent years, superfluid dark matter (SfDM) has become a competitive model for an emergent modified Newtonian dynamics (MOND) scenario: MOND phenomena emerge naturally as a derived concept due to an extra force mediated between baryons by phonons, a result of axionlike particles condensing as a superfluid at galactic scales. Beyond galactic scales, these axionlike particles behave as a normal fluid without the phonon-mediated MOND-like force between baryons, so SfDM also retains the usual success of ΛCDM at cosmological scales. In this paper, we use gravitational waves (GWs) to probe the relevant parameter space of SfDM. GWs propagating through a Bose-Einstein condensate (BEC) travel at a speed that deviates slightly from the speed of light because of the change in the effective refractive index, which depends on the SfDM parameters and the GW-source properties. We find that the Five-hundred-meter Aperture Spherical Telescope (FAST), the Square Kilometre Array (SKA), and the International Pulsar Timing Array (IPTA) are the most promising GW probes of the relevant parameter space of SfDM. Future space-based GW detectors are also capable of probing SfDM if a multimessenger approach is adopted.

  10. AWE: Aviation Weather Data Visualization

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Lodha, Suresh K.

    2001-01-01

    The two official sources for aviation weather reports both require the pilot to mentally visualize the provided information. In contrast, our system, Aviation Weather Environment (AWE), presents aviation-specific weather to pilots in an easy-to-visualize form. We start with a computer-generated textual briefing for a specific area and map it onto a grid specific to the pilot's route, including only information relevant to the flight as defined by route, altitude, true airspeed, and proposed departure time. By modifying these parameters, the pilot can use AWE as a planning tool as well as a weather briefing tool.

  11. A consistent framework to predict mass fluxes and depletion times for DNAPL contaminations in heterogeneous aquifers under uncertainty

    NASA Astrophysics Data System (ADS)

    Koch, Jonas; Nowak, Wolfgang

    2013-04-01

    At many hazardous waste sites and accidental spills, dense non-aqueous phase liquids (DNAPLs) such as TCE, PCE, or TCA have been released into the subsurface. Once released, a DNAPL serves as a persistent source of dissolved-phase contamination. In chronological order, the DNAPL migrates through the porous medium and penetrates the aquifer, forms a complex pattern of immobile DNAPL saturation, dissolves into the groundwater to form a contaminant plume, and slowly depletes and biodegrades in the long term. In industrialized countries the number of such contaminated sites is so high that a ranking from most to least risky is advisable. Such a ranking helps decide whether a site needs to be remediated or may be left to natural attenuation. Both the ranking and the design of proper remediation or monitoring strategies require a good understanding of the relevant physical processes and their inherent uncertainty. To this end, we conceptualize a probabilistic simulation framework that estimates probability density functions of mass discharge, source depletion time, and critical concentration values at crucial target locations. Furthermore, it supports the inference of contaminant source architectures from arbitrary site data. As an essential novelty, the mutual dependencies of the key parameters and interacting physical processes are taken into account throughout the whole simulation. In an uncertain and heterogeneous subsurface setting, we identify three key parameter fields: the local velocities, the hydraulic permeabilities, and the DNAPL phase saturations. Obviously, these parameters depend on each other during DNAPL infiltration, dissolution, and depletion. In order to highlight the importance of these mutual dependencies and interactions, we present results of several model setups in which we vary the physical and stochastic dependencies of the input parameters and simulated processes. Under these changes, the probability density functions show strong shifts in their expected values and in their uncertainty. Considering the uncertainties of all key parameters while neglecting their interactions overestimates the output uncertainty. However, consistently using all available physical knowledge when assigning input parameters and simulating all relevant interactions of the involved processes reduces the output uncertainty significantly, back down to useful and plausible ranges. When using our framework in an inverse setting, omitting a parameter dependency within a crucial physical process would lead to physically meaningless identified parameters. Thus, we conclude that the additional complexity we propose is both necessary and adequate. Overall, our framework provides a tool for reliable and plausible prediction, risk assessment, and model-based decision support for DNAPL-contaminated sites.
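The claim that neglecting parameter interactions inflates the output uncertainty can be illustrated with a toy Monte Carlo surrogate; the fields, coefficients, and single-multiplication "discharge" model below are invented and far simpler than the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Toy surrogate: in the physically consistent case, high-permeability
# channels hold less residual DNAPL (the two fields are anti-correlated);
# the "independent" case drops that dependency while keeping the marginals.
z = rng.normal(size=n)                               # log-permeability field
perm = np.exp(0.5 * z)
sat_dep = 1.0 - 0.8 * z + 0.2 * rng.normal(size=n)   # depends on permeability
sat_ind = 1.0 - 0.8 * rng.normal(size=n) + 0.2 * rng.normal(size=n)

discharge_dep = perm * sat_dep                       # dependencies respected
discharge_ind = perm * sat_ind                       # dependencies neglected

# Neglecting the interaction widens the predicted spread of mass discharge.
print(np.std(discharge_ind), np.std(discharge_dep))
```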

  12. Economic Insights into Providing Access to Improved Groundwater Sources in Remote, Low-Resource Areas

    NASA Astrophysics Data System (ADS)

    Abramson, A.; Lazarovitch, N.; Adar, E.

    2013-12-01

    Groundwater is often the most feasible, or only, drinking water source in remote, low-resource areas. Yet the economics of its development have not been systematically outlined. We applied CBARWI (Cost-Benefit Analysis for Remote Water Improvements), a recently developed decision support system, to investigate the economic, physical, and management factors related to the costs and benefits of non-networked groundwater supply in remote areas. Synthetic profiles of community water services (n = 17,962), defined across 14 parameters' values and ranges relevant to remote areas, were input into the decision framework, and the parameter effects on economic outcomes were investigated through regression analysis (Table 1). Several approaches were included for financing the improvements, after Abramson et al. (2011): willingness-to-pay (WTP), willingness-to-borrow (WTB), and willingness-to-work (WTW) in community irrigation ('water-for-work'). We found that low-cost groundwater development approaches are almost 7 times more cost-effective than conventional boreholes fitted with handpumps. The costs of electric, submersible borehole pumps are comparable only when providing expanded water supplies, and off-grid communities pay significantly more for such expansions. In our model, new source construction is less cost-effective than improvement of existing wells, but necessary for expanding access to isolated households. The financing approach significantly impacts the feasibility of demand-driven cost recovery; in our investigation, benefit exceeds cost in 16, 32, and 48% of water service configurations financed by WTP, WTB, and WTW, respectively. Regressions of total cost (R2 = 0.723) and net benefit under WTW (R2 = 0.829), along with analysis of output distributions, indicate that the parameters determining the profitability of irrigation differ from those determining costs and other measures of net benefit. 
These findings suggest that the cost-benefit outcomes associated with groundwater-based water supply improvements vary considerably across many parameters; thus, a wide variety of factors should be included to inform water development strategies. Abramson, A. et al. (2011), Willingness to pay, borrow and work for water service improvements in developing countries, Water Resour. Res., 47. Table 1: Descriptions, investigated values and regression coefficients of parameters included in our analysis. Rank of standardized β indicates relative importance. Regression dependent variables are in [($ household^-1) y^-1]. * Parameters relevant to water-for-work program only. † p < .0001; ‡ p < .05.

  13. A deterministic (non-stochastic) low frequency method for geoacoustic inversion.

    PubMed

    Tolstoy, A

    2010-06-01

    It is well known that multiple frequency sources are necessary for accurate geoacoustic inversion. This paper presents an inversion method which uses the low frequency (LF) spectrum only to estimate bottom properties even in the presence of expected errors in source location, phone depths, and ocean sound-speed profiles. Matched field processing (MFP) along a vertical array is used. The LF method first conducts an exhaustive search of the (five) parameter search space (sediment thickness, sound-speed at the top of the sediment layer, the sediment layer sound-speed gradient, the half-space sound-speed, and water depth) at 25 Hz and continues by retaining only the high MFP value parameter combinations. Next, frequency is slowly increased while again retaining only the high value combinations. At each stage of the process, only those parameter combinations which give high MFP values at all previous LF predictions are considered (an ever shrinking set). It is important to note that a complete search of each relevant parameter space seems to be necessary not only at multiple (sequential) frequencies but also at multiple ranges in order to eliminate sidelobes, i.e., false solutions. Even so, there are no mathematical guarantees that one final, unique "solution" will be found.
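The frequency-stepped pruning idea can be sketched as follows; the stand-in `mfp()` scoring function, the two-parameter grid, and the retention cutoff are hypothetical simplifications of the paper's five-parameter matched field search.

```python
import numpy as np
from itertools import product

def mfp(params, freq, truth=(100.0, 1550.0)):
    """Fake MFP power in [0, 1]; peaks sharpen as frequency rises."""
    thickness, speed = params
    d2 = ((thickness - truth[0]) / 50.0) ** 2 + ((speed - truth[1]) / 100.0) ** 2
    return np.exp(-d2 * freq / 25.0)

# Exhaustive grid over two of the geoacoustic parameters (illustrative).
grid = list(product(np.arange(50, 151, 10.0),      # sediment thickness (m)
                    np.arange(1500, 1651, 25.0)))  # sediment sound speed (m/s)

survivors = grid
for freq in (25.0, 50.0, 100.0):                   # slowly increase frequency
    scores = [mfp(p, freq) for p in survivors]
    cut = 0.5 * max(scores)                        # retain high-value combos only
    survivors = [p for p, s in zip(survivors, scores) if s >= cut]

print(len(survivors), "combinations survive")
```

The ever-shrinking survivor set mirrors the abstract's description of retaining only combinations that score highly at all previous frequencies.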

  14. Filter design for the detection of compact sources based on the Neyman-Pearson detector

    NASA Astrophysics Data System (ADS)

    López-Caniego, M.; Herranz, D.; Barreiro, R. B.; Sanz, J. L.

    2005-05-01

    This paper considers the problem of compact source detection on a Gaussian background. We present a one-dimensional treatment (though a generalization to two or more dimensions is possible). Two relevant aspects of this problem are considered: the design of the detector and the filtering of the data. Our detection scheme is based on local maxima and it takes into account not only the amplitude but also the curvature of the maxima. A Neyman-Pearson test is used to define the region of acceptance, which is given by a sufficient linear detector that is independent of the amplitude distribution of the sources. We study how detection can be enhanced by means of linear filters with a scaling parameter, and compare some filters that have been proposed in the literature [the Mexican hat wavelet, the matched filter (MF) and the scale-adaptive filter (SAF)]. We also introduce a new filter, which depends on two free parameters (the biparametric scale-adaptive filter, BSAF). The value of these two parameters can be determined, given the a priori probability density function of the amplitudes of the sources, such that the filter optimizes the performance of the detector in the sense that it gives the maximum number of real detections once it has fixed the number density of spurious sources. The new filter includes as particular cases the standard MF and the SAF. As a result of its design, the BSAF outperforms these filters. The combination of a detection scheme that includes information on the curvature and a flexible filter that incorporates two free parameters (one of them a scaling parameter) improves significantly the number of detections in some interesting cases. In particular, for the case of weak sources embedded in white noise, the improvement with respect to the standard MF is of the order of 40 per cent. 
Finally, an estimation of the amplitude of the source (most probable value) is introduced and it is proven that such an estimator is unbiased and has maximum efficiency. We perform numerical simulations to test these theoretical ideas in a practical example and conclude that the results of the simulations agree with the analytical results.
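As a baseline for the filters compared above, here is a minimal matched-filter sketch for a compact (Gaussian-profile) source in white noise; the template width, amplitude, and noise level are invented, and the BSAF itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Gaussian source template and a noisy one-dimensional data stream.
x = np.arange(-50, 51)
profile = np.exp(-0.5 * (x / 3.0) ** 2)            # compact source profile
signal = np.zeros(1000)
signal[400] = 5.0                                   # true source position/amplitude
data = np.convolve(signal, profile, mode="same") + rng.normal(size=1000)

# For white noise the matched filter reduces to correlation with the template.
filtered = np.correlate(data, profile, mode="same") / np.sum(profile ** 2)
pos = int(np.argmax(filtered))
print("detected position:", pos)
```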

  15. A simple system for detection of EEG artifacts in polysomnographic recordings.

    PubMed

    Durka, P J; Klekowicz, H; Blinowska, K J; Szelenberger, W; Niemcewicz, Sz

    2003-04-01

    We present an efficient parametric system for automatic detection of electroencephalogram (EEG) artifacts in polysomnographic recordings. For each of the selected types of artifacts, a relevant parameter was calculated for a given epoch. If any of these parameters exceeded a threshold, the epoch was marked as an artifact. Performance of the system, evaluated on 18 overnight polysomnographic recordings, revealed concordance with the decisions of human experts close to the interexpert agreement and to the repeatability of an expert's decisions, assessed via a double-blind test. Complete software (Matlab source code) for the presented system is freely available from the Internet at http://brain.fuw.edu.pl/artifacts.
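The detection rule, one relevant parameter per artifact type thresholded per epoch, can be sketched as follows; the parameters and threshold values are illustrative, not the published ones.

```python
import numpy as np

def epoch_parameters(epoch, fs=128):
    """One illustrative parameter per artifact type for a single epoch."""
    amp = np.max(np.abs(epoch))                    # e.g. electrode-pop artifacts
    slope = np.max(np.abs(np.diff(epoch))) * fs    # e.g. muscle/movement artifacts
    drift = abs(np.mean(epoch))                    # e.g. baseline drift
    return {"amplitude": amp, "slope": slope, "drift": drift}

# Hypothetical thresholds (in microvolt terms), not the paper's values.
THRESHOLDS = {"amplitude": 200.0, "slope": 5000.0, "drift": 50.0}

def is_artifact(epoch):
    params = epoch_parameters(epoch)
    return any(params[k] > THRESHOLDS[k] for k in THRESHOLDS)

clean = 20.0 * np.sin(np.linspace(0, 8 * np.pi, 512))
popped = clean.copy()
popped[100] = 400.0                                # simulated electrode pop
print(is_artifact(clean), is_artifact(popped))
```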

  16. Sensitivity of the coastal tsunami simulation to the complexity of the 2011 Tohoku earthquake source model

    NASA Astrophysics Data System (ADS)

    Monnier, Angélique; Loevenbruck, Anne; Gailler, Audrey; Hébert, Hélène

    2016-04-01

    The 11 March 2011 Tohoku-Oki event, both the earthquake and the tsunami, is exceptionally well documented. A wide range of onshore and offshore data has been recorded by seismic, geodetic, ocean-bottom pressure, and sea level sensors. Along with these numerous observations, advances in inversion techniques and computing facilities have led to many source studies. Inversion of rupture parameters such as slip distribution and rupture history makes it possible to estimate the complex coseismic seafloor deformation. From the numerous published seismic source studies, the most relevant coseismic source models are tested. Comparison of the signals predicted using both static and kinematic ruptures against the offshore and coastal measurements helps determine which source model should be used to obtain the most consistent coastal tsunami simulations. This work is funded by the TANDEM project, reference ANR-11-RSNR-0023-01 of the French Programme Investissements d'Avenir (PIA 2014-2018).

  17. On the possibility of real time air quality and toxicology assessment using multi-wavelength photoacoustic spectroscopy

    NASA Astrophysics Data System (ADS)

    Ajtai, Tibor; Pinter, Mate; Utry, Noemi; Kiss-Albert, Gergely; Palagyi, Andrea; Manczinger, Laszlo; Vagvölgyi, Csaba; Szabo, Gabor; Bozoki, Zoltan

    2016-04-01

    In this study we present results of field measurement campaigns focusing on the in-situ characterization of the absorption spectra and the health relevance of light-absorbing carbonaceous (LAC) aerosol in ambient air. The absorption spectrum is measured at 266, 355, 532, and 1064 nm by our state-of-the-art four-wavelength photoacoustic instrument, while for health relevance the eco-, cyto-, and genotoxicity parameters were measured using standardized methodologies. We experimentally demonstrated a correlation between the toxicities and the measured absorption spectra, quantified by their wavelength dependency. Based on this correlation, we present novel possibilities for real-time air quality monitoring. LAC is extensively studied not only because of its considerable climate effects but also as a serious air pollutant. A growing number of studies have demonstrated experimentally that the health effect of LAC is more serious than expected from its share of total atmospheric aerosol mass. Furthermore, during many local pollution events LAC is not only dominant but close to exclusive. Altogether, due to its climate and health effects, many studies and proposed regulations focus on the physical, chemical, and toxicological properties of LAC as well as on its source apportionment. Despite its importance, there is not yet a widely accepted standard methodology for the real-time and selective identification of LAC. There are many reasons for this, ranging from its complex inherent physicochemical features, including many unknown constituents, through the masking effect of the ambient environment on those inherent properties, which takes place even after a short residence time, to the lack of reliable instrumentation for its health- or source-relevant parameters. Therefore, methodology and instrument development for the selective and reliable identification of LAC is a timely and important issue in climate and air quality research. 
Recently, many studies have demonstrated a correlation between the chemical composition and the absorption features of LAC, which opens up novel possibilities in real-time source apportionment and air quality monitoring.

  18. Generation of optimal artificial neural networks using a pattern search algorithm: application to approximation of chemical systems.

    PubMed

    Ihme, Matthias; Marsden, Alison L; Pitsch, Heinz

    2008-02-01

    A pattern search optimization method is applied to the generation of optimal artificial neural networks (ANNs). Optimization is performed using a mixed-variable extension of the generalized pattern search method. This method offers the advantage that categorical variables, such as neural transfer functions and nodal connectivities, can be used as parameters in the optimization. When used together with a surrogate, the resulting algorithm is highly efficient for expensive objective functions. Results demonstrate the effectiveness of this method in optimizing an ANN for the number of neurons, the type of transfer function, and the connectivity among neurons. The optimization method is applied to a chemistry approximation of practical relevance. In this application, temperature and a chemical source term are approximated as functions of two independent parameters using optimal ANNs. Comparison of the performance of optimal ANNs with conventional tabulation methods demonstrates equivalent accuracy with considerable savings in memory storage. The architecture of the optimal ANN for the approximation of the chemical source term consists of a fully connected feedforward network with four nonlinear hidden layers and 117 synaptic weights. An equivalent representation of the chemical source term using tabulation techniques would require a 500 x 500 grid-point discretization of the parameter space.
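A basic continuous-variable pattern search can be sketched as follows; the paper's mixed-variable extension additionally polls categorical choices (transfer functions, connectivities), which is omitted here, and the toy quadratic objective stands in for the expensive ANN training error.

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-3, max_iter=200):
    """Minimize f by polling +/- step along each coordinate, halving the
    mesh whenever no poll point improves on the incumbent."""
    x, fx = np.asarray(x0, float), f(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):                    # poll the coordinate pattern
            for d in (+step, -step):
                trial = x.copy()
                trial[i] += d
                if (ft := f(trial)) < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step /= 2.0                            # shrink mesh on failure
            if step < tol:
                break
    return x, fx

# Toy objective standing in for the expensive ANN training error.
xbest, fbest = pattern_search(lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2,
                              [0.0, 0.0])
print(xbest, fbest)
```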

  19. Self-relevance and wishful thinking: facilitation and distortion in source monitoring.

    PubMed

    Barber, Sarah J; Gordon, Ruthanna; Franklin, Nancy

    2009-06-01

    When making source attributions, people tend to attribute desirable statements to reliable sources and undesirable statements to unreliable sources, a phenomenon known as the wishful thinking effect (Gordon, Franklin, & Beck, 2005). In the present study, we examined the influence of wishful thinking on source monitoring for self-relevant information. On one hand, wishful thinking is expected, because self-relevant desires are presumably strong. However, self-relevance is known to confer a memory advantage and may thus provide protection from desire-based biases. In Experiment 1, source memory for self-relevant information was contrasted against source memory for information relevant to others and for neutral information. Results indicated that self-relevant information was affected by wishful thinking and was remembered more accurately than was other information. Experiment 2 showed that the magnitude of the self-relevant wishful thinking effect did not increase with a delay.

  20. Endogenous Crisis Waves: Stochastic Model with Synchronized Collective Behavior

    NASA Astrophysics Data System (ADS)

    Gualdi, Stanislao; Bouchaud, Jean-Philippe; Cencetti, Giulia; Tarzia, Marco; Zamponi, Francesco

    2015-02-01

    We propose a simple framework to understand commonly observed crisis waves in macroeconomic agent-based models, which is also relevant to a variety of other physical or biological situations where synchronization occurs. We compute exactly the phase diagram of the model and the location of the synchronization transition in parameter space. Many modifications and extensions can be studied, confirming that the synchronization transition is extremely robust against various sources of noise or imperfections.

  1. The Certainty of Uncertainty: Potential Sources of Bias and Imprecision in Disease Ecology Studies.

    PubMed

    Lachish, Shelly; Murray, Kris A

    2018-01-01

    Wildlife diseases have important implications for wildlife and human health, the preservation of biodiversity and the resilience of ecosystems. However, understanding disease dynamics and the impacts of pathogens in wild populations is challenging because these complex systems can rarely, if ever, be observed without error. Uncertainty in disease ecology studies is commonly defined in terms of either heterogeneity in detectability (due to variation in the probability of encountering, capturing, or detecting individuals in their natural habitat) or uncertainty in disease state assignment (due to misclassification errors or incomplete information). In reality, however, uncertainty in disease ecology studies extends beyond these components of observation error and can arise from multiple varied processes, each of which can lead to bias and a lack of precision in parameter estimates. Here, we present an inventory of the sources of potential uncertainty in studies that attempt to quantify disease-relevant parameters from wild populations (e.g., prevalence, incidence, transmission rates, force of infection, risk of infection, persistence times, and disease-induced impacts). We show that uncertainty can arise via processes pertaining to aspects of the disease system, the study design, the methods used to study the system, and the state of knowledge of the system, and that uncertainties generated via one process can propagate through to others because of interactions between the numerous biological, methodological and environmental factors at play. We show that many of these sources of uncertainty may not be immediately apparent to researchers (for example, unidentified crypticity among vectors, hosts or pathogens, a mismatch between the temporal scale of sampling and disease dynamics, demographic or social misclassification), and thus have received comparatively little consideration in the literature to date. 
Finally, we discuss the type of bias or imprecision introduced by these varied sources of uncertainty and briefly present appropriate sampling and analytical methods to account for, or minimise, their influence on estimates of disease-relevant parameters. This review should assist researchers and practitioners to navigate the pitfalls of uncertainty in wildlife disease ecology studies.

  2. Particle model of full-size ITER-relevant negative ion source.

    PubMed

    Taccogna, F; Minelli, P; Ippolito, N

    2016-02-01

    This work represents the first attempt to model the full-size ITER-relevant negative ion source, including the expansion, extraction, and part of the acceleration regions, while keeping the mesh fine enough to resolve every single aperture. The model consists of a 2.5D particle-in-cell Monte Carlo collision representation of the plane perpendicular to the filter field lines. The magnetic filter and electron deflection field are included, and a negative ion current density of j(H-) = 660 A/m^2 from the plasma grid (PG) is used as the parameter for neutral conversion. The driver is not yet included; a fixed ambipolar flux is emitted from the driver exit plane. Results show a strong asymmetry along the PG driven by the electron Hall (E × B and diamagnetic) drift perpendicular to the filter field. This asymmetry creates an important inhomogeneity in the electron current extracted from the different apertures. A steady state is not yet reached after 15 μs.

  3. Probabilistic versus deterministic hazard assessment in liquefaction susceptible zones

    NASA Astrophysics Data System (ADS)

    Daminelli, Rosastella; Gerosa, Daniele; Marcellini, Alberto; Tento, Alberto

    2015-04-01

    Probabilistic seismic hazard assessment (PSHA), usually adopted in the drafting of seismic codes, is based on a Poissonian description of temporal occurrence, a negative exponential distribution of magnitude, and an attenuation relationship with log-normal distribution of PGA or response spectrum. The main strength of this approach is that it is presently the standard in the majority of countries, but it has weak points, particularly regarding the physical description of the earthquake phenomenon. Factors that could significantly influence the expected motion at a site, such as site effects and source characteristics like the duration of strong motion and directivity, are not taken into account by PSHA. Deterministic models can better evaluate the ground motion at a site from a physical point of view, but their predictive reliability depends on the degree of knowledge of the source, wave propagation, and soil parameters. We compare these two approaches at selected sites affected by the May 2012 Emilia-Romagna and Lombardia earthquake, which caused widespread liquefaction phenomena, unusual for a magnitude below 6. We focus on sites prone to liquefaction because of their soil mechanical parameters and water table level. Our analysis shows that the choice between deterministic and probabilistic hazard analysis is strongly dependent on site conditions: the looser the soil and the higher the liquefaction potential, the more suitable the deterministic approach. Source characteristics, in particular the duration of strong ground motion, have long been recognized as relevant to inducing liquefaction; unfortunately, a quantitative prediction of these parameters appears very unlikely, dramatically reducing the possibility of their adoption in hazard assessment. Last but not least, economic factors are relevant to the choice of approach. 
The case history of the 2012 Emilia-Romagna and Lombardia earthquake, with an officially estimated cost of 6 billion Euros, shows that the geological and geophysical investigations necessary for a reliable deterministic hazard evaluation are largely justified.

  4. Blue enhanced light sources: opportunities and risks

    NASA Astrophysics Data System (ADS)

    Lang, Dieter

    2012-03-01

Natural daylight is characterized by a high proportion of blue light. The discovery of a third type of photoreceptor in the human eye, sensitive only in this spectral region, and subsequent studies have made it clear that these blue proportions are essential for human health and well-being. Various studies have demonstrated beneficial effects of indoor lighting with higher blue spectral proportions. On the other hand, with the increasing use of light sources with enhanced blue emission for indoor illumination, questions arise about potential health risks attributed to blue light. LEDs in particular show distinct emission characteristics in the blue. Recently the French agency for food, environmental and occupational health and safety (ANSES) raised the question of health issues related to LED light sources and recommended avoiding the use of LEDs for lighting in schools. In this paper the parameters relevant for potential health risks are identified and their contribution to risk factors is discussed quantitatively. It is shown how to differentiate between photometric parameters for the assessment of beneficial as well as hazardous effects. Guidelines are discussed for how blue-enhanced light sources can be used to optimally support human health and well-being while avoiding any risks attributed to blue light through a proper design of lighting parameters. In conclusion, it is shown that no inherent health risks are associated with LED lighting given a proper lighting design.

  5. Characteristics of laser-induced plasma as a spectroscopic light emission source

    NASA Astrophysics Data System (ADS)

    Ma, Q. L.; Motto-Ros, V.; Lei, W. Q.; Wang, X. C.; Boueri, M.; Laye, F.; Zeng, C. Q.; Sausy, M.; Wartelle, A.; Bai, X. S.; Zheng, L. J.; Zeng, H. P.; Baudelet, M.; Yu, J.

    2012-05-01

Laser-induced plasma is today a widespread spectroscopic emission source. It can be easily generated using compact and reliable nanosecond pulsed lasers and finds applications in various domains through laser-induced breakdown spectroscopy (LIBS). It is, however, a particular medium: intrinsically a transient and non-point light-emitting source. Its time- and space-resolved diagnostics are therefore crucial for its optimized use. In this paper, we review our work on the investigation of the morphology and evolution of the plasma. Different time scales relevant for the description of the plasma's kinetics and dynamics are covered by suitable techniques. Our results show the detailed evolution and transformation of the plasma with high temporal and spatial resolution. The effects of the laser parameters as well as the background gas are studied in particular.

  6. Prevalence of bacteria resistant to antibiotics and/or biocides on meat processing plant surfaces throughout meat chain production.

    PubMed

    Lavilla Lerma, Leyre; Benomar, Nabil; Gálvez, Antonio; Abriouel, Hikmate

    2013-02-01

In order to investigate the prevalence of bacteria resistant to biocides and/or antibiotics throughout meat chain production, from slaughter to the end of the production line, samples from various surfaces of a goat and lamb slaughterhouse representative of the region were analyzed by a culture-dependent approach. Resistant psychrotrophs (n=255 strains), Pseudomonas sp. (n=166 strains), E. coli (n=23 strains), Staphylococcus sp. (n=17 strains) and LAB (n=82, represented mainly by Lactobacillus sp.) were isolated. Psychrotrophs and pseudomonads resistant to different antimicrobials (47 and 29%, respectively) were frequently detected in almost all areas of the meat processing plant regardless of the antimicrobial used, although there was a clear shift in the spectrum of the other bacterial groups; resistance was therefore determined according to several parameters: the antimicrobial tested, the sampling zone and the bacterial group. Correlation of the different parameters was performed using principal component analysis (PCA), which determined that quaternary ammonium compounds and hexadecylpyridinium were the most relevant biocides for resistance in Pseudomonas sp., while ciprofloxacin and hexachlorophene were more relevant for psychrotrophs, LAB and, to a lesser extent, Staphylococcus sp. and Escherichia coli. On the other hand, PCA of the sampling zones determined that the sacrifice room (SR) and cutting room (CR), considered the main sources of antibiotic- and/or biocide-resistant bacteria, showed opposite behaviour concerning the relevance of antimicrobials in determining resistance: hexadecylpyridinium, cetrimide and chlorhexidine were the most relevant in the CR, while hexachlorophene, oxonia 6P and PHMG were the most relevant in the SR. In conclusion, rotational use of the relevant biocides as disinfectants in the CR and SR is recommended in an environment which is frequently disinfected. Copyright © 2012 Elsevier B.V. All rights reserved.
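The PCA step described in this abstract can be sketched in code. The following is a minimal illustration using an SVD-based PCA; the matrix of resistance fractions, the zone labels and the antimicrobial labels are hypothetical stand-ins, not data from the study:

```python
import numpy as np

# Hypothetical resistance fractions (rows: sampling zones, columns: antimicrobials).
# These numbers are illustrative only, not measurements from the study.
X = np.array([
    [0.62, 0.55, 0.40, 0.31],   # sacrifice room (SR)
    [0.70, 0.66, 0.35, 0.28],   # cutting room (CR)
    [0.33, 0.30, 0.52, 0.47],   # chilling room
    [0.25, 0.22, 0.58, 0.50],   # packaging area
])
antimicrobials = ["QAC", "hexadecylpyridinium", "ciprofloxacin", "hexachlorophene"]

# Centre the data and take the SVD; principal axes are the rows of Vt.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Fraction of variance explained by each component (singular values descend).
explained = s**2 / np.sum(s**2)

# Loadings of the first component indicate which antimicrobials drive
# the main contrast between sampling zones.
pc1_loadings = dict(zip(antimicrobials, Vt[0]))
```

Inspecting `explained` and `pc1_loadings` mirrors the study's use of PCA to rank antimicrobials by their relevance to the resistance pattern of each zone.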

  7. VizieR Online Data Catalog: GUViCS. Ultraviolet Source Catalogs (Voyer+, 2014)

    NASA Astrophysics Data System (ADS)

    Voyer, E. N.; Boselli, A.; Boissier, S.; Heinis, S.; Cortese, L.; Ferrarese, L.; Cote, P.; Cuillandre, J.-C.; Gwyn, S. D. J.; Peng, E. W.; Zhang, H.; Liu, C.

    2014-07-01

These catalogs are based on GALEX NUV and FUV source detections in and behind the Virgo Cluster. The detections are split into catalogs of extended sources and point-like sources. The UV Virgo Cluster Extended Source catalog (UV_VES.fit) provides the deepest and most extensive UV photometric data of extended galaxies in Virgo to date. If certain data are not available for a given source, a null value is entered (e.g. -999, -99). UV point-like sources are matched with SDSS, NGVS, and NED, and the relevant photometry and further data from these databases/catalogs are provided in this compilation of catalogs. The primary GUViCS UV Virgo Cluster Point-Like Source catalog is UV_VPS.fit. This catalog provides the most useful GALEX pipeline NUV and FUV photometric parameters, and categorizes sources as stars, Virgo members, and background sources, when possible. It also provides identifiers for optical matches in the SDSS and NED, and indicates if a match exists in the NGVS, but only if GUViCS-optical matches are one-to-one. NED spectroscopic redshifts are also listed for GUViCS-NED one-to-one matches. If certain data are not available for a given source, a null value is entered. Additionally, the catalog is useful for quick access to optical data on one-to-one GUViCS-SDSS matches. The only parameter available in the catalog for UV sources that have multiple SDSS matches is the total number of multiple matches, i.e. SDSSNUMMTCHS. Multiple GUViCS sources matched to the same SDSS source are also flagged, with a total number of matches, SDSSNUMMTCHS, of one. All other fields for multiple matches are set to a null value of -99. In order to obtain full optical SDSS data for multiply matched UV sources in both scenarios, the user can cross-correlate the GUViCS ID of the sources of interest with the full GUViCS-SDSS matched catalog in GUV_SDSS.fit. 
The GUViCS-SDSS matched catalog, GUV_SDSS.fit, provides the most relevant SDSS data on all GUViCS-SDSS matches, including one-to-one matches and multiply matched sources. The catalog gives full SDSS identification information, complete SDSS photometric measurements in multiple aperture types, and complete redshift information (photometric and spectroscopic). It is ideal for large statistical studies of galaxy populations at multiple wavelengths in the background of the Virgo Cluster. The catalog can also be used as a starting point to study and search for previously unknown UV-bright point-like objects within the Virgo Cluster. If certain data is not available for a given source that field is given a null value. (6 data files).

  8. Linking source region and ocean wave parameters with the observed primary microseismic noise

    NASA Astrophysics Data System (ADS)

    Juretzek, C.; Hadziioannou, C.

    2017-12-01

In previous studies, the contribution of Love waves to the primary microseismic noise field was found to be comparable to that of Rayleigh waves. However, so far only a few studies have analysed both wave types present in this microseismic noise band, which is known to be generated in shallow water, and the theoretical understanding has mainly evolved for Rayleigh waves only. Here, we study the relevance of different source region parameters to the observed primary microseismic noise levels of Love and Rayleigh waves simultaneously. By means of beamforming and correlation of seismic noise amplitudes with ocean wave heights in the period band between 12 and 15 s, we analysed how the source areas of both wave types compare with each other around Europe. The generation efficiency in different source regions was compared to ocean wave heights, peak ocean gravity wave propagation direction and bathymetry. Observed Love wave noise amplitudes correlate with near-coastal ocean wave parameters comparably well as Rayleigh wave amplitudes do. Some coastal regions serve as especially effective sources for one or the other wave type. These coincide not only with locations of high wave heights but also with complex bathymetry. Further, Rayleigh and Love wave noise amplitudes seem to depend equally on the local ocean wave heights, which is an indication of a coupled variation with swell height during the generation of both wave types. However, the wave-type ratio varies directionally. This observation likely hints at a spatially varying importance of different source mechanisms or structural influences. Further, the wave-type ratio is modulated depending on peak ocean wave propagation directions, which could indicate a variation of different source mechanism strengths but also hints at an imprint of an effective source radiation pattern. This emphasizes that the inclusion of both wave types may provide more constraints for the understanding of the generation mechanisms at work.

  9. Global Precipitation Measurement: Methods, Datasets and Applications

    NASA Technical Reports Server (NTRS)

    Tapiador, Francisco; Turk, Francis J.; Petersen, Walt; Hou, Arthur Y.; Garcia-Ortega, Eduardo; Machado, Luiz A. T.; Angelis, Carlos F.; Salio, Paola; Kidd, Chris; Huffman, George J.

    2011-01-01

    This paper reviews the many aspects of precipitation measurement that are relevant to providing an accurate global assessment of this important environmental parameter. Methods discussed include ground data, satellite estimates and numerical models. First, the methods for measuring, estimating, and modeling precipitation are discussed. Then, the most relevant datasets gathering precipitation information from those three sources are presented. The third part of the paper illustrates a number of the many applications of those measurements and databases. The aim of the paper is to organize the many links and feedbacks between precipitation measurement, estimation and modeling, indicating the uncertainties and limitations of each technique in order to identify areas requiring further attention, and to show the limits within which datasets can be used.

  10. The added value of remote sensing products in constraining hydrological models

    NASA Astrophysics Data System (ADS)

    Nijzink, Remko C.; Almeida, Susana; Pechlivanidis, Ilias; Capell, René; Gustafsson, David; Arheimer, Berit; Freer, Jim; Han, Dawei; Wagener, Thorsten; Sleziak, Patrik; Parajka, Juraj; Savenije, Hubert; Hrachowitz, Markus

    2017-04-01

The calibration of hydrological models still depends on the availability of streamflow data, even though additional sources of information (i.e. remotely sensed data products) have become more widely available. In this research, the model parameters of four different conceptual hydrological models (HYPE, HYMOD, TUW, FLEX) were constrained with remotely sensed products. The models were applied over 27 catchments across Europe to cover a wide range of climates, vegetation and landscapes. The fluxes and states of the models were correlated with the relevant products (e.g. MOD10A snow with modelled snow states), after which new a posteriori parameter distributions were determined based on a weighting procedure using conditional probabilities. Briefly, each parameter was weighted with the coefficient of determination of the relevant regression between modelled states/fluxes and products. In this way, final feasible parameter sets were derived without the use of discharge time series. Initial results show that improvements in model performance, with regard to streamflow simulations, are obtained when the models are constrained with a set of remotely sensed products simultaneously. In addition, we present a more extensive analysis to assess a model's ability to reproduce a set of hydrological signatures, such as rising limb density or peak distribution. Eventually, this research will enhance our understanding and recommendations in the use of remotely sensed products for constraining conceptual hydrological modelling and improving predictive capability, especially for data-sparse regions.
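The weighting procedure summarized in this abstract (each parameter set weighted by the coefficient of determination between its modelled series and a remote-sensing product) can be sketched as follows. The toy model, the synthetic "product" series and all numbers are illustrative assumptions, not the study's models or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 500 randomly sampled parameter sets for a toy model,
# each producing a modelled state time series; the "product" stands in for
# a remotely sensed series (e.g. MOD10A snow). All data are illustrative.
n_sets, n_steps = 500, 100
product = np.sin(np.linspace(0.0, 4.0 * np.pi, n_steps))
params = rng.uniform(0.0, 2.0, size=n_sets)        # one toy parameter per set
# Toy model: modelled state = param * product + noise, so sets with a
# larger param track the product more strongly relative to the noise.
modelled = params[:, None] * product + rng.normal(0.0, 0.5, (n_sets, n_steps))

def r_squared(sim, obs):
    """Coefficient of determination of the regression of sim on obs."""
    r = np.corrcoef(sim, obs)[0, 1]
    return r**2

# Weight each parameter set by the R^2 between its modelled series and the
# product, then normalize to an a posteriori probability mass: this is the
# conditional-probability weighting step, with no discharge data involved.
weights = np.array([r_squared(modelled[i], product) for i in range(n_sets)])
weights /= weights.sum()

# Weighted posterior mean of the parameter.
posterior_mean = np.sum(weights * params)
```

In the study this weighting is applied per product and per model; the sketch shows only the core idea of turning goodness-of-fit against a remote-sensing product into a posterior over parameter sets.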

  11. Natural convection in symmetrically heated vertical parallel plates with discrete heat sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manca, O.; Nardini, S.; Naso, V.

Laminar air natural convection in a symmetrically heated vertical channel with uniform, flush-mounted discrete heat sources has been experimentally investigated. The effects of the heated strips' location and number are pointed out in terms of the maximum wall temperatures. A flow visualization in the entrance region of the channel was carried out, and air temperatures and velocities in two cross sections were measured. Dimensionless local heat transfer coefficients have been evaluated and monomial correlations among the relevant parameters have been derived in the local Rayleigh number range 10-10^6. The channel Nusselt number has been correlated in polynomial form in terms of the channel Rayleigh number.

  12. An approach to software cost estimation

    NASA Technical Reports Server (NTRS)

    Mcgarry, F.; Page, J.; Card, D.; Rohleder, M.; Church, V.

    1984-01-01

    A general procedure for software cost estimation in any environment is outlined. The basic concepts of work and effort estimation are explained, some popular resource estimation models are reviewed, and the accuracy of source estimates is discussed. A software cost prediction procedure based on the experiences of the Software Engineering Laboratory in the flight dynamics area and incorporating management expertise, cost models, and historical data is described. The sources of information and relevant parameters available during each phase of the software life cycle are identified. The methodology suggested incorporates these elements into a customized management tool for software cost prediction. Detailed guidelines for estimation in the flight dynamics environment developed using this methodology are presented.

  13. Investigation of Helicon discharges as RF coupling concept of negative hydrogen ion sources

    NASA Astrophysics Data System (ADS)

    Briefi, S.; Fantz, U.

    2013-02-01

The ITER reference source for H- and D- requires a high RF input power (up to 90 kW per driver). To reduce the demands on the RF circuit, it is highly desirable to reduce the power consumption while retaining the values of the relevant plasma parameters, namely the positive ion density and the atomic hydrogen density. Helicon plasmas are a promising alternative RF coupling concept, but they are typically generated in long, thin discharge tubes using rare gases and an RF frequency of 13.56 MHz. Hence the applicability to the ITER reference source geometry and frequency, and the use of hydrogen/deuterium, has to be demonstrated. In this paper the strategy of the approach for using Helicon discharges at ITER reference source parameters is introduced, and first promising measurements carried out at a small laboratory experiment are presented. With increasing RF power, a mode transition to the Helicon regime was observed for argon and argon/hydrogen mixtures. In pure hydrogen/deuterium the mode transition could not yet be achieved, as the available RF power is too low. In deuterium a special feature of Helicon discharges, the so-called low-field peak, could be observed at a moderate B-field of 3 mT.

  14. Numerical simulations of merging black holes for gravitational-wave astronomy

    NASA Astrophysics Data System (ADS)

    Lovelace, Geoffrey

    2014-03-01

    Gravitational waves from merging binary black holes (BBHs) are among the most promising sources for current and future gravitational-wave detectors. Accurate models of these waves are necessary to maximize the number of detections and our knowledge of the waves' sources; near the time of merger, the waves can only be computed using numerical-relativity simulations. For optimal application to gravitational-wave astronomy, BBH simulations must achieve sufficient accuracy and length, and all relevant regions of the BBH parameter space must be covered. While great progress toward these goals has been made in the almost nine years since BBH simulations became possible, considerable challenges remain. In this talk, I will discuss current efforts to meet these challenges, and I will present recent BBH simulations produced using the Spectral Einstein Code, including a catalog of publicly available gravitational waveforms [black-holes.org/waveforms]. I will also discuss simulations of merging black holes with high mass ratios and with spins nearly as fast as possible, the most challenging regions of the BBH parameter space.

  15. Optical variability of extragalactic objects used to tie the HIPPARCOS reference frame to an extragalactic system using Hubble space telescope observations

    NASA Technical Reports Server (NTRS)

    Bozyan, Elizabeth P.; Hemenway, Paul D.; Argue, A. Noel

    1990-01-01

Observations of a set of 89 extragalactic objects (EGOs) will be made with the Hubble Space Telescope Fine Guidance Sensors and Planetary Camera in order to link the HIPPARCOS Instrumental System to an extragalactic coordinate system. Most of the sources chosen for observation contain compact radio sources and stellar-like nuclei; 65 percent are optical variables beyond a 0.2 mag limit. To ensure proper exposure times, accurate mean magnitudes are necessary. In many cases, the average magnitudes listed in the literature were not adequate. The literature was searched for all relevant photometric information for the EGOs, and photometric parameters were derived, including mean magnitude, maximum range, and timescale of variability. This paper presents the results of that search and the parameters derived. The results will allow exposure times to be estimated such that an observed magnitude differing from the tabular magnitude by 0.5 mag in either direction will not degrade the astrometric centering ability on a Planetary Camera CCD frame.

  16. Laser induced fluorescence of BaS: Sm phosphor and energy level splitting of Sm 3+ ion

    NASA Astrophysics Data System (ADS)

    Thomas, Reethamma; Nampoori, V. P. N.

    1990-03-01

Fluorescence of BaS:Sm phosphor has been studied using a pulsed nitrogen laser (337.1 nm) as the excitation source. The spectrum consists of a broad band in the region 540-660 nm superposed by the characteristic Sm3+ lines. The energy level splitting pattern of Sm3+ due to crystal field effects has been calculated and the relevant field parameters have been evaluated. The analysis shows that Sm3+ occupies Ba2+ substitutional sites.

  17. Performance of photomultiplier tubes and sodium iodide scintillation detector systems

    NASA Technical Reports Server (NTRS)

    Meegan, C. A.

    1981-01-01

The performance of photomultiplier tubes (PMTs) and scintillation detector systems incorporating 50.8 by 1.27 cm NaI(Tl) crystals was investigated to determine the characteristics of the photomultiplier tubes and to optimize the detector geometry for the Burst and Transient Source Experiment on the Gamma Ray Observatory. Background information on the performance characteristics of PMTs and NaI(Tl) detectors is provided, procedures for the measurement of relevant parameters are specified, and the results of these measurements are presented.

  18. Demonstration of a high repetition rate capillary discharge waveguide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonsalves, A. J., E-mail: ajgonsalves@lbl.gov; Pieronek, C.; Daniels, J.

    2016-01-21

    A hydrogen-filled capillary discharge waveguide operating at kHz repetition rates is presented for parameters relevant to laser plasma acceleration (LPA). The discharge current pulse was optimized for erosion mitigation with laser guiding experiments and MHD simulation. Heat flow simulations and measurements showed modest temperature rise at the capillary wall due to the average heat load at kHz repetition rates with water-cooled capillaries, which is promising for applications of LPAs such as high average power radiation sources.

  19. Prediction of Breakthrough Curves for Conservative and Reactive Transport from the Structural Parameters of Highly Heterogeneous Media

    NASA Astrophysics Data System (ADS)

    Hansen, S. K.; Haslauer, C. P.; Cirpka, O. A.; Vesselinov, V. V.

    2016-12-01

    It is desirable to predict the shape of breakthrough curves downgradient of a solute source from subsurface structural parameters (as in the small-perturbation macrodispersion theory) both for realistically heterogeneous fields, and at early time, before any sort of Fickian model is applicable. Using a combination of a priori knowledge, large-scale Monte Carlo simulation, and regression techniques, we have developed closed-form predictive expressions for pre- and post-Fickian flux-weighted solute breakthrough curves as a function of distance from the source (in integral scales) and variance of the log hydraulic conductivity field. Using the ensemble of Monte Carlo realizations, we have simultaneously computed error envelopes for the estimated flux-weighted breakthrough, and for the divergence of point breakthrough curves from the flux-weighted average, as functions of the predictive parameters. We have also obtained implied late-time macrodispersion coefficients for highly heterogeneous environments from the breakthrough statistics. This analysis is relevant for the modelling of reactive as well as conservative transport, since for many kinetic sorption and decay reactions, Laplace-domain modification of the breakthrough curve for conservative solute produces the correct curve for the reactive system.
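The regression step described in this abstract, deriving a closed-form predictive expression from ensemble results, can be sketched with synthetic data. The assumed functional form log t = a·log x + b·s² + c, the coefficient values and the noise level are illustrative stand-ins for the study's Monte Carlo ensemble and fitted expressions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "ensemble" results: for sampled combinations of travel
# distance x (in integral scales) and log-conductivity variance s2, a
# simulated mean log arrival time with a weak synthetic relation plus noise.
x = rng.uniform(1.0, 50.0, 200)
s2 = rng.uniform(0.1, 4.0, 200)
log_t = 0.8 * np.log(x) + 0.3 * s2 + rng.normal(0.0, 0.05, 200)

# Regress log arrival time on log-distance and variance to obtain a
# closed-form predictive expression: log t = a*log(x) + b*s2 + c.
A = np.column_stack([np.log(x), s2, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, log_t, rcond=None)
a, b, c = coef

def predict_log_t(distance, variance):
    """Predicted mean log arrival time from the fitted closed form."""
    return a * np.log(distance) + b * variance + c
```

The study fits considerably richer descriptions (full breakthrough-curve shapes and error envelopes); the sketch illustrates only the core idea of regressing breakthrough statistics on structural parameters.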

  20. Modelling of Cosmic Molecular Masers: Introduction to a Computation Cookbook

    NASA Astrophysics Data System (ADS)

    Sobolev, Andrej M.; Gray, Malcolm D.

    2012-07-01

Numerical modeling of molecular masers is necessary in order to understand their nature and diagnostic capabilities. Model construction requires the elaboration of a basic description which allows computation, that is, a definition of the parameter space and the basic physical relations. Usually this requires additional thorough studies that can consist of the following stages/parts: relevant molecular spectroscopy and collisional rate coefficients; conditions in and around the masing region (the part of space where population inversion is realized); geometry and size of the masing region (including the question of whether maser spots are discrete clumps or line-of-sight correlations in a much bigger region); and propagation of maser radiation. The output of maser computer modeling can take the following forms: exploration of the parameter space (where inversions appear in particular maser transitions and their combinations, which parameter values describe a `typical' source, and so on); modeling of individual sources (line flux ratios, spectra, images and their variability); analysis of the pumping mechanism; and predictions (new maser transitions, correlations in the variability of different maser transitions, and the like). The described schemes (constituents and hierarchy) of the model input and output are based mainly on the experience of the authors and make no claim to be dogmatic.

  1. Analysis of airborne particulate matter pollution in Timisoara city urban area and correlations between measurements and meteorological data

    NASA Astrophysics Data System (ADS)

    Lungu, Mihai; Lungu, Antoanetta; Stefu, Nicoleta; Neculae, Adrian; Strambeanu, Nicolae

    2017-01-01

Air pollution is known to have many adverse effects, among which those on human health are considered the most important. Healthy people of all ages can be adversely affected by high levels of air pollutants. Nanoparticles can be considered among the most harmful of all pollutants, as they can penetrate straight into the lungs and blood stream. Their role in the aging process has also recently been revealed. In Romania, in practically all important urban areas (Bucureşti, Iaşi, Timişoara, Braşov, Baia Mare, etc.) the daily limit values for airborne particulate matter are exceeded, so more effort in controlling air quality is required, along with more research and policies with a positive impact on reducing pollutant concentrations in air. The approaches that have been developed to assess the air quality and health impacts of pollution sources are based on analytical methods such as source apportionment, factor analysis, and the measurement of source-relevant indicator compounds. The goal of the present study is to offer preliminary but relevant information on the particulate matter distribution in the city of Timisoara, Romania. Measurements of inhalable coarse and fine particles in the two areas of the city most affected by industrial particulate emissions were performed on days with various meteorological conditions. Meteorological parameters for the specific measurement days were recorded (wind speed and direction, humidity, temperature, pressure, etc.) and the influence of these parameters on particulate matter dispersion was studied. The results show that the meteorological conditions cause differences between airborne particulate matter distributions on different days in the same zones. Measurements were made in the northern and southern areas of the city of Timisoara because previous results had shown high levels of airborne particulate matter in these areas.

  2. Acknowledging patient heterogeneity in economic evaluation : a systematic literature review.

    PubMed

    Grutters, Janneke P C; Sculpher, Mark; Briggs, Andrew H; Severens, Johan L; Candel, Math J; Stahl, James E; De Ruysscher, Dirk; Boer, Albert; Ramaekers, Bram L T; Joore, Manuela A

    2013-02-01

    Patient heterogeneity is the part of variability that can be explained by certain patient characteristics (e.g. age, disease stage). Population reimbursement decisions that acknowledge patient heterogeneity could potentially save money and increase population health. To date, however, economic evaluations pay only limited attention to patient heterogeneity. The objective of the present paper is to provide a comprehensive overview of the current knowledge regarding patient heterogeneity within economic evaluation of healthcare programmes. A systematic literature review was performed to identify methodological papers on the topic of patient heterogeneity in economic evaluation. Data were obtained using a keyword search of the PubMed database and manual searches. Handbooks were also included. Relevant data were extracted regarding potential sources of patient heterogeneity, in which of the input parameters of an economic evaluation these occur, methods to acknowledge patient heterogeneity and specific concerns associated with this acknowledgement. A total of 20 articles and five handbooks were included. The relevant sources of patient heterogeneity (demographics, preferences and clinical characteristics) and the input parameters where they occurred (baseline risk, treatment effect, health state utility and resource utilization) were combined in a framework. Methods were derived for the design, analysis and presentation phases of an economic evaluation. Concerns related mainly to the danger of false-positive results and equity issues. By systematically reviewing current knowledge regarding patient heterogeneity within economic evaluations of healthcare programmes, we provide guidance for future economic evaluations. Guidance is provided on which sources of patient heterogeneity to consider, how to acknowledge them in economic evaluation and potential concerns. 
The improved acknowledgement of patient heterogeneity in future economic evaluations may well improve the efficiency of healthcare.

  3. Investigations on Cs-free alternatives for negative ion formation in a low pressure hydrogen discharge at ion source relevant parameters

    NASA Astrophysics Data System (ADS)

    Kurutz, U.; Friedl, R.; Fantz, U.

    2017-07-01

Caesium (Cs) is applied in high-power negative hydrogen ion sources to reduce the work function of a converter surface and thus enable efficient negative ion surface formation. Inherent drawbacks in the usage of this reactive alkali metal motivate the search for Cs-free alternative materials for neutral beam injection systems in fusion research. In view of a future DEMOnstration power plant, a suitable material should provide a high negative ion formation efficiency and comply with the RAMI issues of the system: reliability, availability, maintainability, inspectability. Promising candidates, such as low work function materials (molybdenum doped with lanthanum (MoLa) and LaB6) as well as different non-doped and boron-doped diamond samples, were investigated in this context at identical, ion source relevant parameters at the laboratory experiment HOMER. Negative ion densities were measured above the samples by means of laser photodetachment and compared with two reference cases: pure negative ion volume formation, with negative ion densities of about 1×10^15 m^-3, and H- surface production using an in situ caesiated stainless steel sample, which yields 2.5 times higher densities. Compared to pure volume production, none of the diamond samples exhibited a measurable increase in H- densities, while showing clear indications of plasma-induced erosion. In contrast, both MoLa and LaB6 produced systematically higher densities (MoLa: ×1.60, LaB6: ×1.43). The difference from caesiation can be attributed to the higher work functions of MoLa and LaB6, which are expected to be about 3 eV for both, compared to 2.1 eV for a caesiated surface.

  4. Problems encountered with the use of simulation in an attempt to enhance interpretation of a secondary data source in epidemiologic mental health research

    PubMed Central

    2010-01-01

    Background The longitudinal epidemiology of major depressive episodes (MDE) is poorly characterized in most countries. Some potentially relevant data sources may be underutilized because they are not conducive to estimating the most salient epidemiologic parameters. An available data source in Canada provides estimates that are potentially valuable, but that are difficult to apply in clinical or public health practice. For example, weeks depressed in the past year is assessed in this data source whereas episode duration would be of more interest. The goal of this project was to derive, using simulation, more readily interpretable parameter values from the available data. Findings The data source was a Canadian longitudinal study called the National Population Health Survey (NPHS). A simulation model representing the course of depressive episodes was used to reshape estimates deriving from binary and ordinal logistic models (fit to the NPHS data) into equations more capable of informing clinical and public health decisions. Discrete event simulation was used for this purpose. Whereas the intention was to clarify a complex epidemiology, the models themselves needed to become excessively complex in order to provide an accurate description of the data. Conclusions Simulation methods are useful in circumstances where a representation of a real-world system has practical value. In this particular scenario, the usefulness of simulation was limited both by problems with the data source and by inherent complexity of the underlying epidemiology. PMID:20796271

  5. UV fatigue investigations with non-destructive tools in silica

    NASA Astrophysics Data System (ADS)

    Natoli, Jean-Yves; Beaudier, Alexandre; Wagner, Frank R.

    2017-08-01

    A fatigue effect is often observed under multiple laser irradiations, especially in the UV. This decrease of the laser-induced damage threshold (LIDT) is a critical parameter for laser sources with high repetition rates and long required lifetimes, as in space applications at 355 nm. A further challenge is the replacement of excimer lasers by solid-state laser sources, which requires drastically improving the lifetime of optical materials at 266 nm. The main applications of these sources are material surface nanostructuring, spectroscopy and medical surgery. In this work we focus on understanding the laser-matter interaction at 266 nm in silica in order to predict the lifetime of components, and we study the parameters linked to these lifetimes to provide keys for improvement to material suppliers. To study the mechanisms involved in multiple irradiations, an interesting approach is to follow the evolution of fluorescence, in order to observe the first stages of material change just before breakdown. We will show that it is sometimes possible to estimate the lifetime of a component from the fluorescence measurement alone, saving time and materials. Moreover, the data from these diagnostics provide relevant information to highlight "defects" induced by multiple laser irradiations.

  6. Turbulent mass inhomogeneities induced by a point-source

    NASA Astrophysics Data System (ADS)

    Thalabard, Simon

    2018-03-01

    We describe how turbulence distributes tracers away from a localized source of injection, and analyze how the spatial inhomogeneities of the concentration field depend on the amount of randomness in the injection mechanism. For that purpose, we contrast the mass correlations induced by purely random injections with those induced by continuous injections in the environment. Using the Kraichnan model of turbulent advection, whereby the underlying velocity field is assumed to be delta-correlated in time, we explicitly identify scaling regions for the statistics of the mass contained within a shell of radius r located at a distance ρ from the source. The two key parameters are found to be (i) the ratio s², between the absolute and the relative timescales of dispersion, and (ii) the ratio Λ between the size of the cloud and its distance from the source. When the injection is random, only the former is relevant, as previously shown by Celani et al (2007 J. Fluid Mech. 583 189–98) in the case of an incompressible fluid. It is argued that the space partition in terms of s² and Λ is a robust feature of the injection mechanism itself, which should remain relevant beyond the Kraichnan model. This is for instance the case in a generalized version of the model, where the absolute dispersion is prescribed to be ballistic rather than diffusive.

  7. 3D numerical simulations of negative hydrogen ion extraction using realistic plasma parameters, geometry of the extraction aperture and full 3D magnetic field map

    NASA Astrophysics Data System (ADS)

    Mochalskyy, S.; Wünderlich, D.; Ruf, B.; Franzen, P.; Fantz, U.; Minea, T.

    2014-02-01

    Decreasing the co-extracted electron current while keeping the negative ion (NI) current sufficiently high is a crucial issue in the development of the plasma source system for the ITER Neutral Beam Injector. To support finding the best extraction conditions, the 3D Particle-in-Cell Monte Carlo Collision electrostatic code ONIX (Orsay Negative Ion eXtraction) has been developed. Close collaboration with experiments and other numerical models allows performing realistic simulations with relevant input parameters: plasma properties, geometry of the extraction aperture, full 3D magnetic field map, etc. For the first time, ONIX has been benchmarked against the commercial positive-ion tracing code KOBRA3D, with very good agreement in terms of meniscus position and depth. Simulations of NI extraction with different e/NI ratios in the bulk plasma show that direct extraction of surface-produced NIs is highly relevant for obtaining extracted NI currents comparable to the experimental results from the BATMAN testbed.

  8. Influences of rotation and thermophoresis on MHD peristaltic transport of Jeffrey fluid with convective conditions and wall properties

    NASA Astrophysics Data System (ADS)

    Hayat, T.; Rafiq, M.; Ahmad, B.

    2016-07-01

    This article aims to predict the effects of convective conditions and particle deposition on the peristaltic transport of Jeffrey fluid in a channel. The whole system is in a rotating frame of reference. The channel walls are taken to be flexible, and the fluid is electrically conducting in the presence of a uniform magnetic field. A non-uniform heat source/sink parameter is also considered, as is mass transfer with chemical reaction. The relevant equations for the problem under consideration are first modelled and then simplified using the lubrication approach. The resulting equations for stream function and temperature are solved exactly, whereas the mass transfer equation is solved numerically. The impacts of the various parameters appearing in the solutions are carefully analyzed.

  9. On the meniscus formation and the negative hydrogen ion extraction from ITER neutral beam injection relevant ion source

    NASA Astrophysics Data System (ADS)

    Mochalskyy, S.; Wünderlich, D.; Ruf, B.; Fantz, U.; Franzen, P.; Minea, T.

    2014-10-01

    The development of a large area (Asource,ITER = 0.9 × 2 m2) hydrogen negative ion (NI) source constitutes a crucial step in the construction of the neutral beam injectors of the international fusion reactor ITER. To understand the plasma behaviour in the boundary layer close to the extraction system, the 3D PIC MCC code ONIX is exploited. A direct cross-checked analysis of simulation and experimental results from the ITER-relevant BATMAN source testbed, with a smaller area (Asource,BATMAN ≈ 0.32 × 0.59 m2), has been conducted for a low-perveance beam for which a full set of plasma parameters is available. ONIX has been partially benchmarked by comparison to results obtained with the commercial particle tracing code for positive ion extraction, KOBRA3D; very good agreement has been found in terms of meniscus position and shape for simulations at different plasma densities. The influence of the initial plasma composition on the final meniscus structure was then investigated for NIs. As expected from the Child-Langmuir law, the results show that not only the extraction potential plays a crucial role in the meniscus formation, but also the initial plasma density and its electronegativity. For the given parameters, the calculated meniscus is located a few mm downstream of the plasma grid aperture, provoking a direct NI extraction. Most of the surface-produced NIs do not reach the plasma bulk, but move directly towards the extraction grid, guided by the extraction field. Even for an artificially increased electronegativity of the bulk plasma, and even if a complete ion-ion plasma is assumed, the extracted NI current from this region is low. This indicates that direct extraction of surface-produced ions must be present in order to obtain a sufficiently high extracted NI current density. The calculated extracted currents, both ions and electrons, agree rather well with the experiment.
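    The Child-Langmuir law invoked above sets the space-charge-limited current density across the extraction gap. A minimal sketch of that estimate; the 9 kV gap voltage and 6 mm gap width below are hypothetical numbers chosen only for illustration, not values from the paper:

```python
import math

EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19  # elementary charge, C
M_HMINUS = 1.675e-27        # approximate H- ion mass, kg

def child_langmuir_j(voltage_v, gap_m, mass_kg=M_HMINUS):
    """Space-charge-limited current density J = (4/9) eps0 sqrt(2e/m) V^1.5 / d^2."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / mass_kg) \
        * voltage_v**1.5 / gap_m**2

# Illustrative numbers: 9 kV extraction voltage over a 6 mm gap
j = child_langmuir_j(9e3, 6e-3)
print(f"space-charge limit: {j:.0f} A/m^2 ({j / 10:.0f} mA/cm^2)")
```

    The limit scales as V^3/2 and 1/d², which is why the meniscus position (the effective emitting surface) is so sensitive to the extraction potential and plasma density.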

  10. Statistics of optimal information flow in ensembles of regulatory motifs

    NASA Astrophysics Data System (ADS)

    Crisanti, Andrea; De Martino, Andrea; Fiorentino, Jonathan

    2018-02-01

    Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Since the mid-2000s, such optima have been well characterized in several biologically relevant cases. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the "capacity") achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N , (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.
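    Maximizing mutual information over the input distribution, as described above, is the channel-capacity problem; for a discrete channel it can be solved numerically with the Blahut-Arimoto algorithm. This generic sketch is not the authors' field-theoretic calculation; it is verified on a binary symmetric channel, whose capacity 1 - H2(p) is known in closed form:

```python
import numpy as np

def blahut_arimoto(W, tol=1e-10, max_iter=5000):
    """Capacity (in bits) of a discrete channel W[x, y] = p(y|x)."""
    p = np.full(W.shape[0], 1.0 / W.shape[0])  # start from uniform input
    for _ in range(max_iter):
        q = p @ W                              # induced output distribution
        # row-wise KL terms, with the convention 0*log(0) = 0
        log_ratio = np.where(W > 0, np.log(np.where(W > 0, W, 1.0) / q), 0.0)
        d = np.exp((W * log_ratio).sum(axis=1))
        p_next = p * d / (p * d).sum()
        if np.abs(p_next - p).max() < tol:
            p = p_next
            break
        p = p_next
    q = p @ W
    log2_ratio = np.where(W > 0, np.log2(np.where(W > 0, W, 1.0) / q), 0.0)
    return float((p[:, None] * W * log2_ratio).sum())

# Binary symmetric channel with crossover 0.1: capacity = 1 - H2(0.1) ≈ 0.531 bits
W = np.array([[0.9, 0.1], [0.1, 0.9]])
print(blahut_arimoto(W))
```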

  11. Near real time water quality monitoring of Chivero and Manyame lakes of Zimbabwe

    NASA Astrophysics Data System (ADS)

    Muchini, Ronald; Gumindoga, Webster; Togarepi, Sydney; Pinias Masarira, Tarirai; Dube, Timothy

    2018-05-01

    Zimbabwe's water resources are under pressure from both point and non-point sources of pollution, hence the need for regular and synoptic assessment. In-situ and laboratory-based methods of water quality monitoring are point based and do not provide synoptic coverage of the lakes. This paper presents novel methods for retrieving water quality parameters in the Chivero and Manyame lakes, Zimbabwe, from remotely sensed imagery. The remotely sensed water quality parameters are validated against in-situ data. It also presents an application, developed in VB6, for automated retrieval of those parameters, as well as a web portal for disseminating the water quality information to relevant stakeholders. The web portal is developed using GeoServer, OpenLayers and HTML. Results show the spatial variation of water quality and an automated remote sensing and GIS system with a web front end to disseminate water quality information.

  12. Surface Current Density Mapping for Identification of Gastric Slow Wave Propagation

    PubMed Central

    Bradshaw, L. A.; Cheng, L. K.; Richards, W. O.; Pullan, A. J.

    2009-01-01

    The magnetogastrogram records clinically relevant parameters of the electrical slow wave of the stomach noninvasively. Besides slow wave frequency, gastric slow wave propagation velocity is a potentially useful clinical indicator of the state of health of gastric tissue, but it is a difficult parameter to determine from noninvasive bioelectric or biomagnetic measurements. We present a method for computing the surface current density (SCD) from multichannel magnetogastrogram recordings that allows computation of the propagation velocity of the gastric slow wave. A moving dipole source model with hypothetical as well as realistic biomagnetometer parameters demonstrates that while a relatively sparse array of magnetometer sensors is sufficient to compute a single average propagation velocity, more detailed information about spatial variations in propagation velocity requires higher density magnetometer arrays. Finally, the method is validated with simultaneous MGG and serosal EMG measurements in a porcine subject. PMID:19403355

  13. Cavity-Enhanced Raman Spectroscopy for Food Chain Management

    PubMed Central

    Sandfort, Vincenz; Goldschmidt, Jens; Wöllenstein, Jürgen

    2018-01-01

    Comprehensive food chain management requires the monitoring of many parameters including temperature, humidity, and multiple gases. The latter is highly challenging because no low-cost technology for the simultaneous chemical analysis of multiple gaseous components currently exists. This contribution proposes the use of cavity-enhanced Raman spectroscopy to enable online monitoring of all relevant components using a single laser source. A laboratory-scale setup is presented and characterized in detail. Power enhancement of the pump light is achieved in an optical resonator with a finesse exceeding 2500. A simulation of the light scattering behavior shows the influence of polarization on the spatial distribution of the Raman-scattered light. The setup is also used to measure three relevant showcase gases (carbon dioxide, oxygen and ethene) to demonstrate the feasibility of the approach. PMID:29495501
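    The resonant power enhancement quoted above follows directly from the mirror reflectivities. A toy plane-wave calculation for a lossless two-mirror cavity; the ~0.99875 reflectivities are hypothetical, chosen only to land near a finesse of 2500:

```python
import math

def cavity_figures(R1, R2):
    """Finesse and on-resonance circulating-power buildup of a lossless
    two-mirror cavity pumped through mirror 1 (plane-wave model)."""
    r1, r2 = math.sqrt(R1), math.sqrt(R2)           # field reflectivities
    finesse = math.pi * math.sqrt(r1 * r2) / (1.0 - r1 * r2)
    buildup = (1.0 - R1) / (1.0 - r1 * r2) ** 2     # P_circ / P_in at resonance
    return finesse, buildup

F, B = cavity_figures(0.99875, 0.99875)
print(f"finesse ≈ {F:.0f}, pump power enhanced ≈ {B:.0f}x")
```

    On resonance the buildup is close to finesse/π, so a finesse above 2500 corresponds to roughly 800-fold enhancement of the Raman pump power.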

  14. Vortices at Microwave Frequencies

    NASA Astrophysics Data System (ADS)

    Silva, Enrico; Pompeo, Nicola; Dobrovolskiy, Oleksandr V.

    2017-11-01

    The behavior of vortices at microwave frequencies is an extremely useful source of information on the microscopic parameters that enter the description of vortex dynamics. This feature has acquired particular relevance since the discovery of unconventional superconductors, such as the cuprates. Microwave investigation has since extended its field of application to many families of superconductors, including artificially nanostructured materials. It is then important to understand the basics of the physics of vortices moving at high frequency, as well as what information the experiments can yield (and what they cannot). The aim of this brief review is to introduce the reader to some basic aspects of the physics of vortices under a microwave electromagnetic field, and to guide them to an understanding of the experiments, also by means of the illustration of some relevant results.
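    A standard starting point for interpreting such measurements (not specific to this review) is the Gittleman-Rosenblum model, in which a pinned vortex driven at angular frequency ω has complex resistivity ρ_v(ω) = ρ_ff / (1 - i ω_p/ω), crossing over from reactive, pinning-dominated response to dissipative free flux flow around the depinning frequency ω_p. A sketch with arbitrary illustrative numbers:

```python
import math

def vortex_resistivity(omega, rho_ff, omega_p):
    """Gittleman-Rosenblum complex vortex resistivity (no creep, no mass term).
    rho_ff: flux-flow resistivity; omega_p: depinning angular frequency."""
    return rho_ff / (1.0 - 1j * omega_p / omega)

rho_ff = 1.0e-7                 # illustrative flux-flow resistivity, ohm*m
omega_p = 2 * math.pi * 5e9     # illustrative depinning frequency, rad/s

for f in (5e7, 5e9, 5e11):      # well below, at, and well above depinning
    rho = vortex_resistivity(2 * math.pi * f, rho_ff, omega_p)
    print(f"f = {f:.0e} Hz: Re(rho) = {rho.real:.2e}, Im(rho) = {rho.imag:.2e}")
```

    Fitting the measured real and imaginary parts against this form is one way experiments extract the pinning constant and flux-flow resistivity mentioned in the review.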

  15. Feature Vector Construction Method for IRIS Recognition

    NASA Astrophysics Data System (ADS)

    Odinokikh, G.; Fartukov, A.; Korobkin, M.; Yoo, J.

    2017-05-01

    One of the basic stages of the iris recognition pipeline is the iris feature vector construction procedure, which extracts the iris texture information relevant to subsequent comparison. A thorough investigation of feature vectors obtained from the iris showed that not all vector elements are equally relevant. Two characteristics determine the utility of a vector element: fragility and discriminability. Conventional iris feature extraction methods treat fragility as feature vector instability without regard to the nature of that instability. This work separates the sources of instability into natural and encoding-induced, which allows each source to be investigated independently. Based on this separation, a novel approach to iris feature vector construction is proposed. The approach consists of two steps: iris feature extraction using Gabor filtering with optimal parameters, and quantization with separately pre-optimized fragility thresholds. The proposed method has been tested on two different datasets of iris images captured under changing environmental conditions. The results show that the proposed method surpasses all methods considered as prior art in recognition accuracy on both datasets.
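    The two-step scheme (Gabor filtering, then quantization gated by fragility thresholds) can be illustrated in one dimension. This is a toy reconstruction, not the authors' implementation; the filter parameters and the magnitude-based fragility criterion are assumptions:

```python
import numpy as np

def iris_code_1d(row, freq=0.08, sigma=10.0, fragile_quantile=0.25):
    """Encode one normalized iris row: complex Gabor response -> 2-bit phase
    quadrant, plus a mask flagging fragile (low-magnitude) elements."""
    t = np.arange(-4 * sigma, 4 * sigma + 1)
    gabor = np.exp(-t**2 / (2 * sigma**2)) * np.exp(2j * np.pi * freq * t)
    resp = np.convolve(row - row.mean(), gabor, mode="same")
    # quantization: one bit each for the sign of the real and imaginary parts
    bits = np.stack([(resp.real >= 0), (resp.imag >= 0)]).astype(np.uint8)
    # elements with a small response magnitude flip easily -> mark as fragile
    stable = np.abs(resp) >= np.quantile(np.abs(resp), fragile_quantile)
    return bits, stable

rng = np.random.default_rng(0)
row = rng.normal(size=256)          # stand-in for a normalized iris row
bits, stable = iris_code_1d(row)
print(bits.shape, float(stable.mean()))
```

    In matching, only bit positions marked stable in both codes would contribute to the Hamming distance, which is the usual way fragility masks are applied.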

  16. Size scaling of negative hydrogen ion sources for fusion

    NASA Astrophysics Data System (ADS)

    Fantz, U.; Franzen, P.; Kraus, W.; Schiesko, L.; Wimmer, C.; Wünderlich, D.

    2015-04-01

    The RF-driven negative hydrogen ion source (H-, D-) for the international fusion experiment ITER has a width of 0.9 m and a height of 1.9 m and is based on a ⅛ scale prototype source that has been in operation at the IPP test facilities BATMAN and MANITU for many years. Among the challenges in meeting the required parameters in a caesiated source at a source pressure of 0.3 Pa or less is the size scaling by a factor of eight. As an intermediate step, a ½ scale ITER source went into operation at the IPP test facility ELISE, with first plasma in February 2013. The experience and results gained so far at ELISE allow a size scaling study from the prototype source towards the ITER-relevant size, in which operational issues, physical aspects and source performance are addressed, highlighting differences as well as similarities. The most ITER-relevant results are: low pressure operation down to 0.2 Pa is possible without problems; the magnetic filter field created by a current in the plasma grid is sufficient to reduce the electron temperature below the target value of 1 eV and, together with the bias applied between the differently shaped bias plate and the plasma grid, to reduce the amount of co-extracted electrons. An asymmetry of the co-extracted electron currents in the two grid segments is measured, varying strongly with filter field and bias. Contrary to the prototype source, a distinct plasma drift in the vertical direction is not observed. As in the prototype source, the performance in deuterium is limited by the amount of co-extracted electrons in short as well as in long pulse operation. Caesium conditioning is much harder in deuterium than in hydrogen, for which fast and reproducible conditioning is achieved. First estimates reveal a caesium consumption comparable to that of the prototype source despite the larger size.

  17. Exploring parameter effects on the economic outcomes of groundwater-based developments in remote, low-resource settings

    NASA Astrophysics Data System (ADS)

    Abramson, Adam; Adar, Eilon; Lazarovitch, Naftali

    2014-06-01

    Groundwater is often the most or only feasible safe drinking water source in remote, low-resource areas, yet the economics of its development have not been systematically outlined. We applied AWARE (Assessing Water Alternatives in Remote Economies), a recently developed Decision Support System, to investigate the costs and benefits of groundwater access and abstraction for non-networked, rural supplies. Synthetic profiles of community water services (n = 17,962), defined across 13 parameters' values and ranges relevant to remote areas, were applied to the decision framework, and the parameter effects on economic outcomes were investigated. Regressions and analysis of output distributions indicate that the most important factors determining the cost of water improvements include the technological approach, the water service target, hydrological parameters, and population density. New source construction is less cost-effective than the use or improvement of existing wells, but necessary for expanding access to isolated households. We also explored three financing approaches - willingness-to-pay, -borrow, and -work - and found that they significantly impact the prospects of achieving demand-driven cost recovery. The net benefit under willingness to work, in which water infrastructure is coupled to community irrigation and cash payments replaced by labor commitments, is impacted most strongly by groundwater yield and managerial factors. These findings suggest that the cost-benefit dynamics of groundwater-based water supply improvements vary considerably by many parameters, and that the relative strengths of different development strategies may be leveraged for achieving optimal outcomes.

  18. Model Parameter Variability for Enhanced Anaerobic Bioremediation of DNAPL Source Zones

    NASA Astrophysics Data System (ADS)

    Mao, X.; Gerhard, J. I.; Barry, D. A.

    2005-12-01

    The objective of the Source Area Bioremediation (SABRE) project, an international collaboration of twelve companies, two government agencies and three research institutions, is to evaluate the performance of enhanced anaerobic bioremediation for the treatment of chlorinated ethene source areas containing dense non-aqueous phase liquids (DNAPL). This 4-year, $5.7 million research effort focuses on a pilot-scale demonstration of enhanced bioremediation at a trichloroethene (TCE) DNAPL field site in the United Kingdom, and includes a significant program of laboratory and modelling studies. Prior to field implementation, a large-scale, multi-laboratory microcosm study was performed to determine the optimal system properties to support dehalogenation of TCE in site soil and groundwater. This statistically based suite of experiments measured the influence of key variables (electron donor, nutrient addition, bioaugmentation, TCE concentration and sulphate concentration) in promoting the reductive dechlorination of TCE to ethene. In addition, a comprehensive biogeochemical numerical model was developed for simulating the anaerobic dehalogenation of chlorinated ethenes. A reduced version of this model was combined with a parameter estimation method based on fitting of the experimental results. Each of over 150 individual microcosm calibrations involved matching predicted and observed time-varying concentrations of all chlorinated compounds. This study focuses on an analysis of this suite of fitted model parameter values, including the statistical correlation between parameters typically employed in standard Michaelis-Menten type rate descriptions (e.g., maximum dechlorination rates, half-saturation constants) and the key experimental variables. The analysis provides insight into the degree to which aqueous-phase TCE and cis-DCE inhibit dechlorination of less-chlorinated compounds. Overall, this work provides a database of the numerical modelling parameters typically employed for simulating TCE dechlorination over a range of system conditions (e.g., bioaugmented, high TCE concentrations, etc.). The significance of the observed parameter variability is illustrated with one-dimensional simulations of enhanced anaerobic bioremediation of residual TCE DNAPL.
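    The inhibition effect discussed above is commonly written as competitive inhibition in the Michaelis-Menten rate law: the dechlorination rate of a daughter compound such as cis-DCE becomes r = k_max · X · C / (K_s (1 + I/K_I) + C), with the parent (e.g. TCE) concentration I raising the effective half-saturation constant. A sketch with hypothetical parameter values, not the fitted ones from the study:

```python
def dechlorination_rate(C, X, k_max, Ks, inhibitor=0.0, Ki=float("inf")):
    """Michaelis-Menten rate with competitive inhibition by a parent compound.
    C: substrate conc., X: biomass conc., inhibitor: parent compound conc."""
    return k_max * X * C / (Ks * (1.0 + inhibitor / Ki) + C)

# Hypothetical values: cis-DCE at 0.2 mM, biomass 10 mg/L
r_free = dechlorination_rate(C=0.2, X=10.0, k_max=0.05, Ks=0.1)
r_inhib = dechlorination_rate(C=0.2, X=10.0, k_max=0.05, Ks=0.1,
                              inhibitor=0.5, Ki=0.1)  # TCE still present
print(r_free, r_inhib)  # the parent compound slows daughter turnover
```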

  19. Evaluation of power transfer efficiency for a high power inductively coupled radio-frequency hydrogen ion source

    NASA Astrophysics Data System (ADS)

    Jain, P.; Recchia, M.; Cavenago, M.; Fantz, U.; Gaio, E.; Kraus, W.; Maistrello, A.; Veltri, P.

    2018-04-01

    Neutral beam injection (NBI) for plasma heating and current drive is necessary for the International Thermonuclear Experimental Reactor (ITER) tokamak. Due to its various advantages, a radio frequency (RF) driven plasma source was selected as the reference ion source for the ITER heating NBI. The ITER-relevant RF negative ion sources are inductively coupled (IC) devices whose operational working frequency has been chosen to be 1 MHz; they are characterized by a high RF power density (˜9.4 W cm-3) and a low operational pressure (around 0.3 Pa). The RF field is produced by a coil around a cylindrical chamber, generating a plasma which then expands inside the source chamber. This paper recalls the concepts on which a methodology is developed to evaluate the efficiency of RF power transfer to a hydrogen plasma. This efficiency is then analyzed as a function of the working frequency and of other operating source and plasma parameters. The study is applied to a high power IC RF hydrogen ion source similar to one simplified driver of the ELISE source (half the size of the ITER NBI source).
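    A common way to quantify RF power transfer efficiency in such sources (a generic method, not necessarily the paper's) is to model coil and plasma as series resistances and compare the loaded and unloaded (vacuum) resistance seen by the generator at the same coil current. The numbers below are invented for illustration:

```python
def transfer_efficiency(p_loaded_w, p_vacuum_w, coil_current_a):
    """Fraction of generator power dissipated in the plasma, from the
    equivalent series resistances R = P / I^2 with and without plasma."""
    r_loaded = p_loaded_w / coil_current_a**2   # coil circuit + plasma load
    r_vacuum = p_vacuum_w / coil_current_a**2   # coil circuit losses only
    return (r_loaded - r_vacuum) / r_loaded

# Illustrative: 60 kW with plasma vs 40 kW in vacuum at 200 A coil current
eta = transfer_efficiency(60e3, 40e3, 200.0)
print(f"power transfer efficiency ≈ {eta:.2f}")
```

    The frequency dependence enters through both resistances: the skin-effect losses in the coil and the plasma's equivalent resistance each vary with the working frequency, which is why the efficiency study sweeps it.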

  20. Partial and specific source memory for faces associated to other- and self-relevant negative contexts.

    PubMed

    Bell, Raoul; Giang, Trang; Buchner, Axel

    2012-01-01

    Previous research has shown a source memory advantage for faces presented in negative contexts. As yet it remains unclear whether participants remember the specific type of context in which the faces were presented or whether they can only remember that the face was associated with negative valence. In the present study, participants saw faces together with descriptions of two different types of negative behaviour and neutral behaviour. In Experiment 1, we examined whether the participants were able to discriminate between two types of other-relevant negative context information (cheating and disgusting behaviour) in a source memory test. In Experiment 2, we assessed source memory for other-relevant negative (threatening) context information (other-aggressive behaviour) and self-relevant negative context information (self-aggressive behaviour). A multinomial source memory model was used to separately assess partial source memory for the negative valence of the behaviour and specific source memory for the particular type of negative context the face was associated with. In Experiment 1, source memory was specific for the particular type of negative context presented (i.e., cheating or disgusting behaviour). Experiment 2 showed that source memory for other-relevant negative information was more specific than source memory for self-relevant information. Thus, emotional source memory may vary in specificity depending on the degree to which the negative emotional context is perceived as threatening.

  1. Effective atomic numbers and electron densities of some human tissues and dosimetric materials for mean energies of various radiation sources relevant to radiotherapy and medical applications

    NASA Astrophysics Data System (ADS)

    Kurudirek, Murat

    2014-09-01

    Effective atomic numbers, Zeff, and electron densities, neff, are convenient parameters used to characterise the radiation response of a multi-element material in many technical and medical applications. Accurate values of these physical parameters provide essential data in medical physics. In the present study, the effective atomic numbers and electron densities have been calculated for some human tissues and dosimetric materials, namely Adipose Tissue (ICRU-44), Bone Cortical (ICRU-44), Brain Grey/White Matter (ICRU-44), Breast Tissue (ICRU-44), Lung Tissue (ICRU-44), Soft Tissue (ICRU-44), LiF TLD-100H, TLD-100, Water, Borosilicate Glass, PAG (Gel Dosimeter), Fricke (Gel Dosimeter) and OSL (Aluminium Oxide), using the mean photon energies, Em, of various radiation sources. The radiation sources used are Pd-103, Tc-99, Ra-226, I-131, Ir-192, Co-60, 30 kVp, 40 kVp, 50 kVp (Intrabeam, Carl Zeiss Meditec) and 6 MV (Mohan-6 MV) sources. The Em values were then used to calculate Zeff and neff of the tissues and dosimetric materials for each source. Different calculation methods for Zeff, such as the direct method, the interpolation method and the Auto-Zeff computer program, were used, and agreements and disagreements between the methods are presented and discussed. At higher Em values the agreement between the adopted methods is quite satisfactory (difference < 5%).
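    Where photoelectric absorption dominates, a classic shortcut to the effective atomic number is the Mayneord power-law formula Zeff = (Σ αᵢ Zᵢ^2.94)^(1/2.94), with αᵢ the electron fraction of element i; the electron density follows from n_e = N_A Σ wᵢ Zᵢ/Aᵢ. Note this power-law approximation is not the direct or interpolation method compared in the abstract; it is shown only as a simple baseline, evaluated here for water:

```python
AVOGADRO = 6.02214076e23  # 1/mol

def zeff_neff(composition):
    """composition: list of (weight_fraction, Z, A) tuples.
    Returns (power-law Zeff, electrons per gram)."""
    per_gram = [(w * z / a, z) for w, z, a in composition]  # mol e- per gram
    total = sum(m for m, _ in per_gram)
    alphas = [(m / total, z) for m, z in per_gram]          # electron fractions
    zeff = sum(a * z**2.94 for a, z in alphas) ** (1.0 / 2.94)
    return zeff, AVOGADRO * total

water = [(0.1119, 1, 1.008), (0.8881, 8, 15.999)]  # H, O weight fractions
zeff, ne = zeff_neff(water)
print(f"water: Zeff ≈ {zeff:.2f}, n_e ≈ {ne:.3e} electrons/g")
```

    The familiar textbook value Zeff ≈ 7.4 for water drops out; energy-dependent methods such as Auto-Zeff deviate from this single number as the mean photon energy moves away from the photoelectric regime.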

  2. Harmonic cavities and the transverse mode-coupling instability driven by a resistive wall

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venturini, M.

    The effect of rf harmonic cavities on the transverse mode-coupling instability (TMCI) is still not very well understood. We offer a fresh perspective on the problem by proposing a new numerical method for mode analysis and investigating a regime of potential interest to the new generation of light sources where resistive wall is the dominant source of transverse impedance. When the harmonic cavities are tuned for maximum flattening of the bunch profile we demonstrate that at vanishing chromaticities the transverse single-bunch motion is unstable at any current, with growth rate that in the relevant range scales as the 6th power of the current. With these assumptions and radiation damping included, we find that for machine parameters typical of 4th-generation light sources the presence of harmonic cavities could reduce the instability current threshold by more than a factor two.

  3. Harmonic cavities and the transverse mode-coupling instability driven by a resistive wall

    DOE PAGES

    Venturini, M.

    2018-02-01

    The effect of rf harmonic cavities on the transverse mode-coupling instability (TMCI) is still not very well understood. We offer a fresh perspective on the problem by proposing a new numerical method for mode analysis and investigating a regime of potential interest to the new generation of light sources where resistive wall is the dominant source of transverse impedance. When the harmonic cavities are tuned for maximum flattening of the bunch profile we demonstrate that at vanishing chromaticities the transverse single-bunch motion is unstable at any current, with growth rate that in the relevant range scales as the 6th power of the current. With these assumptions and radiation damping included, we find that for machine parameters typical of 4th-generation light sources the presence of harmonic cavities could reduce the instability current threshold by more than a factor two.

  4. Effective pollutant emission heights for atmospheric transport modelling based on real-world information.

    PubMed

    Pregger, Thomas; Friedrich, Rainer

    2009-02-01

    Emission data needed as input for atmospheric models should not only be spatially and temporally resolved; another important feature is the effective emission height, which significantly influences modelled concentration values. Unfortunately this information, which is especially relevant for large point sources, is usually not available, and simple assumptions are often used in atmospheric models. As a contribution to improving knowledge of emission heights, this paper provides typical default values for the driving parameters stack height and flue gas temperature, velocity and flow rate for different industrial sources. The results were derived from an analysis of probably the most comprehensive database of real-world stack information existing in Europe, based on German industrial data. A bottom-up calculation of effective emission heights applying equations used in Gaussian dispersion models shows significant differences depending on source and air pollutant, and compared to approaches currently used for atmospheric transport modelling.
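    The Gaussian-model calculation referred to above combines the physical stack height with buoyant plume rise driven by the flue gas parameters. A sketch using the widely used Briggs final-rise formulas for neutral/unstable conditions (the form used in Gaussian models such as ISC3); the stack parameters below are invented examples, not values from the database:

```python
G = 9.81  # gravitational acceleration, m/s^2

def effective_height(h_stack, d_stack, v_exit, t_stack, t_air, u_wind):
    """Stack height plus Briggs buoyant final plume rise (neutral/unstable)."""
    # buoyancy flux parameter, m^4/s^3
    fb = G * v_exit * d_stack**2 * (t_stack - t_air) / (4.0 * t_stack)
    if fb < 55.0:
        dh = 21.425 * fb**0.75 / u_wind
    else:
        dh = 38.71 * fb**0.6 / u_wind
    return h_stack + dh

# e.g. 100 m stack, 3 m diameter, 15 m/s exit at 400 K into 288 K air, 5 m/s wind
h_eff = effective_height(100.0, 3.0, 15.0, 400.0, 288.0, 5.0)
print(f"effective emission height ≈ {h_eff:.0f} m")
```

    The example shows why the flue gas parameters matter: for a hot, fast source the plume rise can exceed the physical stack height, roughly doubling the effective emission height.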

  5. Harmonic cavities and the transverse mode-coupling instability driven by a resistive wall

    NASA Astrophysics Data System (ADS)

    Venturini, M.

    2018-02-01

    The effect of rf harmonic cavities on the transverse mode-coupling instability (TMCI) is still not very well understood. We offer a fresh perspective on the problem by proposing a new numerical method for mode analysis and investigating a regime of potential interest to the new generation of light sources where resistive wall is the dominant source of transverse impedance. When the harmonic cavities are tuned for maximum flattening of the bunch profile we demonstrate that at vanishing chromaticities the transverse single-bunch motion is unstable at any current, with growth rate that in the relevant range scales as the 6th power of the current. With these assumptions and radiation damping included, we find that for machine parameters typical of 4th-generation light sources the presence of harmonic cavities could reduce the instability current threshold by more than a factor two.

  6. Surgeon Reported Outcome Measure for Spine Trauma: An International Expert Survey Identifying Parameters Relevant for the Outcome of Subaxial Cervical Spine Injuries.

    PubMed

    Sadiqi, Said; Verlaan, Jorrit-Jan; Lehr, A Mechteld; Dvorak, Marcel F; Kandziora, Frank; Rajasekaran, S; Schnake, Klaus J; Vaccaro, Alexander R; Oner, F Cumhur

    2016-12-15

    International web-based survey. To identify the clinical and radiological parameters that spine surgeons consider most relevant when evaluating clinical and functional outcomes of subaxial cervical spine trauma patients. Although an outcome instrument that reflects the patients' perspective is imperative, there is also a need for a surgeon-reported outcome measure that adequately reflects the clinicians' perspective. A cross-sectional online survey was conducted among a selected number of spine surgeons from all five AOSpine International world regions. They were asked to rate the relevance of a compilation of 21 parameters, both for the short term (3 mo-2 yr) and the long term (≥2 yr), on a five-point scale. The responses were analyzed using descriptive statistics, frequency analysis, and the Kruskal-Wallis test. Of the 279 AOSpine International and International Spinal Cord Society members who received the survey, 108 (38.7%) participated in the study. Ten parameters were identified as relevant both for the short term and the long term by at least 70% of the participants. Neurological status, implant failure within 3 months, and patient satisfaction were most relevant. Bony fusion was the only parameter specific to the long term, whereas five parameters were identified for the short term only. The remaining six parameters were not deemed relevant. Minor differences were observed when analyzing the responses by world region or by the spine surgeons' degree of experience. The perspective of an international sample of highly experienced spine surgeons was explored regarding the most relevant parameters for evaluating and predicting outcomes of subaxial cervical spine trauma patients. These results form the basis for the development of a disease-specific surgeon-reported outcome measure, which will be a helpful tool in research and clinical practice. Level of Evidence: 4.

  7. Spatial and Temporal Evolution of Earthquake Dynamics: Case Study of the Mw 8.3 Illapel Earthquake, Chile

    NASA Astrophysics Data System (ADS)

    Yin, Jiuxun; Denolle, Marine A.; Yao, Huajian

    2018-01-01

    We develop a methodology that combines compressive sensing backprojection (CS-BP) and source spectral analysis of teleseismic P waves to provide metrics relevant to earthquake dynamics of large events. We improve the CS-BP method by an autoadaptive source grid refinement as well as a reference source adjustment technique to gain better spatial and temporal resolution of the locations of the radiated bursts. We also use a two-step source spectral analysis based on (i) simple theoretical Green's functions that include depth phases and water reverberations and on (ii) empirical P wave Green's functions. Furthermore, we propose a source spectrogram methodology that provides the temporal evolution of dynamic parameters such as radiated energy and falloff rates. Bridging backprojection and spectrogram analysis provides a spatial and temporal evolution of these dynamic source parameters. We apply our technique to the recent 2015 Mw 8.3 megathrust Illapel earthquake (Chile). The results from both techniques are consistent and reveal a depth-varying seismic radiation that is also found in other megathrust earthquakes. The low-frequency content of the seismic radiation is located in the shallow part of the megathrust, propagating unilaterally from the hypocenter toward the trench while most of the high-frequency content comes from the downdip part of the fault. Interpretation of multiple rupture stages in the radiation is also supported by the temporal variations of radiated energy and falloff rates. Finally, we discuss the possible mechanisms, either from prestress, fault geometry, and/or frictional properties to explain our observables. Our methodology is an attempt to bridge kinematic observations with earthquake dynamics.

  8. 40 CFR 63.10 - Recordkeeping and reporting requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... relevant records for such source of— (i) The occurrence and duration of each startup or shutdown when the startup or shutdown causes the source to exceed any applicable emission limitation in the relevant... startup or shutdown when the source exceeded applicable emission limitations in a relevant standard and...

  9. 40 CFR 63.10 - Recordkeeping and reporting requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... relevant records for such source of— (i) The occurrence and duration of each startup or shutdown when the startup or shutdown causes the source to exceed any applicable emission limitation in the relevant... startup or shutdown when the source exceeded applicable emission limitations in a relevant standard and...

  10. 40 CFR 63.10 - Recordkeeping and reporting requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... relevant records for such source of— (i) The occurrence and duration of each startup or shutdown when the startup or shutdown causes the source to exceed any applicable emission limitation in the relevant... startup or shutdown when the source exceeded applicable emission limitations in a relevant standard and...

  11. 40 CFR 63.10 - Recordkeeping and reporting requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... relevant records for such source of— (i) The occurrence and duration of each startup or shutdown when the startup or shutdown causes the source to exceed any applicable emission limitation in the relevant... startup or shutdown when the source exceeded applicable emission limitations in a relevant standard and...

  12. THE MAYAK WORKER DOSIMETRY SYSTEM (MWDS-2013) FOR INTERNALLY DEPOSITED PLUTONIUM: AN OVERVIEW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birchall, A.; Vostrotin, V.; Puncher, M.

    The Mayak Worker Dosimetry System (MWDS-2013) is a system for interpreting measurement data from Mayak workers from both internal and external sources. This paper is concerned with the calculation of annual organ doses for Mayak workers exposed to plutonium aerosols, where the measurement data consist mainly of the activity of plutonium in urine samples. The system utilises the latest biokinetic and dosimetric models and, unlike its predecessors, takes explicit account of uncertainties in both the measurement data and the model parameters. The aim of this paper is to describe the complete MWDS-2013 system (including model parameter values and their uncertainties), the methodology used (including all the relevant equations), and the assumptions made. Where necessary, supplementary papers which justify specific assumptions are cited.

  13. (2+1)-dimensional stars

    NASA Astrophysics Data System (ADS)

    Lubo, M.; Rooman, M.; Spindel, Ph.

    1999-02-01

    We investigate, in the framework of (2+1)-dimensional gravity, stationary rotationally symmetric gravitational sources of the perfect fluid type, embedded in a space of an arbitrary cosmological constant. We show that the matching conditions between the interior and exterior geometries imply restrictions on the physical parameters of the solutions. In particular, imposing finite sources and the absence of closed timelike curves privileges negative values of the cosmological constant, yielding exterior vacuum geometries of rotating black hole type. In the special case of static sources, we prove the complete integrability of the field equations and show that the sources' masses are bounded from above and, for a vanishing cosmological constant, generally equal to 1. We also discuss and illustrate the stationary configurations by explicitly solving the field equations for constant mass-energy densities. If the pressure vanishes, we recover as interior geometries Gödel-like metrics defined on causally well behaved domains, but with unphysical values of the mass to angular momentum ratio. The introduction of pressure in the sources cures the latter problem and leads to physically more relevant models.

  14. Application of all relevant feature selection for failure analysis of parameter-induced simulation crashes in climate models

    NASA Astrophysics Data System (ADS)

    Paja, W.; Wrzesień, M.; Niemiec, R.; Rudnicki, W. R.

    2015-07-01

    Climate models are extremely complex pieces of software. They reflect the best knowledge on the physical components of the climate; nevertheless, they contain several parameters which are too weakly constrained by observations and can potentially lead to a crash of the simulation. Recently, a study by Lucas et al. (2013) has shown that machine learning methods can be used for predicting which combinations of parameters can lead to a crash of the simulation, and hence which processes described by these parameters need refined analyses. In the current study we reanalyse the dataset used in this research using a different methodology. We confirm the main conclusion of the original study concerning the suitability of machine learning for the prediction of crashes. We show that only three of the eight parameters indicated in the original study as relevant for prediction of the crash are indeed strongly relevant, three others are relevant but redundant, and two are not relevant at all. We also show that the variance due to the split of data between training and validation sets has a large influence both on the accuracy of predictions and on the relative importance of variables; hence only a cross-validated approach can deliver a robust prediction of performance and relevance of variables.

  15. An Open-Source Bayesian Atmospheric Radiative Transfer (BART) Code, with Application to WASP-12b

    NASA Astrophysics Data System (ADS)

    Harrington, Joseph; Blecic, Jasmina; Cubillos, Patricio; Rojo, Patricio; Loredo, Thomas J.; Bowman, M. Oliver; Foster, Andrew S. D.; Stemm, Madison M.; Lust, Nate B.

    2015-01-01

    Atmospheric retrievals for solar-system planets typically fit, either with a minimizer or by eye, a synthetic spectrum to high-resolution (Δλ/λ ~ 1000-100,000) data with S/N > 100 per point. In contrast, exoplanet data often have S/N ~ 10 per point, and may have just a few points representing bandpasses larger than 1 um. To derive atmospheric constraints and robust parameter uncertainty estimates from such data requires a Bayesian approach. To date there are few investigators with the relevant codes, none of which are publicly available. We are therefore pleased to announce the open-source Bayesian Atmospheric Radiative Transfer (BART) code. BART uses a Bayesian phase-space explorer to drive a radiative-transfer model through the parameter phase space, producing the most robust estimates available for the thermal profile and chemical abundances in the atmosphere. We present an overview of the code and an initial application to Spitzer eclipse data for WASP-12b. We invite the community to use and improve BART via the open-source development site GitHub.com. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.

  16. An Open-Source Bayesian Atmospheric Radiative Transfer (BART) Code, and Application to WASP-12b

    NASA Astrophysics Data System (ADS)

    Harrington, Joseph; Blecic, Jasmina; Cubillos, Patricio; Rojo, Patricio M.; Loredo, Thomas J.; Bowman, Matthew O.; Foster, Andrew S.; Stemm, Madison M.; Lust, Nate B.

    2014-11-01

    Atmospheric retrievals for solar-system planets typically fit, either with a minimizer or by eye, a synthetic spectrum to high-resolution (Δλ/λ ~ 1000-100,000) data with S/N > 100 per point. In contrast, exoplanet data often have S/N ~ 10 per point, and may have just a few points representing bandpasses larger than 1 um. To derive atmospheric constraints and robust parameter uncertainty estimates from such data requires a Bayesian approach. To date there are few investigators with the relevant codes, none of which are publicly available. We are therefore pleased to announce the open-source Bayesian Atmospheric Radiative Transfer (BART) code. BART uses a Bayesian phase-space explorer to drive a radiative-transfer model through the parameter phase space, producing the most robust estimates available for the thermal profile and chemical abundances in the atmosphere. We present an overview of the code and an initial application to Spitzer eclipse data for WASP-12b. We invite the community to use and improve BART via the open-source development site GitHub.com. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.

  17. Experimental Investigation of Unsteady Thrust Augmentation Using a Speaker-Driven Jet

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Wernet, Mark P.; John, Wentworth T.

    2007-01-01

    An experimental investigation is described in which a simple speaker-driven jet was used as a pulsed thrust source (driver) for an ejector configuration. The objectives of the investigation were twofold. The first was to expand the experimental body of evidence showing that an unsteady thrust source, combined with a properly sized ejector generally yields higher thrust augmentation values than a similarly sized, steady driver of equivalent thrust. The second objective was to identify characteristics of the unsteady driver that may be useful for sizing ejectors, and for predicting the thrust augmentation levels that may be achieved. The speaker-driven jet provided a convenient source for the investigation because it is entirely unsteady (i.e., it has no mean velocity component) and because relevant parameters such as frequency, time-averaged thrust, and diameter are easily variable. The experimental setup will be described, as will the two main measurements techniques employed. These are thrust and digital particle imaging velocimetry of the driver. It will be shown that thrust augmentation values as high as 1.8 were obtained, that the diameter of the best ejector scaled with the dimensions of the emitted vortex, and that the so-called formation time serves as a useful dimensionless parameter by which to characterize the jet and predict performance.

  18. Using aerial images for establishing a workflow for the quantification of water management measures

    NASA Astrophysics Data System (ADS)

    Leuschner, Annette; Merz, Christoph; van Gasselt, Stephan; Steidl, Jörg

    2017-04-01

    Quantified landscape characteristics, such as morphology, land use or hydrological conditions, play an important role in hydrological investigations, as landscape parameters directly control the overall water balance. A powerful assimilation and geospatial analysis of remote sensing datasets in combination with hydrological modeling allows landscape parameters and water balances to be quantified efficiently. This study focuses on the development of a workflow to extract hydrologically relevant data from aerial image datasets and derived products in order to allow an effective parametrization of a hydrological model. Consistent and self-contained data sources are indispensable for achieving reasonable modeling results. In order to minimize uncertainties and inconsistencies, input parameters for modeling should, where possible, be extracted mainly from a single remote-sensing dataset. Here, aerial images have been chosen because of their high spatial and spectral resolution, which permits the extraction of various model-relevant parameters such as morphology, land use or artificial drainage systems. The methodological repertoire for extracting environmental parameters ranges from analyses of digital terrain models, through multispectral classification and segmentation of land-use distribution maps, to the mapping of artificial drainage systems based on spectral and visual inspection. The workflow has been tested for a mesoscale catchment area which forms a characteristic hydrological system of a young moraine landscape located in the state of Brandenburg, Germany. These datasets were used as input for multi-temporal hydrological modelling of water balances to detect and quantify anthropogenic and meteorological impacts. ArcSWAT, a GIS-implemented extension and graphical user input interface for the Soil and Water Assessment Tool (SWAT), was chosen. The results of this modeling approach provide the basis for anticipating future development of the hydrological system and for adapting water resource management decisions to system changes.

  19. Active Galactic Nuclei at All Wavelengths and from All Angles

    NASA Astrophysics Data System (ADS)

    Padovani, Paolo

    2017-11-01

    AGN are quite unique astronomical sources emitting over more than 20 orders of magnitude in frequency, with different electromagnetic bands providing windows on different sub-structures and their physics. They come in a large number of flavors only partially related to intrinsic differences. I highlight here the types of sources selected in different bands, the relevant selection effects and biases, and the underlying physical processes. I then look at the "big picture" by describing the most important parameters one needs to describe the variety of AGN classes and by discussing AGN at all frequencies in terms of their sky surface density. I conclude with a look at the most pressing open issues and the main new facilities, which will flood us with new data to tackle them.

  20. Active Galactic Nuclei at all wavelengths and from all angles

    NASA Astrophysics Data System (ADS)

    Padovani, Paolo

    2017-11-01

    AGN are quite unique astronomical sources emitting over more than twenty orders of magnitude in frequency, with different electromagnetic bands providing windows on different sub-structures and their physics. They come in a large number of flavors only partially related to intrinsic differences. I highlight here the types of sources selected in different bands, the relevant selection effects and biases, and the underlying physical processes. I then look at the "big picture" by describing the most important parameters one needs to describe the variety of AGN classes and by discussing AGN at all frequencies in terms of their sky surface density. I conclude with a look at the most pressing open issues and the main new facilities, which will flood us with new data to tackle them.

  1. Measuring X-Ray Polarization in the Presence of Systematic Effects: Known Background

    NASA Technical Reports Server (NTRS)

    Elsner, Ronald F.; O'Dell, Stephen L.; Weisskopf, Martin C.

    2012-01-01

    The prospects for accomplishing x-ray polarization measurements of astronomical sources have grown in recent years, after a hiatus of more than 37 years. Unfortunately, accompanying this long hiatus has been some confusion over the statistical uncertainties associated with x-ray polarization measurements of these sources. We have initiated a program to perform the detailed calculations that will offer insights into the uncertainties associated with x-ray polarization measurements. Here we describe a mathematical formalism for determining the 1- and 2-parameter errors in the magnitude and position angle of x-ray (linear) polarization in the presence of a (polarized or unpolarized) background. We further review relevant statistics including clearly distinguishing between the Minimum Detectable Polarization (MDP) and the accuracy of a polarization measurement.
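
    For context, the Minimum Detectable Polarization distinguished above is conventionally quoted at 99% confidence for a polarimeter with modulation factor $\mu$, source counting rate $R_S$, background counting rate $R_B$ and observation time $T$ (this standard expression is supplied here for orientation and is not reproduced from the abstract itself):

    ```latex
    \mathrm{MDP}_{99\%} = \frac{4.29}{\mu R_S}\sqrt{\frac{R_S + R_B}{T}}
    ```

    Note that the MDP characterizes detectability of any polarization at all, which is a different quantity from the accuracy of a measured polarization magnitude and position angle, the distinction the abstract emphasizes.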

  2. Impact of the Diamond Light Source on research in Earth and environmental sciences: current work and future perspectives

    PubMed Central

    Burke, Ian T.; Mosselmans, J. Frederick W.; Shaw, Samuel; Peacock, Caroline L.; Benning, Liane G.; Coker, Victoria S.

    2015-01-01

    Diamond Light Source Ltd celebrated its 10th anniversary as a company in December 2012 and has now accepted user experiments for over 5 years. This paper describes the current facilities available at Diamond and future developments that enhance its capacities with respect to the Earth and environmental sciences. A review of relevant research conducted at Diamond thus far is provided. This highlights how synchrotron-based studies have brought about important advances in our understanding of the fundamental parameters controlling highly complex mineral–fluid–microbe interface reactions in the natural environment. This new knowledge not only enhances our understanding of global biogeochemical processes, but also provides the opportunity for interventions to be designed for environmental remediation and beneficial use. PMID:25624516

  3. Laboratory simulation of the interaction between a tethered satellite system and the ionosphere

    NASA Astrophysics Data System (ADS)

    Vannaroni, G.; Giovi, R.; de Venuto, F.

    1992-10-01

    The authors report on the measurements performed in the IFSI/CNR plasma chamber at Frascati related to the laboratory investigation of the interaction between a plasma source and an ambient plasma of ionospheric type. Such an interaction is of relevant interest for the possibility of using electrodynamic tethered satellite systems, orbiting at ionospheric altitude, for generating electric power or propulsion in space. The interaction region was analyzed at various conditions of ambient magnetic field (0-0.5 G) and at different polarization levels of the plasma source (0-40 V). The plasma measurements were carried out with a diagnostic system using an array of Langmuir probes movable in the chamber, so that a map of the plasma parameters could be obtained under the different experimental conditions.

  4. Anthropometric measurements in Iranian men.

    PubMed

    Gharehdaghi, Jaber; Baazm, Maryam; Ghadipasha, Masoud; Solhi, Sadra; Toutounchian, Farhoud

    2018-01-01

    There is an inevitable need for data regarding anthropometric measurements of each community's population. These anthropometric data have various applications, including health assessment, industrial design, plastic and orthopedic surgery, nutritional studies, anatomical studies and forensic medicine investigations. Anthropometric parameters vary from race to race throughout the world; hence, providing an anthropometric profile model of residents of different geographic regions seems to be necessary. To our knowledge, there is no report of bone parameters of the Iranian population. The present study was carried out to provide data on anthropometric bone parameters of the Iranian population, as a basis for future relevant studies. We calculated most of the known anthropometric parameters, including those of the skull, mandible, clavicle, scapula, humerus, radius, ulna, sacrum, hip, femur, tibia and fibula, of 225 male corpses during a period of 2 years (2014-2016). Data are expressed as mean ± standard deviation. The results constitute the first documented report on the anthropometric bone measurement profile of the Iranian male population and can be considered a valuable source of data for future research on the Iranian population in this regard. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  5. Explicitly integrating parameter, input, and structure uncertainties into Bayesian Neural Networks for probabilistic hydrologic forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xuesong; Liang, Faming; Yu, Beibei

    2011-11-09

    Estimating the uncertainty of hydrologic forecasting is valuable to water resources and other relevant decision making processes. Recently, Bayesian Neural Networks (BNNs) have proved to be powerful tools for quantifying the uncertainty of streamflow forecasting. In this study, we propose a Markov Chain Monte Carlo (MCMC) framework to incorporate the uncertainties associated with input, model structure, and parameters into BNNs. This framework allows the structure of the neural networks to change by removing or adding connections between neurons, and enables scaling of input data by using rainfall multipliers. The results show that the new BNNs outperform BNNs that only consider uncertainties associated with parameters and model structure. Critical evaluation of the posterior distributions of neural network weights, number of effective connections, rainfall multipliers, and hyper-parameters shows that the assumptions held in our BNNs are not well supported. Further understanding of the characteristics of different uncertainty sources and inclusion of output error in the MCMC framework are expected to enhance the application of neural networks for uncertainty analysis of hydrologic forecasting.

  6. Multi-response calibration of a conceptual hydrological model in the semiarid catchment of Wadi al Arab, Jordan

    NASA Astrophysics Data System (ADS)

    Rödiger, T.; Geyer, S.; Mallast, U.; Merz, R.; Krause, P.; Fischer, C.; Siebert, C.

    2014-02-01

    A key factor for sustainable management of groundwater systems is the accurate estimation of groundwater recharge. Hydrological models are common and widely used tools for such estimations. As such models need to be calibrated against measured values, the absence of adequate data can be problematic. We present a nested multi-response calibration approach for a semi-distributed hydrological model in the semi-arid catchment of Wadi al Arab in Jordan, with sparsely available runoff data. The basic idea of the calibration approach is to use diverse observations in a nested strategy, in which sub-parts of the model are calibrated to various observation data types in a consecutive manner. First, the available data sources have to be screened for their information content on processes, e.g. whether a data source contains information on mean values or on spatial or temporal variability, for the entire catchment or only for sub-catchments. In a second step, the information content has to be mapped to the relevant model components which represent these processes. Then the data source is used to calibrate the respective subset of model parameters, while the remaining model parameters remain unchanged. This mapping is repeated for the other available data sources. In this study, the gauged spring discharge (GSD) method, flash-flood observations and data from the chloride mass balance (CMB) are used to derive plausible parameter ranges for the conceptual hydrological model J2000g. The water table fluctuation (WTF) method is used to validate the model. Results are compared against a benchmark model run using a priori parameter values from the literature. The estimated recharge rates of the calibrated model deviate by less than ±10% from the estimates derived from the WTF method. Larger differences are visible in years with high uncertainties in the rainfall input data. The performance of the calibrated model during validation is better than that of the model using only a priori parameter values. The model with a priori parameter values from the literature tends to overestimate recharge rates by up to 30%, particularly in the wet winter of 1991/1992. An overestimation of groundwater recharge, and hence of available water resources, clearly endangers reliable water resource management in water-scarce regions. The proposed nested multi-response approach may help to better predict water resources despite data scarcity.
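
    The chloride mass balance used as one of the calibration data sources above reduces, in its simplest steady-state form, to a one-line computation: recharge equals precipitation scaled by the ratio of chloride concentrations in precipitation and groundwater. The sketch below uses hypothetical semi-arid values, not figures from the study.

    ```python
    def cmb_recharge(precip_mm, cl_precip_mg_l, cl_groundwater_mg_l):
        """Steady-state chloride mass balance: R = P * Cl_p / Cl_gw.

        Assumes chloride is conservative and atmospheric deposition is the
        only chloride source; units are mm/yr for water, mg/L for chloride.
        """
        return precip_mm * cl_precip_mg_l / cl_groundwater_mg_l

    # Illustrative numbers only (not from the paper):
    r = cmb_recharge(precip_mm=450.0, cl_precip_mg_l=4.0, cl_groundwater_mg_l=60.0)
    print(r)  # recharge estimate in mm/yr
    ```

    Because groundwater chloride is enriched relative to rainfall wherever most precipitation evaporates, the ratio directly encodes the small recharge fraction typical of semi-arid catchments.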

  7. Probing ultra-fast processes with high dynamic range at 4th-generation light sources: Arrival time and intensity binning at unprecedented repetition rates.

    PubMed

    Kovalev, S; Green, B; Golz, T; Maehrlein, S; Stojanovic, N; Fisher, A S; Kampfrath, T; Gensch, M

    2017-03-01

    Understanding dynamics on ultrafast timescales enables unique and new insights into important processes in the materials and life sciences. In this respect, the fundamental pump-probe approach based on ultra-short photon pulses aims at the creation of stroboscopic movies. Performing such experiments at one of the many recently established accelerator-based 4th-generation light sources such as free-electron lasers or superradiant THz sources allows an enormous widening of the accessible parameter space for the excitation and/or probing light pulses. Compared to table-top devices, critical issues of this type of experiment are fluctuations of the timing between the accelerator and external laser systems and intensity instabilities of the accelerator-based photon sources. Existing solutions have so far been only demonstrated at low repetition rates and/or achieved a limited dynamic range in comparison to table-top experiments, while the 4th generation of accelerator-based light sources is based on superconducting radio-frequency technology, which enables operation at MHz or even GHz repetition rates. In this article, we present the successful demonstration of ultra-fast accelerator-laser pump-probe experiments performed at an unprecedentedly high repetition rate in the few-hundred-kHz regime and with a currently achievable optimal time resolution of 13 fs (rms). Our scheme, based on the pulse-resolved detection of multiple beam parameters relevant for the experiment, allows us to achieve an excellent sensitivity in real-world ultra-fast experiments, as demonstrated for the example of THz-field-driven coherent spin precession.
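
    The pulse-resolved arrival-time binning described above can be illustrated with a toy simulation: if each shot's pump-probe timing jitter is measured, the shots can be re-sorted into delay bins after the fact, recovering dynamics faster than the jitter itself. All numbers below are hypothetical and merely sketch the idea.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    shots = 50000
    jitter_fs = rng.normal(0.0, 100.0, shots)        # measured per-shot jitter (fs)
    signal = np.exp(-0.5 * (jitter_fs / 50.0) ** 2)  # true 50 fs-wide response
    signal += rng.normal(0.0, 0.05, shots)           # per-shot detector noise

    # Re-sort shots into 20 fs delay bins using the measured arrival times
    bins = np.arange(-300.0, 301.0, 20.0)
    idx = np.digitize(jitter_fs, bins)
    trace = np.array([signal[idx == i].mean() for i in range(1, len(bins))])
    # 'trace' recovers the 50 fs response despite 100 fs timing jitter;
    # naively averaging all shots at the nominal delay would wash it out.
    ```

    The same sorting logic extends to binning by any other pulse-resolved beam parameter, such as intensity.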

  8. Application of all-relevant feature selection for the failure analysis of parameter-induced simulation crashes in climate models

    NASA Astrophysics Data System (ADS)

    Paja, Wiesław; Wrzesien, Mariusz; Niemiec, Rafał; Rudnicki, Witold R.

    2016-03-01

    Climate models are extremely complex pieces of software. They reflect the best knowledge on the physical components of the climate; nevertheless, they contain several parameters, which are too weakly constrained by observations, and can potentially lead to a simulation crashing. Recently a study by Lucas et al. (2013) has shown that machine learning methods can be used for predicting which combinations of parameters can lead to the simulation crashing and hence which processes described by these parameters need refined analyses. In the current study we reanalyse the data set used in this research using different methodology. We confirm the main conclusion of the original study concerning the suitability of machine learning for the prediction of crashes. We show that only three of the eight parameters indicated in the original study as relevant for prediction of the crash are indeed strongly relevant, three others are relevant but redundant and two are not relevant at all. We also show that the variance due to the split of data between training and validation sets has a large influence both on the accuracy of predictions and on the relative importance of variables; hence only a cross-validated approach can deliver a robust prediction of performance and relevance of variables.
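
    The distinction drawn above between strongly relevant, redundant and irrelevant variables is commonly probed with "shadow" (permuted) features, as in Boruta-style all-relevant feature selection: a feature is kept only if it outscores the best purely random copy. The sketch below illustrates the idea on synthetic data and is not the authors' pipeline; the data, model and thresholds are assumptions for demonstration.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n = 400
    X_rel = rng.normal(size=(n, 3))           # 3 genuinely relevant features
    y = (X_rel.sum(axis=1) > 0).astype(int)   # outcome depends only on these
    X_noise = rng.normal(size=(n, 5))         # 5 irrelevant features
    X = np.hstack([X_rel, X_noise])

    # Shadow trick: append copies with values permuted within each column,
    # so any importance a shadow feature earns is pure chance.
    shadows = rng.permuted(X, axis=0)
    forest = RandomForestClassifier(n_estimators=200, random_state=0)
    forest.fit(np.hstack([X, shadows]), y)

    imp = forest.feature_importances_
    real, shadow = imp[:8], imp[8:]
    relevant = [i for i in range(8) if real[i] > shadow.max()]
    print(relevant)  # the three informative features clear the shadow bar
    ```

    In practice this comparison is repeated over many resampled fits (as the abstract's cross-validation point suggests), since a single split can flip borderline features between "relevant" and "irrelevant".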

  9. Tuning of Kalman filter parameters via genetic algorithm for state-of-charge estimation in battery management system.

    PubMed

    Ting, T O; Man, Ka Lok; Lim, Eng Gee; Leach, Mark

    2014-01-01

    In this work, a state-space battery model is derived mathematically to estimate the state-of-charge (SoC) of a battery system. Subsequently, Kalman filter (KF) is applied to predict the dynamical behavior of the battery model. Results show an accurate prediction as the accumulated error, in terms of root-mean-square (RMS), is a very small value. From this work, it is found that different sets of Q and R values (KF's parameters) can be applied for better performance and hence lower RMS error. This is the motivation for the application of a metaheuristic algorithm. Hence, the result is further improved by applying a genetic algorithm (GA) to tune Q and R parameters of the KF. In an online application, a GA can be applied to obtain the optimal parameters of the KF before its application to a real plant (system). This simply means that the instantaneous response of the KF is not affected by the time consuming GA as this approach is applied only once to obtain the optimal parameters. The relevant workable MATLAB source codes are given in the appendix to ease future work and analysis in this area.
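
    The role of the Q and R parameters that the genetic algorithm tunes can be seen in a minimal scalar Kalman filter. The battery below is a deliberately simplified stand-in (a known constant discharge plus noisy SoC measurements), not the paper's state-space model; the (Q, R) pair is one hypothetical candidate a GA would evaluate.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    drain = 0.001                              # known per-step SoC drop
    true = 1.0 - drain * np.arange(1, 501)     # true SoC after each step
    z = true + rng.normal(0.0, 0.02, 500)      # noisy SoC measurements

    Q, R = 1e-7, 4e-4                          # process / measurement noise
    x, P = 1.0, 1.0                            # initial estimate and covariance
    est = []
    for zk in z:
        x, P = x - drain, P + Q                # predict: deterministic drain
        K = P / (P + R)                        # Kalman gain
        x, P = x + K * (zk - x), (1 - K) * P   # update with measurement zk
        est.append(x)

    rms_kf = np.sqrt(np.mean((np.array(est) - true) ** 2))
    rms_raw = np.sqrt(np.mean((z - true) ** 2))
    print(rms_kf < rms_raw)  # filtering beats the raw measurements
    ```

    A GA-based tuner would simply evaluate this RMS error for each candidate (Q, R) chromosome and evolve toward the pair minimizing it, which matches the offline-then-deploy workflow the abstract describes.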

  10. Tuning of Kalman Filter Parameters via Genetic Algorithm for State-of-Charge Estimation in Battery Management System

    PubMed Central

    Ting, T. O.; Lim, Eng Gee

    2014-01-01

    In this work, a state-space battery model is derived mathematically to estimate the state-of-charge (SoC) of a battery system. Subsequently, Kalman filter (KF) is applied to predict the dynamical behavior of the battery model. Results show an accurate prediction as the accumulated error, in terms of root-mean-square (RMS), is a very small value. From this work, it is found that different sets of Q and R values (KF's parameters) can be applied for better performance and hence lower RMS error. This is the motivation for the application of a metaheuristic algorithm. Hence, the result is further improved by applying a genetic algorithm (GA) to tune Q and R parameters of the KF. In an online application, a GA can be applied to obtain the optimal parameters of the KF before its application to a real plant (system). This simply means that the instantaneous response of the KF is not affected by the time consuming GA as this approach is applied only once to obtain the optimal parameters. The relevant workable MATLAB source codes are given in the appendix to ease future work and analysis in this area. PMID:25162041

  11. Stochastic background from cosmic (super)strings: Popcorn-like and (Gaussian) continuous regimes

    NASA Astrophysics Data System (ADS)

    Regimbau, Tania; Giampanis, Stefanos; Siemens, Xavier; Mandic, Vuk

    2012-03-01

    In the era of the next generation of gravitational wave experiments, a stochastic background from cusps of cosmic (super)strings is expected to be probed and, if not detected, to be significantly constrained. A popcorn-like background can be, for part of the parameter space, as pronounced as the (Gaussian) continuous contribution from unresolved sources that overlap in frequency and time. We study both contributions from unresolved cosmic string cusps over a range of frequencies relevant to ground-based interferometers, such as the LIGO/Virgo second generation and Einstein Telescope third generation detectors, the space antenna LISA, and pulsar timing arrays. We compute the sensitivity (at the 2σ level) in the parameter space for the LIGO/Virgo second generation detector, the Einstein Telescope detector, LISA, and pulsar timing arrays. We conclude that the popcorn regime is complementary to the continuous background. Its detection could therefore enhance confidence in a stochastic background detection and possibly help determine fundamental string parameters such as the string tension and the reconnection probability.

  12. A kinetic and thermochemical database for organic sulfur and oxygen compounds.

    PubMed

    Class, Caleb A; Aguilera-Iparraguirre, Jorge; Green, William H

    2015-05-28

    Potential energy surfaces and reaction kinetics were calculated for 40 reactions involving sulfur and oxygen. This includes 11 H2O addition, 8 H2S addition, 11 hydrogen abstraction, 7 beta scission, and 3 elementary tautomerization reactions, which are potentially relevant in the combustion and desulfurization of sulfur compounds found in various fuel sources. Geometry optimizations and frequencies were calculated for reactants and transition states using B3LYP/CBSB7, and potential energies were calculated using CBS-QB3 and CCSD(T)-F12a/VTZ-F12. Rate coefficients were calculated using conventional transition state theory, with corrections for internal rotations and tunneling. Additionally, thermochemical parameters were calculated for each of the compounds involved in these reactions. With few exceptions, rate parameters calculated using the two potential-energy methods agreed reasonably well, with calculated activation energies differing by less than 5 kJ/mol. The computed rate coefficients and thermochemical parameters are expected to be useful for kinetic modeling.
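Conventional transition state theory, as used here, gives rate coefficients of the Eyring form k(T) = κ(kB·T/h)·exp(−ΔG‡/RT). The sketch below is only illustrative: the activation free energy and transmission coefficient are hypothetical inputs, not values from the paper's CBS-QB3 or CCSD(T)-F12a calculations.

```python
import math

KB = 1.380649e-23          # Boltzmann constant, J/K
H_PLANCK = 6.62607015e-34  # Planck constant, J*s
R_GAS = 8.314462618        # gas constant, J/(mol*K)

def tst_rate(T, dG_act, kappa=1.0):
    """Eyring-form TST rate coefficient (s^-1) for a unimolecular step.

    T: temperature in K; dG_act: activation free energy in J/mol
    (hypothetical input); kappa: transmission/tunneling correction,
    analogous to the tunneling corrections applied in the paper.
    """
    return kappa * (KB * T / H_PLANCK) * math.exp(-dG_act / (R_GAS * T))
```

Note that shifting the activation free energy by ~5 kJ/mol (the level at which the two potential-energy methods agreed) changes k at 300 K by a factor of roughly exp(5000/(8.314·300)) ≈ 7, which is why that agreement threshold is meaningful for kinetics.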

  13. Computation of Standard Errors

    PubMed Central

    Dowd, Bryan E; Greene, William H; Norton, Edward C

    2014-01-01

    Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject, as well as the average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, the choice of computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
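Two of the three approaches can be sketched compactly. The paper's own code is for Stata 12 and LIMDEP 10/NLOGIT 5; this Python version is only an illustration, with a made-up function g(θ) = exp(θ). The delta method propagates the parameter covariance through the gradient of the function, while the bootstrap recomputes the statistic over resampled data.

```python
import numpy as np

def delta_method_se(cov, grad):
    """SE of g(theta_hat) via the delta method: sqrt(g' V g')."""
    grad = np.atleast_1d(np.asarray(grad, dtype=float))
    cov = np.atleast_2d(np.asarray(cov, dtype=float))
    return float(np.sqrt(grad @ cov @ grad))

def bootstrap_se(data, stat, n_boot=2000, seed=0):
    """Nonparametric bootstrap SE of a statistic of the data."""
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = [stat(data[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return float(np.std(reps, ddof=1))

# Delta method for g(theta) = exp(theta), theta_hat = 1.0, Var = 0.04:
# gradient g'(theta_hat) = exp(1.0), so SE = exp(1) * sqrt(0.04)
se_delta = delta_method_se(0.04, np.exp(1.0))
```

For sample averages of such functions, the bootstrap resamples whole observations, which is one way to capture the variation in the explanatory variables that the conclusion above warns about.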

  14. Investigating scintillometer source areas

    NASA Astrophysics Data System (ADS)

    Perelet, A. O.; Ward, H. C.; Pardyjak, E.

    2017-12-01

    Scintillometry is an indirect ground-based method for measuring line-averaged surface heat and moisture fluxes on length scales of 0.5 - 10 km. These length scales are relevant to urban and other complex areas where setting up traditional instrumentation like eddy covariance is logistically difficult. In order to take full advantage of scintillometry, a better understanding of the flux source area is needed. The source area for a scintillometer is typically calculated as a convolution of point sources along the path. A weighting function is then applied along the path to account for the total signal contribution being biased towards the center of the beam path and decreasing near the beam ends. While this method of calculating the source area provides an estimate of the contribution to the total flux along the beam, there are still questions regarding the physical meaning of the weighted source area. These questions are addressed using data from an idealized experiment near the Salt Lake City International Airport in northern Utah, U.S.A. The site is a flat agricultural area consisting of two different land uses. This simple heterogeneity in the land use facilitates hypothesis testing related to source areas. Measurements were made with a two-wavelength scintillometer system spanning 740 m along with three standard open-path infrared gas analyzer-based eddy-covariance stations along the beam path. This configuration allows for direct observations of fluxes along the beam and comparisons to the scintillometer average. The scintillometer system employed measures the refractive index structure parameter of air for two wavelengths of electromagnetic radiation, 880 nm and 1.86 cm, to simultaneously estimate path-averaged heat and moisture fluxes, respectively. Meteorological structure parameters (CT2, Cq2, and CTq) as well as surface fluxes are compared for various amounts of source area overlap between eddy covariance and scintillometry.
Additionally, surface properties from LANDSAT 7 & 8 are used to help understand source area composition for different times throughout the experiment.

  15. Modeling the contribution of point sources and non-point sources to Thachin River water pollution.

    PubMed

    Schaffner, Monika; Bader, Hans-Peter; Scheidegger, Ruth

    2009-08-15

    Major rivers in developing and emerging countries increasingly suffer from severe degradation of water quality. The current study uses a mathematical Material Flow Analysis (MMFA) as a complementary approach to address the degradation of river water quality due to nutrient pollution in the Thachin River Basin in Central Thailand. This paper gives an overview of the origins and flow paths of the various point and non-point pollution sources in the Thachin River Basin (in terms of nitrogen and phosphorus) and quantifies their relative importance within the system. The key parameters influencing the main nutrient flows are determined and possible mitigation measures discussed. The results show that aquaculture (as a point source) and rice farming (as a non-point source) are the key nutrient sources in the Thachin River Basin. Other point sources such as pig farms, households and industries, which were previously cited as the most relevant pollution sources in terms of organic pollution, play less significant roles in comparison. This order of importance shifts when considering the model results at the provincial level. Crosschecks with secondary data and field studies confirm the plausibility of our simulations. Specific nutrient loads for the pollution sources are derived; these can be used for a first broad quantification of nutrient pollution in comparable river basins. Based on an identification of the sensitive model parameters, possible mitigation scenarios are determined and their potential to reduce the nutrient load evaluated. A comparison of simulated nutrient loads with measured nutrient concentrations shows that nutrient retention in the river system may be significant. Sedimentation in the slow-flowing surface water network as well as nitrogen emission to the air from the warm, oxygen-deficient waters are certainly partly responsible, but wetlands along the river banks could also play an important role as nutrient sinks.

  16. A design for a ground-based data management system

    NASA Technical Reports Server (NTRS)

    Lambird, Barbara A.; Lavine, David

    1988-01-01

    An initial design for a ground-based data management system which includes intelligent data abstraction and cataloging is described. The large quantity of data on some current and future NASA missions leads to significant problems in providing scientists with quick access to relevant data. Human screening of data for potential relevance to a particular study is time-consuming and costly. Intelligent databases can provide automatic screening when given relevant scientific parameters and constraints. The data management system would provide, at a minimum, information on the availability and range of data, the types available, the specific time periods covered together with data quality information, and related sources of data. The system would inform the user about the primary types of screening, analysis, and methods of presentation available. The system would then aid the user in performing the desired tasks, in such a way that the user need only specify the scientific parameters and objectives, and not worry about specific details of running a particular program. The design contains modules for data abstraction, catalog plan abstraction, a user-friendly interface, and expert systems for data handling, data evaluation, and application analysis. The emphasis is on developing general facilities for data representation, description, analysis, and presentation that will be easily used by scientists directly, thus bypassing the knowledge acquisition bottleneck. Expert system technology is used for many different aspects of the data management system, including the direct user interface, the interface to the data analysis routines, and the analysis of instrument status.

  17. High temperature stability of anatase in titania-alumina semiconductors with enhanced photodegradation of 2, 4-dichlorophenoxyacetic acid.

    PubMed

    López-Granada, G; Barceinas-Sánchez, J D O; López, R; Gómez, R

    2013-12-15

    The incorporation of aluminum acetylacetonate as an alumina source during the gelation of titanium alkoxide reduces the nucleation sites for the formation of large rutile crystals at temperatures ranging from 400 to 800°C. As a result, the aggregation of anatase crystals is prevented at high temperature. The relationships among the most relevant parameters, namely the specific surface area, pore size, energy band gap (Eg), crystalline structure and crystallite size, are evaluated and discussed. According to the results for the photocatalytic degradation of 2,4-dichlorophenoxyacetic acid, the specific surface area, pore size and Eg band gap are not determinant of the photocatalytic properties. It was found that the anatase crystallite size is the most important parameter affecting the degradation efficiency. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Relationship between peroxyacetyl nitrate and nitrogen oxides in the clean troposphere

    NASA Technical Reports Server (NTRS)

    Singh, H. B.; Salas, L. J.; Ridley, B. A.; Shetter, J. D.; Donahue, N. M.

    1985-01-01

    The first study is presented in which the mixing ratios of peroxyacetyl nitrate (PAN) and nitrogen oxides, as well as those of peroxypropionyl nitrate and O3 and relevant meteorological parameters, were measured concurrently at a location that receives clean, continental air. The results show that, in clean conditions, nitrogen oxides present in the form of PAN can be as abundant as, or more abundant than, the inorganic forms. In addition, PAN can be an important source of peroxyacetyl radicals, which may be important to oxidation processes in the gas as well as liquid phases.

  19. Modelling and simulation of heat pipes with TAIThermIR (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Winkelmann, Max E.

    2016-10-01

    Regarding thermal camouflage, one usually has to reduce the surface temperature of an object. Vehicles and installations with a combustion engine produce a lot of heat, which results in hot spots on the surface that are highly conspicuous. Using heat pipes to transfer this heat more efficiently to another place on the surface might be a way to reduce those hot spots and the overall conspicuity. In a first approach, a model for the software TAIThermIR was developed to test which parameters of the heat pipes are relevant and what effects can be achieved. It will be shown that the thermal resistivity of the contact zones is quite relevant and that the thermal coupling of the engine (the heat source) determines whether the alteration of the thermal signature is large or not. Furthermore, the impact of the use of heat pipes in relation to the surface material is discussed. The influence of different weather scenarios on the change of signatures due to the use of heat pipes is of minor relevance and depends on the choice of the surface material. Finally, application issues for real systems are discussed.

  20. Thermodynamic Interactions between Polystyrene and Long-Chain Poly(n-Alkyl Acrylates) Derived from Plant Oils.

    PubMed

    Wang, Shu; Robertson, Megan L

    2015-06-10

    Vegetable oils and their fatty acids are promising sources for the derivation of polymers. Long-chain poly(n-alkyl acrylates) and poly(n-alkyl methacrylates) are readily derived from fatty acids through conversion of the carboxylic acid end-group to an acrylate or methacrylate group. The resulting polymers contain long alkyl side-chains with around 10-22 carbon atoms. Regardless of the monomer source, the presence of alkyl side-chains in poly(n-alkyl acrylates) and poly(n-alkyl methacrylates) provides a convenient mechanism for tuning their physical properties. The development of structured multicomponent materials, including block copolymers and blends, containing poly(n-alkyl acrylates) and poly(n-alkyl methacrylates) requires knowledge of the thermodynamic interactions governing their self-assembly, typically described by the Flory-Huggins interaction parameter χ. We have investigated the χ parameter between polystyrene and long-chain poly(n-alkyl acrylate) homopolymers and copolymers: specifically we have included poly(stearyl acrylate), poly(lauryl acrylate), and their random copolymers. Lauryl and stearyl acrylate were chosen as model alkyl acrylates derived from vegetable oils and have alkyl side-chain lengths of 12 and 18 carbon atoms, respectively. Polystyrene is included in this study as a model petroleum-sourced polymer, which has wide applicability in commercially relevant multicomponent polymeric materials. Two independent methods were employed to measure the χ parameter: cloud point measurements on binary blends and characterization of the order-disorder transition of triblock copolymers, which were in relatively good agreement with one another. The χ parameter was found to be independent of the alkyl side-chain length (n) for large values of n (i.e., n > 10). This behavior is in stark contrast to the n-dependence of the χ parameter predicted from solubility parameter theory. 
Our study complements prior work investigating the interactions between polystyrene and short-chain polyacrylates (n ≤ 10). To our knowledge, this is the first study to explore the thermodynamic interactions between polystyrene and long-chain poly(n-alkyl acrylates) with n > 10. This work lays the groundwork for the development of multicomponent structured systems (i.e., blends and copolymers) in this class of sustainable materials.
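The solubility-parameter prediction that the measured χ contradicts can be written as χ = V_ref(δA − δB)²/(RT), in which a longer alkyl side-chain shifts δ with n and so should change χ. A minimal sketch follows; the solubility parameters and reference volume below are hypothetical illustration values, not quantities measured in the study.

```python
R_GAS = 8.314  # gas constant, J/(mol*K)

def chi_from_solubility(delta_a, delta_b, v_ref, T):
    """Flory-Huggins chi from solubility parameter theory.

    delta_a, delta_b: solubility parameters in MPa**0.5 (= (J/cm^3)**0.5),
    v_ref: reference volume in cm^3/mol, T: temperature in K.
    (delta_a - delta_b)**2 is then in J/cm^3, so the ratio is dimensionless.
    """
    return v_ref * (delta_a - delta_b) ** 2 / (R_GAS * T)

# Hypothetical values: polystyrene ~18.6 MPa^0.5 vs. an alkyl acrylate ~17
chi = chi_from_solubility(18.6, 17.0, 100.0, 298.0)
```

Since δ for a poly(n-alkyl acrylate) depends on n in this theory, a χ that is experimentally flat for n > 10 is exactly the "stark contrast" the abstract reports.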

  1. Teachers' Source Evaluation Self-Efficacy Predicts Their Use of Relevant Source Features When Evaluating the Trustworthiness of Web Sources on Special Education

    ERIC Educational Resources Information Center

    Andreassen, Rune; Bråten, Ivar

    2013-01-01

    Building on prior research and theory concerning source evaluation and the role of self-efficacy in the context of online learning, this study investigated the relationship between teachers' beliefs about their capability to evaluate the trustworthiness of sources and their reliance on relevant source features when judging the trustworthiness…

  2. Computational analysis of non-Newtonian boundary layer flow of nanofluid past a semi-infinite vertical plate with partial slip

    NASA Astrophysics Data System (ADS)

    Amanulla, C. H.; Nagendra, N.; Suryanarayana Reddy, M.

    2018-03-01

    This paper examines, theoretically, the two-dimensional, laminar, natural convective flow of a nanofluid, with heat and mass transfer, past a semi-infinite vertical plate with velocity and thermal slip effects. The coupled governing partial differential equations are transformed into ordinary differential equations by non-similarity transformations. The resulting ordinary differential equations are solved numerically by the well-known Keller box method (KBM). The influences of the emerging parameters, i.e. the Casson fluid parameter (β), Brownian motion parameter (Nb), thermophoresis parameter (Nt), buoyancy ratio parameter (N), Lewis number (Le), Prandtl number (Pr), velocity slip factor (Sf) and thermal slip factor (ST), on the velocity, temperature and nanoparticle concentration distributions are illustrated graphically and interpreted at length. The major sources of nanoparticle migration in nanofluids are thermophoresis and Brownian motion. A comparison with the existing published literature is made, with excellent agreement observed for the limiting case; a validation of the solutions against a Nakamura tridiagonal method has also been included. It is observed that the nanoparticle concentration at the surface decreases with an increase in the slip parameter. The study is relevant to enrobing processes for electrically conductive nano-materials, of potential use in aerospace and other industries.

  3. Observation-based source terms in the third-generation wave model WAVEWATCH

    NASA Astrophysics Data System (ADS)

    Zieger, Stefan; Babanin, Alexander V.; Erick Rogers, W.; Young, Ian R.

    2015-12-01

    Measurements collected during the AUSWEX field campaign at Lake George (Australia) resulted in new insights into the processes of wind-wave interaction and whitecapping dissipation, and consequently in new parameterizations of the input and dissipation source terms. The newly developed nonlinear wind input term accounts for the dependence of growth on wave steepness, for airflow separation, and for negative growth rates under adverse winds. The new dissipation terms feature an inherent breaking term, a cumulative dissipation term, and a term due to the production of turbulence by waves, which is particularly relevant for decaying seas and for swell. The latter is consistent with the observed decay rate of ocean swell. This paper describes these source terms as implemented in WAVEWATCH III® and evaluates their performance against existing source terms in academic duration-limited tests, against buoy measurements for windsea-dominated conditions, under conditions of extreme wind forcing (Hurricane Katrina), and against altimeter data in global hindcasts. Results show agreement in growth curves as well as in integral and spectral parameters for both the simulations and the hindcasts.

  4. Characterizing the Performance of the Princeton Advanced Test Stand Ion Source

    NASA Astrophysics Data System (ADS)

    Stepanov, A.; Gilson, E. P.; Grisham, L.; Kaganovich, I.; Davidson, R. C.

    2012-10-01

    The Princeton Advanced Test Stand (PATS) is a compact experimental facility for studying the physics of intense beam-plasma interactions relevant to the Neutralized Drift Compression Experiment - II (NDCX-II). The PATS facility consists of a multicusp RF ion source mounted on a 2 m-long vacuum chamber with numerous ports for diagnostic access. Ar+ beams are extracted from the source plasma with three-electrode (accel-decel) extraction optics. The RF power and extraction voltage (30 - 100 kV) are pulsed to produce 100 μsec duration beams at 0.5 Hz with excellent shot-to-shot repeatability. Diagnostics include Faraday cups, a double-slit emittance scanner, and scintillator imaging. This work reports measurements of beam parameters for a range of beam energies (30 - 50 keV) and currents to characterize the behavior of the ion source and extraction optics. Emittance scanner data are used to calculate the beam trace-space distribution and corresponding transverse emittance. Because the plasma density can change during a beam pulse, time-resolved emittance scanner data have been taken to study the corresponding evolution of the beam trace-space distribution.

  5. Green’s functions for a volume source in an elastic half-space

    PubMed Central

    Zabolotskaya, Evgenia A.; Ilinskii, Yurii A.; Hay, Todd A.; Hamilton, Mark F.

    2012-01-01

    Green’s functions are derived for elastic waves generated by a volume source in a homogeneous isotropic half-space. The context is sources at shallow burial depths, for which surface (Rayleigh) and bulk waves, both longitudinal and transverse, can be generated with comparable magnitudes. Two approaches are followed. First, the Green’s function is expanded with respect to eigenmodes that correspond to Rayleigh waves. While bulk waves are thus ignored, this approximation is valid on the surface far from the source, where the Rayleigh wave modes dominate. The second approach employs an angular spectrum that accounts for the bulk waves and yields a solution that may be separated into two terms. One is associated with bulk waves, the other with Rayleigh waves. The latter is proved to be identical to the Green’s function obtained following the first approach. The Green’s function obtained via angular spectrum decomposition is analyzed numerically in the time domain for different burial depths and distances to the receiver, and for parameters relevant to seismo-acoustic detection of land mines and other buried objects. PMID:22423682

  6. The contribution of different information sources for adverse effects data.

    PubMed

    Golder, Su; Loke, Yoon K

    2012-04-01

    The aim of this study is to determine the relative value and contribution of searching different sources to identify adverse effects data. The process of updating a systematic review and meta-analysis of thiazolidinedione-related fractures in patients with type 2 diabetes mellitus was used as a case study. For each source searched, a record was made for each relevant reference included in the review, noting whether it was retrieved with the search strategy used and whether it was available but not retrieved. The sensitivity, precision, and number needed to read from searching each source and from different combinations of sources were also calculated. There were 58 relevant references that presented sufficient numerical data to be included in a meta-analysis of fractures and bone mineral density. The highest number of relevant references was retrieved from Science Citation Index (SCI) (35), followed by BIOSIS Previews (27) and EMBASE (24). The precision of the searches varied from 0.88% (Scirus) to 41.67% (CENTRAL). With the search strategies used, the minimum combination of sources required to retrieve all the relevant references was: the GlaxoSmithKline (GSK) website, Science Citation Index (SCI), EMBASE, BIOSIS Previews, British Library Direct, Medscape DrugInfo, handsearching and reference checking, AHFS First, and Thomson Reuters Integrity or Conference Papers Index (CPI). In order to identify all the relevant references for this case study, a number of different sources needed to be searched. The minimum combination of sources required to identify all the relevant references did not include MEDLINE.
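The reported metrics follow standard definitions: sensitivity (recall) is the fraction of all relevant references a source retrieves, precision is the fraction of retrieved records that are relevant, and the number needed to read is the reciprocal of precision. A minimal sketch, in which the total-retrieved count is a hypothetical figure rather than one reported in the study:

```python
def search_metrics(relevant_retrieved, total_relevant, total_retrieved):
    """Sensitivity, precision, and number needed to read (NNR)."""
    sensitivity = relevant_retrieved / total_relevant
    precision = relevant_retrieved / total_retrieved
    return sensitivity, precision, 1.0 / precision

# SCI retrieved 35 of the 58 relevant references; assume (hypothetically)
# that its search returned 500 records in total.
sens, prec, nnr = search_metrics(35, 58, 500)
```

The NNR makes the precision figures above concrete: a precision of 0.88% means reading roughly 114 records per relevant reference, versus about 2.4 at 41.67%.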

  7. Monte Carlo studies and optimization for the calibration system of the GERDA experiment

    NASA Astrophysics Data System (ADS)

    Baudis, L.; Ferella, A. D.; Froborg, F.; Tarka, M.

    2013-11-01

    The GERmanium Detector Array, GERDA, searches for neutrinoless double β decay in 76Ge using bare high-purity germanium detectors submerged in liquid argon. For the calibration of these detectors, γ-emitting sources have to be lowered from their parking position on top of the cryostat over more than 5 m down to the germanium crystals. With the help of Monte Carlo simulations, the relevant parameters of the calibration system were determined. It was found that three 228Th sources with an activity of 20 kBq each at two different vertical positions will be necessary to reach sufficient statistics in all detectors in less than 4 h of calibration time. These sources will contribute to the background of the experiment with a total of (1.07 ± 0.04(stat) +0.13/−0.19(sys)) × 10⁻⁴ cts/(keV kg yr) when shielded from below with 6 cm of tantalum in the parking position.

  8. C(m)-History Method, a Novel Approach to Simultaneously Measure Source and Sink Parameters Important for Estimating Indoor Exposures to Phthalates.

    PubMed

    Cao, Jianping; Weschler, Charles J; Luo, Jiajun; Zhang, Yinping

    2016-01-19

    The concentration of a gas-phase semivolatile organic compound (SVOC) in equilibrium with its mass-fraction in the source material, y0, and the coefficient for partitioning of an SVOC between clothing and air, K, are key parameters for estimating emission and subsequent dermal exposure to SVOCs. Most of the available methods for their determination depend on achieving steady-state in ventilated chambers. This can be time-consuming and of variable accuracy. Additionally, no existing method simultaneously determines y0 and K in a single experiment. In this paper, we present a sealed-chamber method, using early-stage concentration measurements, to simultaneously determine y0 and K. The measurement error for the method is analyzed, and the optimization of experimental parameters is explored. Using this method, y0 for phthalates (DiBP, DnBP, and DEHP) emitted by two types of PVC flooring, coupled with K values for these phthalates partitioning between a cotton T-shirt and air, were measured at 25 and 32 °C (room and skin temperatures, respectively). The measured y0 values agree well with results obtained by alternate methods. The changes of y0 and K with temperature were used to approximate the changes in enthalpy, ΔH, associated with the relevant phase changes. We conclude with suggestions for further related research.

  9. Photometric correction for an optical CCD-based system based on the sparsity of an eight-neighborhood gray gradient.

    PubMed

    Zhang, Yuzhong; Zhang, Yan

    2016-07-01

    In an optical measurement and analysis system based on a CCD, due to the existence of optical vignetting and natural vignetting, photometric distortion, in which the intensity falls off away from the image center, severely affects subsequent processing and measurement precision. To deal with this problem, an easy and straightforward method for photometric distortion correction is presented in this paper. This method introduces a simple polynomial fitting model of the photometric distortion function and employs a particle swarm optimization (PSO) algorithm to obtain the model parameters by minimizing an eight-neighborhood gray gradient. Compared with conventional calibration methods, this method can obtain the profile of the photometric distortion from only a single common image captured by the optical CCD-based system, with no need for a uniform-luminance area source as a standard reference or for prior knowledge of the relevant optical and geometric parameters. To illustrate the applicability of this method, numerical simulations and photometric distortions with different lens parameters are evaluated in this paper. Moreover, an application example of temperature field correction for casting billets also demonstrates the effectiveness of the method. The experimental results show that the proposed method achieves a maximum absolute error for vignetting estimation of 0.0765 and a relative error for vignetting estimation from different background images of 3.86%.
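A generic particle swarm optimizer of the kind used to fit the polynomial distortion model can be sketched as follows. This is a textbook PSO demonstrated on a toy quadratic objective, not the paper's exact variant or its eight-neighborhood gray-gradient objective.

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=30, n_iter=100, seed=0):
    """Minimal particle swarm optimization (illustrative sketch).

    objective: function of a parameter vector to minimize;
    bounds: (low, high) arrays bounding each parameter.
    """
    rng = np.random.default_rng(seed)
    low, high = map(np.asarray, bounds)
    dim = len(low)
    x = rng.uniform(low, high, (n_particles, dim))  # particle positions
    v = np.zeros((n_particles, dim))                # particle velocities
    pbest = x.copy()                                # personal bests
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()      # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive + social terms (standard coefficients)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, low, high)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())
```

In the paper's setting, the parameter vector would hold the coefficients of the polynomial vignetting model and the objective would be the eight-neighborhood gray gradient of the corrected image.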

  10. Optimization of physiological parameter for macroscopic modeling of reacted singlet oxygen concentration in an in-vivo model

    NASA Astrophysics Data System (ADS)

    Wang, Ken Kang-Hsin; Busch, Theresa M.; Finlay, Jarod C.; Zhu, Timothy C.

    2009-02-01

    Singlet oxygen (1O2) is generally believed to be the major cytotoxic agent during photodynamic therapy (PDT), and the reaction between 1O2 and tumor cells defines the treatment efficacy. From a complete set of macroscopic kinetic equations describing the photochemical processes of PDT, we can express the reacted 1O2 concentration, [1O2]rx, in a form related to the time integral of the product of the 1O2 quantum yield and the PDT dose rate. The production of [1O2]rx involves physiological and photophysical parameters which need to be determined explicitly for the photosensitizer of interest. Once these parameters are determined, we expect the computed [1O2]rx to be an explicit dosimetric indicator for clinical PDT. Incorporating the diffusion equation governing light transport in a turbid medium, the spatially and temporally resolved [1O2]rx described by the macroscopic kinetic equations can be calculated numerically. A sudden drop in the calculated [1O2]rx with distance, following the decrease of the light fluence rate, is observed. This suggests a possible correlation between [1O2]rx and the necrosis boundary in tumors subjected to PDT irradiation. In this study, we have theoretically examined the sensitivity of the physiological parameter for two clinically relevant conditions: (1) a collimated light source on a semi-infinite turbid medium and (2) a linear light source in a turbid medium. In order to accurately determine the parameter in a clinically relevant environment, the computed [1O2]rx is expected to be fitted to experimentally measured necrosis data obtained from an in vivo animal model.

  11. Improving the local wavenumber method by automatic DEXP transformation

    NASA Astrophysics Data System (ADS)

    Abbas, Mahmoud Ahmed; Fedi, Maurizio; Florio, Giovanni

    2014-12-01

    In this paper we present a new method for source parameter estimation based on the local wavenumber function. We make use of the stable properties of the Depth from EXtreme Points (DEXP) method, in which the depth to the source is determined at the extreme points of the field scaled with a power law of the altitude. The method is thus particularly suited to dealing with local wavenumbers of high order, as it is able to overcome their known instability caused by the use of high-order derivatives. The DEXP transformation has a notable feature when applied to the local wavenumber function: the scaling law is independent of the structural index. So, unlike the DEXP transformation applied directly to potential fields, the local wavenumber DEXP transformation is fully automatic and may be implemented as a very fast imaging method, mapping every kind of source at the correct depth. The simultaneous presence of sources with different homogeneity degrees can also be easily and correctly treated. The method was applied to synthetic and real examples from Bulgaria and Italy, and the results agree well with known information about the causative sources.

  12. X-Ray Spectro-Polarimetry with Photoelectric Polarimeters

    NASA Technical Reports Server (NTRS)

    Strohmayer, T. E.

    2017-01-01

    We derive a generalization of forward fitting for X-ray spectroscopy to include linear polarization of X-ray sources, appropriate for the anticipated next generation of space-based photoelectric polarimeters. We show that the inclusion of polarization sensitivity requires joint fitting to three observed spectra, one for each of the Stokes parameters, I(E), U(E), and Q(E). The equations for Stokes I(E) (the total intensity spectrum) are identical to the familiar case with no polarization sensitivity, for which the model-predicted spectrum is obtained by a convolution of the source spectrum, F(E), with the familiar energy response function, A(E)R(E,E'), where A(E) and R(E,E') are the effective area and energy redistribution matrix, respectively. In addition to the energy spectrum, the two new relations for U(E) and Q(E) include the source polarization fraction and position angle versus energy, a(E) and ψ0(E), respectively, and the model-predicted spectra for these relations are obtained by a convolution with the modulated energy response function, μ(E)A(E)R(E,E'), where μ(E) is the energy-dependent modulation fraction that quantifies a polarimeter's angular response to 100% polarized radiation. We present results of simulations with response parameters appropriate for the proposed PRAXyS Small Explorer observatory to illustrate the procedures and methods, and we discuss some aspects of photoelectric polarimeters with relevance to understanding their calibration and operation.
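On a discrete energy grid, the joint forward model described here reduces to three matrix convolutions of the source spectrum with the (modulated) response. A minimal sketch, in which every array is a hypothetical discretization rather than an actual PRAXyS response file:

```python
import numpy as np

def forward_model(F, a, psi0, A_eff, mu, R):
    """Model-predicted Stokes spectra for a photoelectric polarimeter.

    F: source spectrum on true energies E; a: polarization fraction;
    psi0: position angle (rad); A_eff: effective area; mu: modulation
    fraction; R: redistribution matrix, R[i, j] = P(E_j -> channel i).
    """
    I = R @ (A_eff * F)                                # intensity spectrum
    Q = R @ (mu * A_eff * F * a * np.cos(2.0 * psi0))  # Stokes Q spectrum
    U = R @ (mu * A_eff * F * a * np.sin(2.0 * psi0))  # Stokes U spectrum
    return I, Q, U
```

Forward fitting then proceeds as in ordinary X-ray spectroscopy, except that the three model spectra are fit jointly to the three observed Stokes spectra.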

  13. Active fault databases: building a bridge between earthquake geologists and seismic hazard practitioners, the case of the QAFI v.3 database

    NASA Astrophysics Data System (ADS)

    García-Mayordomo, Julián; Martín-Banda, Raquel; Insua-Arévalo, Juan M.; Álvarez-Gómez, José A.; Martínez-Díaz, José J.; Cabral, João

    2017-08-01

    Active fault databases are a very powerful and useful tool in seismic hazard assessment, particularly when individual faults are considered as seismogenic sources. Active fault databases are also a very relevant source of information for earth scientists, earthquake engineers, and even teachers or journalists. Hence, active fault databases should be updated and thoroughly reviewed on a regular basis in order to maintain a standard quality and uniform criteria. Ideally, active fault databases should also indicate the quality of the geological data and, particularly, the reliability attributed to crucial fault-seismic parameters, such as maximum magnitude and recurrence interval. In this paper we explain how we tackled these issues during the process of updating and reviewing the Quaternary Active Fault Database of Iberia (QAFI) to its current version 3. We devote particular attention to describing the scheme devised for classifying the quality and representativeness of the geological evidence of Quaternary activity and the accuracy of the slip rate estimation in the database. Subsequently, we use this information as input for a straightforward rating of the level of reliability of the maximum magnitude and recurrence interval fault seismic parameters. We conclude that QAFI v.3 is a much better database than version 2, both for proper use in seismic hazard applications and as an informative source for non-specialized users. However, we already envision new improvements for a future update.

  14. Mapping (dis)agreement in hydrologic projections

    NASA Astrophysics Data System (ADS)

    Melsen, Lieke A.; Addor, Nans; Mizukami, Naoki; Newman, Andrew J.; Torfs, Paul J. J. F.; Clark, Martyn P.; Uijlenhoet, Remko; Teuling, Adriaan J.

    2018-03-01

    Hydrologic projections are of vital socio-economic importance. However, they are also prone to uncertainty. In order to establish a meaningful range of storylines to support water managers in decision making, we need to reveal the relevant sources of uncertainty. Here, we systematically and extensively investigate uncertainty in hydrologic projections for 605 basins throughout the contiguous US. We show that in the majority of the basins, the sign of change in average annual runoff and discharge timing for the period 2070-2100 compared to 1985-2008 differs among combinations of climate models, hydrologic models, and parameters. Mapping the results revealed that different sources of uncertainty dominate in different regions. Hydrologic model induced uncertainty in the sign of change in mean runoff was related to snow processes and aridity, whereas uncertainty in both mean runoff and discharge timing induced by the climate models was related to disagreement among the models regarding the change in precipitation. Overall, disagreement on the sign of change was more widespread for the mean runoff than for the discharge timing. The results demonstrate the need to define a wide range of quantitative hydrologic storylines, including parameter, hydrologic model, and climate model forcing uncertainty, to support water resource planning.
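    The sign-agreement mapping described above can be sketched in a few lines (synthetic numbers; variable names are illustrative, not the authors' code): for each basin, reduce the projected changes from all model/parameter combinations to a flag recording whether every combination agrees on the sign.

```python
import numpy as np

# Hedged sketch of mapping (dis)agreement on the sign of change: each row is
# a basin, each column one combination of climate model, hydrologic model,
# and parameter set; a basin "agrees" when all combinations share one sign.
rng = np.random.default_rng(1)
n_basins, n_combos = 10, 12
delta_runoff = rng.normal(0.0, 1.0, (n_basins, n_combos))  # synthetic changes
signs = np.sign(delta_runoff)
agree = np.abs(signs.sum(axis=1)) == n_combos  # True: unanimous sign
print(agree.sum(), "of", n_basins, "basins agree on the sign of change")
```

Mapping the `agree` flag (or the dominant uncertainty source per basin) over a basin shapefile would then reproduce the kind of figure the abstract describes.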

  15. Assessment of groundwater vulnerability to pollution: a combination of GIS, fuzzy logic and decision making techniques

    NASA Astrophysics Data System (ADS)

    Gemitzi, Alexandra; Petalas, Christos; Tsihrintzis, Vassilios A.; Pisinaras, Vassilios

    2006-03-01

    The assessment of groundwater vulnerability to pollution aims at highlighting areas at high risk of being polluted. This study presents a methodology to estimate the risk of an aquifer being polluted from concentrated and/or dispersed sources, applying an overlay-and-index method involving several parameters. The parameters are categorized into three factor groups: factor group 1 includes parameters relevant to the internal aquifer system’s properties, thus determining the intrinsic aquifer vulnerability to pollution; factor group 2 comprises parameters relevant to the external stresses on the system, such as human activities and rainfall effects; factor group 3 incorporates specific geological settings, such as the presence of geothermal fields or salt intrusion zones, into the computation process. Geographical information systems have been used for data acquisition and processing, coupled with a multicriteria evaluation technique enhanced with fuzzy factor standardization. Moreover, besides assigning weights to factors, a second set of weights, i.e., order weights, has been applied to the factors on a pixel-by-pixel basis, thus allowing control of the level of risk in the vulnerability determination and the enhancement of local site characteristics. Individual analysis of each factor group resulted in three intermediate groundwater vulnerability maps, which were combined to produce the final composite groundwater vulnerability map for the study area. The method has been applied in the region of Eastern Macedonia and Thrace (Northern Greece), an area of approximately 14,000 km². The methodology has been tested and calibrated against measured nitrate concentrations in wells in the northwest part of the study area, providing results related to the aggregation and weighting procedure.
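    The "order weights" mentioned above correspond to ordered weighted averaging (OWA) as commonly used in GIS multicriteria evaluation; a minimal per-pixel sketch with illustrative weights (not the study's calibrated values) might look like:

```python
import numpy as np

# Hedged sketch of ordered weighted averaging (OWA): factor weights express
# each criterion's importance, while order weights are applied to the
# *ranked* weighted scores at each pixel, controlling how risk-averse the
# aggregation is. All numbers are illustrative.
factor_weights = np.array([0.5, 0.3, 0.2])   # importance of each factor
order_weights = np.array([0.6, 0.3, 0.1])    # emphasis toward low scores

def owa(pixel_scores):
    weighted = factor_weights * pixel_scores   # standardized factor scores
    ranked = np.sort(weighted)                 # ascending: worst rank first
    return float(np.dot(order_weights, ranked))

score = owa(np.array([0.8, 0.4, 0.9]))  # one pixel's vulnerability score
```

Putting most order weight on the lowest-ranked score makes the aggregation pessimistic (high risk dominates), which is the "control of the level of risk" the abstract refers to.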

  16. Probing ultra-fast processes with high dynamic range at 4th-generation light sources: Arrival time and intensity binning at unprecedented repetition rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kovalev, S.; Green, B.; Golz, T.

    Here, understanding dynamics on ultrafast timescales enables unique and new insights into important processes in the materials and life sciences. In this respect, the fundamental pump-probe approach based on ultra-short photon pulses aims at the creation of stroboscopic movies. Performing such experiments at one of the many recently established accelerator-based 4th-generation light sources such as free-electron lasers or superradiant THz sources allows an enormous widening of the accessible parameter space for the excitation and/or probing light pulses. Compared to table-top devices, critical issues of this type of experiment are fluctuations of the timing between the accelerator and external laser systems and intensity instabilities of the accelerator-based photon sources. Existing solutions have so far been only demonstrated at low repetition rates and/or achieved a limited dynamic range in comparison to table-top experiments, while the 4th generation of accelerator-based light sources is based on superconducting radio-frequency technology, which enables operation at MHz or even GHz repetition rates. In this article, we present the successful demonstration of ultra-fast accelerator-laser pump-probe experiments performed at an unprecedentedly high repetition rate in the few-hundred-kHz regime and with a currently achievable optimal time resolution of 13 fs (rms). Our scheme, based on the pulse-resolved detection of multiple beam parameters relevant for the experiment, allows us to achieve an excellent sensitivity in real-world ultra-fast experiments, as demonstrated for the example of THz-field-driven coherent spin precession.
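    Pulse-resolved arrival-time binning of the kind described above can be sketched as follows (a toy model with synthetic jitter and signal, not the authors' pipeline): each shot is re-sorted into delay bins using its measured arrival time rather than the nominal delay.

```python
import numpy as np

# Hedged sketch of arrival-time binning: each pump-probe shot carries its own
# measured timing jitter, so shots are re-sorted into delay bins using the
# per-pulse arrival time instead of the nominal delay. Signal and jitter
# values here are synthetic.
rng = np.random.default_rng(2)
n_shots = 20000
nominal_delay = 0.0                           # ps, nominal pump-probe delay
jitter = rng.normal(0.0, 0.5, n_shots)        # ps, measured per-shot jitter
true_delay = nominal_delay + jitter
signal = np.cos(2 * np.pi * true_delay) + rng.normal(0.0, 0.2, n_shots)

bins = np.linspace(-1.5, 1.5, 31)             # 30 delay bins of 0.1 ps
idx = np.digitize(true_delay, bins)
binned = np.array([signal[idx == i].mean() for i in range(1, len(bins))])
# 'binned' traces the underlying cosine, which a naive average over all
# shots at the nominal delay would wash out.
```

The same re-sorting applies to intensity binning: replacing the measured arrival time with a per-pulse intensity monitor reading yields the intensity-binned variant mentioned in the title.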

  17. Probing ultra-fast processes with high dynamic range at 4th-generation light sources: Arrival time and intensity binning at unprecedented repetition rates

    DOE PAGES

    Kovalev, S.; Green, B.; Golz, T.; ...

    2017-03-06

    Here, understanding dynamics on ultrafast timescales enables unique and new insights into important processes in the materials and life sciences. In this respect, the fundamental pump-probe approach based on ultra-short photon pulses aims at the creation of stroboscopic movies. Performing such experiments at one of the many recently established accelerator-based 4th-generation light sources such as free-electron lasers or superradiant THz sources allows an enormous widening of the accessible parameter space for the excitation and/or probing light pulses. Compared to table-top devices, critical issues of this type of experiment are fluctuations of the timing between the accelerator and external laser systems and intensity instabilities of the accelerator-based photon sources. Existing solutions have so far been only demonstrated at low repetition rates and/or achieved a limited dynamic range in comparison to table-top experiments, while the 4th generation of accelerator-based light sources is based on superconducting radio-frequency technology, which enables operation at MHz or even GHz repetition rates. In this article, we present the successful demonstration of ultra-fast accelerator-laser pump-probe experiments performed at an unprecedentedly high repetition rate in the few-hundred-kHz regime and with a currently achievable optimal time resolution of 13 fs (rms). Our scheme, based on the pulse-resolved detection of multiple beam parameters relevant for the experiment, allows us to achieve an excellent sensitivity in real-world ultra-fast experiments, as demonstrated for the example of THz-field-driven coherent spin precession.

  18. Compositional and textural information from the dual inversion of visible, near and thermal infrared remotely sensed data

    NASA Technical Reports Server (NTRS)

    Brackett, Robert A.; Arvidson, Raymond E.

    1993-01-01

    A technique is presented that allows extraction of compositional and textural information from visible, near-infrared, and thermal-infrared remotely sensed data. Using a library of both emissivity and reflectance spectra, endmember abundances and endmember thermal inertias are extracted from AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) and TIMS (Thermal Infrared Multispectral Scanner) data over Lunar Crater Volcanic Field, Nevada, using a dual inversion. The inversion technique is motivated by upcoming Mars Observer data and the need to separate composition and texture parameters from sub-pixel mixtures of bedrock and dust. The model employed offers the opportunity to extract compositional and textural information for a variety of endmembers within a given pixel. Geologic inferences concerning grain size, abundance, and source of endmembers can be made directly from the inverted data. These parameters are of direct relevance to Mars exploration, both for Mars Observer and for follow-on missions.

  19. Season-ahead water quality forecasts for the Schuylkill River, Pennsylvania

    NASA Astrophysics Data System (ADS)

    Block, P. J.; Leung, K.

    2013-12-01

    Anticipating and preparing for elevated water quality parameter levels in critical water sources, using weather forecasts, is not uncommon. In this study, we explore the feasibility of extending this prediction scale to a season-ahead for the Schuylkill River in Philadelphia, utilizing both statistical and dynamical prediction models, to characterize the season. This advance information has relevance for recreational activities, ecosystem health, and water treatment, as the Schuylkill provides 40% of Philadelphia's water supply. The statistical model associates large-scale climate drivers with streamflow and water quality parameter levels; numerous variables from NOAA's CFSv2 model are evaluated for the dynamical approach. A multi-model combination is also assessed. Results indicate moderately skillful prediction of average summertime total coliform and wintertime turbidity, using season-ahead oceanic and atmospheric variables, predominantly from the North Atlantic Ocean. Models predicting the number of elevated turbidity events across the wintertime season are also explored.

  20. Effects of noise levels and call types on the source levels of killer whale calls.

    PubMed

    Holt, Marla M; Noren, Dawn P; Emmons, Candice K

    2011-11-01

    Accurate parameter estimates relevant to the vocal behavior of marine mammals are needed to assess potential effects of anthropogenic sound exposure, including how masking noise reduces the active space of sounds used for communication. Information about how these animals modify their vocal behavior in response to noise exposure is also needed for such assessments. Prior studies have reported variations in the source levels of killer whale sounds, and a more recent study reported that killer whales compensate for vessel masking noise by increasing their call amplitude. The objectives of the current study were to investigate the source levels of a variety of call types in southern resident killer whales while also considering background noise level as a likely factor related to call source level variability. The source levels of 763 discrete calls, along with corresponding background noise, were measured over three summer field seasons in the waters surrounding the San Juan Islands, WA. Both noise level and call type had significant effects on call source levels (1-40 kHz band, range of 135.0-175.7 dB rms re 1 μPa at 1 m). These factors should be considered in models that predict how anthropogenic masking noise reduces vocal communication space in marine mammals.
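    For context, source levels such as those reported above are conventionally back-calculated from received levels via a transmission-loss model; a minimal sketch assuming simple spherical spreading (the study's actual propagation model may differ) is:

```python
import math

# Illustrative back-calculation of a call's source level (dB rms re 1 uPa
# at 1 m) from its received level, assuming spherical spreading only; real
# studies localize the animal acoustically and may add absorption terms.
def source_level(received_level_db, range_m):
    transmission_loss = 20.0 * math.log10(range_m)  # spherical spreading
    return received_level_db + transmission_loss

sl = source_level(received_level_db=125.0, range_m=1000.0)  # 125 + 60 dB
```

At 1 km the spreading loss is 60 dB, so a 125 dB received call maps to a 185 dB source level, inside the range the abstract reports.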

  1. Data integration for inference about spatial processes: A model-based approach to test and account for data inconsistency

    PubMed Central

    Pedrini, Paolo; Bragalanti, Natalia; Groff, Claudio

    2017-01-01

    Recently developed methods that integrate multiple data sources arising from the same ecological processes have typically utilized structured data from well-defined sampling protocols (e.g., capture-recapture and telemetry). Despite this new methodological focus, the value of opportunistic data for improving inference about spatial ecological processes is unclear and, perhaps more importantly, no procedures are available to formally test whether parameter estimates are consistent across data sources and whether they are suitable for integration. Using data collected on the reintroduced brown bear population in the Italian Alps, a population of conservation importance, we combined data from three sources: traditional spatial capture-recapture data, telemetry data, and opportunistic data. We developed a fully integrated spatial capture-recapture (SCR) model that included a model-based test for data consistency, first comparing model estimates using different combinations of data and then, by acknowledging data-type differences, evaluating parameter consistency. We demonstrate that opportunistic data lend themselves naturally to integration within the SCR framework and highlight their value for improving inference about space use and population size. This is particularly relevant in studies of rare or elusive species, where the number of spatial encounters is usually small and where additional observations are of high value. In addition, our results highlight the importance of testing and accounting for inconsistencies in spatial information from structured and unstructured data so as to avoid the risk of spurious or averaged estimates of space use and, consequently, of population size. Our work supports the use of a single modeling framework to combine spatially referenced data while also accounting for parameter consistency. PMID:28973034

  2. Strain Transient Detection Techniques: A Comparison of Source Parameter Inversions of Signals Isolated through Principal Component Analysis (PCA), Non-Linear PCA, and Rotated PCA

    NASA Astrophysics Data System (ADS)

    Lipovsky, B.; Funning, G. J.

    2009-12-01

    We compare several techniques for the analysis of geodetic time series, with the ultimate aim of characterizing the physical processes represented therein. We compare three methods for the analysis of these data: Principal Component Analysis (PCA), Non-Linear PCA (NLPCA), and Rotated PCA (RPCA). We evaluate each method by its ability to isolate signals which may be any combination of low amplitude (near noise level), temporally transient, unaccompanied by seismic emissions, and small in scale with respect to the spatial domain. PCA is a powerful tool for extracting structure from large datasets, traditionally realized through either the solution of an eigenvalue problem or through iterative methods. PCA is a transformation of the coordinate system of our data such that the new "principal" data axes retain maximal variance and minimal reconstruction error (Pearson, 1901; Hotelling, 1933). RPCA is achieved by an orthogonal transformation of the principal axes determined in PCA. In the analysis of meteorological data sets, RPCA has been seen to overcome domain shape dependencies, correct for sampling errors, and determine principal axes which more closely represent physical processes (e.g., Richman, 1986). NLPCA generalizes PCA such that principal axes are replaced by principal curves (e.g., Hsieh, 2004). We achieve NLPCA through an auto-associative feed-forward neural network (Scholz, 2005). We show the geophysical relevance of these techniques by applying each to a synthetic data set. Results are compared by inverting the principal axes to determine deformation source parameters. Temporal variability in the source parameters estimated by each method is also compared.
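    The eigenvalue-problem formulation of PCA mentioned above can be sketched in a few lines (synthetic data; not the authors' geodetic code):

```python
import numpy as np

# Minimal PCA by eigendecomposition of the covariance matrix, the classical
# Pearson/Hotelling formulation referenced above; data are synthetic.
rng = np.random.default_rng(3)
data = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])
centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
order = np.argsort(eigvals)[::-1]            # principal axes first
components = eigvecs[:, order]               # orthonormal principal axes
scores = centered @ components               # data in principal coordinates
explained = eigvals[order] / eigvals.sum()   # variance fraction per axis
```

RPCA would apply a further orthogonal rotation to `components`, and NLPCA would replace the straight axes with learned curves; the decomposition above is the common starting point.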

  3. Trends and sources vs air mass origins in a major city in South-western Europe: Implications for air quality management.

    PubMed

    Fernández-Camacho, R; de la Rosa, J D; Sánchez de la Campa, A M

    2016-05-15

    This study presents a 17-year air quality database comprising different parameters for the largest city in the south of Spain (Seville), where atmospheric pollution is frequently attributed to traffic emissions and is directly affected by Saharan dust outbreaks. We identify the PM10 contributions from both natural and anthropogenic sources in this area associated with different air mass origins. Hourly, daily and seasonal variations of PM10 and gaseous pollutant concentrations (CO, NO2 and SO2), all of them showing negative trends during the study period, point to traffic as one of the main sources of air pollution in Seville. Mineral dust, secondary inorganic compounds (SIC) and trace elements showed higher concentrations under North African (NAF) air mass origins than under Atlantic ones. We observe a decreasing trend in all chemical components of PM10 under both types of air masses, NAF and Atlantic. Principal component analysis using the more frequent air masses in the area allows the identification of five PM10 sources: crustal, regional, marine, traffic and industrial. Natural sources play a more relevant role during NAF events (20.6 μg·m⁻³) than in Atlantic episodes (13.8 μg·m⁻³). The contribution of the anthropogenic sources under NAF is double that under Atlantic conditions (33.6 μg·m⁻³ and 15.8 μg·m⁻³, respectively). During Saharan dust outbreaks the frequent accumulation of local anthropogenic pollutants in the lower atmosphere results in poor air quality and an increased risk of mortality. These results are relevant when analysing the impact of anthropogenic emissions on the exposed population of large cities. The increase in potentially toxic elements during Saharan dust outbreaks should also be taken into account when discounting the number of exceedances attributable to non-anthropogenic or natural origins.

  4. Searching for continuous gravitational wave sources in binary systems

    NASA Astrophysics Data System (ADS)

    Dhurandhar, Sanjeev V.; Vecchio, Alberto

    2001-06-01

    We consider the problem of searching for continuous gravitational wave (cw) sources orbiting a companion object. This issue is of particular interest because the low-mass X-ray binaries (LMXBs), and among them Sco X-1, the brightest X-ray source in the sky, might be marginally detectable with ~2 y of coherent observation time by the Earth-based laser interferometers expected to come on line by 2002, and clearly observable by the second generation of detectors. Moreover, several radio pulsars, which could be deemed cw sources, are found to orbit a companion star or planet, and the LIGO-VIRGO-GEO600 network plans to continuously monitor such systems. We estimate the computational costs for a search launched over the additional five parameters describing generic elliptical orbits (up to e ≲ 0.8) using matched-filtering techniques. These techniques provide the optimal signal-to-noise ratio and also a very clear and transparent theoretical framework. Since matched filtering will be implemented in the final and most computationally expensive stage of the hierarchical strategies, the theoretical framework provided here can be used to determine the computational costs. In order to disentangle the computational burden involved in the orbital motion of the cw source from the other source parameters (position in the sky and spin-down) and reduce the complexity of the analysis, we assume that the source is monochromatic (there is no intrinsic change in its frequency) and that its location in the sky is exactly known. The orbital elements, on the other hand, are either assumed to be completely unknown or only partly known. We provide ready-to-use analytical expressions for the number of templates required to carry out the searches in the astrophysically relevant regions of the parameter space, and for how the computational cost scales with the ranges of the parameters.
We also determine the critical accuracy to which a particular parameter must be known so that no search over it is needed; we provide rigorous statements, based on the geometrical formulation of data analysis, concerning the size of the parameter space such that a particular neutron star is a one-filter target. This result is formulated in a completely general form, independent of the particular kind of source, and can be applied to any class of signals whose waveform can be accurately predicted. We apply our theoretical analysis to Sco X-1 and the 44 neutron stars with binary companions which are listed in the most updated version of the radio pulsar catalog. For up to ~3 h of coherent integration time, Sco X-1 will need at most a few templates; for 1 week of integration time the number of templates rapidly rises to ≈5×10⁶. This is due to the rather poor measurements available today of the projected semi-major axis and the orbital phase of the neutron star. If, however, the same search is to be carried out with only a few filters, then more refined measurements of the orbital parameters are called for: an improvement of about three orders of magnitude in accuracy is required. Further, we show that the five neutron stars (radio pulsars) for which the upper limits on the signal strength are highest require no more than a few templates each and can be targeted very cheaply in terms of CPU time. Blind searches of the parameter space of orbital elements are, in general, completely unaffordable for present or near-future dedicated computational resources when the coherent integration time is of the order of the orbital period or longer. For wide binary systems, when the observation covers only a fraction of one orbit, the computational burden reduces enormously and becomes affordable for a significant region of the parameter space.
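    The matched-filtering operation whose template counts are being estimated can be illustrated with a minimal time-domain sketch (synthetic white noise and an arbitrary windowed-sinusoid template, not a cw-source waveform):

```python
import numpy as np

# Hedged sketch of matched filtering: correlate the data against a known
# waveform template and read off the offset maximizing the correlation.
# Signal, noise, and template are synthetic.
rng = np.random.default_rng(4)
n, true_offset = 4096, 1500
t = np.arange(256)
template = np.sin(2 * np.pi * 0.05 * t) * np.hanning(256)
data = rng.normal(0.0, 1.0, n)
data[true_offset:true_offset + 256] += 3.0 * template  # injected signal

corr = np.correlate(data, template, mode="valid")      # matched-filter output
recovered_offset = int(np.argmax(corr))
```

A real cw search repeats this against a bank of templates, one per point on a grid over the orbital parameters, which is why the template counts above translate directly into CPU cost.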

  5. Material exposure effects in a simulated low-Earth orbit environment

    NASA Astrophysics Data System (ADS)

    Maldonado, C.; McHarg, G.; Asmolova, O.; Andersen, G.; Rodrigues, S.; Ketsdever, A.

    2016-11-01

    Spacecraft operating in low-Earth orbit (LEO) are subjected to a number of hazardous environmental constituents that can lead to decreased system performance and reduced operational lifetimes. Due to their thermal, optical, and mechanical properties, polymers are used extensively in space systems; however, they are particularly susceptible to material erosion and degradation as a result of exposure to the LEO environment. The focus of this research is to examine the material erosion and mass loss experienced by the Novastrat 500 polyimide due to exposure in a simulated LEO environment. In addition to the polymer samples, chromium, silver, and gold specimens will be examined, both to measure the oxidation rate and to act as control specimens. A magnetically filtered atomic oxygen plasma source has previously been developed and characterized for the purpose of simulating the low-Earth orbit environment. The plasma source can be operated at a variety of discharge currents and gas flow rates, on which the plasma parameters downstream of the source depend. The characteristics of the generated plasma were examined as a function of these operating parameters to optimize the production of O+ ions with energy relevant to LEO applications, where the ram energy of the ions due to the motion of the satellite relative to the LEO plasma is high (e.g. 7800 m/s, which corresponds to approximately 5 eV of kinetic energy for O+ ions). The plasma downstream of the source consists of streaming ions with energy of approximately 5 eV and an ion species fraction that is approximately 90% O+.
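    The ~5 eV ram-energy figure quoted above follows directly from the kinetic energy of an O+ ion (≈16 u) at orbital speed; a quick check:

```python
# Quick check of the ram-energy figure: an O+ ion (~16 u) hitting a surface
# at LEO orbital speed ~7800 m/s carries roughly 5 eV of kinetic energy.
AMU = 1.66053906660e-27   # kg per atomic mass unit
EV = 1.602176634e-19      # J per electronvolt
mass_O = 16.0 * AMU
v = 7800.0                # m/s, LEO orbital speed
energy_eV = 0.5 * mass_O * v**2 / EV   # about 5 eV
```

This is why a ground-based source only needs to deliver ~5 eV O+ ions to reproduce the ram impact energy, even though the satellite, not the gas, is doing the moving on orbit.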

  6. Progress in Fast Ignition Studies with Electrons and Protons

    NASA Astrophysics Data System (ADS)

    MacKinnon, A. J.; Akli, K. U.; Bartal, T.; Beg, F. N.; Chawla, S.; Chen, C. D.; Chen, H.; Chen, S.; Chowdhury, E.; Fedosejevs, R.; Freeman, R. R.; Hey, D.; Higginson, D.; Key, M. H.; King, J. A.; Link, A.; Ma, T.; MacPhee, A. G.; Offermann, D.; Ovchinnikov, V.; Pasley, J.; Patel, P. K.; Ping, Y.; Schumacher, D. W.; Stephens, R. B.; Tsui, Y. Y.; Wei, M. S.; Van Woerkom, L. D.

    2009-09-01

    Isochoric heating of inertially confined fusion plasmas by laser-driven MeV electrons or protons is an area of great topical interest in the inertial confinement fusion community, particularly with respect to the fast ignition (FI) concept for initiating burn in a fusion capsule. In order to investigate critical aspects needed for an FI point design, experiments were performed to study 1) laser-to-electron or -proton conversion issues and 2) laser-cone interactions, including prepulse effects. A large suite of diagnostics was utilized to study these important parameters. Using cone-wire surrogate targets, it is found that pre-pulse levels on medium-scale lasers such as Titan at Lawrence Livermore National Laboratory produce long-scale-length plasmas that strongly affect coupling of the laser to FI-relevant electrons inside cones. The cone wall thickness also affects coupling to the wire. Conversion efficiency to protons has also been measured and modeled as a function of target thickness and material. Conclusions from the proton and electron source experiments will be presented. Recent advances in modeling electron transport and innovative target designs for reducing igniter energy and increasing gain curves will also be discussed. Finally, a program of study will be presented based on understanding the fundamental physics of the electron or proton source relevant to FI.

  7. Parameter estimation of qubit states with unknown phase parameter

    NASA Astrophysics Data System (ADS)

    Suzuki, Jun

    2015-02-01

    We discuss a problem of parameter estimation for a quantum two-level (qubit) system in the presence of an unknown phase parameter. We analyze trade-off relations for the mean square errors (MSEs) when estimating the relevant parameters with separable measurements, based on known precision bounds: the symmetric logarithmic derivative (SLD) Cramér-Rao (CR) bound and the Hayashi-Gill-Massar (HGM) bound. We investigate the optimal measurement which attains the HGM bound and discuss its properties. We show that the HGM bound for the relevant parameters can be attained asymptotically by using some fraction of the given n quantum states to estimate the phase parameter. We also discuss the Holevo bound, which can be attained asymptotically by a collective measurement.
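    The SLD quantity underlying the SLD Cramér-Rao bound can be computed explicitly for a simple qubit phase model (an illustrative single-parameter example, not the paper's full multi-parameter setting): for a Bloch vector of radius r rotating in the equatorial plane, the SLD Fisher information is known to equal r².

```python
import numpy as np

# Hedged sketch: SLD Fisher information for a qubit whose Bloch vector of
# radius r rotates in the equatorial plane by a phase theta. The SLD L
# solves drho/dtheta = (L rho + rho L)/2, solved here in rho's eigenbasis.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
r, theta = 0.8, 0.3

rho = 0.5 * (np.eye(2) + r * (np.cos(theta) * sx + np.sin(theta) * sy))
drho = 0.5 * r * (-np.sin(theta) * sx + np.cos(theta) * sy)  # d rho/d theta

p, U = np.linalg.eigh(rho)                  # eigenvalues (1 -/+ r)/2
drho_eb = U.conj().T @ drho @ U
L_eb = 2.0 * drho_eb / (p[:, None] + p[None, :])  # SLD in eigenbasis
L = U @ L_eb @ U.conj().T
fisher = float(np.real(np.trace(rho @ L @ L)))    # equals r**2 here
```

The SLD CR bound then reads MSE ≥ 1/(n·fisher) for n independent copies; the trade-offs the abstract analyzes arise when several such parameters must be estimated from the same states.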

  8. Electronic dendrometer

    DOEpatents

    Sauer, deceased, Ronald H.; Beedlow, Peter A.

    1985-01-01

    Disclosed is a dendrometer for use on soft stemmed herbaceous plants. The dendrometer uses elongated jaws to engage the plant stem securely but without appreciable distortion or collapse of the stem. A transducer made of flexible, noncorrodible and temperature stable material spans between the jaws which engage the plant stem. Strain gauges are attached at appropriate locations on a transducer member and are connected to a voltage source and voltmeter to monitor changes in plant stem size. A microprocessor can be used to integrate the plant stem size information with other relevant environmental parameters and the data can be recorded on magnetic tape or used in other data processing equipment.
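    For a sense of the signal levels involved, a strain-gauge transducer like the one described is typically read out through a Wheatstone bridge; a quarter-bridge sketch with illustrative numbers (not taken from the patent) is:

```python
# Hedged arithmetic sketch of how a strain-gauge transducer converts stem
# growth into a measurable voltage: a quarter Wheatstone bridge gives
# Vout ~= V_ex * GF * strain / 4 for small strains.
def quarter_bridge_output(v_excitation, gauge_factor, strain):
    return v_excitation * gauge_factor * strain / 4.0

v_out = quarter_bridge_output(v_excitation=5.0, gauge_factor=2.0,
                              strain=500e-6)   # 500 microstrain -> ~1.25 mV
```

Millivolt-scale outputs like this are why the patent pairs the gauges with a stable voltage source and a voltmeter before handing the readings to a microprocessor.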

  9. CyberArc: a non-coplanar-arc optimization algorithm for CyberKnife

    NASA Astrophysics Data System (ADS)

    Kearney, Vasant; Cheung, Joey P.; McGuinness, Christopher; Solberg, Timothy D.

    2017-07-01

    The goal of this study is to demonstrate the feasibility of a novel non-coplanar-arc optimization algorithm (CyberArc). This method aims to reduce the delivery time of conventional CyberKnife treatments by allowing for continuous beam delivery. CyberArc uses a four-step optimization strategy, in which nodes, beams, and collimator sizes are determined, source trajectories are calculated, intermediate radiation models are generated, and final monitor units are calculated for the continuous radiation source model. The dosimetric results as well as the time reduction factors for CyberArc are presented for 7 prostate and 2 brain cases. The dosimetric quality of the CyberArc plans is evaluated using the conformity index, heterogeneity index, local confined normalized mutual information, and various clinically relevant dosimetric parameters. The results indicate that the CyberArc algorithm dramatically reduces the treatment time of CyberKnife plans while simultaneously preserving the dosimetric quality of the original plans.
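    Two of the plan-quality metrics named above have standard definitions that can be sketched directly (the paper may use variants; the numbers below are illustrative): the Paddick conformity index and a simple heterogeneity index.

```python
# Hedged sketch of two common plan-quality metrics. Volumes in cc, doses in
# Gy; all values illustrative.
def paddick_ci(target_vol, prescription_isodose_vol, target_vol_covered):
    # CI = TV_PIV^2 / (TV * PIV); 1.0 means perfect conformity
    return target_vol_covered**2 / (target_vol * prescription_isodose_vol)

def heterogeneity_index(d_max, d_prescription):
    # HI = Dmax / Dprescription; 1.0 means a perfectly homogeneous target dose
    return d_max / d_prescription

ci = paddick_ci(target_vol=30.0, prescription_isodose_vol=35.0,
                target_vol_covered=28.5)
hi = heterogeneity_index(d_max=45.0, d_prescription=36.0)
```

Comparing these indices between the original CyberKnife plans and the CyberArc plans is how "preserving the dosimetric quality" is quantified.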

  10. Utilization of Ancillary Data Sets for SMAP Algorithm Development and Product Generation

    NASA Technical Reports Server (NTRS)

    ONeill, P.; Podest, E.; Njoku, E.

    2011-01-01

    Algorithms being developed for the Soil Moisture Active Passive (SMAP) mission require a variety of both static and ancillary data. The selection of the most appropriate source for each ancillary data parameter is driven by a number of considerations, including accuracy, latency, availability, and consistency across all SMAP products and with SMOS (Soil Moisture Ocean Salinity). It is anticipated that initial selection of all ancillary datasets, which are needed for ongoing algorithm development activities on the SMAP algorithm testbed at JPL, will be completed within the year. These datasets will be updated as new or improved sources become available, and all selections and changes will be documented for the benefit of the user community. Wise choices in ancillary data will help to enable SMAP to provide new global measurements of soil moisture and freeze/thaw state at the targeted accuracy necessary to tackle hydrologically-relevant societal issues.

  11. CyberArc: a non-coplanar-arc optimization algorithm for CyberKnife.

    PubMed

    Kearney, Vasant; Cheung, Joey P; McGuinness, Christopher; Solberg, Timothy D

    2017-06-26

    The goal of this study is to demonstrate the feasibility of a novel non-coplanar-arc optimization algorithm (CyberArc). This method aims to reduce the delivery time of conventional CyberKnife treatments by allowing for continuous beam delivery. CyberArc uses a four-step optimization strategy, in which nodes, beams, and collimator sizes are determined, source trajectories are calculated, intermediate radiation models are generated, and final monitor units are calculated for the continuous radiation source model. The dosimetric results as well as the time reduction factors for CyberArc are presented for 7 prostate and 2 brain cases. The dosimetric quality of the CyberArc plans is evaluated using the conformity index, heterogeneity index, local confined normalized mutual information, and various clinically relevant dosimetric parameters. The results indicate that the CyberArc algorithm dramatically reduces the treatment time of CyberKnife plans while simultaneously preserving the dosimetric quality of the original plans.

  12. 2D dose distribution images of a hybrid low field MRI-γ detector

    NASA Astrophysics Data System (ADS)

    Abril, A.; Agulles-Pedrós, L.

    2016-07-01

    The proposed hybrid system combines a low-field MRI with a dosimetric gel acting as a γ detector. The readout is based on the polymerization process induced in the gel by radiation. A gel dose map is obtained, which represents the functional part of the hybrid image alongside the anatomical MRI image. Both images should be taken while the patient, having received a radiopharmaceutical, is located inside the MRI system with a gel detector matrix. A relevant aspect of this proposal is that dosimetric gel has never before been used to acquire medical images. The results presented show the interaction of a 99mTc source with the dosimetric gel, simulated in Geant4, with the purpose of obtaining a planar 2D γ image. Different source configurations are studied to explore the ability of the gel as a radiation detector through the following parameters: resolution, shape definition, and radiopharmaceutical concentration.

  13. Real-time Forensic Disaster Analysis

    NASA Astrophysics Data System (ADS)

    Wenzel, F.; Daniell, J.; Khazai, B.; Mühr, B.; Kunz-Plapp, T.; Markus, M.; Vervaeck, A.

    2012-04-01

    The Center for Disaster Management and Risk Reduction Technology (CEDIM, www.cedim.de), an interdisciplinary research center founded by the German Research Centre for Geosciences (GFZ) and the Karlsruhe Institute of Technology (KIT), has embarked on a new style of disaster research known as Forensic Disaster Analysis. The notion was coined by the Integrated Research on Disaster Risk initiative (IRDR, www.irdrinternational.org) launched by ICSU in 2010. It has been defined as an approach to studying natural disasters that aims at uncovering the root causes of disasters through in-depth investigations that go beyond the reconnaissance reports and case studies typically conducted after disasters. In adopting this comprehensive understanding of disasters, CEDIM adds a real-time component to the assessment and evaluation process. By comprehensive we mean that most, if not all, relevant aspects of disasters are considered and jointly analysed. This includes the impact (human, economic, and infrastructure), comparisons with recent historic events, social vulnerability, reconstruction, and long-term impacts on livelihoods. The forensic disaster analysis research mode is thus best characterized as "event-based research": systematic investigation of critical issues arising after a disaster across various inter-related areas. The forensic approach requires (a) availability of global databases on previous earthquake losses, socio-economic parameters, building stock information, etc.; (b) leveraging platforms such as the EERI clearinghouse, ReliefWeb, and the many local and international sources where information is organized; and (c) rapid access to critical information (e.g., crowd-sourcing techniques) to improve our understanding of the complex dynamics of disasters. The main scientific questions being addressed are: What are the critical factors that control loss of life, of infrastructure, and for the economy? What are the critical interactions between hazards, socio-economic systems, and technological systems? What were the protective measures, and to what extent did they work? Can we predict the pattern of losses and socio-economic implications of future extreme events from simple parameters: hazard parameters, historic evidence, socio-economic conditions? Can we predict implications for reconstruction from the same simple parameters? The M7.2 Van Earthquake (Eastern Turkey) of 23 Oct. 2011 serves as an example of the forensic approach.

  14. NMReDATA, a standard to report the NMR assignment and parameters of organic compounds.

    PubMed

    Pupier, Marion; Nuzillard, Jean-Marc; Wist, Julien; Schlörer, Nils E; Kuhn, Stefan; Erdelyi, Mate; Steinbeck, Christoph; Williams, Antony J; Butts, Craig; Claridge, Tim D W; Mikhova, Bozhana; Robien, Wolfgang; Dashti, Hesam; Eghbalnia, Hamid R; Farès, Christophe; Adam, Christian; Kessler, Pavel; Moriaud, Fabrice; Elyashberg, Mikhail; Argyropoulos, Dimitris; Pérez, Manuel; Giraudeau, Patrick; Gil, Roberto R; Trevorrow, Paul; Jeannerat, Damien

    2018-04-14

    Even though NMR has found countless applications in the field of small-molecule characterization, no standard file format has been available for the NMR data relevant to the structure characterization of small molecules. A new format is therefore introduced to associate the NMR parameters extracted from 1D and 2D spectra of organic compounds with the proposed chemical structure. These NMR parameters, which we shall call NMReDATA (for nuclear magnetic resonance extracted data), include chemical shift values, signal integrals, intensities, multiplicities, scalar coupling constants, lists of 2D correlations, relaxation times, and diffusion rates. The file format is an extension of the existing Structure Data Format, which is compatible with the commonly used MOL format. The association of an NMReDATA file with the raw and spectral data from which it originates constitutes an NMR record. This format is easily readable by humans and computers and provides a simple and efficient way to disseminate the results of structural chemistry investigations, to allow automatic verification of published results, and to assist in building much-needed open-source structural databases. Copyright © 2018 John Wiley & Sons, Ltd.

  15. Failure analysis of parameter-induced simulation crashes in climate models

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Klein, R.; Tannahill, J.; Ivanova, D.; Brandon, S.; Domyancic, D.; Zhang, Y.

    2013-01-01

    Simulations using IPCC-class climate models can fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5% of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We apply support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicts model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures are determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations are the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.
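    The classification scheme described above can be sketched as follows. This is a minimal illustration with synthetic data, not the authors' code: the 18-parameter POP2 ensemble is replaced by a toy failure rule, and a small committee of RBF support vector machines (using scikit-learn, an assumption about tooling) is scored with the ROC AUC on a held-out validation set.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the ensemble: 18 parameter values per run, with
# "crashes" concentrated where two mixing-related parameters are both large.
X = rng.uniform(0.0, 1.0, size=(600, 18))
y = (X[:, 0] + X[:, 1] > 1.4).astype(int)  # 1 = simulation failure

train, val = slice(0, 400), slice(400, 600)

# Committee of SVM classifiers, each trained on a bootstrap resample;
# the committee score is the mean of the members' decision values.
scores = np.zeros(200)
for seed in range(5):
    idx = np.random.default_rng(seed).integers(0, 400, size=400)
    clf = SVC(kernel="rbf", gamma="scale").fit(X[train][idx], y[train][idx])
    scores += clf.decision_function(X[val]) / 5.0

auc = roc_auc_score(y[val], scores)
print(f"validation AUC = {auc:.3f}")
```

On this easily separable toy problem the committee reaches an AUC well above 0.9, mirroring the kind of validation-ensemble assessment reported in the study.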

  16. Failure analysis of parameter-induced simulation crashes in climate models

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Klein, R.; Tannahill, J.; Ivanova, D.; Brandon, S.; Domyancic, D.; Zhang, Y.

    2013-08-01

    Simulations using IPCC (Intergovernmental Panel on Climate Change)-class climate models can fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5% of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We applied support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicted model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures were determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations were the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.

  17. Review of clinical brachytherapy uncertainties: Analysis guidelines of GEC-ESTRO and the AAPM☆

    PubMed Central

    Kirisits, Christian; Rivard, Mark J.; Baltas, Dimos; Ballester, Facundo; De Brabandere, Marisol; van der Laarse, Rob; Niatsetski, Yury; Papagiannis, Panagiotis; Hellebust, Taran Paulsen; Perez-Calatayud, Jose; Tanderup, Kari; Venselaar, Jack L.M.; Siebert, Frank-André

    2014-01-01

    Background and purpose: A substantial reduction of uncertainties in clinical brachytherapy should result in improved outcomes in terms of increased local control and reduced side effects. Types of uncertainties have to be identified, grouped, and quantified. Methods: A detailed literature review was performed to identify uncertainty components and their relative importance to the combined overall uncertainty. Results: Very few components (e.g., source strength and afterloader timer) are independent of clinical disease site and location of administered dose. While the influence of the medium on dose calculation can be substantial for low-energy sources or non-deeply seated implants, it is of minor importance for high-energy sources in the pelvic region. The level of uncertainty due to target, organ, applicator, and/or source movement in relation to the geometry assumed for treatment planning is highly dependent on fractionation and the level of image-guided adaptive treatment. Most studies to date report their results in a manner that allows no direct reproduction and further comparison with other studies. Often, no distinction is made between variations, uncertainties, and errors or mistakes. The literature review facilitated the drafting of recommendations for uniform uncertainty reporting in clinical brachytherapy (BT), which are also provided. The recommended comprehensive uncertainty investigations are key to obtaining a general impression of uncertainties, and may help to identify elements of the brachytherapy treatment process that need improvement in terms of diminishing their dosimetric uncertainties. It is recommended to present data on the analyzed parameters (distance shifts, volume changes, source or applicator position, etc.), and also their influence on absorbed dose for clinically relevant dose parameters (e.g., target parameters such as D90, or OAR doses). Publications on brachytherapy should include a statement of total dose uncertainty for the entire treatment course, taking into account the fractionation schedule and level of image guidance for adaptation. Conclusions: This report on clinical brachytherapy uncertainties represents a working project developed by the Brachytherapy Physics Quality Assurance System (BRAPHYQS) subcommittee of the Physics Committee within GEC-ESTRO. Further, this report has been reviewed and approved by the American Association of Physicists in Medicine. PMID:24299968
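    Combining independent uncertainty components into a total dose uncertainty is conventionally done by quadrature (root-sum-square) summation of the relative uncertainties. The sketch below illustrates that arithmetic; the component names and percentage values are hypothetical, not figures from the report.

```python
import math

# Hypothetical uncertainty budget (k=1 relative uncertainties, in percent)
# for a brachytherapy treatment course; values are illustrative only.
components = {
    "source strength calibration": 1.5,
    "afterloader timer": 0.5,
    "dose calculation": 3.0,
    "applicator reconstruction": 2.5,
    "interfraction organ motion": 4.0,
}

# Assuming independent components, the combined standard uncertainty is the
# quadrature (root-sum-square) of the individual terms.
combined = math.sqrt(sum(u ** 2 for u in components.values()))
print(f"combined relative uncertainty (k=1): {combined:.1f}%")
```

With these illustrative numbers, the largest component (organ motion, 4%) dominates the ~5.8% combined value, which is why the report emphasizes identifying the elements whose dosimetric uncertainties most need diminishing.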

  18. Recommendations for Guidelines for Environment-Specific Magnetic-Field Measurements, Rapid Program Engineering Project #2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Electric Research and Management, Inc.; IIT Research Institute; Magnetic Measurements

    1997-03-11

    The purpose of this project was to document widely applicable methods for characterizing the magnetic fields in a given environment, recognizing the many sources co-existing within that space. The guidelines are designed to allow the reader to follow an efficient process to (1) plan the goals and requirements of a magnetic-field study, (2) develop a study structure and protocol, and (3) document and carry out the plan. These guidelines take the reader first through the process of developing a basic study strategy, then through planning and performing the data collection. Last, the critical factors of data management, analysis, reporting, and quality assurance are discussed. The guidelines are structured to allow the researcher to develop a protocol that responds to specific site and project needs. The Research and Public Information Dissemination (RAPID) Program is concerned with exposure to magnetic fields and the potential health effects. Therefore, the most important focus for these magnetic-field measurement guidelines is relevance to exposure. The assumed objective of an environment-specific measurement is to characterize the environment (given a set of occupants and magnetic-field sources) so that information about the exposure of the occupants may be inferred. Ideally, the researcher seeks to obtain complete or "perfect" information about these magnetic fields, so that personal exposure might also be modeled perfectly. However, complete data collection is not feasible. In fact, it has been made more difficult as the research field has moved to expand the list of field parameters measured, increasing the cost and complexity of performing a measurement and analyzing the data. The guidelines address this issue by guiding the user to design a measurement protocol that will gather the most exposure-relevant information based on the locations of people in relation to the sources. We suggest that the "microenvironment" become the base unit of area in a study, with boundaries defined by the occupants' activity patterns and the field variation from the sources affecting the area. Such a stratification allows the researcher to determine which microenvironments are of most interest, and to focus methodically on those areas in order to gather the most relevant set of data.

  19. Why the impact of mechanical stimuli on stem cells remains a challenge.

    PubMed

    Goetzke, Roman; Sechi, Antonio; De Laporte, Laura; Neuss, Sabine; Wagner, Wolfgang

    2018-05-04

    Mechanical stimulation affects the growth and differentiation of stem cells. This may be used to guide lineage-specific cell fate decisions and therefore opens fascinating opportunities for stem cell biology and regenerative medicine. Several studies have demonstrated functional and molecular effects of mechanical stimulation, but at first sight these results often appear to be inconsistent. Comparison of such studies is hampered by a multitude of relevant parameters that act in concert. There are notorious differences between species, cell types, and culture conditions. Furthermore, the utilized culture substrates have complex features, such as surface chemistry, elasticity, and topography. Cell culture substrates can vary from simple, flat materials to complex 3D scaffolds. Last but not least, mechanical forces can be applied with different frequency, amplitude, and strength. It is therefore a prerequisite to take all these parameters into consideration when ascribing their specific functional relevance, and to modulate only one parameter at a time when the relevance of that parameter is addressed. Such research questions can only be investigated by interdisciplinary cooperation. In this review, we focus particularly on mesenchymal stem cells and pluripotent stem cells to discuss the relevant parameters that contribute to the kaleidoscope of mechanical stimulation of stem cells.

  20. Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities

    NASA Astrophysics Data System (ADS)

    Esposito, Gaetano

    Numerical simulations of critical reacting flow phenomena in hypersonic propulsion devices require accurate representation of finite-rate chemical kinetics. The chemical kinetic models available for hydrocarbon fuel combustion are rather large, involving hundreds of species and thousands of reactions. As a consequence, they cannot be used in multi-dimensional computational fluid dynamic calculations in the foreseeable future due to the prohibitive computational cost. In addition to the computational difficulties, it is also known that some fundamental chemical kinetic parameters of detailed models have a significant level of uncertainty, due to the limited experimental data available and to poor understanding of interactions among kinetic parameters. In the present investigation, local and global sensitivity analysis techniques are employed to develop a systematic approach to reducing and analyzing detailed chemical kinetic models. Unlike previous studies in which skeletal model reduction was based on the separate analysis of simple cases, in this work a novel strategy based on Principal Component Analysis of local sensitivity values is presented. This new approach is capable of simultaneously taking into account all the relevant canonical combustion configurations over different composition, temperature, and pressure conditions. Moreover, the procedure developed in this work represents the first documented inclusion of non-premixed extinction phenomena, which are of great relevance in hypersonic combustors, in an automated reduction algorithm. The application of the skeletal reduction to a detailed kinetic model consisting of 111 species in 784 reactions is demonstrated. The resulting reduced skeletal model of 37-38 species shows that the global ignition/propagation/extinction phenomena of ethylene-air mixtures can be predicted within 2% of the full detailed model.
The problems of both understanding non-linear interactions between kinetic parameters and identifying sources of uncertainty affecting relevant reaction pathways are usually addressed by resorting to Global Sensitivity Analysis (GSA) techniques. In particular, the most sensitive reactions controlling combustion phenomena are first identified using the Morris Method and then analyzed under the Random Sampling -- High Dimensional Model Representation (RS-HDMR) framework. The HDMR decomposition shows that 10% of the variance seen in the extinction strain rate of non-premixed flames is due to second-order effects between parameters, whereas the maximum concentration of acetylene, a key soot precursor, is affected by mostly only first-order contributions. Moreover, the analysis of the global sensitivity indices demonstrates that improving the accuracy of the reaction rates involving the vinyl radical, C2H3, can drastically reduce the uncertainty of predicting targeted flame properties. Finally, the back-propagation of the experimental uncertainty of the extinction strain rate to the parameter space is also performed. This exercise, achieved by recycling the numerical solutions of the RS-HDMR, shows that some regions of the parameter space have a high probability of reproducing the experimental value of the extinction strain rate within its uncertainty bounds. Therefore this study demonstrates that the uncertainty analysis of bulk flame properties can effectively provide information on relevant chemical reactions.
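    The Morris screening step described above rests on elementary effects: one-at-a-time finite differences averaged over many random starting points, where a large mean absolute effect flags an influential parameter and a large spread flags nonlinearity or interactions. A minimal sketch with a toy three-parameter model (the function, step size, and trajectory count are illustrative, not the chemical mechanism):

```python
import numpy as np

def model(x):
    # Toy stand-in for a flame response: strong in x0, weak in x2,
    # with an x0*x1 interaction (a second-order effect).
    return 4.0 * x[0] + 0.1 * x[2] + 2.0 * x[0] * x[1]

rng = np.random.default_rng(1)
k, r, delta = 3, 50, 0.25          # parameters, trajectories, step size

ee = np.zeros((r, k))
for t in range(r):
    x = rng.uniform(0.0, 1.0 - delta, size=k)  # keep x + delta inside [0, 1]
    f0 = model(x)
    for i in range(k):             # one-at-a-time perturbations
        xp = x.copy()
        xp[i] += delta
        ee[t, i] = (model(xp) - f0) / delta

mu_star = np.abs(ee).mean(axis=0)  # mean absolute elementary effect
sigma = ee.std(axis=0)             # spread -> nonlinearity/interactions
print("mu* =", np.round(mu_star, 2))
```

Here mu* correctly ranks x0 above x1 above x2, and sigma is zero only for the purely linear x2, which is the kind of evidence a subsequent RS-HDMR analysis would refine into first- and second-order variance contributions.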

  1. The PRIMA Test Facility: SPIDER and MITICA test-beds for ITER neutral beam injectors

    NASA Astrophysics Data System (ADS)

    Toigo, V.; Piovan, R.; Dal Bello, S.; Gaio, E.; Luchetta, A.; Pasqualotto, R.; Zaccaria, P.; Bigi, M.; Chitarin, G.; Marcuzzi, D.; Pomaro, N.; Serianni, G.; Agostinetti, P.; Agostini, M.; Antoni, V.; Aprile, D.; Baltador, C.; Barbisan, M.; Battistella, M.; Boldrin, M.; Brombin, M.; Dalla Palma, M.; De Lorenzi, A.; Delogu, R.; De Muri, M.; Fellin, F.; Ferro, A.; Fiorentin, A.; Gambetta, G.; Gnesotto, F.; Grando, L.; Jain, P.; Maistrello, A.; Manduchi, G.; Marconato, N.; Moresco, M.; Ocello, E.; Pavei, M.; Peruzzo, S.; Pilan, N.; Pimazzoni, A.; Recchia, M.; Rizzolo, A.; Rostagni, G.; Sartori, E.; Siragusa, M.; Sonato, P.; Sottocornola, A.; Spada, E.; Spagnolo, S.; Spolaore, M.; Taliercio, C.; Valente, M.; Veltri, P.; Zamengo, A.; Zaniol, B.; Zanotto, L.; Zaupa, M.; Boilson, D.; Graceffa, J.; Svensson, L.; Schunke, B.; Decamps, H.; Urbani, M.; Kushwah, M.; Chareyre, J.; Singh, M.; Bonicelli, T.; Agarici, G.; Garbuglia, A.; Masiello, A.; Paolucci, F.; Simon, M.; Bailly-Maitre, L.; Bragulat, E.; Gomez, G.; Gutierrez, D.; Mico, G.; Moreno, J.-F.; Pilard, V.; Kashiwagi, M.; Hanada, M.; Tobari, H.; Watanabe, K.; Maejima, T.; Kojima, A.; Umeda, N.; Yamanaka, H.; Chakraborty, A.; Baruah, U.; Rotti, C.; Patel, H.; Nagaraju, M. V.; Singh, N. P.; Patel, A.; Dhola, H.; Raval, B.; Fantz, U.; Heinemann, B.; Kraus, W.; Hanke, S.; Hauer, V.; Ochoa, S.; Blatchford, P.; Chuilon, B.; Xue, Y.; De Esch, H. P. L.; Hemsworth, R.; Croci, G.; Gorini, G.; Rebai, M.; Muraro, A.; Tardocchi, M.; Cavenago, M.; D'Arienzo, M.; Sandri, S.; Tonti, A.

    2017-08-01

    The ITER Neutral Beam Test Facility (NBTF), called PRIMA (Padova Research on ITER Megavolt Accelerator), is hosted in Padova, Italy and includes two experiments: MITICA, the full-scale prototype of the ITER heating neutral beam injector, and SPIDER, the full-size radio-frequency negative-ion source. The NBTF realization and the exploitation of SPIDER and MITICA have been recognized as necessary to make the future operation of the ITER heating neutral beam injectors efficient and reliable, which is fundamental to the achievement of thermonuclear-relevant plasma parameters in ITER. This paper reports on the design and R&D carried out to construct PRIMA, SPIDER, and MITICA, and highlights the huge progress made in just a few years, from the signing of the agreement for the NBTF realization in 2011 up to now, when the buildings and relevant infrastructure have been completed, SPIDER is entering the integrated commissioning phase, and the procurement of several MITICA components is at a well-advanced stage.

  2. Photon Statistics of Propagating Thermal Microwaves

    NASA Astrophysics Data System (ADS)

    Deppe, F.; Goetz, J.; Eder, P.; Fischer, M.; Pogorzalek, S.; Xie, E.; Fedorov, K. G.; Marx, A.; Gross, R.

    In experiments with superconducting quantum circuits, characterizing the photon statistics of propagating microwave fields is a fundamental task. This task is particularly relevant for thermal fields, which are omnipresent noise sources in superconducting quantum circuits across all relevant frequency regimes. We quantify the n^2 + n photon-number variance of thermal microwave photons emitted from a black-body radiator for mean photon numbers 0.05 <= n <= 1.5. In addition, we use the fields as a sensitive probe for second-order decoherence effects of the qubit. Specifically, we investigate the influence of thermal fields on the low-frequency spectrum of the qubit parameter fluctuations. We find an enhancement of the white-noise contribution to the noise power spectral density. Our data confirm a model of thermally activated two-level states interacting with the qubit. Supported by the German Research Foundation through FE 1564/1-1, the doctorate program ExQM of the Elite Network of Bavaria, and the IMPRS Quantum Science and Technology.
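    The n^2 + n variance quoted above is the signature of Bose-Einstein (thermal) photon statistics, and it can be checked numerically by sampling the corresponding geometric distribution. This is a generic sketch of the statistics, unrelated to the experimental apparatus; the mean photon number is chosen arbitrarily within the range studied.

```python
import numpy as np

# A thermal mode has Bose-Einstein photon statistics,
# P(n) = nbar^n / (1 + nbar)^(n + 1), with variance nbar^2 + nbar.
nbar = 0.8
p = 1.0 / (1.0 + nbar)
rng = np.random.default_rng(2)

# numpy's geometric distribution has support {1, 2, ...}, so shift by one
# to obtain photon numbers {0, 1, 2, ...} with mean (1 - p) / p = nbar.
samples = rng.geometric(p, size=1_000_000) - 1

mean = samples.mean()
var = samples.var()
print(f"mean = {mean:.3f}, variance = {var:.3f}, "
      f"nbar^2 + nbar = {nbar**2 + nbar:.3f}")
```

For nbar = 0.8 the sampled variance converges to 1.44, i.e. nbar^2 + nbar, which is the super-Poissonian excess that distinguishes thermal from coherent fields.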

  3. Uranium in groundwater--Fertilizers versus geogenic sources.

    PubMed

    Liesch, Tanja; Hinrichsen, Sören; Goldscheider, Nico

    2015-12-01

    Due to its radiological and toxicological properties even at low concentration levels, uranium is increasingly recognized as a relevant contaminant in drinking water from aquifers. Uranium originates from different sources, including natural or geogenic sources, mining and industrial activities, and fertilizers in agriculture. The goal of this study was to obtain insights into the origin of uranium in groundwater while differentiating between geogenic sources and fertilizers. A literature review concerning the sources and geochemical processes affecting the occurrence and distribution of uranium in the lithosphere, pedosphere, and hydrosphere provided the background for the evaluation of data on uranium in groundwater at the regional scale. The state of Baden-Württemberg, Germany, was selected for this study because of its hydrogeological and land-use diversity, and for reasons of data availability. Uranium and other parameters from N=1935 groundwater monitoring sites were analyzed statistically and geospatially. Results show that (i) 1.6% of all water samples exceed the German legal limit for drinking water (10 μg/L); (ii) the range and spatial distribution of uranium and occasional peak values seem to be related to geogenic sources; (iii) there is a clear relation between agricultural land use and low-level uranium concentrations, indicating that fertilizers generate a measurable but low background of uranium in groundwater. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Simulation of a suite of generic long-pulse neutron instruments to optimize the time structure of the European Spallation Source.

    PubMed

    Lefmann, Kim; Klenø, Kaspar H; Birk, Jonas Okkels; Hansen, Britt R; Holm, Sonja L; Knudsen, Erik; Lieutenant, Klaus; von Moos, Lars; Sales, Morten; Willendrup, Peter K; Andersen, Ken H

    2013-05-01

    We describe here the results of simulations of 15 generic neutron instruments for the long-pulsed European Spallation Source. All instruments have been simulated for 20 different settings of the source time structure, corresponding to pulse lengths between 1 ms and 2 ms and repetition frequencies between 10 Hz and 25 Hz. The relative change in performance with time structure is given for each instrument, and an unweighted average is calculated. The performance of the instrument suite is proportional to (a) the peak flux and (b) the duty cycle to a power of approximately 0.3. This information is an important input for determining the best accelerator parameters. In addition, we find that in our simple guide systems, most neutrons reaching the sample originate from the central 3-5 cm of the moderator. This result can be used as an input in later optimization of the moderator design. We discuss the relevance and validity of defining a single figure-of-merit for a full facility and compare with evaluations of the individual instrument classes.
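    The reported scaling, performance proportional to peak flux times duty cycle to a power of about 0.3, can be turned into a simple figure-of-merit for comparing candidate time structures. The settings and normalization below are illustrative, not ESS design values.

```python
# Figure-of-merit sketch following the scaling reported above:
# performance ~ peak_flux * duty_cycle**0.3.

def fom(peak_flux, pulse_length_ms, rep_rate_hz, exponent=0.3):
    duty_cycle = pulse_length_ms * 1e-3 * rep_rate_hz  # dimensionless
    return peak_flux * duty_cycle ** exponent

# Compare two hypothetical settings at equal peak flux:
a = fom(1.0, pulse_length_ms=2.0, rep_rate_hz=25)   # 5% duty cycle
b = fom(1.0, pulse_length_ms=1.0, rep_rate_hz=10)   # 1% duty cycle
print(f"relative performance a/b = {a / b:.2f}")
```

Because the exponent is well below 1, a five-fold increase in duty cycle at fixed peak flux buys only about a 60% gain, which is why peak flux dominates the accelerator-parameter trade-off.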

  5. MP3 compression of Doppler ultrasound signals.

    PubMed

    Poepping, Tamie L; Gill, Jeremy; Fenster, Aaron; Holdsworth, David W

    2003-01-01

    The effect of lossy MP3 compression on spectral parameters derived from Doppler ultrasound (US) signals was investigated. Compression was tested on signals acquired from two sources: (1) phase quadrature and (2) stereo audio directional output. A total of eleven 10-s acquisitions of Doppler US signal were collected from each source at three sites in a flow phantom. Doppler signals were digitized at 44.1 kHz and compressed using four grades of MP3 compression (in kilobits per second, kbps; compression ratios in brackets): 1400 kbps (uncompressed), 128 kbps (11:1), 64 kbps (22:1), and 32 kbps (44:1). Doppler spectra were characterized by peak velocity, mean velocity, spectral width, integrated power, and the ratio of spectral power between negative and positive velocities. The results suggest that MP3 compression of digital Doppler US signals is feasible at 128 kbps, with a resulting 11:1 compression ratio, without compromising clinically relevant information. Higher compression ratios led to significant differences for both signal sources when compared with the uncompressed signals. Copyright 2003 World Federation for Ultrasound in Medicine & Biology
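    The spectral parameters used in the comparison (peak, mean, width, integrated power) can be computed from a digitized Doppler signal roughly as follows. The synthetic single-tone signal and its parameters are illustrative, and the mapping from frequency to velocity via the Doppler equation is omitted.

```python
import numpy as np

# Synthetic 10 ms "Doppler" signal at the 44.1 kHz sampling rate used above:
# a 2 kHz tone plus a little noise. All signal parameters are illustrative.
fs = 44_100
t = np.arange(0, 0.01, 1 / fs)
rng = np.random.default_rng(3)
signal = np.sin(2 * np.pi * 2000 * t) + 0.1 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

power = spectrum.sum()                      # integrated power
mean_f = (freqs * spectrum).sum() / power   # intensity-weighted mean frequency
width = np.sqrt(((freqs - mean_f) ** 2 * spectrum).sum() / power)
peak_f = freqs[spectrum.argmax()]           # peak frequency

print(f"peak = {peak_f:.0f} Hz, mean = {mean_f:.0f} Hz, width = {width:.0f} Hz")
```

Comparing these quantities before and after an encode/decode cycle is what reveals whether a given compression ratio preserves the clinically relevant spectral content.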

  6. MICROANALYSIS OF MATERIALS USING SYNCHROTRON RADIATION.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    JONES,K.W.; FENG,H.

    2000-12-01

    High-intensity synchrotron radiation produces photons with wavelengths extending from the infrared to hard x rays with energies of hundreds of keV, at uniquely high photon intensities that can be used to determine the composition and properties of materials using a variety of techniques. Most of these techniques represent extensions of earlier work performed with ordinary tube-type x-ray sources. The properties of the synchrotron source, such as the continuous range of energy, high degree of photon polarization, pulsed beams, and photon flux many orders of magnitude higher than from x-ray tubes, have made possible major advances in the possible chemical applications. We describe here ways that materials analyses can be made using the high-intensity beams for measurements with small beam sizes and/or high detection sensitivity. The relevant characteristics of synchrotron x-ray sources are briefly summarized to give an idea of the x-ray parameters to be exploited. The experimental techniques considered include x-ray fluorescence, absorption, and diffraction. Examples of typical experimental apparatus used in these experiments are considered, together with descriptions of actual applications.

  7. The Cadarache negative ion experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Massmann, P.; Bottereau, J.M.; Belchenko, Y.

    1995-12-31

    Up to energies of 140 keV, neutral beam injection (NBI) based on positive ions has proven to be a reliable and flexible plasma heating method and has provided major contributions to most of the important experiments on virtually all large tokamaks around the world. As a candidate for additional heating and current drive on next-step fusion machines (ITER ao), it is hoped that NBI can be equally successful. The ITER NBI parameters of 1 MeV, 50 MW D° demand primary D⁻ beams with current densities of at least 15 mA/cm². Although considerable progress has been made in the area of negative-ion production and acceleration, the high demands still require substantial and urgent development. Regarding negative-ion production, Cs-seeded plasma sources lead the way. Adding a small amount of Cs to the discharge (Cs seeding) not only increases the negative-ion yield by a factor of 3-5 but also has the advantage that the discharge can be run at lower pressures. This is beneficial for the reduction of stripping losses in the accelerator. Multi-ampere negative-ion production in a large plasma source is studied in the MANTIS experiment. Acceleration and neutralization at ITER-relevant parameters is the objective of the 1 MV SINGAP experiment.

  8. Measuring the Interestingness of Articles in a Limited User Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pon, R; Cardenas, A; Buttler, David

    Search engines, such as Google, assign scores to news articles based on their relevance to a query. However, not all articles relevant to the query may be interesting to a user. For example, if an article is old or yields little new information, it would be uninteresting. Relevance scores do not take into account what makes an article interesting, which varies from user to user. Although methods such as collaborative filtering have been shown to be effective in recommendation systems, in a limited user environment there are not enough users to make collaborative filtering effective. A general framework, called iScore, is presented for defining and measuring the "interestingness" of articles, incorporating user feedback. iScore addresses the various aspects of what makes an article interesting, such as topic relevance, uniqueness, freshness, source reputation, and writing style. It employs various methods, such as multiple topic tracking, online parameter selection, language models, clustering, sentiment analysis, and phrase extraction, to measure these features. Due to the varying reasons users hold about why an article is interesting, an online feature selection method in naïve Bayes is also used to improve recommendation results. iScore can outperform traditional IR techniques by as much as 50.7%. iScore and its components are evaluated on the news recommendation task using three datasets from Yahoo! News, actual users, and Digg.
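    A minimal sketch of the kind of incremental, feedback-driven naïve Bayes scoring mentioned above. The binary feature names, the smoothing scheme, and the update loop are all illustrative assumptions; this is not iScore's implementation.

```python
import math
from collections import defaultdict

# Per-class feature counts, updated online from user feedback
# (class 1 = "interesting", class 0 = "uninteresting").
counts = {0: defaultdict(int), 1: defaultdict(int)}
totals = {0: 0, 1: 0}

def update(features, interesting):
    """Fold one labeled article into the model (online update)."""
    totals[interesting] += 1
    for f in features:
        counts[interesting][f] += 1

def score(features):
    """Naive Bayes log-odds that an article is interesting (Laplace smoothed)."""
    s = math.log((totals[1] + 1) / (totals[0] + 1))
    for f in features:
        p1 = (counts[1][f] + 1) / (totals[1] + 2)
        p0 = (counts[0][f] + 1) / (totals[0] + 2)
        s += math.log(p1 / p0)
    return s

update({"fresh", "on-topic"}, 1)
update({"fresh", "reputable-source"}, 1)
update({"stale"}, 0)
print(score({"fresh"}) > score({"stale"}))
```

After only three feedback events the model already ranks a "fresh" article above a "stale" one; a real system would additionally select which features to keep, as the abstract describes.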

  9. Control-Relevant Modeling, Analysis, and Design for Scramjet-Powered Hypersonic Vehicles

    NASA Technical Reports Server (NTRS)

    Rodriguez, Armando A.; Dickeson, Jeffrey J.; Sridharan, Srikanth; Benavides, Jose; Soloway, Don; Kelkar, Atul; Vogel, Jerald M.

    2009-01-01

    Within this paper, control-relevant vehicle design concepts are examined using a widely used 3 DOF (plus flexibility) nonlinear model for the longitudinal dynamics of a generic carrot-shaped scramjet-powered hypersonic vehicle. Trade studies associated with vehicle/engine parameters are examined. The impact of parameters on control-relevant static properties (e.g. level-flight trimmable region, trim controls, AOA, thrust margin) and dynamic properties (e.g. instability and right half plane zero associated with flight path angle) is examined. Specific parameters considered include: inlet height, diffuser area ratio, lower forebody compression ramp inclination angle, engine location, center of gravity, and mass. Vehicle optimization is also examined. Both static and dynamic considerations are addressed. The gap-metric optimized vehicle is obtained to illustrate how this control-centric concept can be used to "reduce" scheduling requirements for the final control system. A classic inner-outer loop control architecture and methodology is used to shed light on how specific vehicle/engine design parameter selections impact control system design. In short, the work represents an important first step toward revealing fundamental tradeoffs and systematically treating control-relevant vehicle design.

  10. Evaluation of deep moonquake source parameters: Implication for fault characteristics and thermal state

    NASA Astrophysics Data System (ADS)

    Kawamura, Taichi; Lognonné, Philippe; Nishikawa, Yasuhiro; Tanaka, Satoshi

    2017-07-01

    While deep moonquakes are seismic events commonly observed on the Moon, their source mechanism is still unexplained. The two main issues are poorly constrained source parameters and incompatibilities between the thermal profiles suggested by many studies and the apparent need for brittle properties at these depths. In this study, we reinvestigated the deep moonquake data to reestimate their source parameters and uncover the characteristics of deep moonquake faults that differ from those on Earth. We first improve the estimation of source parameters through spectral analysis using "new" broadband seismic records made by combining those of the Apollo long- and short-period seismometers. We use the broader frequency band of the combined spectra to estimate corner frequencies and DC values of spectra, which are important parameters to constrain the source parameters. We further use the spectral features to estimate seismic moments and stress drops for more than 100 deep moonquake events from three different source regions. This study revealed that deep moonquake faults are extremely smooth compared to terrestrial faults. Second, we reevaluate the brittle-ductile transition temperature that is consistent with the obtained source parameters. We show that the source parameters imply that tidal stress is the main source of the stress glut causing deep moonquakes and that the large strain rate from tides raises the brittle-ductile transition temperature. Higher transition temperatures open a new possibility to construct a thermal model that is consistent with deep moonquake occurrence and pressure conditions and thereby improve our understanding of the deep moonquake source mechanism.
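
    The step from corner frequency and seismic moment to stress drop is commonly made with the Brune (1970) circular-source relations. The sketch below applies those standard relations with illustrative values, not the study's data:

```python
import math

def brune_source_radius(fc_hz, beta_m_s):
    """Brune (1970) source radius r = 2.34*beta / (2*pi*fc)."""
    return 2.34 * beta_m_s / (2.0 * math.pi * fc_hz)

def stress_drop(m0_newton_m, fc_hz, beta_m_s):
    """Static stress drop (Pa): delta_sigma = 7*M0 / (16*r**3)."""
    r = brune_source_radius(fc_hz, beta_m_s)
    return 7.0 * m0_newton_m / (16.0 * r ** 3)

# Illustrative values only: M0 = 1e13 N*m, fc = 1 Hz, and a shear
# velocity of 4500 m/s roughly representative of the deep lunar mantle.
delta_sigma = stress_drop(1e13, 1.0, 4500.0)
```

With these assumed values the stress drop comes out near 1 kPa, far below the 1-10 MPa typical of terrestrial earthquakes, which is the sense in which a fault can be called "smooth".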

  11. MULTI-OBJECTIVE ONLINE OPTIMIZATION OF BEAM LIFETIME AT APS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yipeng

    In this paper, online optimization of beam lifetime at the APS (Advanced Photon Source) storage ring is presented. A general genetic algorithm (GA) is developed and employed for some online optimizations in the APS storage ring. Sextupole magnets in 40 sectors of the APS storage ring are employed as variables for the online nonlinear beam dynamics optimization. The algorithm employs several optimization objectives and is designed to run in top-up mode or beam current decay mode. Up to 50% improvement of beam lifetime is demonstrated, without affecting the transverse beam sizes and other relevant parameters. In some cases, the top-up injection efficiency is also improved.
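
    As a generic illustration of the method class, not the APS optimizer itself, a minimal genetic algorithm with truncation selection, uniform crossover, and Gaussian mutation can be sketched as follows; the objective is a toy stand-in for beam lifetime and the knob count is arbitrary:

```python
import random

random.seed(0)

def toy_lifetime(knobs):
    """Stand-in objective: peaks when all knobs sit at 0.5 (illustrative)."""
    return -sum((x - 0.5) ** 2 for x in knobs)

def evolve(pop_size=20, n_knobs=8, generations=40):
    pop = [[random.random() for _ in range(n_knobs)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=toy_lifetime, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # crossover
            i = random.randrange(n_knobs)
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.05)))
            children.append(child)               # one-gene Gaussian mutation
        pop = parents + children
    return max(pop, key=toy_lifetime)

best = evolve()
```

In an online setting the objective evaluation would be a machine measurement (lifetime at fixed beam size) rather than a function call, which is why GA-style derivative-free methods suit such problems.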

  12. A new spectrometer for total reflection X-ray fluorescence analysis of light elements

    NASA Astrophysics Data System (ADS)

    Streli, Christina; Wobrauschek, Peter; Unfried, Ernst; Aiginger, Hannes

    1993-10-01

    A new spectrometer for total reflection X-ray fluorescence analysis (TXRF) of light elements such as C, N, O, F, and Na has been designed, constructed and realized, with the aim of optimizing all relevant parameters for excitation and detection under total-reflection conditions in a vacuum chamber. A commercially available Ge(HP) detector with a diamond window offering high transparency for low-energy radiation was used. As excitation sources, a special self-made windowless X-ray tube with a Cu target as well as a standard fine-focus Cr tube were applied. Detection limits achieved are in the ng range for carbon and oxygen.

  13. On the long term evolution of white dwarfs in cataclysmic variables and their recurrence times

    NASA Technical Reports Server (NTRS)

    Sion, E. M.; Starrfield, S. G.

    1985-01-01

    The relevance of the long term quasi-static evolution of accreting white dwarfs to the outbursts of Z Andromeda-like symbiotics, the masses and accretion rates of classical nova white dwarfs, and the observed properties of white dwarfs detected optically and with IUE in low-Ṁ cataclysmic variables is discussed. A surface luminosity versus time plot for a massive, hot white dwarf bears a remarkable similarity to the outburst behavior of the hot blue source in Z Andromeda. The long term quasi-static models of hot accreting white dwarfs provide convenient constraints on the theoretically permissible parameters for a dynamical (nova-like) outburst of classical white dwarfs.

  14. Recent advances concerning an understanding of sound transmission through engine nozzles and jets

    NASA Technical Reports Server (NTRS)

    Bechert, D.; Michel, U.; Pfizenmaier, E.

    1978-01-01

    Experiments on the interaction between a turbulent jet and pure tone sound coming from inside the jet nozzle are reported. This is a model representing the sound transmission from sound sources in jet engines through the nozzle and the jet flow into the far field. It is shown that pure tone sound at low frequencies is considerably attenuated by the jet flow, whereas it is conserved at higher frequencies. On the other hand, broadband jet noise can be amplified considerably by a pure tone excitation. Both effects seem not to be interdependent. Knowledge on how they are created and on relevant parameter dependences allow new considerations for the development of sound attenuators.

  15. The evolution of phase holographic imaging from a research idea to publicly traded company

    NASA Astrophysics Data System (ADS)

    Egelberg, Peter

    2018-02-01

    Recognizing the value and unmet need for label-free kinetic cell analysis, Phase Holographic Imaging defines its market segment as automated, easy-to-use and affordable time-lapse cytometry. The process of developing new technology, meeting customer expectations, sources of corporate funding, and R&D adjustments prompted by field experience will be reviewed. Additionally, it is discussed how relevant biological information can be extracted from a sequence of quantitative phase images, with negligible user assistance and parameter tweaking, to simultaneously provide cell culture characteristics such as cell growth rate, viability, division rate, mitosis duration, phagocytosis rate, migration, motility and cell-cell adherence without requiring any artificial cell manipulation.

  16. Source encoding in multi-parameter full waveform inversion

    NASA Astrophysics Data System (ADS)

    Matharu, Gian; Sacchi, Mauricio D.

    2018-04-01

    Source encoding techniques alleviate the computational burden of sequential-source full waveform inversion (FWI) by considering multiple sources simultaneously rather than independently. The reduced data volume requires fewer forward/adjoint simulations per non-linear iteration. Applications of source-encoded full waveform inversion (SEFWI) have thus far focused on mono-parameter acoustic inversion. We extend SEFWI to the multi-parameter case with applications presented for elastic isotropic inversion. Estimating multiple parameters can be challenging, as perturbations in different parameters can prompt similar responses in the data. We investigate the relationship between source encoding and parameter trade-off by examining the multi-parameter source-encoded Hessian. Probing of the Hessian demonstrates the convergence of the expected source-encoded Hessian to that of conventional FWI. The convergence implies that the parameter trade-off in SEFWI is comparable to that observed in FWI. A series of synthetic inversions are conducted to establish the feasibility of source-encoded multi-parameter FWI. We demonstrate that SEFWI requires fewer overall simulations than FWI to achieve a target model error for a range of first-order optimization methods. An inversion for spatially inconsistent P-wave (α) and S-wave (β) velocity models corroborates the expectation of comparable parameter trade-off in SEFWI and FWI. The final example demonstrates a shortcoming of SEFWI when confronted with time-windowing in data-driven inversion schemes. The limitation is a consequence of the implicit fixed-spread acquisition assumption in SEFWI. Alternative objective functions, namely the normalized cross-correlation and L1 waveform misfit, do not enable SEFWI to overcome this limitation.
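
    The core of source encoding is firing all sources in one simulation with random weights; with ±1 encodings the source cross-terms vanish in expectation, which is the mechanism behind the convergence of the source-encoded Hessian to the conventional one. A minimal numpy sketch with synthetic stand-in data (sizes and values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

n_sources, n_t = 16, 128
# Hypothetical stand-in for per-source data residuals (sources x time samples).
d = rng.standard_normal((n_sources, n_t))

# Random +-1 encoding: one "super-shot" record replaces all sequential shots.
w = rng.choice([-1.0, 1.0], size=n_sources)
d_encoded = w @ d

# Why the encoded Hessian converges to the conventional one: over many
# random encodings E[w_i * w_j] = delta_ij, so cross-terms average out.
encodings = [rng.choice([-1.0, 1.0], size=n_sources) for _ in range(5000)]
gram = np.mean([np.outer(c, c) for c in encodings], axis=0)
```

The averaged Gram matrix of the encodings approaches the identity, so the expected encoded misfit matches the sequential-source misfit; a single fixed encoding retains crosstalk, which is why encodings are redrawn between iterations in practice.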

  17. Sleep mechanisms: Sleep deprivation and detection of changing levels of consciousness

    NASA Technical Reports Server (NTRS)

    Dement, W. C.; Barchas, J. D.

    1972-01-01

    An attempt was made to obtain information relevant to assessing the need to sleep and to make up for lost sleep. Physiological and behavioral parameters were used as measures. Sleep deprivation in a restricted environment, derivation of data relevant to determining sleepiness from the EEG, and the development of the Stanford Sleepiness Scale are discussed.

  18. Exploring relationships between Dairy Herd Improvement monitors of performance and the Transition Cow Index in Wisconsin dairy herds.

    PubMed

    Schultz, K K; Bennett, T B; Nordlund, K V; Döpfer, D; Cook, N B

    2016-09-01

    Transition cow management has been tracked via the Transition Cow Index (TCI; AgSource Cooperative Services, Verona, WI) since 2006. Transition Cow Index was developed to measure the difference between actual and predicted milk yield at first test day to evaluate the relative success of the transition period program. This project aimed to assess TCI in relation to all commonly used Dairy Herd Improvement (DHI) metrics available through AgSource Cooperative Services. Regression analysis was used to isolate variables that were relevant to TCI, and then principal components analysis and network analysis were used to determine the relative strength and relatedness among variables. Finally, cluster analysis was used to segregate herds based on similarity of relevant variables. The DHI data were obtained from 2,131 Wisconsin dairy herds with test-day mean ≥30 cows, which were tested ≥10 times throughout the 2014 calendar year. The original list of 940 DHI variables was reduced through expert-driven selection and regression analysis to 23 variables. The K-means cluster analysis produced 5 distinct clusters. Descriptive statistics were calculated for the 23 variables per cluster grouping. Using principal components analysis, cluster analysis, and network analysis, 4 parameters were isolated as most relevant to TCI; these were energy-corrected milk, 3 measures of intramammary infection (dry cow cure rate, linear somatic cell count score in primiparous cows, and new infection rate), peak ratio, and days in milk at peak milk production. These variables together with cow and newborn calf survival measures form a group of metrics that can be used to assist in the evaluation of overall transition period performance. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
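
    The clustering step can be illustrated with a minimal K-means implementation; the two-dimensional synthetic "herd metrics" below are stand-ins for the 23 selected DHI variables, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: herds described by two metrics, drawn from three
# well-separated groups (illustrative values only).
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 6.0]])
X = np.vstack([c + rng.normal(0, 0.5, size=(50, 2)) for c in centers])

def kmeans(X, k, iters=50):
    """Minimal K-means: assign each point to its nearest centroid,
    then recompute each centroid as the mean of its assigned points."""
    cent = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - cent[None, :, :]) ** 2).sum(axis=-1)
        labels = np.argmin(d2, axis=1)
        cent = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                         else cent[j] for j in range(k)])
    return labels, cent

labels, cent = kmeans(X, 3)
```

In the study the inputs would first be reduced (e.g. by regression or principal components) so that clusters reflect the variables most relevant to TCI rather than all 940 raw DHI variables.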

  19. Linear modeling of human hand-arm dynamics relevant to right-angle torque tool interaction.

    PubMed

    Ay, Haluk; Sommerich, Carolyn M; Luscher, Anthony F

    2013-10-01

    A new protocol was evaluated for identification of stiffness, mass, and damping parameters employing a linear model for human hand-arm dynamics relevant to right-angle torque tool use. Powered torque tools are widely used to tighten fasteners in manufacturing industries. While these tools increase accuracy and efficiency of tightening processes, operators are repetitively exposed to impulsive forces, posing risk of upper extremity musculoskeletal injury. A novel testing apparatus was developed that closely mimics biomechanical exposure in torque tool operation. Forty experienced torque tool operators were tested with the apparatus to determine model parameters and validate the protocol for physical capacity assessment. A second-order hand-arm model with parameters extracted in the time domain met model accuracy criterion of 5% for time-to-peak displacement error in 93% of trials (vs. 75% for frequency domain). Average time-to-peak handle displacement and relative peak handle force errors were 0.69 ms and 0.21%, respectively. Model parameters were significantly affected by gender and working posture. Protocol and numerical calculation procedures provide an alternative method for assessing mechanical parameters relevant to right-angle torque tool use. The protocol more closely resembles tool use, and calculation procedures demonstrate better performance of parameter extraction using time domain system identification methods versus frequency domain. Potential future applications include parameter identification for in situ torque tool operation and equipment development for human hand-arm dynamics simulation under impulsive forces that could be used for assessing torque tools based on factors relevant to operator health (handle dynamics and hand-arm reaction force).
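
    The second-order hand-arm model referred to is a mass-spring-damper, m*x'' + c*x' + k*x = F(t). The parameter values and force pulse below are illustrative assumptions, not the study's estimates; a simple explicit-Euler integration recovers quantities like time-to-peak displacement:

```python
# Sketch of a second-order hand-arm response to a tool force pulse.
# m, c, k and the pulse are assumed values for illustration only.
m, c, k = 2.0, 50.0, 2000.0          # kg, N*s/m, N/m
dt, T = 1e-4, 0.5                    # time step and duration (s)
x, v = 0.0, 0.0                      # handle displacement and velocity
peak_x, t_peak = 0.0, 0.0
t = 0.0
while t < T:
    F = 100.0 if t < 0.05 else 0.0   # 50 ms force pulse from the tool
    a = (F - c * v - k * x) / m      # Newton's second law
    v += a * dt
    x += v * dt
    if x > peak_x:                   # track time-to-peak displacement
        peak_x, t_peak = x, t
    t += dt
```

Fitting m, c, k so that the simulated time-to-peak displacement matches a measured one is the kind of time-domain parameter extraction the protocol compares against frequency-domain identification.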

  20. Evaluation of drinking quality of groundwater through multivariate techniques in urban area.

    PubMed

    Das, Madhumita; Kumar, A; Mohapatra, M; Muduli, S D

    2010-07-01

    Groundwater is a major source of drinking water in urban areas. Because of the growing threat of degrading water quality due to urbanization and development, monitoring water quality is a prerequisite to ensure its suitability for drinking. But analysis of a large number of properties, and parameter-by-parameter evaluation of water quality, is not feasible at regular intervals. Multivariate techniques can streamline the data, without much loss of information, into a reasonably manageable data set. In this study, using principal component analysis, 11 relevant properties of 58 water samples were grouped into three statistical factors. Discriminant analysis identified "pH influence" as the most distinctive factor and pH, Fe, and NO₃⁻ as the most discriminating variables, which could be treated as water quality indicators. These were utilized to classify the sampling sites into homogeneous clusters that reflect the location-wise importance of specific indicators for monitoring drinking water quality in the whole study area.
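
    Grouping correlated water-quality properties into a few statistical factors is a standard principal component analysis step. A self-contained sketch via SVD, using synthetic stand-in data shaped like the study's 58 samples by 11 properties:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for 58 samples x 11 water-quality properties
# (pH, Fe, NO3-, hardness, ...); real measurements would replace this.
X = rng.standard_normal((58, 11))
X[:, 1] = 0.9 * X[:, 0] + 0.1 * rng.standard_normal(58)  # correlated pair

Xc = X - X.mean(axis=0)                  # center each property
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)          # variance share per component
scores = Xc @ Vt[:3].T                   # project onto 3 "statistical factors"
```

In practice the columns would also be scaled to unit variance before the SVD, since properties like pH and Fe concentration live on very different units.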

  1. Omnibus experiment: CPT and CP violation with sterile neutrinos

    NASA Astrophysics Data System (ADS)

    Loo, K. K.; Novikov, Yu N.; Smirnov, M. V.; Trzaska, W. H.; Wurm, M.

    2017-09-01

    The verification of the sterile neutrino hypothesis and, if confirmed, the determination of the relevant oscillation parameters is one of the goals of neutrino physics in the near future. We propose to search for sterile neutrinos with a high-statistics measurement utilizing radioactive sources and an oscillometric approach with a large liquid scintillator detector like LENA, JUNO, or RENO-50. Our calculations indicate that such an experiment is realistic and could be performed in parallel to the main research plan for JUNO, LENA, or RENO-50. Assuming as the starting point the values of the oscillation parameters indicated by the current global fit (in the 3+1 scenario) and requiring at least 5σ confidence level, we estimate that we would be able to detect differences in the mass squared differences Δm₄₁² of electron neutrinos and electron antineutrinos of the order of 1% or larger. That would allow probing of CPT symmetry with neutrinos with unprecedented accuracy.
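
    The oscillometric signal being searched for follows the standard two-flavour survival probability P = 1 - sin²(2θ)·sin²(1.27·Δm²·L/E), with Δm² in eV², L in m, and E in MeV. A sketch with assumed 3+1 parameters chosen near the global-fit region (the specific values are illustrative):

```python
import math

def survival_prob(dm2_ev2, sin2_2theta, L_m, E_mev):
    """Two-flavour (anti)neutrino survival probability:
    P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)."""
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_ev2 * L_m / E_mev) ** 2

# Assumed sterile parameters for illustration: dm2 ~ 1.3 eV^2, sin^2(2th) ~ 0.1.
dm2, s22 = 1.3, 0.1
# Oscillometric scan: P varies along the baseline inside a large detector,
# here 1-20 m from a 2 MeV source, giving a metre-scale oscillation pattern.
profile = [survival_prob(dm2, s22, L, 2.0) for L in range(1, 21)]
```

A CPT test then amounts to fitting this curve separately to neutrino and antineutrino event rates and comparing the two fitted Δm₄₁² values.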

  2. A new methodology based on sensitivity analysis to simplify the recalibration of functional-structural plant models in new conditions.

    PubMed

    Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry

    2018-06-19

    Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
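
    The second step, ranking parameter subsets by AIC after calibration, can be illustrated with least-squares fits, for which AIC reduces to n·ln(RSS/n) + 2k up to an additive constant. The data and candidate subsets below are synthetic stand-ins, not the oilseed rape model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in: the response depends on 3 of 6 candidate parameters.
n = 200
X = rng.standard_normal((n, 6))
y = 2 * X[:, 0] - 1.5 * X[:, 1] + 0.8 * X[:, 2] + 0.3 * rng.standard_normal(n)

def aic(subset):
    """AIC (up to a constant) for a least-squares fit on given columns."""
    A = X[:, subset]
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((y - A @ beta) ** 2)
    return n * np.log(rss / n) + 2 * len(subset)

# Candidate subsets, e.g. ordered by sensitivity ranking from step one.
candidates = [[0], [0, 1], [0, 1, 2], [0, 1, 2, 3], list(range(6))]
best = min(candidates, key=aic)
```

The 2k penalty is what stops the selection from absorbing every parameter: adding a parameter is only worthwhile when it reduces the residual sum of squares enough to pay for itself.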

  3. 50 CFR 424.13 - Sources of information and relevant data.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE); ENDANGERED SPECIES COMMITTEE REGULATIONS SUBCHAPTER A LISTING ENDANGERED AND THREATENED SPECIES AND DESIGNATING CRITICAL HABITAT Revision of the Lists § 424.13 Sources of information and relevant data. When considering any revision of the lists, the Secretary shall...

  4. 50 CFR 424.13 - Sources of information and relevant data.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE); ENDANGERED SPECIES COMMITTEE REGULATIONS SUBCHAPTER A LISTING ENDANGERED AND THREATENED SPECIES AND DESIGNATING CRITICAL HABITAT Revision of the Lists § 424.13 Sources of information and relevant data. When considering any revision of the lists, the Secretary shall...

  5. The typological approach to submarine groundwater discharge (SGD)

    USGS Publications Warehouse

    Bokuniewicz, H.; Buddemeier, R.; Maxwell, B.; Smith, C.

    2003-01-01

    Coastal zone managers need to factor submarine groundwater discharge (SGD) into their integrated management. SGD provides a pathway for the transfer of freshwater, and its dissolved chemical burden, from the land to the coastal ocean. SGD reduces salinities and provides nutrients to specialized coastal habitats. It can also be a pollutant source, often undetected, causing eutrophication and triggering nuisance algal blooms. Despite its importance, SGD remains somewhat of a mystery in most places because it is usually unseen and difficult to measure; it has been directly measured at only about a hundred sites worldwide. A typology generated by the Land-Ocean Interaction in the Coastal Zone (LOICZ) Project is one of the few tools globally available to coastal resource managers for identifying areas in their jurisdiction where SGD may be a confounding process. (LOICZ is a core project of the International Geosphere-Biosphere Programme.) From the hundreds of globally distributed parameters in the LOICZ typology, a subset potentially relevant to SGD may be culled, and a quantitative combination of the relevant hydrological parameters can serve as a proxy for SGD conditions not directly measured. Web-LOICZ View, a geospatial software tool, then provides an automated approach to clustering these data into groups of locations that have similar characteristics. It permits selection of variables, of the number of clusters desired, and of the clustering criteria, and provides means of testing predictive results against independent variables. Information on the occurrence of a variety of SGD indicators can then be incorporated into regional clustering analysis. With such tools, coastal managers can focus attention on the most likely sites of SGD in their jurisdiction and design the measurement and modeling programs needed for integrated management.

  6. Differences between Outdoor and Indoor Sound Levels for Open, Tilted, and Closed Windows

    PubMed Central

    Locher, Barbara; Piquerez, André; Habermacher, Manuel; Ragettli, Martina; Cajochen, Christian; Vienneau, Danielle; Foraster, Maria; Müller, Uwe; Wunderli, Jean Marc

    2018-01-01

    Noise exposure prediction models for health effect studies normally estimate free field exposure levels outside. However, to assess the noise exposure inside dwellings, an estimate of indoor sound levels is necessary. To date, little field data is available about the difference between indoor and outdoor noise levels and factors affecting the damping of outside noise. This is a major cause of uncertainty in indoor noise exposure prediction and may lead to exposure misclassification in health assessments. This study aims to determine sound level differences between the indoors and the outdoors for different window positions and how this sound damping is related to building characteristics. For this purpose, measurements were carried out at home in a sample of 102 Swiss residents exposed to road traffic noise. Sound pressure level recordings were performed outdoors and indoors, in the living room and in the bedroom. Three scenarios—of open, tilted, and closed windows—were recorded for three minutes each. For each situation, data on additional parameters such as the orientation towards the source, floor, and room, as well as sound insulation characteristics were collected. On that basis, linear regression models were established. The median outdoor–indoor sound level differences were of 10 dB(A) for open, 16 dB(A) for tilted, and 28 dB(A) for closed windows. For open and tilted windows, the most relevant parameters affecting the outdoor–indoor differences were the position of the window, the type and volume of the room, and the age of the building. For closed windows, the relevant parameters were the sound level outside, the material of the window frame, the existence of window gaskets, and the number of windows. PMID:29346318

  7. Differences between Outdoor and Indoor Sound Levels for Open, Tilted, and Closed Windows.

    PubMed

    Locher, Barbara; Piquerez, André; Habermacher, Manuel; Ragettli, Martina; Röösli, Martin; Brink, Mark; Cajochen, Christian; Vienneau, Danielle; Foraster, Maria; Müller, Uwe; Wunderli, Jean Marc

    2018-01-18

    Noise exposure prediction models for health effect studies normally estimate free field exposure levels outside. However, to assess the noise exposure inside dwellings, an estimate of indoor sound levels is necessary. To date, little field data is available about the difference between indoor and outdoor noise levels and factors affecting the damping of outside noise. This is a major cause of uncertainty in indoor noise exposure prediction and may lead to exposure misclassification in health assessments. This study aims to determine sound level differences between the indoors and the outdoors for different window positions and how this sound damping is related to building characteristics. For this purpose, measurements were carried out at home in a sample of 102 Swiss residents exposed to road traffic noise. Sound pressure level recordings were performed outdoors and indoors, in the living room and in the bedroom. Three scenarios-of open, tilted, and closed windows-were recorded for three minutes each. For each situation, data on additional parameters such as the orientation towards the source, floor, and room, as well as sound insulation characteristics were collected. On that basis, linear regression models were established. The median outdoor-indoor sound level differences were of 10 dB(A) for open, 16 dB(A) for tilted, and 28 dB(A) for closed windows. For open and tilted windows, the most relevant parameters affecting the outdoor-indoor differences were the position of the window, the type and volume of the room, and the age of the building. For closed windows, the relevant parameters were the sound level outside, the material of the window frame, the existence of window gaskets, and the number of windows.
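
    The reported medians can be reproduced in structure with a small sketch: paired outdoor/indoor levels per window state, a level difference, and its median. The sample sizes and noise levels below are assumptions, with the true differences centred on the abstract's reported medians:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical paired measurements: outdoor and indoor A-weighted levels
# (dB) for three window states; 40 dwellings per state for illustration.
states = {"open": 10.0, "tilted": 16.0, "closed": 28.0}
outdoor = rng.normal(65.0, 5.0, size=(3, 40))
indoor = np.array([outdoor[i] - d - rng.normal(0, 2.0, 40)
                   for i, d in enumerate(states.values())])

# Outdoor-indoor level difference and its median per window state.
diff = outdoor - indoor
medians = {k: float(np.median(diff[i])) for i, k in enumerate(states)}
```

In the study, these differences were then regressed on building characteristics (window position, room type and volume, building age, frame material, gaskets) to explain the spread around each median.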

  8. Sensitivities to charged-current nonstandard neutrino interactions at DUNE

    NASA Astrophysics Data System (ADS)

    Bakhti, Pouya; Khan, Amir N.; Wang, W.

    2017-12-01

    We investigate the effects of charged-current (CC) nonstandard neutrino interactions (NSIs) at the source and at the detector in the simulated data for the planned Deep Underground Neutrino Experiment (DUNE). We neglect the neutral-current NSIs at propagation because several solutions have already been proposed for resolving the degeneracies posed by neutral-current NSIs, but no solutions exist for the degeneracies due to the CC NSIs. We study the effects of CC NSIs on the simultaneous measurements of θ₂₃ and δ_CP in DUNE. The analysis reveals that the 3σ C.L. measurement of the correct octant of θ₂₃ in the standard mixing scenario is spoiled if the CC NSIs are taken into account. Likewise, the CC NSIs can deteriorate the uncertainty of the δ_CP measurement by a factor of two relative to that in the standard oscillation scenario. We also show that the source and detector CC NSIs can induce a significant amount of fake CP violation, and the CP-conserving case can be excluded at more than 80% C.L. in the presence of fake CP violation. We further find DUNE's potential for constraining the relevant CC NSI parameters from single-parameter fits for both neutrino and antineutrino appearance and disappearance channels at both the near and far detectors. The results show that there could be improvements in the current bounds by at least one order of magnitude at DUNE's near and far detectors, except for a few parameters whose bounds remain weaker at the far detector.

  9. Geostatistical characterisation of geothermal parameters for a thermal aquifer storage site in Germany

    NASA Astrophysics Data System (ADS)

    Rodrigo-Ilarri, J.; Li, T.; Grathwohl, P.; Blum, P.; Bayer, P.

    2009-04-01

    The design of geothermal systems such as aquifer thermal energy storage (ATES) systems must account for a comprehensive characterisation of all relevant parameters considered in the numerical design model. Hydraulic and thermal conductivities are the most relevant parameters, and their distributions determine not only the technical design but also the economic viability of such systems. Hence, knowledge of the spatial distribution of these parameters is essential for a successful design and operation of such systems. This work shows the first results obtained when applying geostatistical techniques to the characterisation of the Esseling Site in Germany, where a long-term thermal tracer test (> 1 year) was performed. In this open system, the spatial temperature distribution inside the aquifer was observed over time in order to obtain as much information as possible for a detailed characterisation of both the relevant hydraulic and thermal parameters. This poster shows the preliminary results obtained for the Esseling Site. It has been observed that the common homogeneous approach is not sufficient to explain the observations obtained from the thermal tracer test and that parameter heterogeneity must be taken into account.

  10. Antimicrobial activity of extracts from macroalgae Ulva lactuca against clinically important Staphylococci is impacted by lunar phase of macroalgae harvest.

    PubMed

    Deveau, A M; Miller-Hope, Z; Lloyd, E; Williams, B S; Bolduc, C; Meader, J M; Weiss, F; Burkholder, K M

    2016-05-01

    Staphylococcus aureus is a common human bacterial pathogen that causes skin and soft tissue infections. Methicillin-resistant Staph. aureus (MRSA) are increasingly drug-resistant, and thus there is great need for new therapeutics to treat Staph. aureus infections. Attention has focused on the potential utility of natural products, such as extracts of marine macroalgae, as a source of novel antimicrobial compounds. The green macroalgae Ulva lactuca produces compounds inhibitory to human pathogens, although the effectiveness of U. lactuca extracts against clinically relevant strains of Staph. aureus is poorly understood. In addition, macroalgae produce secondary metabolites that may be influenced by exogenous factors including lunar phase, but whether lunar phase affects U. lactuca antimicrobial capacity is unknown. We sought to evaluate the antibacterial properties of U. lactuca extracts against medically important Staphylococci, and to determine the effect of lunar phase on antimicrobial activity. We report that U. lactuca methanolic extracts inhibit a range of Staphylococci, and that the lunar phase of macroalgae harvest significantly impacts antimicrobial activity, suggesting that antimicrobial properties can be maximized by manipulating the time of algal harvest. These findings provide useful parameters for future studies aimed at isolating and characterizing U. lactuca anti-Staphylococcal agents. The growing prevalence of antibiotic-resistant human pathogens such as methicillin-resistant Staphylococcus aureus (MRSA) has intensified efforts towards discovery and development of novel therapeutics. Marine macroalgae like Ulva lactuca are increasingly recognized as potential sources of antimicrobials, but the efficacy of U. lactuca extracts against common, virulent strains of Staph. aureus is poorly understood. We demonstrate that U. lactuca methanolic extracts inhibit a variety of clinically relevant Staphylococcus strains, and that the antimicrobial activity can be maximized by optimizing the time of algal harvest. These findings provide potentially useful parameters for future work of isolating and identifying novel antimicrobial agents from macroalgae. © 2016 The Society for Applied Microbiology.

  11. Analysis of Sea Level Rise in Action

    NASA Astrophysics Data System (ADS)

    Gill, K. M.; Huang, T.; Quach, N. T.; Boening, C.

    2016-12-01

    NASA's Sea Level Change Portal provides scientists and the general public with a "one-stop" source for current sea level change information and data. Sea level rise research is multidisciplinary: to understand its causes, scientists must be able to access different measurements and compare them. The portal includes an interactive tool, called the Data Analysis Tool (DAT), for accessing, visualizing, and analyzing observations and models relevant to the study of sea level rise. Using NEXUS, an open-source big-data analytic technology developed at the Jet Propulsion Laboratory, the DAT is able to provide users with on-the-fly analysis of all relevant parameters. The DAT is composed of three major components: a dedicated instance of OnEarth (a WMTS service), the NEXUS deep data analytic platform, and the JPL Common Mapping Client (CMC) for the web-browser-based user interface (UI). Utilizing the global imagery, a user can browse the data visually and isolate areas of interest for further study. The interface's "Analysis" tool provides tools for area or point selection, single and/or comparative dataset selection, and a range of options, algorithms, and plotting. This analysis component utilizes the NEXUS cloud computing platform to provide on-demand processing of the data within the user-selected parameters and immediate display of the results. A RESTful web API is exposed for users comfortable with other interfaces who may want to take advantage of the cloud computing capabilities. This talk discusses how the DAT enables on-the-fly sea level research. The talk will introduce the DAT with an end-to-end tour of the tool, with exploration and animation of available imagery, a demonstration of comparative analysis and plotting, and how to share and export data along with images for use in publications/presentations. The session will cover what kind of data is available, what kind of analysis is possible, and what the outputs are.
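
The portal's web API is not documented in this abstract; purely as an illustration, a client might assemble a request to a hypothetical NEXUS-style time-series endpoint as follows (the host, path, and parameter names are assumptions, not the DAT's actual interface):

```python
from urllib.parse import urlencode

# Hypothetical endpoint and parameter names -- illustrative only.
BASE_URL = "https://example.nasa.gov/nexus/timeSeriesSpark"

def build_timeseries_query(dataset, bbox, start, end):
    """Build a query URL for an area-averaged time series.

    bbox is (min_lon, min_lat, max_lon, max_lat); times are ISO 8601 strings.
    """
    params = {
        "ds": dataset,
        "b": ",".join(str(v) for v in bbox),
        "startTime": start,
        "endTime": end,
    }
    return BASE_URL + "?" + urlencode(params)

url = build_timeseries_query("SEA_SURFACE_HEIGHT", (-150, 20, -130, 40),
                             "2005-01-01T00:00:00Z", "2015-12-31T23:59:59Z")
```

A real client would then fetch `url` (e.g. with `urllib.request.urlopen`) and parse the returned JSON into a plot-ready time series.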

  12. Uncertainty for calculating transport on Titan: A probabilistic description of bimolecular diffusion parameters

    NASA Astrophysics Data System (ADS)

    Plessis, S.; McDougall, D.; Mandt, K.; Greathouse, T.; Luspay-Kuti, A.

    2015-11-01

    Bimolecular diffusion coefficients are important parameters used by atmospheric models to calculate altitude profiles of minor constituents in an atmosphere. Unfortunately, laboratory measurements of these coefficients were never conducted at temperature conditions relevant to the atmosphere of Titan. Here we conduct a detailed uncertainty analysis of the bimolecular diffusion coefficient parameters as applied to Titan's upper atmosphere to provide a better understanding of the impact of this parameter's uncertainty on models. Because Titan's temperature and pressure conditions are much lower than the laboratory conditions under which bimolecular diffusion parameters were measured, we apply a problem-agnostic Bayesian framework to determine parameter estimates and associated uncertainties. We solve the Bayesian calibration problem using the open-source QUESO library, which also performs a propagation of uncertainties in the calibrated parameters to temperature and pressure conditions observed in Titan's upper atmosphere. Our results show that, after propagating uncertainty through the Massman model, the uncertainty in molecular diffusion is highly correlated with temperature, and we observe no noticeable correlation with pressure. We propagate the calibrated molecular diffusion estimate and associated uncertainty to obtain an estimate with uncertainty due to bimolecular diffusion for the methane molar fraction as a function of altitude. Results show that the uncertainty in methane abundance due to molecular diffusion is in general small compared to eddy diffusion and the chemical kinetics description. However, methane abundance is most sensitive to uncertainty in molecular diffusion above 1200 km, where the errors are nontrivial and could have important implications for scientific research based on diffusion models in this altitude range.
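
The calibration itself was done with the QUESO library; the following is only a minimal sketch of the same idea: random-walk Metropolis sampling of a simplified Massman-style power law D(T) = D0 (T/300 K)^s against synthetic "laboratory" data, with all parameter values, priors, and noise levels invented for illustration.

```python
import random
import math

random.seed(1)

# Synthetic data from an assumed power law, plus Gaussian noise.
D0_true, s_true, sigma = 1.0, 1.75, 0.05
temps = [200.0 + 20.0 * i for i in range(11)]          # K
data = [D0_true * (T / 300.0) ** s_true + random.gauss(0.0, sigma)
        for T in temps]

def log_post(D0, s):
    """Flat priors on D0 > 0 and 0 < s < 4; Gaussian likelihood."""
    if D0 <= 0.0 or not 0.0 < s < 4.0:
        return -math.inf
    sq = sum((d - D0 * (T / 300.0) ** s) ** 2 for T, d in zip(temps, data))
    return -0.5 * sq / sigma ** 2

# Random-walk Metropolis over (D0, s).
chain, (D0, s) = [], (0.8, 1.0)
lp = log_post(D0, s)
accepted = 0
for _ in range(8000):
    D0p, sp = D0 + random.gauss(0, 0.02), s + random.gauss(0, 0.1)
    lpp = log_post(D0p, sp)
    if math.log(random.random()) < lpp - lp:
        D0, s, lp = D0p, sp, lpp
        accepted += 1
    chain.append((D0, s))

post = chain[2000:]                                    # discard burn-in
s_mean = sum(c[1] for c in post) / len(post)           # posterior mean of s
```

The posterior samples could then be pushed through the diffusion model at Titan-relevant temperatures and pressures to propagate the uncertainty, as the abstract describes for the QUESO workflow.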

  13. Implementation and application of an interactive user-friendly validation software for RADIANCE

    NASA Astrophysics Data System (ADS)

    Sundaram, Anand; Boonn, William W.; Kim, Woojin; Cook, Tessa S.

    2012-02-01

    RADIANCE extracts CT dose parameters from dose sheets using optical character recognition and stores the data in a relational database. To facilitate validation of RADIANCE's performance, a simple user interface was initially implemented and about 300 records were evaluated. Here, we extend this interface to achieve a wider variety of functions and perform a larger-scale validation. The validator uses some data from the RADIANCE database to prepopulate quality-testing fields, such as correspondence between calculated and reported total dose-length product. The interface also displays relevant parameters from the DICOM headers. A total of 5,098 dose sheets were used to test the performance accuracy of RADIANCE in dose data extraction. Several search criteria were implemented. All records were searchable by accession number, study date, or dose parameters beyond chosen thresholds. Validated records were searchable according to additional criteria from validation inputs. An error rate of 0.303% was demonstrated in the validation. Dose monitoring is increasingly important and RADIANCE provides an open-source solution with a high level of accuracy. The RADIANCE validator has been updated to enable users to test the integrity of their installation and verify that their dose monitoring is accurate and effective.
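
As an example of the kind of quality-testing field the validator prepopulates, a consistency check between calculated and reported total dose-length product (DLP) might look like the sketch below; the record layout and the 5% tolerance are assumptions, not RADIANCE's actual schema.

```python
def dlp_consistency_check(series_dlps, reported_total, tol=0.05):
    """Flag a dose sheet whose summed per-series DLP disagrees with the
    reported total DLP by more than `tol` (fractional tolerance).

    Returns (is_consistent, calculated_total).
    """
    calculated = sum(series_dlps)
    if reported_total == 0:
        return calculated == 0, calculated
    rel_err = abs(calculated - reported_total) / reported_total
    return rel_err <= tol, calculated

ok, calc = dlp_consistency_check([120.5, 330.0, 45.2], 495.7)  # consistent sheet
bad, _ = dlp_consistency_check([100.0, 200.0], 350.0)          # flagged sheet
```

Records failing such a check would be surfaced for manual validation alongside the relevant DICOM header parameters.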

  14. Challenges and opportunities in laboratory plasma astrophysics

    NASA Astrophysics Data System (ADS)

    Drake, R. Paul

    2017-06-01

    We are in a period of explosive success and opportunity in the laboratory study of plasma phenomena that are relevant to astrophysics. In this talk I will share with you several areas in which recent work, often foreshadowed 20 or 30 years ago, has produced dramatic initial success with prospects for much more. To begin, the talk will provide a brief look at the types of devices used and the regimes they access, showing how they span many orders of magnitude in parameters of interest. It will then illustrate the types of work one can do with laboratory plasmas that are relevant to astrophysics, which range from direct measurement of material properties to the production of scaled models of certain dynamics to the pursuit of complementary understanding. Examples will be drawn from the flow of energy and momentum in astrophysics, the formation and structure of astrophysical systems, and magnetization and its consequences. I hope to include some discussion of collisionless shocks, very dense plasmas, work relevant to the end of the Dark Ages, reconnection, and dynamos. The talk will conclude by highlighting some topics where it seems that we may be on the verge of exciting new progress. The originators of the work discussed, and collaborators and funding sources when appropriate, will be included in the talk.

  15. Optimal antibunching in passive photonic devices based on coupled nonlinear resonators

    NASA Astrophysics Data System (ADS)

    Ferretti, S.; Savona, V.; Gerace, D.

    2013-02-01

    We propose the use of weakly nonlinear passive materials for prospective applications in integrated quantum photonics. It is shown that strong enhancement of native optical nonlinearities by electromagnetic field confinement in photonic crystal resonators can lead to single-photon generation, exploiting only the quantum interference of two coupled modes and the effect of photon blockade under resonant coherent driving. For realistic system parameters in state-of-the-art microcavities, the efficiency of such a single-photon source is theoretically characterized by means of the second-order correlation function at zero time delay as the main figure of merit, where the major sources of loss and decoherence are taken into account within a standard master equation treatment. These results could stimulate the realization of integrated quantum photonic devices based on non-resonant material media, fully integrable with current semiconductor technology and matching the relevant telecom-band operational wavelengths, as an alternative to single-photon nonlinear devices based on cavity quantum electrodynamics with artificial atoms or single atomic-like emitters.
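
The paper's scheme relies on the interference of two coupled modes; as a simpler, hedged illustration of the figure of merit itself, the sketch below computes g2(0) for a single coherently driven Kerr resonator from the steady state of a standard Lindblad master equation. All parameter values are invented and given in units of the cavity decay rate.

```python
import numpy as np

N = 12            # Fock-space truncation
U = 10.0          # Kerr nonlinearity (illustrative)
F = 0.1           # coherent drive amplitude (illustrative)
kappa = 1.0       # photon loss rate

a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
ad = a.conj().T
n = ad @ a

# Driven Kerr Hamiltonian on resonance: H = (U/2) ad ad a a + F (a + ad)
H = 0.5 * U * (ad @ ad @ a @ a) + F * (a + ad)

I = np.eye(N)
def spre(A):  return np.kron(A, I)    # row-major vec: vec(A rho)
def spost(A): return np.kron(I, A.T)  # row-major vec: vec(rho A)

# Liouvillian: -i[H, rho] + kappa (a rho ad - 1/2 {ad a, rho})
L = (-1j * (spre(H) - spost(H))
     + kappa * (np.kron(a, a.conj())
                - 0.5 * (spre(n) + spost(n))))

# Solve L rho = 0, replacing one equation by the trace-one constraint.
M = L.copy()
b = np.zeros(N * N, dtype=complex)
M[0, :] = np.eye(N).reshape(-1)       # Tr(rho) = 1
b[0] = 1.0
rho = np.linalg.solve(M, b).reshape(N, N)

# Zero-delay second-order correlation: <ad ad a a> / <ad a>^2
g2 = np.real(np.trace(ad @ ad @ a @ a @ rho) / np.trace(n @ rho) ** 2)
```

For a strong nonlinearity and weak drive, g2(0) falls far below 1, the antibunching signature that the paper uses as its main figure of merit.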

  16. History of Science and Conceptual Change: The Formation of Shadows by Extended Light Sources

    NASA Astrophysics Data System (ADS)

    Dedes, Christos; Ravanis, Konstantinos

    2009-09-01

    This study investigates the effectiveness of a teaching conflict procedure whose purpose was the transformation of the representations of 12-16-year-old pupils in Greece concerning light emission and shadow formation by extended light sources. The changes observed during the children’s effort to destabilize and reorganise their representations towards a model that was compatible with the respective scientific model were studied using three groups of pupils belonging to different age groups. The methodological plan implemented was based on input from the History of Science, while the parameters of the geometrical optics model were derived from Kepler’s relevant historic experiment. The effectiveness of the teaching procedure was evaluated 2 weeks after the intervention. The results showed that the majority of the subjects accepted the model of geometrical optics, i.e. the pupils were able to correctly predict and adequately justify the experimental results based on the principle of punctiform light emission. Educational and research implications are discussed.

  17. Impact of the Test Device on the Behavior of the Acoustic Emission Signals: Contribution of the Numerical Modeling to Signal Processing

    NASA Astrophysics Data System (ADS)

    Issiaka Traore, Oumar; Cristini, Paul; Favretto-Cristini, Nathalie; Pantera, Laurent; Viguier-Pla, Sylvie

    2018-01-01

    In the context of nuclear safety experiment monitoring with the non-destructive testing method of acoustic emission, we study the impact of the test device on the interpretation of the recorded physical signals by using spectral finite element modeling. The numerical results are validated by comparison with real acoustic emission data obtained from previous experiments. The results show that several parameters can have a significant impact on acoustic wave propagation, and hence on the interpretation of the physical signals. The potential position of the source mechanism, the positions of the receivers, and the nature of the coolant fluid have to be taken into account in the definition of a pre-processing strategy for the real acoustic emission signals. To show the relevance of such an approach, we use the results to propose an optimization of the positions of the acoustic emission sensors in order to reduce the estimation bias of the time delay and thus improve the localization of the source mechanisms.

  18. Speech-on-speech masking in a front-back dimension and analysis of binaural parameters in rooms using MLS methods

    NASA Astrophysics Data System (ADS)

    Aaronson, Neil L.

    This dissertation deals with questions important to the problem of human sound source localization in rooms, starting with perceptual studies and moving on to physical measurements made in rooms. In Chapter 1, a perceptual study is performed on a specific phenomenon: the effect of speech reflections occurring in the front-back dimension and the ability of humans to segregate them from unreflected speech. Distracters were presented from the same source as the target speech, a loudspeaker directly in front of the listener, and also from a loudspeaker directly behind the listener, delayed relative to the front loudspeaker. Steps were taken to minimize the contributions of binaural difference cues. For all delays within +/-32 ms, a release from informational masking of about 2 dB occurred. This suggested that human listeners are able to segregate speech sources based on spatial cues, even with minimal binaural cues. In moving on to physical measurements in rooms, a method was sought for simultaneous measurement of room characteristics such as impulse response (IR) and reverberation time (RT60), and binaural parameters such as interaural time difference (ITD), interaural level difference (ILD), and the interaural cross-correlation function and coherence. Chapter 2 involves investigations into the usefulness of maximum length sequences (MLS) for these purposes. Comparisons to random telegraph noise (RTN) show that MLS performs better in the measurement of stationary and room transfer functions, IR, and RT60 by an order of magnitude in RMS percent error, even after Wiener filtering and exponential time-domain filtering have improved the accuracy of RTN measurements. Measurements were taken in real rooms in an effort to understand how the reverberant characteristics of rooms affect binaural parameters important to sound source localization. Chapter 3 deals with interaural coherence, a parameter important for localization and perception of auditory source width. 
MLS were used to measure waveform and envelope coherences in two rooms for various source distances and 0° azimuth through a head-and-torso simulator (KEMAR). A relationship is sought that relates these two types of coherence, since envelope coherence, while an important quantity, is generally less accessible than waveform coherence. A power law relationship is shown to exist between the two that works well within and across bands, for any source distance, and is robust to reverberant conditions of the room. Measurements of ITD, ILD, and coherence in rooms give insight into the way rooms affect these parameters, and in turn, the ability of listeners to localize sounds in rooms. Such measurements, along with room properties, are made and analyzed using MLS methods in Chapter 4. It was found that the pinnae cause incoherence for sound sources incident between 30° and 90°. In human listeners, this does not seem to adversely affect performance in lateralization experiments. The cause of poor coherence in rooms was studied as part of Chapter 4 as well. It was found that rooms affect coherence by introducing variance into the ITD spectra within the bands in which it is measured. A mathematical model to predict the interaural coherence within a band given the standard deviation of the ITD spectrum and the center frequency of the band gives an exponential relationship. This is found to work well in predicting measured coherence given ITD spectrum variance. The pinnae seem to affect the ITD spectrum in a similar way at incident sound angles for which coherence is poor in an anechoic environment.
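
The MLS technique used throughout Chapters 2-4 rests on the near-ideal circular autocorrelation of a maximum length sequence: cross-correlating a system's response with the excitation recovers the impulse response directly. A minimal sketch follows; the feedback taps (for the primitive polynomial x^10 + x^7 + 1, one standard choice) and the toy "room" are illustrative, not the dissertation's setup.

```python
import numpy as np

def mls(degree=10, taps=(10, 7)):
    """Generate one period of a maximum length sequence from a Fibonacci
    linear feedback shift register, mapped from {0, 1} to {-1, +1}."""
    state = [1] * degree
    seq = []
    for _ in range(2 ** degree - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(seq) * 2.0 - 1.0

s = mls()
L = len(s)                                  # 1023 samples per period

# Toy "room": a short FIR impulse response with two reflections.
h = np.zeros(64)
h[0], h[20], h[45] = 1.0, 0.5, -0.25

# System output: circular convolution of the MLS with the room IR.
y = np.real(np.fft.ifft(np.fft.fft(s) * np.fft.fft(h, L)))

# Circular cross-correlation of output with the MLS recovers h
# (the -1 off-peak autocorrelation of an MLS leaves a tiny DC offset).
est = np.real(np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(s)))) / L
```

The same estimate taken through a head-and-torso simulator at the two ears yields the binaural IRs from which ITD, ILD, and coherence are computed.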

  19. EEG source space analysis of the supervised factor analytic approach for the classification of multi-directional arm movement

    NASA Astrophysics Data System (ADS)

    Shenoy Handiru, Vikram; Vinod, A. P.; Guan, Cuntai

    2017-08-01

    Objective. In electroencephalography (EEG)-based brain-computer interface (BCI) systems for motor control tasks, the conventional practice is to decode motor intentions by using scalp EEG. However, scalp EEG reveals only limited information about the complex tasks of movement with a higher degree of freedom. Therefore, our objective is to investigate the effectiveness of source-space EEG in extracting relevant features that discriminate arm movement in multiple directions. Approach. We have proposed a novel feature extraction algorithm based on supervised factor analysis that models the data from source-space EEG. To this end, we computed the features from the source dipoles confined to Brodmann areas of interest (BA4a, BA4p and BA6). Further, we embedded class-wise labels of multi-direction (multi-class) source-space EEG into an unsupervised factor analysis to make it a supervised learning method. Main Results. Our approach provided an average decoding accuracy of 71% for the classification of hand movement in four orthogonal directions, which is significantly higher (>10%) than the classification accuracy obtained using state-of-the-art spatial pattern features in sensor space. Also, the group analysis on the spectral characteristics of source-space EEG indicates that the slow cortical potentials from a set of cortical source dipoles reveal discriminative information regarding the movement parameter of direction. Significance. This study presents evidence that low-frequency components in the source space play an important role in movement kinematics, and thus it may lead to new strategies for BCI-based neurorehabilitation.

  20. Uncertainties in modelling CH4 emissions from northern wetlands in glacial climates: the role of vegetation parameters

    NASA Astrophysics Data System (ADS)

    Berrittella, C.; van Huissteden, J.

    2011-10-01

    Marine Isotope Stage 3 (MIS 3) interstadials are marked by a sharp increase in the atmospheric methane (CH4) concentration, as recorded in ice cores. Wetlands are assumed to be the major source of this CH4, although several other hypotheses have been advanced. Modelling of CH4 emissions is crucial to quantify CH4 sources for past climates. Vegetation effects are generally highly generalized in modelling past and present-day CH4 fluxes, but should not be neglected. Plants strongly affect the soil-atmosphere exchange of CH4 and the net primary production of the vegetation supplies organic matter as substrate for methanogens. For modelling past CH4 fluxes from northern wetlands, assumptions on vegetation are highly relevant since paleobotanical data indicate large differences in Last Glacial (LG) wetland vegetation composition as compared to modern wetland vegetation. Besides more cold-adapted vegetation, Sphagnum mosses appear to be much less dominant during large parts of the LG than at present, which particularly affects CH4 oxidation and transport. To evaluate the effect of vegetation parameters, we used the PEATLAND-VU wetland CO2/CH4 model to simulate emissions from wetlands in continental Europe during LG and modern climates. We tested the effect of parameters influencing oxidation during plant transport (fox), vegetation net primary production (NPP, parameter symbol Pmax), plant transport rate (Vtransp), maximum rooting depth (Zroot) and root exudation rate (fex). Our model results show that modelled CH4 fluxes are sensitive to fox and Zroot in particular. The effects of Pmax, Vtransp and fex are of lesser relevance. Interactions with water table modelling are significant for Vtransp. We conducted experiments with different wetland vegetation types for Marine Isotope Stage 3 (MIS 3) stadial and interstadial climates and the present-day climate, by coupling PEATLAND-VU to high resolution climate model simulations for Europe. 
Experiments assuming dominance of one vegetation type (Sphagnum vs. Carex vs. Shrubs) show that Carex-dominated vegetation can increase CH4 emissions by 50% to 78% over Sphagnum-dominated vegetation depending on the modelled climate, while for shrubs this increase ranges from 42% to 72%. Consequently, during the LG northern wetlands may have had CH4 emissions similar to their present-day counterparts, despite a colder climate. Changes in dominant wetland vegetation, therefore, may drive changes in wetland CH4 fluxes, in the past as well as in the future.

  1. An update of Leighton's solar dynamo model

    NASA Astrophysics Data System (ADS)

    Cameron, R. H.; Schüssler, M.

    2017-03-01

    In 1969, Leighton developed a quasi-1D mathematical model of the solar dynamo, building upon the phenomenological scenario of Babcock published in 1961. Here we present a modification and extension of Leighton's model. Using the axisymmetric component (longitudinal average) of the magnetic field, we consider the radial field component at the solar surface and the radially integrated toroidal magnetic flux in the convection zone, both as functions of latitude. No assumptions are made with regard to the radial location of the toroidal flux. The model includes the effects of (i) turbulent diffusion at the surface and in the convection zone; (ii) poleward meridional flow at the surface and an equatorward return flow affecting the toroidal flux; (iii) latitudinal differential rotation and the near-surface layer of radial rotational shear; (iv) downward convective pumping of magnetic flux in the shear layer; and (v) flux emergence in the form of tilted bipolar magnetic regions treated as a source term for the radial surface field. While the parameters relevant for the transport of the surface field are taken from observations, the model condenses the unknown properties of magnetic field and flow in the convection zone into a few free parameters (turbulent diffusivity, effective return flow, amplitude of the source term, and a parameter describing the effective radial shear). Comparison with the results of 2D flux transport dynamo codes shows that the model captures the essential features of these simulations. We make use of the computational efficiency of the model to carry out an extended parameter study. We cover an extended domain of the 4D parameter space and identify the parameter ranges that provide solar-like solutions. 
Dipole parity is always preferred and solutions with periods around 22 yr and a correct phase difference between flux emergence in low latitudes and the strength of the polar fields are found for a return flow speed around 2 m s-1, turbulent diffusivity below about 80 km2s-1, and dynamo excitation not too far above the threshold (linear growth rate less than 0.1 yr-1).

  2. Sources of uncertainty as a basis to fill the information gap in a response to flood

    NASA Astrophysics Data System (ADS)

    Kekez, Toni; Knezic, Snjezana

    2016-04-01

    Taking into account uncertainties in flood risk management remains a challenge due to difficulties in choosing adequate structural and/or non-structural risk management options. Despite such measures, wrong decisions are often made when a flood occurs. Parameter and structural uncertainties, which include model and observation errors as well as a lack of knowledge about system characteristics, are the main considerations. Real-time flood risk assessment methods are predominantly based on measured water level values and on the vulnerability and other relevant characteristics of the flood-affected area. The goal of this research is to identify sources of uncertainty and to minimize the information gap between the point where the water level is measured and the affected area, taking into consideration the main uncertainties that can affect the risk value at the observed point or section of the river. Sources of uncertainty are identified and determined using a system analysis approach, and the relevant uncertainties are included in the risk assessment model. With such a methodological approach it is possible to gain response time through more effective risk assessment that includes an uncertainty propagation model. The response phase can be better planned with adequate early warning systems, resulting in more time and lower costs to help affected areas and save human lives. Reliable and precise information is necessary to raise the emergency operability level in order to enhance the safety of citizens and reduce possible damage. The results of the EPISECC (EU-funded FP7) project are used to validate the potential benefits of this research in order to improve flood risk management and response methods. EPISECC aims at developing the concept of a common European Information Space for disaster response which, among other disasters, considers floods.

  3. Lateral Viscosity Variations in Both the Local and Global Viscoelastic Load Response and its Uncertainty

    NASA Astrophysics Data System (ADS)

    Ivins, E. R.; Caron, L.; Adhikari, S.; Larour, E. Y.; Seroussi, H. L.; Wiens, D.; Lloyd, A. J.; Dietrich, R. O. R.; Richter, A.

    2017-12-01

    One aspect of GIA modeling that has been a source of contention for many years is the exploration, or lack thereof, of the parameters representing growth and collapse of ice loading while additionally allowing mantle structure to vary. These problems are today being approached with advanced coupled solid earth and ice sheet continuum mechanics. An additional source of non-uniqueness lies in the potential for large (4 orders of magnitude) variability in mantle creep strength. A main question that remains is how to seek some simplification of the set of problems that this implies and to shed from consideration those questions that lack relevance to properly interpreting geodetic data sets. Answering this question therefore entails defining what science questions are to be addressed and to define what parameters produce the highest sensitivities. Where mantle viscosity and lithospheric thickness have affinity with an active dynamic mantle that brings rejuvenation by upwelling of volatiles and heat, the time scales for ice and water loading shorten. Here we show how seismic images map with constitutive flow laws into effective laterally varying viscosity maps. As important, we map the uncertainties. In turn, these uncertainties also inform the time scales that are sensitive to load reconstruction for computing present-day deformation and gravity. We employ the wavelength-dependent viscoelastic response decay spectra derived from analytic solutions in order to quantitatively map these sensitivities.

  4. An Experimental Investigation of Unsteady Thrust Augmentation Using a Speaker-Driven Jet

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Wernet, Mark P.; John, Wentworth T.

    2004-01-01

    An experimental investigation is described in which a simple speaker-driven jet was used as a pulsed thrust source (driver) for an ejector configuration. The objectives of the investigation were twofold: first, to add to the experimental body of evidence showing that an unsteady thrust source, combined with a properly sized ejector, generally yields higher thrust augmentation values than a similarly sized, steady driver of equivalent thrust; second, to identify characteristics of the unsteady driver that may be useful for sizing ejectors and predicting what thrust augmentation values may be achieved. The speaker-driven jet provided a convenient source for the investigation because it is entirely unsteady (having no mean component) and because relevant parameters such as frequency, time-averaged thrust, and diameter are easily varied. The experimental setup will be described, as will the various measurements made. These include both thrust and Digital Particle Imaging Velocimetry of the driver. It will be shown that thrust augmentation values as high as 1.8 were obtained, that the diameter of the best ejector scaled with the dimensions of the emitted vortex, and that the so-called Formation Number serves as a useful dimensionless number by which to characterize the jet and predict performance.
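
The Formation Number referred to above is the dimensionless stroke ratio of the emitted jet slug, L/D = U t / D; values near 4 are commonly cited in the vortex-ring literature as the formation limit. A trivial sketch, with all numbers invented for illustration:

```python
def formation_number(mean_jet_speed, pulse_duration, orifice_diameter):
    """Stroke ratio L/D = U * t / D used to characterize a pulsed jet.

    All arguments are in SI units; the values below are illustrative,
    not measurements from the speaker-driven-jet experiment.
    """
    return mean_jet_speed * pulse_duration / orifice_diameter

F = formation_number(mean_jet_speed=6.0,      # m/s, assumed
                     pulse_duration=0.02,     # s (e.g. half of a 25 Hz cycle)
                     orifice_diameter=0.03)   # m, assumed
```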

  5. Simulation and Spectrum Extraction in the Spectroscopic Channel of the SNAP Experiment

    NASA Astrophysics Data System (ADS)

    Tilquin, Andre; Bonissent, A.; Gerdes, D.; Ealet, A.; Prieto, E.; Macaire, C.; Aumenier, M. H.

    2007-05-01

    A pixel-level simulation software is described. It is composed of two modules. The first module applies Fourier optics at each active element of the system to construct the PSF at a large variety of wavelengths and spatial locations of the point source. The input is provided by the engineer's design program (Zemax). It describes the optical path and the distortions. The PSF properties are compressed and interpolated using shapelets decomposition and neural network techniques. A second module is used for production jobs. It uses the output of the first module to reconstruct the relevant PSF and integrate it on the detector pixels. Extended and polychromatic sources are approximated by a combination of monochromatic point sources. For the spectrum extraction, we use a fast simulator based on a multidimensional linear interpolation of the pixel response tabulated on a grid of values of wavelength, position on sky and slice number. The prediction of the fast simulator is compared to the observed pixel content, and a chi-square minimization where the parameters are the bin contents is used to build the extracted spectrum. The visible and infrared arms are combined in the same chi-square, providing a single spectrum.
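
With a linear pixel model d = R s, minimizing a chi-square whose free parameters are the spectrum bin contents is a weighted linear least-squares problem, and combining the visible and infrared arms in one chi-square amounts to stacking their response matrices into a single system. The sketch below illustrates this; the dimensions, Gaussian pixel responses, and noise level are invented, standing in for the fast simulator's tabulated response.

```python
import numpy as np

rng = np.random.default_rng(0)

n_bins = 20                                           # spectrum bins (unknowns)
true_spec = 1.0 + 0.5 * np.sin(np.linspace(0.0, 3.0, n_bins))

def response(n_pix, n_bins, width):
    """Toy response matrix: each pixel integrates a Gaussian-smeared
    neighbourhood of spectral bins."""
    centers = np.linspace(0.0, n_bins - 1.0, n_pix)
    j = np.arange(n_bins)
    return np.exp(-0.5 * ((centers[:, None] - j[None, :]) / width) ** 2)

R_vis, R_ir = response(40, n_bins, 0.7), response(30, n_bins, 1.0)

sigma = 0.002                                         # per-pixel noise, assumed
d_vis = R_vis @ true_spec + sigma * rng.standard_normal(40)
d_ir = R_ir @ true_spec + sigma * rng.standard_normal(30)

# One chi-square for both arms: stack the 1/sigma-weighted systems and solve.
A = np.vstack([R_vis, R_ir]) / sigma
b = np.concatenate([d_vis, d_ir]) / sigma
spec, *_ = np.linalg.lstsq(A, b, rcond=None)          # extracted spectrum
```

Stacking the weighted rows is exactly what "combining the visible and infrared arms in the same chi-square" does: each arm contributes its own residuals to a single minimization over one set of bin contents.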

  6. Hypothesis tests for the detection of constant speed radiation moving sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir

    2015-07-01

    Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single- and multichannel detection algorithms, which are inefficient at too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Hypothesis test methods based on empirically estimated mean and variance of the signals delivered by the different channels have shown significant gain in terms of the tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive background, and a vehicle source carrier under the same respectively high and low count rate radioactive background, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm, while guaranteeing the stability of its optimization parameter regardless of signal-to-noise ratio variations between 2 and 0.8.
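
The abstract does not give the test statistic; as a hedged illustration of why the Poisson nature of counting signals matters, the sketch below compares an exact Poisson tail probability with a Gaussian approximation (mean and variance set empirically to the background level) for a single channel. The background level, observed count, and alarm threshold are all invented.

```python
from math import exp, sqrt, erf

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam), via the complementary CDF
    (exact for the modest counts typical of a single channel)."""
    p, cdf = exp(-lam), 0.0
    for i in range(k):
        cdf += p
        p *= lam / (i + 1)
    return max(0.0, 1.0 - cdf)

def gaussian_sf(k, lam):
    """Gaussian approximation P(X >= k) with mean = variance = lam."""
    z = (k - lam) / sqrt(lam)
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

background = 4.0           # expected background counts in the window, assumed
observed = 13              # registered counts, assumed

p_poisson = poisson_sf(observed, background)
p_gauss = gaussian_sf(observed, background)
alarm = p_poisson < 1e-3   # illustrative false-alarm threshold
```

At low count rates the Gaussian approximation badly understates the tail probability (here p_gauss is orders of magnitude below p_poisson), so a test built on the exact Poisson statistics keeps its false-alarm behavior stable as the signal-to-noise ratio varies.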

  7. Enhancement of X-ray dose absorption for medical applications

    NASA Astrophysics Data System (ADS)

    Lim, Sara; Nahar, S.; Pradhan, A.; Barth, R.

    2013-05-01

    A promising technique for cancer treatment is radiation therapy with high-Z (HZ) nanomoieties acting as radio-sensitizers attached to tumor cells and irradiated with X-rays. However, the efficacy of radiosensitization is highly energy-dependent. We study the physical effects of using platinum (Pt) as the radio-sensitizing agent, coupled with commonly employed broadband X-ray sources with mean energies around 100 keV, as opposed to the MeV energies produced by the clinical linear accelerators (LINAC) used in radiation therapy. Numerical calculations and in vitro and in vivo studies of F98 rat glioma (brain cancer) demonstrate that irradiation from a medium-energy X-ray (MEX) 160 kV source is far more effective than from a high-energy X-ray (HEX) 6 MV LINAC. We define a parameter to quantify photoionization by an X-ray source, which thereby provides a measure of subsequent Auger decays. The platinum (Z = 78) results are also relevant to ongoing studies on X-ray interaction with gold (Z = 79) nanoparticles, widely studied as an HZ contrast agent. The present study should be of additional interest for combined radiation plus chemotherapy treatment, since Pt compounds such as cis-Pt and carbo-Pt are commonly used in chemotherapy.

  8. Physical transformations of iron oxide and silver nanoparticles from an intermediate scale field transport study

    NASA Astrophysics Data System (ADS)

    Emerson, Hilary P.; Hart, Ashley E.; Baldwin, Jonathon A.; Waterhouse, Tyler C.; Kitchens, Christopher L.; Mefford, O. Thompson; Powell, Brian A.

    2014-02-01

    In recent years, there has been increasing concern regarding the fate and transport of engineered nanoparticles (NPs) in environmental systems, and the potential impacts on human and environmental health, due to the exponential increase in commercial and industrial use worldwide. To date, there have been relatively few field-scale studies, or laboratory-based studies on environmentally relevant soils, examining the chemical/physical behavior of NPs following release into natural systems. The objective of this research is to demonstrate the behavior and transformations of iron oxide and silver NPs with different capping ligands within the unsaturated zone. Here, we show that NP transport within the vadose zone is minimal, primarily due to heteroaggregation with soil surface coatings: >99% of the NPs remained within 5 cm of the original source after 1 year in intermediate-scale field lysimeters. These results suggest that transport may be overestimated by previous laboratory-scale studies on pristine soils and pure minerals, and that future work must incorporate more environmentally relevant parameters.

  9. Investigations on caesium-free alternatives for H{sup −} formation at ion source relevant parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurutz, U.; Fantz, U.; AG Experimentelle Plasmaphysik, Institut für Physik, Universität Augsburg, 86135 Augsburg

    2015-04-08

    Negative hydrogen ions are efficiently produced in ion sources by the application of caesium: the resulting lowering of the work function of a converter surface sustains a direct conversion of impinging hydrogen atoms and positive ions into negative ions. However, the complex caesium chemistry and dynamics impose a long-term behaviour on the application of caesium that affects the stability and reliability of negative ion sources. To overcome these drawbacks, caesium-free alternatives for efficient negative ion formation are investigated at the flexible laboratory setup HOMER (HOMogenous Electron cyclotron Resonance plasma). By the use of a meshed grid the tandem principle is applied, allowing investigations of material-induced negative ion formation under plasma parameters relevant for ion source operation. The effect of different sample materials on the ratio of the negative ion density to the electron density n{sub H{sup −}}/n{sub e} is compared to the effect of a stainless steel reference sample and investigated by means of laser photodetachment in a pressure range from 0.3 to 3 Pa. For the stainless steel sample no surface-induced effect on the negative ion density is present, and the measured negative ion densities result from pure volume formation and destruction processes. In a first step, the dependency of n{sub H{sup −}}/n{sub e} on the sample distance was investigated for a caesiated stainless steel sample: at a distance of 0.5 cm and 0.3 Pa the density ratio is enhanced threefold compared to the reference sample, confirming the surface production of negative ions.
In contrast, for the caesium-free material samples tantalum and tungsten, the same dependency of n{sub H{sup −}}/n{sub e} on pressure and distance as for the stainless steel reference sample was obtained within the error margins: a density ratio of around 14.5% is measured at 4.5 cm sample distance and 0.3 Pa, decreasing linearly with decreasing distance to 7% at 1.5 cm. Thus, tantalum and tungsten do not significantly affect the negative ion density. In first measurements conducted with LaB{sub 6} as well as with two types of diamond-like carbon (DLC), n{sub H{sup −}}/n{sub e} of about 15% at 1 Pa was measured, which is comparable to the density ratio obtained for the stainless steel reference sample. At HOMER a surface-induced enhancement of n{sub H{sup −}} is only observed when it exceeds the volume formation of H{sup −}, which is also realistic for negative hydrogen ion sources.

  10. Nonlinear flow model of multiple fractured horizontal wells with stimulated reservoir volume including the quadratic gradient term

    NASA Astrophysics Data System (ADS)

    Ren, Junjie; Guo, Ping

    2017-11-01

    The real fluid flow in porous media obeys mass conservation, which can be described by a nonlinear governing equation including the quadratic gradient term (QGT). However, most flow models have been established by ignoring the QGT, and little work has been conducted to incorporate the QGT into the flow model of a multiple fractured horizontal (MFH) well with stimulated reservoir volume (SRV). This paper first establishes a semi-analytical model of an MFH well with SRV including the QGT. By introducing the transformed pressure and a flow-rate function, the nonlinear model of a point source in a composite system including the QGT is linearized. The Laplace transform, principle of superposition, numerical discretization, Gaussian elimination and Stehfest numerical inversion are then employed to establish and solve the seepage model of the MFH well with SRV. Type curves are plotted and the effects of the relevant parameters are analyzed. It is found that the nonlinear effect caused by the QGT can increase the flow capacity and has a positive influence on the transient pressure. The relevant parameters not only affect the type curves but also affect the error in the pressure calculated by the conventional linear model. The proposed model, which is consistent with mass conservation, reflects the nonlinear character of real fluid flow, and thus can be used to obtain a more accurate transient pressure for an MFH well with SRV.
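    The Stehfest (Gaver-Stehfest) numerical inversion used in the solution chain is a standard algorithm; a minimal sketch (not the authors' implementation), checked here against a transform with a known inverse:

```python
import math

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace-domain function F(s),
    commonly used to return Laplace-space transient-pressure solutions to
    the time domain. N must be even; N = 12 is a typical choice."""
    ln2 = math.log(2.0)
    half = N // 2
    total = 0.0
    for i in range(1, N + 1):
        # Stehfest weight V_i
        v = 0.0
        for k in range((i + 1) // 2, min(i, half) + 1):
            v += (k ** half * math.factorial(2 * k) /
                  (math.factorial(half - k) * math.factorial(k) *
                   math.factorial(k - 1) * math.factorial(i - k) *
                   math.factorial(2 * k - i)))
        v *= (-1) ** (half + i)
        total += v * F(i * ln2 / t)
    return total * ln2 / t

# sanity check: L{exp(-t)} = 1/(s+1), so the inverse at t = 1 is exp(-1)
approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0)
```

    The weights grow rapidly and alternate in sign, so in double precision the method is reliable for smooth, non-oscillatory time functions such as pressure transients, which is exactly the setting here.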

  11. Preliminary Result of Earthquake Source Parameters the Mw 3.4 at 23:22:47 IWST, August 21, 2004, Centre Java, Indonesia Based on MERAMEX Project

    NASA Astrophysics Data System (ADS)

    Laksono, Y. A.; Brotopuspito, K. S.; Suryanto, W.; Widodo; Wardah, R. A.; Rudianto, I.

    2018-03-01

    In order to study the subsurface structure at the Merapi Lawu anomaly (MLA) using forward modelling or full waveform inversion, good earthquake source parameters are needed. The best source parameters come from seismograms with a high signal-to-noise ratio (SNR). In addition, the source must be near the MLA location, and the stations used for the parameter estimation must lie outside the MLA in order to avoid the anomaly. At first the seismograms were processed with the software SEISAN v10 using a few stations from the MERAMEX project. After finding a hypocentre that matched the criteria, we fine-tuned the source parameters using more stations. Based on seismograms from 21 stations, the following source parameters were obtained: the event occurred on August 21, 2004, at 23:22:47 Indonesia western standard time (IWST), with epicentre coordinates 7.80°S, 101.34°E, hypocentre depth 47.3 km, dominant frequency f0 = 3.0 Hz, and earthquake magnitude Mw = 3.4.
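    A dominant frequency like the reported f0 = 3.0 Hz is typically read off the spectral peak of the record. A minimal DFT-peak sketch on a synthetic 3 Hz wavelet (this is an illustration, not the SEISAN workflow):

```python
import math

def dominant_frequency(signal, dt):
    """Return the frequency (Hz) of the largest DFT magnitude,
    skipping DC and staying below Nyquist."""
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(signal[j] * math.cos(2 * math.pi * k * j / n) for j in range(n))
        im = -sum(signal[j] * math.sin(2 * math.pi * k * j / n) for j in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k / (n * dt)

# synthetic damped 3 Hz wavelet sampled at 100 Hz, mimicking the reported f0
dt = 0.01
sig = [math.sin(2 * math.pi * 3.0 * j * dt) * math.exp(-0.5 * j * dt)
       for j in range(400)]
f0 = dominant_frequency(sig, dt)
```

    With a 4 s window the frequency resolution is 0.25 Hz, so the peak lands on the 3 Hz bin exactly; real records would first be detrended and tapered.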

  12. White LED compared with other light sources: age-dependent photobiological effects and parameters for evaluation.

    PubMed

    Rebec, Katja Malovrh; Klanjšek-Gunde, Marta; Bizjak, Grega; Kobav, Matej B

    2015-01-01

    Ergonomic science should appraise the human factors concerning the photobiological effects of lighting at work and in living places. Thorough knowledge of this subject has been gained in the past; however, few attempts have been made to propose suitable evaluation parameters. This paper considers the blue light hazard and the influence of light on melatonin secretion in observers of different ages, and proposes parameters for their evaluation. The new parameters were applied to analyse the effects of white light-emitting diode (LED) light sources and to compare them with currently applied light sources. The photobiological effects of light sources with the same illuminance but different spectral power distributions were determined for healthy 4-76-year-old observers. The suitability of the new parameters is discussed. Correlated colour temperature, the only parameter currently used to assess photobiological effects, is evaluated and compared to the new parameters.

  13. Citation parameters of contact lens-related articles published in the ophthalmic literature.

    PubMed

    Cardona, Genís; Sanz, Joan P

    2014-09-01

    This study aimed to explore the citation parameters of contact lens articles published in the Ophthalmology thematic category of the Journal Citation Reports (JCR). The Thomson Reuters Web of Science database was accessed to record bibliometric information and citation parameters for all journals listed under the Ophthalmology area of the 2011 JCR edition, including the journals whose main publication interests lie in the contact lens field. In addition, the same database was used to unveil all contact lens-related articles published in 2011 in the same thematic area, whereupon differences in citation parameters between articles published in contact lens and non-contact lens-related journals were explored. Significant differences in some bibliometric indicators, such as half-life and overall citation count, were found between contact lens-related journals (shorter half-life and fewer citations) and the median values for the Ophthalmology thematic area of the JCR. Visual examination of all Ophthalmology journals uncovered a total of 156 contact lens-related articles, published in 28 different journals, with 27 articles each for Contact Lens & Anterior Eye, Eye & Contact Lens, and Optometry and Vision Science. Significant differences in citation parameters were encountered between articles published in contact lens and non-contact lens source journals. These findings, which disclose contact lenses to be a fertile area of research, may be of interest to researchers and institutions. Differences in bibliometric indicators are relevant to avoiding unwanted bias when conducting between- and within-discipline comparisons of articles, journals, and researchers.

  14. Advanced Compton scattering light source R&D at LLNL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albert, F; Anderson, S G; Anderson, G

    2010-02-16

    We report the design and current status of a monoenergetic laser-based Compton scattering 0.5-2.5 MeV {gamma}-ray source. Previous nuclear resonance fluorescence results and future linac and laser developments for the source are presented. At MeV photon energies relevant for nuclear processes, Compton scattering light sources are attractive because of their relative compactness and improved brightness above 100 keV compared to typical 4th generation synchrotrons. Recent progress in accelerator physics and laser technology has enabled the development of a new class of tunable Mono-Energetic Gamma-Ray (MEGa-Ray) light sources based on Compton scattering between a high-brightness, relativistic electron beam and a high-intensity laser pulse produced via chirped-pulse amplification (CPA). A new precision, tunable gamma-ray source driven by a compact, high-gradient X-band linac is currently under development and construction at LLNL. High-brightness, relativistic electron bunches produced by an X-band linac designed in collaboration with SLAC will interact with a Joule-class, 10 ps, diode-pumped CPA laser pulse to generate tunable {gamma}-rays in the 0.5-2.5 MeV photon energy range via Compton scattering. Based on the success of the previous Thomson-Radiated Extreme X-rays (T-REX) Compton scattering source at LLNL, the source will be used to excite nuclear resonance fluorescence lines in various isotopes; applications include homeland security, stockpile science and surveillance, nuclear fuel assay, and waste imaging and assay. After a brief presentation of the successful nuclear resonance fluorescence (NRF) experiments done with T-REX, the new source design, key parameters, and current status are presented.

  15. Stress-based animal models of depression: Do we actually know what we are doing?

    PubMed

    Yin, Xin; Guven, Nuri; Dietis, Nikolas

    2016-12-01

    Depression is one of the leading causes of disability and a significant health concern worldwide. Much of our current understanding of the pathogenesis of depression and the pharmacology of antidepressant drugs is based on pre-clinical models. Three of the most popular stress-based rodent models are the forced swimming test, the chronic mild stress paradigm and the learned helplessness model. Despite their recognized advantages and limitations, they are associated with immense variability due to the high number of design parameters that define them. Only a few studies have reported how minor modifications of these parameters affect the model phenotype. Thus, the existing variability in how these models are used has been a strong barrier to drug development, as well as to the benchmarking and evaluation of these pre-clinical models of depression. It has also been the source of confusing variability in experimental outcomes between research groups using the same models. In this review, we summarize the known variability in the experimental protocols, identify the main and relevant parameters for each model, and describe the variable values using characteristic examples. Our view of depression and our efforts to discover novel and effective antidepressants are largely based on our detailed knowledge of these testing paradigms, and require a sound understanding of the importance of individual parameters to optimize and improve these pre-clinical models. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Marine cycling of the climate relevant trace gases carbonyl sulfide (OCS) and carbon disulfide (CS2) in the Peruvian upwelling regime

    NASA Astrophysics Data System (ADS)

    Lennartz, Sinikka; von Hobe, Marc; Booge, Dennis; Gonçalves-Araujo, Rafael; Bracher, Astrid; Röttgers, Rüdiger; Ksionzek, Kerstin B.; Koch, Boris P.; Fischer, Tim; Bittig, Henry; Quack, Birgit; Krüger, Kirstin; Marandino, Christa A.

    2017-04-01

    The ocean is a major source for the climate relevant trace gases carbonyl sulfide (OCS) and carbon disulfide (CS2). While the greenhouse gas CS2 quickly oxidizes to OCS in the atmosphere, the atmospheric lifetime of OCS of 2-7 years leads to an accumulation of this gas and makes it the most abundant reduced sulfur compound in the atmosphere. OCS has a counteracting effect on the climate: in the troposphere, it acts as a greenhouse gas causing warming, whereas it also sustains the stratospheric aerosol layer, and thus increases Earth's albedo causing cooling. To better constrain the important oceanic source of these trace gases, the marine cycling needs to be well understood and quantified. For OCS, the production and consumption processes are identified, but photoproduction and light-independent production rates remain to be quantified across different regions. In contrast, the processes that influence the oceanic cycling of CS2 are less well understood. Here we present new data from a cruise to the Peruvian upwelling regime and relate measurements of OCS and CS2 to key parameters, such as dissolved organic sulfur, chromophoric and fluorescent dissolved organic matter. We use a 1D water column model to further constrain their production and degradation rates. A focus is set on the influence of oxygen on the marine cycling of these two gases in oxygen depleted zones in the ocean, which are expected to expand in the future.
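    The production-degradation balance that the abstract's 1D water column model resolves can be illustrated with a zero-dimensional (box) sketch; the photoproduction, dark production and loss rates below are placeholders, not the cruise-derived values:

```python
def simulate_ocs(p_photo, p_dark, k_loss, c0=0.0, dt=600.0, n_steps=5000):
    """Euler integration of dC/dt = P_photo + P_dark - k_loss * C,
    a box-model stand-in for the depth-resolved OCS budget
    (concentrations and rates in arbitrary consistent units)."""
    c = c0
    for _ in range(n_steps):
        c += dt * (p_photo + p_dark - k_loss * c)
    return c

# the steady state should approach (P_photo + P_dark) / k_loss
c_end = simulate_ocs(p_photo=1e-12, p_dark=2e-13, k_loss=1e-5)
```

    In the real model, P_photo varies with light and chromophoric dissolved organic matter, P_dark with temperature and dissolved organic sulfur, and k_loss bundles hydrolysis and air-sea exchange; the box version only shows how the terms trade off.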

  17. Aerosol physicochemical properties in relation to meteorology: Case studies in urban, marine, and arid settings

    NASA Astrophysics Data System (ADS)

    Wonaschuetz, Anna

    Atmospheric aerosols are a highly relevant component of the climate system affecting atmospheric radiative transfer and the hydrological cycle. As opposed to other key atmospheric constituents with climatic relevance, atmospheric aerosol particles are highly heterogeneous in time and space with respect to their size, concentration, chemical composition and physical properties. Many aspects of their life cycle are not understood, making them difficult to represent in climate models and hard to control as a pollutant. Aerosol-cloud interactions in particular are infamous as a major source of uncertainty in future climate predictions. Field measurements are an important source of information for the modeling community and can lead to a better understanding of chemical and microphysical processes. In this study, field data from urban, marine, and arid settings are analyzed and the impact of meteorological conditions on the evolution of aerosol particles while in the atmosphere is investigated. Particular attention is given to organic aerosols, which are a poorly understood component of atmospheric aerosols. Local wind characteristics, solar radiation, relative humidity and the presence or absence of clouds and fog are found to be crucial factors in the transport and chemical evolution of aerosol particles. Organic aerosols in particular are found to be heavily impacted by processes in the liquid phase (cloud droplets and aerosol water). The reported measurements serve to improve the process-level understanding of aerosol evolution in different environments and to inform the modeling community by providing realistic values for input parameters and validation of model calculations.

  18. What Does a Verbal Test Measure? A New Approach to Understanding Sources of Item Difficulty.

    ERIC Educational Resources Information Center

    Berk, Eric J. Vanden; Lohman, David F.; Cassata, Jennifer Coyne

    Assessing the construct relevance of mental test results continues to present many challenges, and it has proven to be particularly difficult to assess the construct relevance of verbal items. This study was conducted to gain a better understanding of the conceptual sources of verbal item difficulty using a unique approach that integrates…

  19. Bayesian multiple-source localization in an uncertain ocean environment.

    PubMed

    Dosso, Stan E; Wilmut, Michael J

    2011-06-01

    This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out. © 2011 Acoustical Society of America
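    The marginalization approach can be illustrated with a toy Metropolis-Hastings sampler over a single source's range and depth; the Gaussian mismatch function below is an invented stand-in for the real acoustic-data misfit, and all numbers are illustrative:

```python
import math, random

random.seed(1)

TRUE_RANGE, TRUE_DEPTH = 5.0, 60.0   # km and m; invented "true" location

def log_posterior(r, z):
    """Log posterior with uniform priors and a synthetic Gaussian
    mismatch standing in for the acoustic likelihood."""
    if not (0.0 < r < 10.0 and 0.0 < z < 100.0):
        return -math.inf
    return -0.5 * (((r - TRUE_RANGE) / 0.2) ** 2
                   + ((z - TRUE_DEPTH) / 5.0) ** 2)

def mh_samples(n, r0=2.0, z0=20.0):
    """Random-walk Metropolis-Hastings over (range, depth)."""
    r, z = r0, z0
    lp = log_posterior(r, z)
    out = []
    for _ in range(n):
        rp, zp = r + random.gauss(0, 0.3), z + random.gauss(0, 6.0)
        lpp = log_posterior(rp, zp)
        if math.log(random.random()) < lpp - lp:
            r, z, lp = rp, zp, lpp
        out.append((r, z))
    return out

samples = mh_samples(20000)[5000:]          # drop burn-in
r_mean = sum(s[0] for s in samples) / len(samples)
z_mean = sum(s[1] for s in samples) / len(samples)
```

    The paper's approach additionally samples environmental parameters and handles multiple sources with closed-form source strengths and noise variance; the sketch only shows the marginal-posterior idea of reading off locations from sampled ranges and depths.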

  20. Principles of parametric estimation in modeling language competition

    PubMed Central

    Zhang, Menghan; Gong, Tao

    2013-01-01

    It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka–Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data. PMID:23716678
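    The underlying Lotka-Volterra competition dynamics can be sketched as follows; the growth rates, carrying capacities and impact coefficients are illustrative, not the census-estimated values from the paper:

```python
def simulate_competition(x0, y0, rx, ry, kx, ky, ax, ay,
                         dt=0.01, steps=20000):
    """Euler integration of Lotka-Volterra competition:
    dx/dt = rx*x*(1 - (x + ax*y)/kx), and symmetrically for y.
    ax, ay play the role of the competing language's impact."""
    x, y = x0, y0
    for _ in range(steps):
        dx = rx * x * (1.0 - (x + ax * y) / kx)
        dy = ry * y * (1.0 - (y + ay * x) / ky)
        x += dt * dx
        y += dt * dy
    return x, y

# asymmetric impacts: language y suffers more from x than vice versa,
# so the dynamics drive y toward extinction (competitive exclusion)
x_end, y_end = simulate_competition(x0=0.4, y0=0.4, rx=1.0, ry=1.0,
                                    kx=1.0, ky=1.0, ax=0.5, ay=1.5)
```

    In the paper's setting, the impact and inheritance parameters are not tuned freely like this but computed from census and survey data via the proposed diffusion and inheritance principles.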

  1. Principles of parametric estimation in modeling language competition.

    PubMed

    Zhang, Menghan; Gong, Tao

    2013-06-11

    It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka-Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data.

  2. Benefits of a clinical data warehouse with data mining tools to collect data for a radiotherapy trial

    PubMed Central

    Roelofs, Erik; Persoon, Lucas; Nijsten, Sebastiaan; Wiessler, Wolfgang; Dekker, André; Lambin, Philippe

    2016-01-01

    Introduction: Collecting trial data in a medical environment is at present mostly performed manually, and is therefore time-consuming, prone to errors and often incomplete with the complex data considered. Faster and more accurate methods are needed to improve data quality and to shorten data collection times where information is often scattered over multiple data sources. The purpose of this study is to investigate the possible benefit of modern data warehouse technology in the radiation oncology field. Material and methods: In this study, a Computer Aided Theragnostics (CAT) data warehouse combined with automated tools for feature extraction was benchmarked against the regular manual data-collection processes. Two sets of clinical parameters were compiled for non-small cell lung cancer (NSCLC) and rectal cancer, using 27 patients per disease. Data collection times and inconsistencies were compared between the manual and the automated extraction method. Results: The average time per case to collect the NSCLC data manually was 10.4 ± 2.1 min, versus 4.3 ± 1.1 min with the automated method (p < 0.001). For rectal cancer, these times were 13.5 ± 4.1 and 6.8 ± 2.4 min, respectively (p < 0.001). In 3.2% of the data collected for NSCLC and 5.3% for rectal cancer, there was a discrepancy between the manual and automated method. Conclusions: Aggregating multiple data sources in a data warehouse combined with tools for extraction of relevant parameters is beneficial for data collection times and offers the ability to improve data quality. The initial investments in digitizing the data are expected to be compensated by the flexibility of the data analysis. Furthermore, successive investigations can easily select trial candidates and extract new parameters from the existing databases. PMID:23394741

  3. Benefits of a clinical data warehouse with data mining tools to collect data for a radiotherapy trial.

    PubMed

    Roelofs, Erik; Persoon, Lucas; Nijsten, Sebastiaan; Wiessler, Wolfgang; Dekker, André; Lambin, Philippe

    2013-07-01

    Collecting trial data in a medical environment is at present mostly performed manually, and is therefore time-consuming, prone to errors and often incomplete with the complex data considered. Faster and more accurate methods are needed to improve data quality and to shorten data collection times where information is often scattered over multiple data sources. The purpose of this study is to investigate the possible benefit of modern data warehouse technology in the radiation oncology field. In this study, a Computer Aided Theragnostics (CAT) data warehouse combined with automated tools for feature extraction was benchmarked against the regular manual data-collection processes. Two sets of clinical parameters were compiled for non-small cell lung cancer (NSCLC) and rectal cancer, using 27 patients per disease. Data collection times and inconsistencies were compared between the manual and the automated extraction method. The average time per case to collect the NSCLC data manually was 10.4 ± 2.1 min, versus 4.3 ± 1.1 min with the automated method (p<0.001). For rectal cancer, these times were 13.5 ± 4.1 and 6.8 ± 2.4 min, respectively (p<0.001). In 3.2% of the data collected for NSCLC and 5.3% for rectal cancer, there was a discrepancy between the manual and automated method. Aggregating multiple data sources in a data warehouse combined with tools for extraction of relevant parameters is beneficial for data collection times and offers the ability to improve data quality. The initial investments in digitizing the data are expected to be compensated by the flexibility of the data analysis. Furthermore, successive investigations can easily select trial candidates and extract new parameters from the existing databases. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
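    The reported comparison can be reconstructed from the summary statistics in the abstract (mean ± SD, n = 27 per arm) with Welch's t statistic; using this particular test is an assumption on our part, since the abstract does not name the test applied:

```python
import math

def welch_t(m1, sd1, n1, m2, sd2, n2):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for two independent samples summarized by mean, SD, and n."""
    v1, v2 = sd1 ** 2 / n1, sd2 ** 2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# manual vs automated collection times from the abstract
t_nsclc, df_nsclc = welch_t(10.4, 2.1, 27, 4.3, 1.1, 27)   # NSCLC
t_rect, df_rect = welch_t(13.5, 4.1, 27, 6.8, 2.4, 27)     # rectal cancer
```

    Both t values far exceed the two-sided critical value for p < 0.001 at these degrees of freedom (roughly 3.6), consistent with the reported significance.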

  4. The INAF/IAPS Plasma Chamber for ionospheric simulation experiment

    NASA Astrophysics Data System (ADS)

    Diego, Piero

    2016-04-01

    The plasma chamber is particularly suitable for performing studies for the following applications: - plasma compatibility and functional tests on payloads envisioned to operate in the ionosphere (e.g. sensors onboard satellites, exposed to the external plasma environment); - calibration/testing of plasma diagnostic sensors; - characterization and compatibility tests on components for space applications (e.g. optical elements, harness, satellite paints, photo-voltaic cells, etc.); - experiments on satellite charging in a space plasma environment; - tests on active experiments which use ion, electron or plasma sources (ion thrusters, hollow cathodes, field effect emitters, plasma contactors, etc.); - possible studies relevant to fundamental space plasma physics. The facility consists of a large volume vacuum tank (a cylinder of length 4.5 m and diameter 1.7 m) equipped with a Kaufman type plasma source, operating with Argon gas, capable of generating a plasma beam with parameters (i.e. density and electron temperature) close to the values encountered in the ionosphere at F layer altitudes. The plasma beam (Ar+ ions and electrons) is accelerated into the chamber at a velocity that reproduces the relative motion between an orbiting satellite and the ionosphere (≈ 8 km/s). This feature, in particular, allows laboratory simulations of the actual compression and depletion phenomena which take place in the ram and wake regions around satellites moving through the ionosphere. The reproduced plasma environment is monitored using Langmuir Probes (LP) and Retarding Potential Analyzers (RPA). These sensors can be automatically moved within the experimental space using a sled mechanism. Such a feature allows the acquisition of the plasma parameters all around the space payload installed in the chamber for testing. 
The facility is currently in use to test the payloads of CSES satellite (Chinese Seismic Electromagnetic Satellite) devoted to plasma parameters and electric field measurements in a polar orbit at 500 km altitude.
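    As a back-of-envelope check on the quoted drift velocity, the Ar+ beam energy corresponding to the ≈ 8 km/s satellite ram speed follows from E = ½mv²; this calculation is illustrative and not taken from the facility documentation:

```python
AMU = 1.66053906660e-27      # kg per atomic mass unit
EV = 1.602176634e-19         # J per electronvolt
M_AR = 39.948 * AMU          # mass of an argon ion (electron mass neglected)

def beam_energy_eV(v):
    """Kinetic energy E = m v^2 / 2 of an Ar+ ion at speed v (m/s), in eV."""
    return 0.5 * M_AR * v ** 2 / EV

e_ram = beam_energy_eV(8.0e3)
```

    The result is on the order of 13 eV, which is why a modest acceleration potential suffices to reproduce the orbital ram energy of ionospheric ions in the chamber.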

  5. Comparative evaluation of topographical data of dental implant surfaces applying optical interferometry and scanning electron microscopy.

    PubMed

    Kournetas, N; Spintzyk, S; Schweizer, E; Sawada, T; Said, F; Schmid, P; Geis-Gerstorfer, J; Eliades, G; Rupp, F

    2017-08-01

    Comparability of topographical data of implant surfaces in the literature is low, and their clinical relevance is often equivocal. The aim of this study was to investigate the ability of scanning electron microscopy and optical interferometry to assess statistically similar 3-dimensional roughness parameter results, and to evaluate these data based on predefined criteria regarded as relevant for a favorable biological response. Four different commercial dental screw-type implants (NanoTite Certain Prevail, TiUnite Brånemark Mk III, XiVE S Plus and SLA Standard Plus) were analyzed by stereo scanning electron microscopy and white light interferometry. Surface height, spatial and hybrid roughness parameters (Sa, Sz, Ssk, Sku, Sal, Str, Sdr) were assessed from raw and filtered data (Gaussian 50 μm and 5 μm cut-off filters), respectively. Data were statistically compared by one-way ANOVA and the Tukey-Kramer post-hoc test. For a clinically relevant interpretation, a categorizing evaluation approach was used based on predefined threshold criteria for each roughness parameter. The two methods exhibited predominantly statistical differences. Depending on the roughness parameters and filter settings, both methods showed variations in the rankings of the implant surfaces and differed in their ability to discriminate the different topographies. Overall, the analyses revealed scale-dependent roughness data. Compared to the purely statistical approach, the categorizing evaluation resulted in many more similarities between the two methods. This study suggests reconsidering current approaches to the topographical evaluation of implant surfaces and further pursuing proper experimental settings. Furthermore, the specific role of different roughness parameters in the bioresponse has to be studied in detail in order to better define clinically relevant, scale-dependent and parameter-specific thresholds and ranges. Copyright © 2017 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  6. Stabilometric parameters are affected by anthropometry and foot placement.

    PubMed

    Chiari, Lorenzo; Rocchi, Laura; Cappello, Angelo

    2002-01-01

    To recognize and quantify the influence of biomechanical factors, namely anthropometry and foot placement, on the more common measures of stabilometric performance, including new-generation stochastic parameters. Fifty normal-bodied young adults were selected in order to cover a sufficiently wide range of anthropometric properties. They were allowed to choose their preferred side-by-side foot position and their quiet stance was recorded with eyes open and closed by a force platform. biomechanical factors are known to influence postural stability but their impact on stabilometric parameters has not been extensively explored yet. Principal component analysis was used for feature selection among several biomechanical factors. A collection of 55 stabilometric parameters from the literature was estimated from the center-of-pressure time series. Linear relations between stabilometric parameters and selected biomechanical factors were investigated by robust regression techniques. The feature selection process returned height, weight, maximum foot width, base-of-support area, and foot opening angle as the relevant biomechanical variables. Only eleven out of the 55 stabilometric parameters were completely immune from a linear dependence on these variables. The remaining parameters showed a moderate to high dependence that was strengthened upon eye closure. For these parameters, a normalization procedure was proposed, to remove what can well be considered, in clinical investigations, a spurious source of between-subject variability. Care should be taken when quantifying postural sway through stabilometric parameters. It is suggested as a good practice to include some anthropometric measurements in the experimental protocol, and to standardize or trace foot position. 
Although the role of anthropometry and foot placement has been investigated in specific studies, no studies in the literature systematically explore the relationship between such biomechanical factors and stabilometric parameters. This knowledge may contribute to better defining the experimental protocol and to improving the functional evaluation of postural sway for clinical purposes, e.g. by removing through normalization the spurious effects of body properties and foot position on postural performance.
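The normalization procedure proposed in this record can be sketched as a regression-residual correction. The sketch below uses ordinary least squares in place of the robust regression the authors employed, and all data and names are synthetic and illustrative only:

```python
import numpy as np

def normalize_sway_parameter(values, factors):
    # Fit a linear model (parameter ~ biomechanical factors) and keep the
    # residual plus the grand mean: the between-subject variability that
    # is explained by body properties is removed, the group level kept.
    X = np.column_stack([np.ones(len(values)), factors])
    coef, *_ = np.linalg.lstsq(X, values, rcond=None)
    return values - X @ coef + values.mean()

# Synthetic example: a sway parameter that depends linearly on height.
rng = np.random.default_rng(0)
height = rng.normal(1.75, 0.10, 50)                  # m
sway = 2.0 + 3.0 * height + rng.normal(0, 0.1, 50)   # arbitrary units
normalized = normalize_sway_parameter(sway, height[:, None])
```

After this correction the parameter is uncorrelated with the regressed factors, which is the property the authors exploit to remove a spurious source of between-subject variability.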

  7. Impact of various operating modes on performance and emission parameters of small heat source

    NASA Astrophysics Data System (ADS)

    Vician, Peter; Holubčík, Michal; Palacka, Matej; Jandačka, Jozef

    2016-06-01

    This thesis deals with the measurement of performance and emission parameters of a small heat source for the combustion of biomass in each of its operating modes. A pellet boiler with an output of 18 kW was used as the heat source. The work includes the design of an experimental device for measuring the impact of changes in air supply, and a method for controlling the power and emission parameters of heat sources for the combustion of woody biomass. The work describes the main factors that affect the combustion process and analyzes the measurements of emissions at the heat source. The results of the experiment give the values of performance and emission parameters for the different operating modes of the boiler, which serve as a decisive factor in choosing the appropriate mode.

  8. Thermal State-of-Charge in Solar Heat Receivers

    NASA Technical Reports Server (NTRS)

    Hall, Carsie A., Jr.; Glakpe, Emmanuel K.; Cannon, Joseph N.; Kerslake, Thomas W.

    1998-01-01

    A theoretical framework is developed to determine the so-called thermal state-of-charge (SOC) in solar heat receivers employing encapsulated phase change materials (PCMs) that undergo cyclic melting and freezing. The present problem is relevant to space solar dynamic power systems that would typically operate in low-Earth-orbit (LEO). The solar heat receiver is integrated into a closed-cycle Brayton engine that produces electric power during sunlight and eclipse periods of the orbit cycle. The concepts of available power and virtual source temperature, both on a finite-time basis, are used as the basis for determining the SOC. Analytic expressions for the available power crossing the aperture plane of the receiver, available power stored in the receiver, and available power delivered to the working fluid are derived, all of which are related to the SOC through measurable parameters. Lower and upper bounds on the SOC are proposed in order to delineate absolute limiting cases for a range of input parameters (orbital, geometric, etc.). SOC characterization is also performed in the subcooled, two-phase, and superheat regimes. Finally, a previously developed physical and numerical model of the solar heat receiver component of NASA Lewis Research Center's Ground Test Demonstration (GTD) system is used in order to predict the SOC as a function of measurable parameters.
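As a rough illustration of regime-dependent charge bookkeeping, the sketch below computes an energy-based state-of-charge for a PCM canister across the subcooled, two-phase, and superheat regimes. This is not the paper's availability (exergy) based definition, and the LiF-CaF2-like property values are nominal assumptions of this sketch:

```python
def thermal_soc(T, x, pcm):
    # Energy-based state-of-charge: 0 at fully subcooled T_min, 1 at
    # fully superheated T_max; x is the melt fraction at T_melt.
    cp_s, cp_l, h_fus = pcm["cp_solid"], pcm["cp_liquid"], pcm["h_fus"]
    T_melt, T_min, T_max = pcm["T_melt"], pcm["T_min"], pcm["T_max"]
    e_max = cp_s * (T_melt - T_min) + h_fus + cp_l * (T_max - T_melt)
    if T < T_melt:        # subcooled solid: sensible heat only
        e = cp_s * (T - T_min)
    elif T == T_melt:     # two-phase: partial latent heat
        e = cp_s * (T_melt - T_min) + x * h_fus
    else:                 # superheated liquid: full latent + sensible
        e = cp_s * (T_melt - T_min) + h_fus + cp_l * (T - T_melt)
    return e / e_max

# Nominal LiF-CaF2-eutectic-like properties (J/kg-K, J/kg, K); assumed values.
lif_caf2 = {"cp_solid": 1770, "cp_liquid": 1750, "h_fus": 816e3,
            "T_melt": 1040, "T_min": 900, "T_max": 1100}
```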

  9. Disentangling the adult attention-deficit hyperactivity disorder endophenotype: parametric measurement of attention.

    PubMed

    Finke, Kathrin; Schwarzkopf, Wolfgang; Müller, Ulrich; Frodl, Thomas; Müller, Hermann J; Schneider, Werner X; Engel, Rolf R; Riedel, Michael; Möller, Hans-Jürgen; Hennig-Fast, Kristina

    2011-11-01

    Attention deficit hyperactivity disorder (ADHD) persists frequently into adulthood. The decomposition of endophenotypes by means of experimental neuro-cognitive assessment has the potential to improve diagnostic assessment, evaluation of treatment response, and disentanglement of genetic and environmental influences. We assessed four parameters of attentional capacity and selectivity derived from simple psychophysical tasks (verbal report of briefly presented letter displays) and based on a "theory of visual attention." These parameters are mathematically independent, quantitative measures, and previous studies have shown that they are highly sensitive for subtle attention deficits. Potential reductions of attentional capacity, that is, of perceptual processing speed and working memory storage capacity, were assessed with a whole report paradigm. Furthermore, possible pathologies of attentional selectivity, that is, selection of task-relevant information and bias in the spatial distribution of attention, were measured with a partial report paradigm. A group of 30 unmedicated adult ADHD patients and a group of 30 demographically matched healthy controls were tested. ADHD patients showed significant reductions of working memory storage capacity of a moderate to large effect size. Perceptual processing speed, task-based, and spatial selection were unaffected. The results imply a working memory deficit as an important source of behavioral impairments. The theory of visual attention parameter working memory storage capacity might constitute a quantifiable and testable endophenotype of ADHD.

  10. Visualizing spatial correlation: structural and electronic orders in iron-based superconductors on atomic scale

    NASA Astrophysics Data System (ADS)

    Maksov, Artem; Ziatdinov, Maxim; Li, Li; Sefat, Athena; Maksymovych, Petro; Kalinin, Sergei

    Crystalline matter on the nanoscale often exhibits strongly inhomogeneous structural and electronic orders, which have a profound effect on macroscopic properties. This may be caused by a subtle interplay between chemical disorder, strain, and magnetic and structural order parameters. We present a novel approach based on a combination of high-resolution scanning tunneling microscopy/spectroscopy (STM/S) and deep-data-style analysis for automatic separation, extraction, and correlation of structural and electronic behavior, which may uncover the underlying sources of inhomogeneity in the iron-based family of superconductors (FeSe, BaFe2As2). We identify STS spectral features using physically robust Bayesian linear unmixing, and show their direct relevance to the fundamental physical properties of the system, including electronic states associated with individual defects and impurities. We collect structural data from individual unit cells on the crystalline lattice, and calculate both global and local indicators of spatial correlation with electronic features, demonstrating, for the first time, a direct quantifiable connection between observed structural order parameters extracted from the STM data and electronic order parameters identified within the STS data. This research was sponsored by the Division of Materials Sciences and Engineering, Office of Science, Basic Energy Sciences, US DOE.
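A local indicator of spatial correlation between two co-registered maps, of the kind described in this record, can be sketched as a sliding-window Pearson correlation. The plain-numpy version below is a simplification of the actual analysis pipeline, and the maps are synthetic:

```python
import numpy as np

def local_correlation(a, b, w=5):
    # Sliding-window Pearson correlation between two co-registered maps,
    # e.g. a structural order parameter from STM topography and the
    # abundance map of one Bayesian-unmixing STS component.
    assert a.shape == b.shape and w % 2 == 1
    r = np.full(a.shape, np.nan)     # border pixels left undefined
    h = w // 2
    for i in range(h, a.shape[0] - h):
        for j in range(h, a.shape[1] - h):
            pa = a[i - h:i + h + 1, j - h:j + h + 1].ravel()
            pb = b[i - h:i + h + 1, j - h:j + h + 1].ravel()
            r[i, j] = np.corrcoef(pa, pb)[0, 1]
    return r

# Synthetic test maps: electronic order tracking structural order plus noise.
rng = np.random.default_rng(1)
struct = rng.normal(size=(32, 32))
electronic = 0.8 * struct + 0.2 * rng.normal(size=(32, 32))
rmap = local_correlation(struct, electronic)
```

The global indicator is simply `np.corrcoef(struct.ravel(), electronic.ravel())`; the local map shows where the two orders decouple.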

  11. Estimating mutation parameters, population history and genealogy simultaneously from temporally spaced sequence data.

    PubMed Central

    Drummond, Alexei J; Nicholls, Geoff K; Rodrigo, Allen G; Solomon, Wiremu

    2002-01-01

    Molecular sequences obtained at different sampling times from populations of rapidly evolving pathogens and from ancient subfossil and fossil sources are increasingly available with modern sequencing technology. Here, we present a Bayesian statistical inference approach to the joint estimation of mutation rate and population size that incorporates the uncertainty in the genealogy of such temporally spaced sequences by using Markov chain Monte Carlo (MCMC) integration. The Kingman coalescent model is used to describe the time structure of the ancestral tree. We recover information about the unknown true ancestral coalescent tree, population size, and the overall mutation rate from temporally spaced data, that is, from nucleotide sequences gathered at different times, from different individuals, in an evolving haploid population. We briefly discuss the methodological implications and show what can be inferred, in various practically relevant states of prior knowledge. We develop extensions for exponentially growing population size and joint estimation of substitution model parameters. We illustrate some of the important features of this approach on a genealogy of HIV-1 envelope (env) partial sequences. PMID:12136032

  12. Estimating mutation parameters, population history and genealogy simultaneously from temporally spaced sequence data.

    PubMed

    Drummond, Alexei J; Nicholls, Geoff K; Rodrigo, Allen G; Solomon, Wiremu

    2002-07-01

    Molecular sequences obtained at different sampling times from populations of rapidly evolving pathogens and from ancient subfossil and fossil sources are increasingly available with modern sequencing technology. Here, we present a Bayesian statistical inference approach to the joint estimation of mutation rate and population size that incorporates the uncertainty in the genealogy of such temporally spaced sequences by using Markov chain Monte Carlo (MCMC) integration. The Kingman coalescent model is used to describe the time structure of the ancestral tree. We recover information about the unknown true ancestral coalescent tree, population size, and the overall mutation rate from temporally spaced data, that is, from nucleotide sequences gathered at different times, from different individuals, in an evolving haploid population. We briefly discuss the methodological implications and show what can be inferred, in various practically relevant states of prior knowledge. We develop extensions for exponentially growing population size and joint estimation of substitution model parameters. We illustrate some of the important features of this approach on a genealogy of HIV-1 envelope (env) partial sequences.
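The joint estimation described in this record can be illustrated, in a much reduced form, by a Metropolis sampler for the population size alone, conditioning on a known genealogy. The full method also samples the tree, mutation rate, and growth rate; all numbers below are synthetic:

```python
import math, random

def coalescent_loglik(N, intervals):
    # Kingman coalescent: while k lineages remain, the waiting time to
    # the next coalescence is exponential with rate k(k-1)/(2N).
    ll = 0.0
    for k, t in intervals:            # (lineage count, interval length)
        rate = k * (k - 1) / (2.0 * N)
        ll += math.log(rate) - rate * t
    return ll

def sample_population_size(intervals, n_iter=20000, seed=0):
    # Random-walk Metropolis on log N with a flat prior on log N; the
    # paper's MCMC additionally moves the genealogy and mutation rate.
    rng = random.Random(seed)
    N = 1.0
    ll = coalescent_loglik(N, intervals)
    chain = []
    for _ in range(n_iter):
        N_prop = N * math.exp(rng.gauss(0.0, 0.3))   # symmetric in log N
        ll_prop = coalescent_loglik(N_prop, intervals)
        if math.log(rng.random()) < ll_prop - ll:
            N, ll = N_prop, ll_prop
        chain.append(N)
    return chain

# Synthetic data: coalescent intervals fixed at their expectations under N = 5.
intervals = [(k, 2 * 5.0 / (k * (k - 1)))
             for _ in range(20) for k in range(10, 1, -1)]
chain = sample_population_size(intervals)
```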

  13. The influence of petroleum products on the methane fermentation process.

    PubMed

    Choromański, Paweł; Karwowska, Ewa; Łebkowska, Maria

    2016-01-15

    In this study the influence of two petroleum products, diesel fuel and spent engine oil, on the sewage sludge digestion process and biogas production efficiency was investigated. Microbiological, chemical and enzymatic analyses were applied in the survey. It was revealed that the influence of petroleum derivatives on the effectiveness of the methane fermentation of sewage sludge depends on the type of petroleum product. Diesel fuel did not limit the biogas production or the methane concentration in the biogas, while spent engine oil significantly reduced the process efficacy. The changes in physical-chemical parameters, excluding COD, did not reflect the effect of the tested substances. The negative influence of petroleum products on individual bacterial groups was observed after 7 days of the process, while after 14 days some adaptive mechanisms probably appeared. The dehydrogenase activity assessment was the most relevant parameter for evaluating the effect of petroleum product contamination. Diesel fuel was probably used as a source of carbon and energy in the process, while a toxic influence was observed in the case of spent engine oil. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Optical and positron annihilation spectroscopic studies on PMMA polymer doped by rhodamine B/chloranilic acid charge transfer complex: Special relevance to the effect of γ-ray irradiation

    NASA Astrophysics Data System (ADS)

    Hassan, H. E.; Refat, Moamen S.; Sharshar, T.

    2016-04-01

    Polymeric sheets of poly(methyl methacrylate) (PMMA) containing a charge transfer (CT) complex of rhodamine B/chloranilic acid (Rho B/CHA) were synthesized in methanol solvent at room temperature. Systematic analysis of the Rho B and its CT complex in the form of powder or polymeric sheets confirmed their structure and thermal stability. The IR spectra revealed the charge-transfer mode of interaction between the CHA central positions and the terminal carboxylic group. The polymer sheets were irradiated with 70 kGy of γ radiation using a 60Co source to study the resulting changes in the structure and optical parameters. The microstructure changes of the PMMA sheets caused by γ-ray irradiation were analyzed using positron annihilation lifetime (PAL) and positron annihilation Doppler broadening (PADB) techniques. The positron lifetime components (τi) and their corresponding intensities (Ii) as well as the PADB line-shape parameters (S and W) were found to be highly sensitive to the enhanced disorder occurring in the organic chains of the polymeric sheets due to γ-irradiation.

  15. A study on the seismic source parameters for earthquakes occurring in the southern Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Rhee, H. M.; Sheen, D. H.

    2015-12-01

    We investigated the characteristics of the seismic source parameters of the southern part of the Korean Peninsula for the 599 events with ML≥1.7 from 2001 to 2014. A large number of data were carefully selected by visual inspection in the time and frequency domains. The data set consists of 5,093 S-wave trains on three-component seismograms recorded at broadband seismograph stations operated by the Korea Meteorological Administration and the Korea Institute of Geoscience and Mineral Resources. The corner frequency, stress drop, and moment magnitude of each event were measured by using the modified method of Jo and Baag (2001), based on the methods of Snoke (1987) and Andrews (1986). We found that this method could improve the stability of the estimation of source parameters from the S-wave displacement spectrum by an iterative process. We then compared the source parameters with those obtained from previous studies and investigated the source scaling relationship and the regional variations of source parameters in the southern Korean Peninsula.
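At its core, the spectral measurement described in this record fits an omega-square source model to the S-wave displacement spectrum. The sketch below fits only the plateau and the corner frequency to synthetic, attenuation-corrected data with scipy; it omits the iterative Q correction of the cited methods, and all numerical values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def brune_log_spectrum(f, log_omega0, fc):
    # log10 of the Brune (1970) omega-square displacement spectrum:
    # Omega(f) = Omega0 / (1 + (f/fc)**2)
    return log_omega0 - np.log10(1.0 + (f / fc) ** 2)

# Synthetic, attenuation-corrected S-wave spectrum (hypothetical values).
f = np.logspace(-1, 1.5, 200)          # 0.1 to ~31.6 Hz
rng = np.random.default_rng(2)
obs = brune_log_spectrum(f, -6.0, 3.0) + rng.normal(0, 0.05, f.size)

(log_omega0_est, fc_est), _ = curve_fit(brune_log_spectrum, f, obs, p0=[-5.0, 1.0])
# The low-frequency plateau Omega0 then yields the seismic moment, and
# fc the stress drop, through the usual scaling relations.
```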

  16. Local tsunamis and earthquake source parameters

    USGS Publications Warehouse

    Geist, Eric L.; Dmowska, Renata; Saltzman, Barry

    1999-01-01

    This chapter establishes the relationship among earthquake source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during the earthquake rupture is modeled using the elastic dislocation theory, for which the displacement field is dependent on the slip distribution, fault geometry, and the elastic response and properties of the medium. Specifically, nonlinear long-wave theory governs the propagation and run-up of tsunamis. A parametric study is devised to examine the relative importance of individual earthquake source parameters on local tsunamis, because the physics that describes tsunamis from generation through run-up is complex. Analysis of the source parameters of various tsunamigenic earthquakes has indicated that the details of the earthquake source, namely, nonuniform distribution of slip along the fault plane, have a significant effect on the local tsunami run-up. Numerical methods have been developed to address realistic bathymetric and shoreline conditions. The accuracy of determining the run-up on shore is directly dependent on the source parameters of the earthquake, which provide the initial conditions used for the hydrodynamic models.

  17. [Clinical relevance of periodic limb movements during sleep in obstructive sleep apnea patients].

    PubMed

    Iriarte, J; Alegre, M; Irimia, P; Urriza, J; Artieda, J

    The periodic limb movements disorder (PLMD) is frequently associated with the obstructive sleep apnea syndrome (OSAS), but the prevalence and clinical relevance of this association have not been studied in detail. The objectives were to conduct a prospective study of the prevalence of PLMD in patients with OSAS, and to correlate this association with clinical and respiratory parameters. Forty-two patients diagnosed with OSAS, without clinical suspicion of PLMD, underwent a polysomnographic study. Clinical symptoms and signs were evaluated with a structured questionnaire, and respiratory parameters were obtained from the nocturnal study. Periodic limb movements were found in 10 patients (24%). There were no differences in clinical parameters between the two groups (with and without periodic limb movements). However, respiratory parameters were significantly worse in patients without PLMD. PLMD is very frequent in patients with OSAS, and can contribute to worsening clinical signs and symptoms in these patients independently of respiratory parameters.

  18. Discovering Hidden Controlling Parameters using Data Analytics and Dimensional Analysis

    NASA Astrophysics Data System (ADS)

    Del Rosario, Zachary; Lee, Minyong; Iaccarino, Gianluca

    2017-11-01

    Dimensional Analysis is a powerful tool, one which takes a priori information and produces important simplifications. However, if this a priori information - the list of relevant parameters - is missing a relevant quantity, then the conclusions from Dimensional Analysis will be incorrect. In this work, we present novel conclusions in Dimensional Analysis, which provide a means to detect this failure mode of missing or hidden parameters. These results are based on a restated form of the Buckingham Pi theorem that reveals a ridge function structure underlying all dimensionless physical laws. We leverage this structure by constructing a hypothesis test based on sufficient dimension reduction, allowing for an experimental data-driven detection of hidden parameters. Both theory and examples will be presented, using classical turbulent pipe flow as the working example. Keywords: experimental techniques, dimensional analysis, lurking variables, hidden parameters, buckingham pi, data analysis. First author supported by the NSF GRFP under Grant Number DGE-114747.
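Computationally, the Buckingham Pi construction underlying this work reduces to a null-space calculation on the dimension matrix. A minimal sketch with sympy for a pared-down pipe-flow variable list (V, D, rho, mu; the authors' working example involves more variables):

```python
from sympy import Matrix

# Dimension matrix: rows are exponents of M, L, T; columns are the
# exponents of each variable's dimensions: V (velocity), D (diameter),
# rho (density), mu (dynamic viscosity).
dim = Matrix([
    [0,  0,  1,  1],   # M
    [1,  1, -3, -1],   # L
    [-1, 0,  0, -1],   # T
])

# Dimensionless groups are the null space of the dimension matrix.
basis = dim.nullspace()
assert len(basis) == 4 - dim.rank()    # exactly one group here
pi = basis[0] / basis[0][0]            # normalize first exponent to 1
print(pi.T)  # exponents of (V, D, rho, mu): the Reynolds number V*D*rho/mu
```

A hidden (lurking) parameter would show up as data that cannot be collapsed onto a function of these groups alone, which is what the paper's hypothesis test detects.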

  19. Endogenous Magnetic Reconnection in Solar Coronal Loops

    NASA Astrophysics Data System (ADS)

    Asgari-Targhi, M.; Coppi, B.; Basu, B.; Fletcher, A.; Golub, L.

    2017-12-01

    We propose that a magneto-thermal reconnection process occurring in coronal loops is the source of the heating of the Solar Corona [1]. In the adopted model, magnetic reconnection is associated with electron temperature gradients, anisotropic electron temperature fluctuations and plasma current density gradients [2]. The input parameters for our theoretical model are derived from the most recent observations of the Solar Corona. In addition, the relevant (endogenous) collective modes can produce high energy particle populations. An endogenous reconnection process is defined as being driven by factors internal to the region where reconnection takes place. *Sponsored in part by the U.S. D.O.E. and the Kavli Foundation* [1] Beafume, P., Coppi, B. and Golub, L., (1992) Ap. J. 393, 396. [2] Coppi, B. and Basu, B. (2017) MIT-LNS Report HEP 17/01.

  20. DEVELOPMENT OF TITANIUM NITRIDE COATING FOR SNS RING VACUUM CHAMBERS.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HE,P.; HSEUH,H.C.; MAPES,M.

    2001-06-18

    The inner surface of the ring vacuum chambers of the US Spallation Neutron Source (SNS) will be coated with ~100 nm of Titanium Nitride (TiN). This is to minimize the secondary electron yield (SEY) from the chamber wall, and thus avoid the so-called e-p instability caused by electron multipacting as observed in a few high-intensity proton storage rings. Both DC sputtering and DC-magnetron sputtering were conducted in a test chamber of geometry relevant to the SNS ring vacuum chambers. Auger Electron Spectroscopy (AES) and Rutherford Back Scattering (RBS) were used to analyze the coatings for thickness, stoichiometry and impurity. Excellent results were obtained with magnetron sputtering. The development of the parameters for the coating process and the surface analysis results are presented.

  1. A probabilistic approach for the estimation of earthquake source parameters from spectral inversion

    NASA Astrophysics Data System (ADS)

    Supino, M.; Festa, G.; Zollo, A.

    2017-12-01

    The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture and the moment, stress and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune (1970) source model, and direct P- and S-waves propagating in a layered velocity model characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum then depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length) and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach to parameter estimation. Assuming an L2-norm based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function, and then explore the joint a-posteriori probability density function associated with the cost function around this minimum to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining deterministic minimization with a random exploration of the space (basin-hopping technique). The joint pdf is built from the misfit function using the maximum likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. Numerical integration of the pdf finally provides the mean, variance and correlation matrix associated with the set of best-fit parameters describing the model. 
Synthetic tests are performed to investigate the robustness of the method and uncertainty propagation from the data-space to the parameter space. Finally, the method is applied to characterize the source parameters of the earthquakes occurring during the 2016-2017 Central Italy sequence, with the goal of investigating the source parameter scaling with magnitude.
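The global exploration step described in this record (deterministic minimization chained by random jumps) can be sketched with scipy's basinhopping on a synthetic spectrum. The three parameters follow the abstract (plateau, corner frequency, high-frequency decay); the log10-fc parameterization and all numerical values are assumptions of this sketch, and the a-posteriori pdf and correlation-matrix steps are omitted:

```python
import numpy as np
from scipy.optimize import basinhopping

def model_logspec(f, p):
    # Generalized source spectrum in log10: plateau, corner frequency
    # (parameterized as log10 fc so it stays positive), decay exponent.
    log_omega0, log_fc, gamma = p
    return log_omega0 - np.log10(1.0 + (f / 10.0 ** log_fc) ** gamma)

def misfit(p, f, d):
    return np.sum((model_logspec(f, p) - d) ** 2)    # L2-norm cost

# Synthetic observed spectrum (hypothetical values).
f = np.logspace(-1, 1.5, 150)
p_true = np.array([-6.0, np.log10(2.5), 2.0])
rng = np.random.default_rng(3)
d = model_logspec(f, p_true) + rng.normal(0, 0.05, f.size)

# Basin hopping: local (deterministic) minimizations chained by random
# perturbations of the starting point, as in the global exploration stage.
res = basinhopping(misfit, x0=[-4.0, 0.0, 1.5], niter=50, seed=4,
                   minimizer_kwargs={"args": (f, d), "method": "Nelder-Mead"})
```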

  2. Parameter optimization in biased decoy-state quantum key distribution with both source errors and statistical fluctuations

    NASA Astrophysics Data System (ADS)

    Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin

    2017-10-01

    The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model of full parameter optimization in biased decoy-state QKD with phase-randomized sources. We then adopt this model to carry out simulations of two widely used sources: the weak coherent source (WCS) and the heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. Moreover, when source errors and statistical fluctuations are taken into account, the performance of decoy-state QKD using an HSPS suffers less than that of decoy-state QKD using a WCS.

  3. A preliminary assessment of geologic framework and sediment thickness studies relevant to prospective US submission on extended continental shelf

    USGS Publications Warehouse

    Hutchinson, Deborah R.; Childs, Jonathan R.; Hammar-Klose, Erika; Dadisman, Shawn; Edgar, N. Terrence; Barth, Ginger A.

    2004-01-01

    Under the provisions of Articles 76 and 77 of the United Nations Convention on the Law of the Sea (UNCLOS), coastal States have sovereign rights over the continental shelf territory beyond 200 nautical miles (nm) from the baseline from which the territorial sea is measured, if certain conditions are met regarding the geologic and physiographic character of the legal continental shelf as defined in those articles. These claims to an extended continental shelf must be supported by relevant bathymetric, geophysical and geological data according to guidelines established by the Commission on the Limits of the Continental Shelf (CLCS, 1999). In anticipation of the United States becoming party to UNCLOS, Congress in 2001 directed the Joint Hydrographic Center/Center for Coastal and Ocean Mapping at the University of New Hampshire to conduct a study to evaluate data relevant to establishing the outer limit of the juridical continental shelf beyond 200 nm and to recommend what additional data might be needed to substantiate such an outer limit (Mayer and others, 2002). The resulting report produced an impressive and sophisticated GIS database of data sources. Because of the short time allowed to complete the report, all seismic reflection data were classified together; the authors therefore recommended that the USGS perform additional analysis on seismic and related data holdings. The results of this additional analysis are the substance of this report, including the status of geologic framework, sediment isopach research, and resource potential in the eight regions identified by Mayer and others (2002) where analysis of seismic data might be crucial for establishing an outer limit. Seismic reflection and refraction data are essential in determining sediment thickness, one of the criteria used in establishing the outer limits of the juridical continental shelf. 
Accordingly, the initial task has been to inventory public-domain seismic data sources, primarily those regionally extensive data held within the Department of the Interior (DOI). The numerous seismic reflection and refraction surveys collected prior to 1970 by academic and governmental institutions are generally not included in this compilation, except where they provide unique data in a region. These data sources were omitted from this report because they were deemed to be of insufficient quality (poorly navigated or low resolution) to meet the CLCS standards for a submission, or they were redundant with higher-quality, more modern data. Hence, this report attempts to identify those data sets of highest utility for establishing the outer limits of the juridical continental shelf. If there was any ambiguity or uncertainty about the relevance of a data set to a continental shelf submission, whether by its quality, location, or another parameter, it was included in this compilation. This report does not summarize other geophysical data (such as marine magnetics or gravity) that might be relevant to understanding crustal provenance and geological continuity. Detailed metadata tables and maps are included to facilitate the location and utilization of these sources when a comprehensive assessment ("desktop study") is undertaken.

  4. Analogical and category-based inference: a theoretical integration with Bayesian causal models.

    PubMed

    Holyoak, Keith J; Lee, Hee Seung; Lu, Hongjing

    2010-11-01

    A fundamental issue for theories of human induction is to specify constraints on potential inferences. For inferences based on shared category membership, an analogy, and/or a relational schema, it appears that the basic goal of induction is to make accurate and goal-relevant inferences that are sensitive to uncertainty. People can use source information at various levels of abstraction (including both specific instances and more general categories), coupled with prior causal knowledge, to build a causal model for a target situation, which in turn constrains inferences about the target. We propose a computational theory in the framework of Bayesian inference and test its predictions (parameter-free for the cases we consider) in a series of experiments in which people were asked to assess the probabilities of various causal predictions and attributions about a target on the basis of source knowledge about generative and preventive causes. The theory proved successful in accounting for systematic patterns of judgments about interrelated types of causal inferences, including evidence that analogical inferences are partially dissociable from overall mapping quality.
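For the generative and preventive causes discussed in this record, a standard Bayesian causal-model likelihood is the noisy-OR with independently acting preventers; whether this matches the authors' exact parameterization is an assumption of this sketch:

```python
def p_effect(gen_powers, prev_powers, background=0.0):
    # Noisy-OR: each generative cause independently produces the effect
    # with its causal power; each preventive cause independently blocks
    # it.  'background' is an optional always-present generative cause.
    p_fail = 1.0 - background
    for w in gen_powers:
        p_fail *= 1.0 - w
    p = 1.0 - p_fail
    for v in prev_powers:
        p *= 1.0 - v
    return p

# One generative cause (power 0.8) and one preventive cause (0.5):
print(p_effect([0.8], [0.5]))   # 0.4
```

Given causal powers transferred from a source analog, such a likelihood lets the model assign probabilities to predictions and attributions about the target without any free parameters, as in the experiments described above.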

  5. Hydrodynamical and Spectral Simulations of HMXB Winds

    NASA Astrophysics Data System (ADS)

    Mauche, Christopher W.; Liedahl, D. A.; Plewa, T.

    2006-09-01

    We describe the results of a research program to develop improved models of the X-ray spectra of cosmic sources such as X-ray binaries, CVs, and AGN in which UV line-driven mass flows are photoionized by an X-ray source. Work to date has focused on high-mass X-ray binaries (HMXBs) and on Vela X-1 in particular, for which there are high-quality Chandra HETG spectra in the archive. Our research program combines FLASH hydrodynamic calculations, XSTAR photoionization calculations, HULLAC atomic data, improved calculations of the line force multiplier, X-ray emission models appropriate to X-ray photoionized plasmas, and Monte Carlo radiation transport. We will present movies of the relevant physical quantities (density, temperature, ionization parameter, velocity) from a FLASH two-dimensional time-dependent simulation of Vela X-1, maps showing the emissivity distributions of the X-ray emission lines, and a preliminary comparison of the resulting synthetic spectra to the Chandra HETG spectra. This work was performed under the auspices of the U.S. Department of Energy by University of California, Lawrence Livermore National Laboratory under Contract W-7405-Eng-48.

  6. Emittance and lifetime measurement with damping wigglers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, G. M.; Shaftan, T., E-mail: shaftan@bnl.gov; Cheng, W. X.

    National Synchrotron Light Source II (NSLS-II) is a new third-generation storage ring light source at Brookhaven National Laboratory. The storage ring design calls for small horizontal emittance (<1 nm-rad) and diffraction-limited vertical emittance at 12 keV (8 pm-rad). Achieving low values of the beam size will enable novel user experiments with nm-range spatial and meV energy resolution. The high-brightness NSLS-II lattice has been realized by implementing 30-cell double-bend achromatic cells producing a horizontal emittance of 2 nm-rad and then halving it further by using several Damping Wigglers (DWs). This paper is focused on characterization of the DW effects on the storage ring performance, namely, on reduction of the beam emittance and the corresponding changes in the energy spread and beam lifetime. The relevant beam parameters have been measured by the X-ray pinhole camera, beam position monitors, beam filling pattern monitor, and current transformers. In this paper, we compare the measured results of the beam performance with analytic estimates for the complement of the 3 DWs installed at the NSLS-II.

  7. Beam-Plasma Interaction Experiments on the Princeton Advanced Test Stand

    NASA Astrophysics Data System (ADS)

    Stepanov, A.; Gilson, E. P.; Grisham, L.; Kaganovich, I. D.; Davidson, R. C.

    2011-10-01

    The Princeton Advanced Test Stand (PATS) is a compact experimental facility for studying the fundamental physics of intense beam-plasma interactions relevant to the Neutralized Drift Compression Experiment - II (NDCX-II). The PATS facility consists of a 100 keV ion beam source mounted on a six-foot-long vacuum chamber with numerous ports for diagnostic access. A 100 keV Ar+ beam is launched into a volumetric plasma, which is produced by a ferroelectric plasma source (FEPS). Beam diagnostics upstream and downstream of the FEPS allow for detailed studies of the effects that the plasma has on the beam. This setup is designed for studying the dependence of charge and current neutralization and beam emittance growth on the beam and plasma parameters. This work reports initial measurements of beam quality produced by the extraction electrodes that were recently installed on the PATS device. The transverse beam phase space is measured with double-slit emittance scanners, and the experimental results are compared to WARP simulations of the extraction system. This research is supported by the U.S. Department of Energy.

  8. Investigation of soft component in cosmic ray detection

    NASA Astrophysics Data System (ADS)

    Oláh, László; Varga, Dezső

    2017-07-01

    Cosmic ray detection is a research area which finds various applications in tomographic imaging of large objects. In such applications, the background sources which contaminate the cosmic muon signal require a good understanding of the creation processes, as well as reliable simulation frameworks with high predictive power. One of the main background sources is the "soft component", that is, electrons and positrons. In this paper a simulation framework based on GEANT4 has been established to pin down the key features of the soft component. We have found that the electron and positron flux shows a remarkable invariance against various model parameters, including the muon emission altitude or the primary particle energy distribution. The correlation between simultaneously arriving particles has been quantitatively investigated, demonstrating that electrons and positrons tend to arrive within a close distance and with a low relative angle. This feature, which is highly relevant for counting detectors, has been experimentally verified under open sky and at shallow depth underground. The simulation results have been compared with existing measurements as well as with other simulation programs.

  9. Joint Inversion of Earthquake Source Parameters with local and teleseismic body waves

    NASA Astrophysics Data System (ADS)

    Chen, W.; Ni, S.; Wang, Z.

    2011-12-01

    In the classical source parameter inversion algorithm of CAP (Cut and Paste method, by Zhao and Helmberger), waveform data at near distances (typically less than 500 km) are partitioned into Pnl and surface waves to account for uncertainties in the crustal models and the different amplitude weights of body and surface waves. The classical CAP algorithm has proven effective for resolving source parameters (focal mechanism, depth and moment) for earthquakes well recorded on relatively dense seismic networks. However, for regions covered with sparse stations, it is challenging to achieve precise source parameters. In this case, a moderate earthquake of ~M6 is usually recorded on only one or two local stations with epicentral distances less than 500 km. Fortunately, an earthquake of ~M6 can be well recorded on global seismic networks. Since the ray paths for teleseismic and local body waves sample different portions of the focal sphere, the combination of teleseismic and local body wave data helps constrain source parameters better. Here we present a new CAP method (CAPjoint), which exploits both teleseismic body waveforms (P and SH waves) and local waveforms (Pnl, Rayleigh and Love waves) to determine source parameters. For an earthquake in Nevada that is well recorded by a dense local network (USArray stations), we compare the results from CAPjoint with those from the traditional CAP method involving only local waveforms, and use bootstrapping statistics to show that the results derived by CAPjoint are stable and reliable. Even with only one local station included in the joint inversion, the accuracy of source parameters such as moment and strike is considerably improved.
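The benefit of combining data sets that sample different parts of the focal sphere can be illustrated with a toy joint grid search. Everything below is invented for illustration (the radiation-pattern stand-in, the take-off angles, and the equal weights); it is not the CAPjoint implementation, only a sketch of a weighted joint misfit minimization.

```python
import numpy as np

# Hypothetical stand-in for a radiation-pattern amplitude at one station.
def synthetic(strike_deg, take_off_deg):
    return np.sin(2.0 * np.radians(strike_deg)) * np.cos(np.radians(take_off_deg))

TRUE_STRIKE = 40.0
LOCAL_TAKEOFFS = (30.0, 45.0)          # one or two local stations
TELE_TAKEOFFS = (15.0, 20.0, 25.0)     # teleseismic rays leave the source steeply

local_obs = np.array([synthetic(TRUE_STRIKE, t) for t in LOCAL_TAKEOFFS])
tele_obs = np.array([synthetic(TRUE_STRIKE, t) for t in TELE_TAKEOFFS])

def joint_misfit(strike, w_local=1.0, w_tele=1.0):
    # Weighted sum of squared waveform-amplitude residuals from both data sets.
    loc = np.array([synthetic(strike, t) for t in LOCAL_TAKEOFFS])
    tel = np.array([synthetic(strike, t) for t in TELE_TAKEOFFS])
    return (w_local * np.sum((loc - local_obs) ** 2)
            + w_tele * np.sum((tel - tele_obs) ** 2))

# Grid search over strike, as a stand-in for the full mechanism search.
strikes = np.arange(0.0, 90.0, 1.0)
best_strike = strikes[np.argmin([joint_misfit(s) for s in strikes])]
```

In this toy setting the joint misfit is minimized at the true strike; in the real problem the joint data additionally reduce the trade-offs among mechanism, depth and moment.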

  10. A simple distributed sediment delivery approach for rural catchments

    NASA Astrophysics Data System (ADS)

    Reid, Lucas; Scherer, Ulrike

    2014-05-01

    The transfer of sediments from source areas to surface waters is a complex process. In process-based erosion models, sediment input is thus quantified by representing all relevant sub-processes such as detachment, transport and deposition of sediment particles along the flow path to the river. A successful application of these models requires, however, a large amount of spatially highly resolved data on physical catchment characteristics, which is only available for a few well-examined small catchments. For lack of appropriate models, the empirical Universal Soil Loss Equation (USLE) is widely applied to quantify the sediment production in meso- to large-scale basins. As the USLE provides long-term mean soil loss rates, it is often combined with spatially lumped models to estimate the sediment delivery ratio (SDR). In these models, the SDR is related to data on morphological characteristics of the catchment such as average local relief, drainage density, proportion of depressions or soil texture. Some approaches include the relative distance between sediment source areas and the river channels. However, several studies showed that spatially lumped parameters describing the morphological characteristics are only of limited value for representing the factors influencing sediment transport at the catchment scale. Sediment delivery is controlled by the location of the sediment source areas in the catchment and the morphology along the flow path to the surface water bodies. This complex interaction of spatially varied physiographic characteristics cannot be adequately represented by lumped morphological parameters. The objective of this study is to develop a simple but spatially distributed approach to quantify the sediment delivery ratio by considering the characteristics of the flow paths in a catchment. We selected a small catchment located in an intensively cultivated loess region in Southwest Germany as the study area for the development of the SDR approach.
The flow pathways were extracted in a geographic information system. Then the sediment delivery ratio for each source area was determined using an empirical approach considering the slope, morphology and land use properties along the flow path. As a benchmark for the calibration of the model parameters we used results of a detailed process based erosion model available for the study area. Afterwards the approach was tested in larger catchments located in the same loess region.
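The flow-path idea above can be sketched as a per-cell product of transfer fractions accumulated from the source area down to the channel. The coefficients and the slope/land-use formula below are invented placeholders, not the calibrated values from the study.

```python
# Toy flow-path sediment delivery ratio: each cell along the path passes on a
# fraction of incoming sediment that depends on its slope and land use.
LAND_USE_FACTOR = {"arable": 0.95, "grass": 0.80, "forest": 0.60}

def cell_transfer(slope, land_use):
    """Fraction of sediment passed through one cell (invented formula)."""
    slope_factor = min(1.0, 0.5 + slope)  # steeper cells deliver more
    return slope_factor * LAND_USE_FACTOR[land_use]

def sdr(flow_path):
    """SDR of a source area = product of transfer fractions along its path."""
    ratio = 1.0
    for slope, land_use in flow_path:
        ratio *= cell_transfer(slope, land_use)
    return ratio

# A source cell draining through a grass strip and a forest buffer:
path = [(0.10, "arable"), (0.05, "grass"), (0.02, "forest")]
```

The multiplicative form makes the key property of the approach explicit: a vegetated buffer anywhere along the path lowers the delivery of the whole source area, something a lumped catchment-average parameter cannot express.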

  11. Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.

    2008-05-01

    The estimation of the area source pollutant strength is a relevant issue for the atmospheric environment. This characterizes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved by using a supervised artificial neural network: a multi-layer perceptron. The connection weights of the neural network are computed with the delta rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization approach, whose objective function is given by the squared difference between the measured pollutant concentrations and the mathematical model predictions, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. The second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
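The regularized inverse step can be sketched in a few lines. The source-receptor matrix below is invented, noise-free synthetic data are used, and plain second-order Tikhonov smoothing with the closed-form normal equations stands in for the maximum entropy regularization and the quasi-Newton/PSO minimization.

```python
import numpy as np

# Invented source-receptor transition matrix: 6 receptors, 4 source cells.
G = np.array([
    [1.00, 0.50, 0.20, 0.10],
    [0.30, 1.00, 0.50, 0.20],
    [0.10, 0.30, 1.00, 0.50],
    [0.05, 0.10, 0.30, 1.00],
    [0.50, 0.20, 0.10, 0.05],
    [0.20, 0.50, 0.30, 0.10],
])
m_true = np.array([1.0, 0.5, 0.0, 2.0])   # unknown area-source strengths
d = G @ m_true                             # noise-free synthetic concentrations

# Second-difference (curvature) regularization operator and a hand-picked
# regularization parameter (the abstract uses the L-curve to choose it).
L = np.array([[1.0, -2.0, 1.0, 0.0],
              [0.0, 1.0, -2.0, 1.0]])
lambda_ = 1e-4

# Minimizer of ||G m - d||^2 + lambda * ||L m||^2 via the normal equations.
m_est = np.linalg.solve(G.T @ G + lambda_ * (L.T @ L), G.T @ d)
```

With a small regularization parameter and clean data the estimate essentially recovers the true source vector; the regularization term matters once noise is added and the normal equations become ill-conditioned.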

  12. Analysis and optimization of minor actinides transmutation blankets with regards to neutron and gamma sources

    NASA Astrophysics Data System (ADS)

    Kooyman, Timothée; Buiron, Laurent; Rimpault, Gérald

    2017-09-01

    Heterogeneous loading of minor actinides in radial blankets is a potential solution for implementing minor actinides transmutation in fast reactors. However, to compensate for the lower flux level experienced by the blankets, the fraction of minor actinides loaded in the blankets must be increased to maintain acceptable performance. This severely increases the decay heat and neutron source of the blanket assemblies, both before and after irradiation, by more than an order of magnitude in the case of the neutron source, for instance. We propose here an optimization methodology for the blanket design with regard to various parameters, such as the local spectrum or the mass to be loaded, with the objective of minimizing the final neutron source of the spent assembly while maximizing the transmutation performance of the blankets. In a first stage, an analysis of the various contributors to the long- and short-term neutron and gamma sources is carried out; in a second stage, relevant estimators are designed for use in the effective optimization process, which is done in the last step. A comparison with core calculations is finally done for completeness and validation purposes. It is found that the use of a moderated spectrum in the blankets can be beneficial in terms of the final neutron and gamma sources without impacting minor actinides transmutation performance, compared to the more energetic spectrum that could be achieved using metallic fuel, for instance. It is also confirmed that, if possible, the use of hydrides as moderating material in the blankets is a promising option to limit the total minor actinides inventory in the fuel cycle. If not, it appears that the focus should be put on an increased residence time for the blankets rather than an increase in the acceptable neutron source for handling and reprocessing.

  13. The Propagation of Cosmic Rays from the Galactic Wind Termination Shock: Back to the Galaxy?

    NASA Astrophysics Data System (ADS)

    Merten, Lukas; Bustard, Chad; Zweibel, Ellen G.; Becker Tjus, Julia

    2018-05-01

    Although several theories exist for the origin of cosmic rays (CRs) in the region between the spectral “knee” and “ankle,” this problem is still unsolved. A variety of observations suggest that the transition from Galactic to extragalactic sources occurs in this energy range. In this work, we examine whether a Galactic wind that eventually forms a termination shock far outside the Galactic plane can contribute as a possible source to the observed flux in the region of interest. Previous work by Bustard et al. estimated that particles can be accelerated to energies above the “knee,” up to R_max = 10^16 eV, for parameters drawn from a model of a Milky Way wind. A remaining question is whether the accelerated CRs can propagate back into the Galaxy. To answer this crucial question, we simulate the propagation of the CRs using the low-energy extension of the CRPropa framework, based on the solution of the transport equation via stochastic differential equations. The setup includes all relevant processes, including three-dimensional anisotropic spatial diffusion, advection, and corresponding adiabatic cooling. We find that, assuming realistic parameters for the shock evolution, a possible Galactic termination shock can contribute significantly to the energy budget in the “knee” region and above. We estimate the resulting produced neutrino fluxes and find them to be below measurements from IceCube and limits by KM3NeT.
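The stochastic-differential-equation view of transport can be sketched in one dimension with an Euler-Maruyama integrator: pseudo-particles advect with the wind and diffuse spatially. The wind speed, diffusion coefficient, and time step below are arbitrary illustrative values, not the CRPropa setup.

```python
import numpy as np

rng = np.random.default_rng(42)
u = 1.0            # advection (wind) speed, arbitrary units
kappa = 0.5        # spatial diffusion coefficient, arbitrary units
dt = 0.01          # integration time step
n_steps = 1000
n_particles = 2000

x = np.zeros(n_particles)   # pseudo-particles start at the shock (x = 0)
for _ in range(n_steps):
    # Euler-Maruyama step: deterministic advection plus stochastic diffusion.
    x += u * dt + np.sqrt(2.0 * kappa * dt) * rng.standard_normal(n_particles)

t = n_steps * dt   # elapsed time; the mean drifts as u*t, spread as sqrt(2*kappa*t)
```

Whether particles can propagate back against the wind is, in this picture, a competition between the deterministic drift term and the stochastic diffusion term, which is why the diffusion coefficient (and its anisotropy in 3D) controls the answer.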

  14. Target detection, shape discrimination, and signal characteristics of an echolocating false killer whale (Pseudorca crassidens).

    PubMed

    Brill, R L; Pawloski, J L; Helweg, D A; Au, W W; Moore, P W

    1992-09-01

    This study demonstrated the ability of a false killer whale (Pseudorca crassidens) to discriminate between two targets and investigated the parameters of the whale's emitted signals for changes related to test conditions. Target detection performance comparable to the bottlenose dolphin's (Tursiops truncatus) has previously been reported for echolocating false killer whales. No other echolocation capabilities have been reported. A false killer whale, naive to conditioned echolocation tasks, was initially trained to detect a cylinder in a "go/no-go" procedure over ranges of 3 to 8 m. The transition from a detection task to a discrimination task was readily achieved by introducing a spherical comparison target. Finally, the cylinder was successfully compared to spheres of two different sizes and target strengths. Multivariate analyses were used to evaluate the parameters of emitted signals. Duncan's multiple range tests showed significant decreases (df = 185, p < 0.05) in both source level and bandwidth in the transition from detection to discrimination. Analysis of variance revealed a significant decrease in the number of clicks over test conditions [F(5,26) = 5.23, p < 0.0001]. These data suggest that the whale relied on cues relevant to target shape as well as target strength, that changes in source level and bandwidth were task-related, that the decrease in clicks was associated with learning experience, and that Pseudorca's ability to discriminate shapes using echolocation may be comparable to that of Tursiops truncatus.

  15. Experimental analysis of the characteristics of artificial vocal folds.

    PubMed

    Misun, Vojtech; Svancara, Pavel; Vasek, Martin

    2011-05-01

    Specialized literature presents a number of models describing the function of the vocal folds. In most of those models, an emphasis is placed on the air flowing through the glottis and, further, on the effect of the parameters of the air alone (its mass, speed, and so forth). The article focuses on the constructional definition of artificial vocal folds and their experimental analysis. The analysis is conducted for voiced source voice phonation and for the changing mean value of the subglottal pressure. The article further deals with the analysis of the pressure of the airflow through the vocal folds, which is cut (separated) into individual pulses by the vibrating vocal folds. The analysis results show that air pulse characteristics are relevant to voice generation, as they are produced by the flowing air and vibrating vocal folds. A number of artificial vocal folds have been constructed to date, and the aforementioned view of their phonation is confirmed by their analysis. The experiments have confirmed that man is able to consciously affect only two parameters of the source voice, that is, its fundamental frequency and voice intensity. The main forces acting on the vocal folds during phonation are as follows: subglottal air pressure and elastic and inertia forces of the vocal folds' structure. The correctness of the function of the artificial vocal folds is documented by the experimental verification of the spectra of several types of artificial vocal folds. Copyright © 2011 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  16. Probabilistic calibration of the SPITFIRE fire spread model using Earth observation data

    NASA Astrophysics Data System (ADS)

    Gomez-Dans, Jose; Wooster, Martin; Lewis, Philip; Spessa, Allan

    2010-05-01

    There is great interest in understanding how fire affects vegetation distribution and dynamics in the context of global vegetation modelling. A way to include these effects is through the development of embedded fire spread models. However, fire is a complex phenomenon and thus difficult to model. Statistical models based on fire return intervals or fire danger indices need large amounts of data for calibration, and are often prisoner to the epoch they were calibrated to. Mechanistic models, such as SPITFIRE, try to model the complete fire phenomenon based on simple physical rules, making these models mostly independent of calibration data. However, the processes expressed in models such as SPITFIRE require many parameters. These parametrisations are often reliant on site-specific experiments, or in some other cases, parameters might not be measured directly. Additionally, in many cases, changes in temporal and/or spatial resolution result in parameters becoming effective. To address the difficulties with parametrisation and the often-used fitting methodologies, we propose using a probabilistic framework to calibrate some areas of the SPITFIRE fire spread model. We calibrate the model against Earth Observation (EO) data, a global and ever-expanding source of relevant data. We develop a methodology that incorporates the limitations of the EO data and reasonable prior values for the parameters, and that results in distributions of parameters, which can be used to infer uncertainty due to parameter estimates. Additionally, the covariance structure of parameters and observations is also derived, which can help inform data gathering efforts and model development, respectively. For this work, we focus on Southern African savannas, an important ecosystem for fire studies, and one with a good amount of EO data relevant to fire studies. As calibration datasets, we use burned area data, estimated numbers of fires and vegetation moisture dynamics.
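The probabilistic calibration idea can be sketched with a one-parameter Metropolis sampler against synthetic observations. The linear stand-in for the fire spread model, the flat prior, and all numbers below are invented; the real calibration targets burned area and fire counts with a far richer model.

```python
import numpy as np

rng = np.random.default_rng(1)

def spread_model(theta, wind):
    # Stand-in for the fire spread model: spread rate linear in wind speed.
    return theta * wind

wind = np.array([1.0, 2.0, 3.0, 4.0])
obs = spread_model(2.5, wind) + 0.1 * rng.standard_normal(4)  # synthetic "EO" data
sigma = 0.1                                                   # observation error

def log_post(theta):
    if not 0.0 < theta < 10.0:           # flat prior on (0, 10)
        return -np.inf
    return -0.5 * np.sum((spread_model(theta, wind) - obs) ** 2) / sigma ** 2

# Random-walk Metropolis: the chain samples the posterior distribution of theta.
theta, chain = 1.0, []
for _ in range(5000):
    proposal = theta + 0.1 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(proposal) - log_post(theta):
        theta = proposal
    chain.append(theta)
posterior = np.array(chain[1000:])       # discard burn-in
```

The output is a distribution, not a point estimate, so the spread of `posterior` directly expresses the parameter uncertainty the abstract refers to, and running several parameters jointly yields the parameter covariance structure as well.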

  17. How robust are the natural history parameters used in chlamydia transmission dynamic models? A systematic review.

    PubMed

    Davies, Bethan; Anderson, Sarah-Jane; Turner, Katy M E; Ward, Helen

    2014-01-30

    Transmission dynamic models linked to economic analyses often form part of the decision making process when introducing new chlamydia screening interventions. Outputs from these transmission dynamic models can vary depending on the values of the parameters used to describe the infection. Therefore these values can have an important influence on policy and resource allocation. The risk of progression from infection to pelvic inflammatory disease has been extensively studied but the parameters which govern the transmission dynamics are frequently neglected. We conducted a systematic review of transmission dynamic models linked to economic analyses of chlamydia screening interventions to critically assess the source and variability of the proportion of infections that are asymptomatic, the duration of infection and the transmission probability. We identified nine relevant studies in Pubmed, Embase and the Cochrane database. We found that there is a wide variation in their natural history parameters, including an absolute difference in the proportion of asymptomatic infections of 25% in women and 75% in men, a six-fold difference in the duration of asymptomatic infection and a four-fold difference in the per act transmission probability. We consider that much of this variation can be explained by a lack of consensus in the literature. We found that a significant proportion of parameter values were referenced back to the early chlamydia literature, before the introduction of nucleic acid modes of diagnosis and the widespread testing of asymptomatic individuals. In conclusion, authors should use high quality contemporary evidence to inform their parameter values, clearly document their assumptions and make appropriate use of sensitivity analysis. This will help to make models more transparent and increase their utility to policy makers.

  18. Improvements in Calibration and Analysis of the CTBT-relevant Radioxenon Isotopes with High Resolution SiPIN-based Electron Detectors

    NASA Astrophysics Data System (ADS)

    Khrustalev, K.

    2016-12-01

    The current process for calibrating the beta-gamma detectors used for radioxenon isotope measurements for CTBT purposes is laborious and time consuming. It uses a combination of point sources and gaseous sources, resulting in differences between energy and resolution calibrations. The emergence of high-resolution SiPIN-based electron detectors allows improvements to the calibration and analysis process. Thanks to the high electron resolution of SiPIN detectors (~8-9 keV at 129 keV) compared to plastic scintillators (~35 keV at 129 keV), many more CE peaks (from radioxenon and radon progenies) can be resolved and used for energy and resolution calibration in the energy range of the CTBT-relevant radioxenon isotopes. The long-term stability of the SiPIN energy calibration allows one to significantly reduce the time of the QC measurements needed for checking the stability of the E/R calibration. The currently used second-order polynomials for the E/R calibration fitting are unphysical and shall be replaced by a linear energy calibration for NaI and SiPIN, owing to the high linearity and dynamic range of modern digital DAQ systems, and the resolution calibration functions shall be modified to reflect the underlying physical processes. Alternatively, one can completely abandon the use of fitting functions and use only point values of E/R (similar to the efficiency calibration currently used) at the energies relevant for the isotopes of interest (ROI - Regions Of Interest). The current analysis treats the detector as a set of single-channel analysers, with an established set of coefficients relating the positions of the ROIs to the positions of the QC peaks. The analysis of the spectra can be made more robust using peak and background fitting in the ROIs, with a single free parameter (the peak area) for the potential peaks from the known isotopes and a fixed set of E/R calibration values.
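A linear energy calibration of the kind advocated above amounts to a two-parameter least-squares fit to conversion-electron peak centroids. The channel and energy values below are synthetic (generated from an assumed line E = 0.5 * channel + 2.0), purely to illustrate the fit.

```python
import numpy as np

# Known CE energies (keV) and synthetic peak centroids lying on an assumed
# line E = 0.5 * channel + 2.0 (all values invented for illustration).
energies = np.array([45.0, 129.4, 199.8, 320.0])
channels = (energies - 2.0) / 0.5

# Linear energy calibration: E = a * channel + b, fitted by least squares.
a, b = np.polyfit(channels, energies, 1)

def channel_to_energy(ch):
    return a * ch + b
```

With a linear model the two coefficients are directly interpretable (gain and offset), which is what makes long-term stability checks against a handful of QC peaks straightforward.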

  19. Modeling and measuring the visual detection of ecologically relevant motion by an Anolis lizard.

    PubMed

    Pallus, Adam C; Fleishman, Leo J; Castonguay, Philip M

    2010-01-01

    Motion in the visual periphery of lizards, and other animals, often causes a shift of visual attention toward the moving object. This behavioral response must be more responsive to relevant motion (predators, prey, conspecifics) than to irrelevant motion (windblown vegetation). Early stages of visual motion detection rely on simple local circuits known as elementary motion detectors (EMDs). We presented a computer model consisting of a grid of correlation-type EMDs, with videos of natural motion patterns, including prey, predators and windblown vegetation. We systematically varied the model parameters and quantified the relative response to the different classes of motion. We carried out behavioral experiments with the lizard Anolis sagrei and determined that their visual response could be modeled with a grid of correlation-type EMDs with a spacing parameter of 0.3 degrees visual angle, and a time constant of 0.1 s. The model with these parameters gave substantially stronger responses to relevant motion patterns than to windblown vegetation under equivalent conditions. However, the model is sensitive to local contrast and viewer-object distance. Therefore, additional neural processing is probably required for the visual system to reliably distinguish relevant from irrelevant motion under a full range of natural conditions.
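A correlation-type EMD of the kind used in the model can be sketched as a Hassenstein-Reichardt correlator: one input arm is low-pass filtered and multiplied with the neighbouring undelayed input, and opponent subtraction yields a direction-selective output. The drifting-grating stimulus and discretization below are invented; the spacing and time constant echo the fitted values from the abstract but are here in arbitrary spatial units, not degrees of visual angle.

```python
import numpy as np

dt = 0.001          # simulation time step, s
tau = 0.1           # low-pass time constant, s (as in the fitted model)
t = np.arange(0.0, 2.0, dt)

def receptor(x, speed):
    # Luminance of a drifting sinusoidal grating sampled at position x.
    return np.sin(2.0 * np.pi * (x - speed * t))

def lowpass(sig):
    # First-order low-pass filter, forward-Euler discretization.
    out = np.zeros_like(sig)
    for i in range(1, len(sig)):
        out[i] = out[i - 1] + dt / tau * (sig[i - 1] - out[i - 1])
    return out

def emd_response(speed, spacing=0.3):
    # Opponent Reichardt correlator: delayed arm times neighbour, both ways.
    a, b = receptor(0.0, speed), receptor(spacing, speed)
    return np.mean(lowpass(a) * b - lowpass(b) * a)
```

The opponent structure makes the mean output reverse sign when the grating direction reverses, which is the direction selectivity the behavioural experiments probe; sensitivity to contrast and distance follows because the output scales with the product of the two input amplitudes.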

  20. Towards a generic procedure for the detection of relevant contaminants from waste electric and electronic equipment (WEEE) in plastic food-contact materials: a review and selection of key parameters.

    PubMed

    Puype, Franky; Samsonek, Jiří; Vilímková, Věra; Kopečková, Šárka; Ratiborská, Andrea; Knoop, Jan; Egelkraut-Holtus, Marion; Ortlieb, Markus; Oppermann, Uwe

    2017-10-01

    Recently, traces of brominated flame retardants (BFRs) have been detected in black plastic food-contact materials (FCMs), indicating the presence of recycled plastics coming mainly from waste electric and electronic equipment (WEEE), as BFRs are among the main additives in electric applications. In order to evaluate the presence of WEEE in plastic FCMs efficiently and preliminarily in situ, a generic procedure has been proposed based on defined parameters, each with an associated importance level. This can be achieved by combining parameters such as the overall bromine (Br) and antimony (Sb) content; the additive and reactive BFR, rare earth element (REE) and WEEE-relevant elemental content; and additionally the polymer purity. In most cases, the WEEE contamination could be confirmed by first combining X-ray fluorescence (XRF) spectrometry and thermal desorption/pyrolysis gas chromatography-mass spectrometry (GC-MS). The REE content did not give a full confirmation as to the source of contamination; for Sb, however, the opposite holds: elevated Sb content coincided with elevated Br signals. Therefore, Br first, followed by Sb, was used as a WEEE precursor, as both elements are used as synergistic flame-retardant systems. WEEE-specific REEs could be used for confirmation of small WEEE (sWEEE); however, this parameter should be interpreted with care. The polymer purity determined by Fourier-transform infrared spectrometry (FTIR) and pyrolysis GC-MS in many cases could not confirm WEEE-specific contamination; however, it can be used for purity measurements and for raising suspicion of the usage of recycled fractions (WEEE and non-WEEE) as a third-line confirmation. To the best of our knowledge, the addition of WEEE waste to plastic FCMs is illegal; however, due to a lack of screening mechanisms, such articles still break through onto the market, and our generic procedure therefore enables the quick and effective screening of suspicious samples.
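The tiered logic (Br as primary precursor, Sb as synergist only alongside Br, polymer purity as third-line evidence) can be sketched as a toy decision rule. The thresholds and scoring below are invented, not values from the paper.

```python
# Toy screening rule mirroring the tiered parameter logic described above.
def weee_suspicion(br_ppm, sb_ppm, polymer_pure):
    score = 0
    if br_ppm > 50.0:                      # elevated Br: primary WEEE precursor
        score += 2
    if sb_ppm > 30.0 and br_ppm > 50.0:    # Sb only meaningful alongside Br
        score += 1
    if not polymer_pure:                   # impure polymer: third-line evidence
        score += 1
    return score >= 3
```

Weighting Br highest and conditioning Sb on Br reflects the "importance level" idea: no single parameter confirms WEEE on its own, but their weighted combination flags a sample as suspicious.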

  1. Medical Subject Headings (MeSH) for indexing and retrieving open-source healthcare data.

    PubMed

    Marc, David T; Khairat, Saif S

    2014-01-01

    The US federal government initiated the Open Government Directive where federal agencies are required to publish high value datasets so that they are available to the public. Data.gov and the community site Healthdata.gov were initiated to disperse such datasets. However, data searches and retrieval for these sites are keyword driven and severely limited in performance. The purpose of this paper is to address the issue of extracting relevant open-source data by proposing a method of adopting the MeSH framework for indexing and data retrieval. A pilot study was conducted to compare the performance of traditional keywords to MeSH terms for retrieving relevant open-source datasets related to "mortality". The MeSH framework resulted in greater sensitivity with comparable specificity to the keyword search. MeSH showed promise as a method for indexing and retrieving data, yet future research should conduct a larger scale evaluation of the performance of the MeSH framework for retrieving relevant open-source healthcare datasets.
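The sensitivity/specificity comparison in the pilot study can be sketched on a hand-labelled toy corpus. The dataset names, relevance labels, and hit sets below are all invented; the point is only how concept-level (MeSH-like) matching can raise sensitivity over literal keyword matching.

```python
# Hand-labelled toy corpus: True marks datasets relevant to "mortality".
datasets = {
    "vital-stats-mortality": True,
    "death-rates-by-state": True,
    "hospital-charges": False,
    "flu-vaccination": False,
}
keyword_hits = {"vital-stats-mortality"}                        # literal match only
mesh_hits = {"vital-stats-mortality", "death-rates-by-state"}   # concept match

def sensitivity_specificity(hits):
    tp = sum(1 for d, rel in datasets.items() if rel and d in hits)
    tn = sum(1 for d, rel in datasets.items() if not rel and d not in hits)
    pos = sum(datasets.values())
    return tp / pos, tn / (len(datasets) - pos)
```

Here the concept-based index retrieves the relevant dataset whose title never contains the literal keyword, lifting sensitivity without any loss of specificity.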

  2. Physics Mining of Multi-Source Data Sets

    NASA Technical Reports Server (NTRS)

    Helly, John; Karimabadi, Homa; Sipes, Tamara

    2012-01-01

    Powerful new parallel data mining algorithms can produce diagnostic and prognostic numerical models and analyses from observational data. These techniques yield higher-resolution measures of environmental parameters than ever before by fusing synoptic imagery and time-series measurements. These techniques are general, relevant to all observational data, including raster, vector, and scalar, and can be applied in all Earth- and environmental science domains. Because they can be highly automated and are parallel, they scale to large spatial domains and are well suited to change and gap detection. This makes it possible to analyze spatial and temporal gaps in information, and facilitates within-mission replanning to optimize the allocation of observational resources. The basis of the innovation is the extension of a recently developed set of algorithms, packaged into MineTool, to multi-variate time-series data. MineTool is unique in that it automates the various steps of the data mining process, thus making it amenable to autonomous analysis of large data sets. Unlike techniques such as artificial neural nets, which yield a black-box solution, MineTool's outcome is always an analytical model in parametric form that expresses the output in terms of the input variables. This has the advantage that the derived equation can then be used to gain insight into the physical relevance and relative importance of the parameters and coefficients in the model. This is referred to as physics-mining of data. The capabilities of MineTool are extended to include both supervised and unsupervised algorithms, to handle multi-type data sets, and to run in parallel.
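The distinction between a black-box fit and a parametric, inspectable model can be sketched with a linear least-squares fit over explicit basis terms. The basis and synthetic data below are invented, and MineTool's actual algorithms are more elaborate; the point is only that the output is an equation whose coefficients can be read off and interpreted.

```python
import numpy as np

# Synthetic "observations" generated from a known parametric relationship.
x = np.linspace(0.0, 1.0, 50)
y = np.linspace(1.0, 2.0, 50)
target = 3.0 + 2.0 * x - 1.5 * x * y

# Explicit basis terms: the fitted model is target ~ c0 + c1*x + c2*x*y.
basis = np.column_stack([np.ones_like(x), x, x * y])
coeffs, *_ = np.linalg.lstsq(basis, target, rcond=None)
# coeffs now reads off as an analytical model whose terms carry physical meaning.
```

Because the result is an equation rather than opaque network weights, the relative magnitudes of the coefficients directly indicate which input terms dominate, which is the "physics-mining" advantage the abstract describes.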

  3. A new tool for rapid and automatic estimation of earthquake source parameters and generation of seismic bulletins

    NASA Astrophysics Data System (ADS)

    Zollo, Aldo

    2016-04-01

    RISS S.r.l. is a spin-off company recently born from the initiative of the research group constituting the Seismology Laboratory of the Department of Physics of the University of Naples Federico II. RISS is an innovative start-up, based on the decade-long experience of its members in earthquake monitoring systems and seismic data analysis, and has the major goal of transforming the most recent innovations of scientific research into technological products and prototypes. With this aim, RISS has recently started the development of new software, which is an elegant solution to manage and analyse seismic data and to create automatic earthquake bulletins. The software was initially developed to manage data recorded at the ISNet network (Irpinia Seismic Network), which is a network of seismic stations deployed in the Southern Apennines along the active fault system responsible for the November 23, 1980, MS 6.9 Irpinia earthquake. The software, however, is fully exportable and can be used to manage data from different networks, with any kind of station geometry or network configuration, and is able to provide reliable estimates of earthquake source parameters, whatever the background seismicity level of the area of interest. Here we present the real-time automated procedures and the analyses performed by the software package, which is essentially a chain of different modules, each of them aimed at the automatic computation of a specific source parameter. The P-wave arrival times are first detected on the real-time data stream, and then the software performs the phase association and earthquake binding. As soon as an event is automatically detected by the binder, the earthquake location coordinates and the origin time are rapidly estimated, using a probabilistic, non-linear, exploration algorithm. Then the software is able to automatically provide three different magnitude estimates.
First, the local magnitude (Ml) is computed, using the peak-to-peak amplitude of the equivalent Wood-Anderson displacement recordings. The moment magnitude (Mw) is then estimated from the inversion of displacement spectra. The duration magnitude (Md) is rapidly computed, based on a simple and automatic measurement of the seismic wave coda duration. Starting from the magnitude estimates, other relevant pieces of information are also computed, such as the corner frequency, the seismic moment, the source radius and the seismic energy. Ground-shaking maps on a Google map are produced, for peak ground acceleration (PGA), peak ground velocity (PGV) and instrumental intensity (in SHAKEMAP® format), or a plot of the measured peak ground values. Furthermore, based on a specific decisional scheme, the automatic discrimination between local earthquakes occurring within the network and regional/teleseismic events occurring outside the network is performed. Finally, for the largest events, if a sufficient number of P-wave polarity readings is available, the focal mechanism is also computed. For each event, all of the available pieces of information are stored in a local database, and the results of the automatic analyses are published on an interactive web page. "The Bulletin" shows a map with the event location and stations, as well as a table listing all the events with their associated parameters. The catalogue fields are the event ID, the origin date and time, latitude, longitude, depth, Ml, Mw, Md, the number of triggered stations, the S-displacement spectra, and shaking maps. Some of these entries also provide additional information, such as the focal mechanism (when available). The picked traces are uploaded to the database, and from the web interface of the Bulletin the traces can be downloaded for more specific analysis.
This innovative software is a smart solution, with a friendly and interactive interface, for high-level seismic data analysis, and it may be a relevant tool not only for seismologists but also for non-expert users interested in seismological data. It is a valid tool for the automatic analysis of background seismicity at different time scales and for the monitoring of both natural and induced seismicity.
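
As an illustration of the kind of computation performed by such an Ml module, the sketch below estimates a local magnitude from a peak Wood-Anderson amplitude using the Hutton and Boore (1987) Southern California attenuation relation. The actual calibration used by the software described above is not given here, so these coefficients are only representative.

```python
import math

def local_magnitude(amp_wa_mm, hypo_dist_km):
    """Illustrative Ml from peak Wood-Anderson amplitude (mm) and
    hypocentral distance (km), via the Hutton & Boore (1987) relation."""
    return (math.log10(amp_wa_mm)
            + 1.110 * math.log10(hypo_dist_km / 100.0)
            + 0.00189 * (hypo_dist_km - 100.0)
            + 3.0)

# At 100 km the relation reduces to log10(A) + 3.0 by construction.
print(round(local_magnitude(1.0, 100.0), 2))  # → 3.0
```

A tenfold increase in amplitude raises Ml by exactly one unit, as expected for a logarithmic magnitude scale.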

  4. A 2D forward and inverse code for streaming potential problems

    NASA Astrophysics Data System (ADS)

    Soueid Ahmed, A.; Jardani, A.; Revil, A.

    2013-12-01

    The self-potential method corresponds to the passive measurement of the electrical field in response to the occurrence of natural sources of current in the ground. One of these sources corresponds to the streaming current associated with the flow of the groundwater. We can therefore apply the self-potential method to recover non-intrusively some information regarding the groundwater flow. We first solve the forward problem, starting with the solution of the groundwater flow problem, then computing the source current density, and finally solving a Poisson equation for the electrical potential. We use the finite-element method to solve the relevant partial differential equations. In order to reduce the number of (petrophysical) model parameters required to solve the forward problem, we introduce an effective charge density tensor of the pore water, which can be determined directly from the permeability tensor for neutral pore waters. The second aspect of our work concerns the inversion of the self-potential data using Tikhonov regularization with smoothness and depth-weighting constraints. This approach accounts for the distribution of the electrical resistivity, which can be independently and approximately determined from electrical resistivity tomography. A numerical code, SP2DINV, has been implemented in Matlab to perform both the forward and inverse modeling. Three synthetic case studies are discussed.
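
The inverse step can be sketched generically. The snippet below solves a Tikhonov-regularized least-squares problem with a first-difference smoothness constraint; the operator G is a random stand-in, not the SP2DINV finite-element kernel, and the dimensions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward operator G mapping 20 source-current
# parameters to 30 surface self-potential readings (a stand-in for a
# finite-element sensitivity matrix).
G = rng.normal(size=(30, 20))
m_true = np.zeros(20)
m_true[8:12] = 1.0                              # compact source region
d = G @ m_true + 0.01 * rng.normal(size=30)     # noisy data

# First-difference operator enforcing smoothness (Tikhonov regularization).
L = np.diff(np.eye(20), axis=0)
lam = 1.0

# Normal equations: (G^T G + lam^2 L^T L) m = G^T d
m_est = np.linalg.solve(G.T @ G + lam**2 * L.T @ L, G.T @ d)
print(np.round(m_est[8:12], 1))
```

The trade-off parameter lam balances data fit against model smoothness; in practice it is chosen by criteria such as the L-curve.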

  5. Toward uniform implementation of parametric map Digital Imaging and Communication in Medicine standard in multisite quantitative diffusion imaging studies.

    PubMed

    Malyarenko, Dariya; Fedorov, Andriy; Bell, Laura; Prah, Melissa; Hectors, Stefanie; Arlinghaus, Lori; Muzi, Mark; Solaiyappan, Meiyappan; Jacobs, Michael; Fung, Maggie; Shukla-Dave, Amita; McManus, Kevin; Boss, Michael; Taouli, Bachir; Yankeelov, Thomas E; Quarles, Christopher Chad; Schmainda, Kathleen; Chenevert, Thomas L; Newitt, David C

    2018-01-01

    This paper reports on results of a multisite collaborative project launched by the MRI subgroup of Quantitative Imaging Network to assess current capability and provide future guidelines for generating a standard parametric diffusion map Digital Imaging and Communication in Medicine (DICOM) in clinical trials that utilize quantitative diffusion-weighted imaging (DWI). Participating sites used a multivendor DWI DICOM dataset of a single phantom to generate parametric maps (PMs) of the apparent diffusion coefficient (ADC) based on two models. The results were evaluated for numerical consistency among models and true phantom ADC values, as well as for consistency of metadata with attributes required by the DICOM standards. This analysis identified missing metadata descriptive of the sources for detected numerical discrepancies among ADC models. Instead of the DICOM PM object, all sites stored ADC maps as DICOM MR objects, generally lacking designated attributes and coded terms for quantitative DWI modeling. Source-image reference, model parameters, ADC units and scale, deemed important for numerical consistency, were either missing or stored using nonstandard conventions. Guided by the identified limitations, the DICOM PM standard has been amended to include coded terms for the relevant diffusion models. Open-source software has been developed to support conversion of site-specific formats into the standard representation.
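
For numerical context, a minimal mono-exponential ADC computation from a b=0 image and one diffusion-weighted image might look as follows; the values are synthetic, not the multisite phantom data.

```python
import numpy as np

def adc_map(s0, sb, b_value):
    """Mono-exponential ADC (mm^2/s) from a b=0 image s0 and a
    diffusion-weighted image sb, using S_b = S_0 * exp(-b * ADC)."""
    s0 = np.asarray(s0, dtype=float)
    sb = np.asarray(sb, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        adc = np.log(s0 / sb) / b_value
    # Zero out voxels without valid signal in either image.
    return np.where((s0 > 0) & (sb > 0), adc, 0.0)

# Synthetic 2x2 "phantom" with a true ADC of 1.1e-3 mm^2/s everywhere.
s0 = np.full((2, 2), 1000.0)
sb = s0 * np.exp(-900 * 1.1e-3)     # b = 900 s/mm^2
print(adc_map(s0, sb, 900))         # ≈ 1.1e-3 in every voxel
```

Storing such a map in a DICOM Parametric Map object then requires, among other attributes, coded units (mm^2/s) and any rescale slope applied to fit the values into the stored pixel type.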

  6. A new software for deformation source optimization, the Bayesian Earthquake Analysis Tool (BEAT)

    NASA Astrophysics Data System (ADS)

    Vasyura-Bathke, H.; Dutta, R.; Jonsson, S.; Mai, P. M.

    2017-12-01

    Modern studies of crustal deformation and the related source estimation, including magmatic and tectonic sources, increasingly use non-linear optimization strategies to estimate geometric and/or kinematic source parameters, often considering geodetic and seismic data jointly. Bayesian inference is increasingly being used for estimating posterior distributions of deformation source model parameters, given measured/estimated/assumed data and model uncertainties. For instance, some studies consider uncertainties of a layered medium and propagate these into source parameter uncertainties, while others use informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed to efficiently explore the high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational burden of these methods is high and estimation codes are rarely made available along with the published results. Even if the codes are accessible, it is usually challenging to assemble them into a single optimization framework, as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in deformation source estimation, we undertook the effort of developing BEAT, a Python package that comprises all of the above-mentioned features in a single programming environment. The package builds on the pyrocko seismological toolbox (www.pyrocko.org) and uses the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat), and we encourage and solicit contributions to the project. 
Here, we present our strategy for developing BEAT and show application examples, in particular the effect of including the model prediction uncertainty of the velocity model in subsequent source optimizations: full moment tensor, Mogi source, and a moderate strike-slip earthquake.
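
A toy version of such a Bayesian source optimization is sketched below: a plain Metropolis sampler recovering the depth and strength of a simplified point ("Mogi-like") source from synthetic surface displacements. BEAT itself uses pymc3 samplers and full elastic forward models; everything here, including the source kernel and parameter values, is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simplified point source: vertical surface displacement
# uz(r) = C * d / (d^2 + r^2)^(3/2), where C lumps the volume change
# and elastic constants. A stand-in for a full forward model.
def forward(c, d, r):
    return c * d / (d**2 + r**2) ** 1.5

r = np.linspace(0.5, 20.0, 40)                    # station distances (km)
c_true, d_true, sigma = 50.0, 4.0, 0.01
data = forward(c_true, d_true, r) + sigma * rng.normal(size=r.size)

def log_post(theta):
    """Log posterior: flat priors on c, d > 0, Gaussian noise."""
    c, d = theta
    if c <= 0 or d <= 0:
        return -np.inf
    resid = data - forward(c, d, r)
    return -0.5 * np.sum(resid**2) / sigma**2

# Plain Metropolis sampling of the posterior.
theta, lp = np.array([30.0, 2.0]), -np.inf
samples = []
for _ in range(20000):
    prop = theta + rng.normal(scale=[1.0, 0.1])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

post = np.array(samples[5000:])                   # discard burn-in
print(post.mean(axis=0))   # posterior mean near (50.0, 4.0)
```

The posterior spread, not shown here, is what distinguishes the Bayesian treatment from a point estimate: it quantifies how well the data constrain each source parameter.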

  7. Multidimensional Characterization and Differentiation of Neurons in the Anteroventral Cochlear Nucleus

    PubMed Central

    Typlt, Marei; Englitz, Bernhard; Sonntag, Mandy; Dehmel, Susanne; Kopp-Scheinpflug, Cornelia; Ruebsamen, Rudolf

    2012-01-01

    Multiple parallel auditory pathways ascend from the cochlear nucleus. It is generally accepted that these pathways originate from distinct groups of neurons differing in their anatomical and physiological properties. In extracellular in vivo recordings these neurons are typically classified on the basis of their peri-stimulus time histogram. In the present study we reconsider the classification of neurons in the anteroventral cochlear nucleus (AVCN) by taking a wider range of response properties into account. The study aims at a better understanding of the AVCN's functional organization and its significance as the source of different ascending auditory pathways. The analyses were based on 223 neurons recorded in the AVCN of the Mongolian gerbil. The range of analysed parameters encompassed spontaneous activity, frequency coding, sound level coding, and temporal coding. In order to categorize the unit sample without any presumptions as to the relevance of certain response parameters, hierarchical cluster analysis and additional principal component analysis were employed, both of which allow classification on the basis of a multitude of parameters simultaneously. Even with the presently considered wider range of parameters, the high number of neurons and the more advanced analytical methods, no clear boundaries emerged that would separate the neurons based on their physiology. At the current resolution of the analysis, we therefore conclude that the AVCN units more likely constitute a multi-dimensional continuum, with different physiological characteristics manifested at different poles. However, more complex stimuli could be useful to uncover physiological differences in future studies. PMID:22253838
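
The multi-parameter classification approach can be sketched as follows, with hypothetical response parameters standing in for the recorded AVCN data; the parameter names and sample size are invented for the example.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)

# Hypothetical response parameters for 200 units: e.g. spontaneous
# rate, best frequency, threshold, vector strength (no real data).
X = rng.normal(size=(200, 4))

# Standardize so that no single parameter dominates the distance metric.
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# Ward hierarchical clustering on all parameters simultaneously,
# cut into (at most) three clusters.
labels = fcluster(linkage(Z, method="ward"), t=3, criterion="maxclust")

# PCA via SVD: project onto the two leading components.
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
pcs = Z @ Vt[:2].T
explained = s[:2] ** 2 / np.sum(s**2)
print(sorted(set(labels)), np.round(explained, 2))
```

For a genuine continuum, as the abstract reports, the dendrogram shows no large gap between merge heights and the PCA scatter shows no separated clouds, even though a cluster cut can always be forced.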

  8. Accessibility, nature and quality of health information on the Internet: a survey on osteoarthritis.

    PubMed

    Maloney, S; Ilic, D; Green, S

    2005-03-01

    This study aims to determine the quality and validity of information available on the Internet about osteoarthritis and to investigate the best way of sourcing this information. Keywords relevant to osteoarthritis were searched across 15 search engines representing medical, general and meta-search engines. Search engine efficiency was defined as the percentage of unique and relevant websites among all websites returned by each search engine. The quality of relevant information was appraised using the DISCERN tool, and the concordance of the information offered by each website with the available evidence about osteoarthritis was determined. A total of 3443 websites were retrieved, of which 344 were identified as unique and providing information relevant to osteoarthritis. The overall quality of website information was poor. There was no significant difference between types of search engine in sourcing relevant information; however, the information retrieved from medical search engines was of a higher quality. Fewer than a third of the websites identified as offering relevant information cited evidence to support their recommendations. Although the overall quality of website information about osteoarthritis was poor, medical search engines may provide consumers with the opportunity to source high-quality health information on the Internet. In the era of evidence-based medicine, one of the main obstacles to the Internet reaching its potential as a medical resource is the failure of websites to incorporate and attribute evidence-based information.

  9. Calibration of entrance dose measurement for an in vivo dosimetry programme.

    PubMed

    Ding, W; Patterson, W; Tremethick, L; Joseph, D

    1995-11-01

    An increasing number of cancer treatment centres are using in vivo dosimetry as a quality assurance tool for verifying the dose at either the entrance or exit surface of the patient undergoing external beam radiotherapy. Equipment is usually limited to either thermoluminescent dosimeters (TLD) or semiconductor detectors such as p-type diodes. The semiconductor detector is more popular than the TLD owing to the major advantage of real-time analysis of the actual dose delivered. If a discrepancy is observed between the calculated and the measured entrance dose, it is possible to eliminate several likely sources of error by immediately verifying all treatment parameters. Five Scanditronix EDP-10 p-type diodes were investigated to determine their calibration and relevant correction factors for entrance dose measurements, using a Victoreen White Water-RW3 tissue-equivalent phantom and a 6 MV photon beam from a Varian Clinac 2100C linear accelerator. Correction factors were determined for individual diodes for the following parameters: source to surface distance (SSD), collimator size, wedge, plate (tray) and temperature. The directional dependence of diode response was also investigated. The SSD correction factor (CSSD) was found to increase by approximately 3% over the range of SSD from 80 to 130 cm. The correction factor for collimator size (Cfield) also varied by approximately 3% between 5 x 5 and 40 x 40 cm2. The wedge correction factor (Cwedge) and plate correction factor (Cplate) were found to be functions of collimator size. Over the range of measurement, these factors varied by a maximum of 1% and 1.5%, respectively. The Cplate variation between the solid and the drilled plates under the same irradiation conditions was a maximum of 2.4%. The diode sensitivity increased with temperature. A maximum variation of 2.5% in the directional dependence of diode response was observed for angles of +/- 60 degrees. 
In conclusion, in vivo dosimetry is an important and reliable method for checking the dose delivered to the patient. Preclinical calibration and determination of the relevant correction factors for each diode are essential in order to achieve a high accuracy of dose delivered to the patient.
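
The entrance-dose calculation implied by such a calibration is simply the product of the raw diode reading, the calibration factor and the relevant correction factors. The sketch below uses made-up factor values, not the measured ones reported above.

```python
# Illustrative entrance-dose calculation for a calibrated diode:
# dose = raw reading x calibration factor x product of correction
# factors (SSD, field size, wedge, tray/plate, temperature).
def entrance_dose(reading, f_cal, corrections):
    dose = reading * f_cal
    for c in corrections.values():
        dose *= c
    return dose

# Hypothetical correction factors, each within a few percent of unity
# as in the measurements described above.
corr = {"ssd": 1.015, "field": 0.990, "wedge": 1.005,
        "plate": 1.010, "temperature": 0.988}
print(round(entrance_dose(100.0, 1.000, corr), 2))
```

Because each factor is close to 1, neglecting one of them typically introduces an error of only a few percent, but the errors compound multiplicatively.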

  10. Quantitative interpretations of Visible-NIR reflectance spectra of blood.

    PubMed

    Serebrennikova, Yulia M; Smith, Jennifer M; Huffman, Debra E; Leparc, German F; García-Rubio, Luis H

    2008-10-27

    This paper illustrates the implementation of a new theoretical model for rapid quantitative analysis of the Vis-NIR diffuse reflectance spectra of blood cultures. The new model is based on photon diffusion theory and Mie scattering theory, formulated to account for multiple scattering populations and absorptive components. This study stresses the significance of thoroughly solving the scattering and absorption problem in order to accurately resolve the optically relevant parameters of the blood culture components. With the advantages of being calibration-free and computationally fast, the new model has two basic requirements. First, the wavelength-dependent refractive indices of the basic chemical constituents of the blood culture components are needed. Second, multi-wavelength measurements are required, or at least measurements at a number of characteristic wavelengths equal to the degrees of freedom (i.e. the number of optically relevant parameters) of the blood culture system. The blood culture analysis model was tested on a large number of diffuse reflectance spectra of blood culture samples spanning an extensive range of the relevant parameters.

  11. Band excitation method applicable to scanning probe microscopy

    DOEpatents

    Jesse, Stephen [Knoxville, TN; Kalinin, Sergei V [Knoxville, TN

    2010-08-17

    Methods and apparatus are described for scanning probe microscopy. A method includes generating a band excitation (BE) signal having finite and predefined amplitude and phase spectrum in at least a first predefined frequency band; exciting a probe using the band excitation signal; obtaining data by measuring a response of the probe in at least a second predefined frequency band; and extracting at least one relevant dynamic parameter of the response of the probe in a predefined range including analyzing the obtained data. The BE signal can be synthesized prior to imaging (static band excitation), or adjusted at each pixel or spectroscopy step to accommodate changes in sample properties (adaptive band excitation). An apparatus includes a band excitation signal generator; a probe coupled to the band excitation signal generator; a detector coupled to the probe; and a relevant dynamic parameter extractor component coupled to the detector, the relevant dynamic parameter extractor including a processor that performs a mathematical transform selected from the group consisting of an integral transform and a discrete transform.
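
A static band excitation signal of the kind described can be synthesized by defining an amplitude and phase spectrum inside the target band and inverse-Fourier-transforming it into the time domain. The band edges, sample rate and phase choice below are illustrative, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthesize a band excitation drive: flat amplitude and random phase
# inside a predefined band, zero outside, then inverse FFT.
n, fs = 4096, 1.0e6                      # samples, sample rate (Hz)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
band = (freqs >= 250e3) & (freqs <= 350e3)

spectrum = np.zeros(freqs.size, dtype=complex)
spectrum[band] = np.exp(1j * 2 * np.pi * rng.uniform(size=band.sum()))

signal = np.fft.irfft(spectrum, n=n)     # real-valued drive waveform

# Verify that the signal energy is confined to the excitation band.
power = np.abs(np.fft.rfft(signal)) ** 2
ratio = power[band].sum() / power.sum()
print(ratio)   # ≈ 1.0
```

Randomizing the phase keeps the time-domain peak amplitude low for a given band energy; an adaptive scheme would instead recompute the band limits at each pixel.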

  12. Band excitation method applicable to scanning probe microscopy

    DOEpatents

    Jesse, Stephen; Kalinin, Sergei V

    2013-05-28

    Methods and apparatus are described for scanning probe microscopy. A method includes generating a band excitation (BE) signal having finite and predefined amplitude and phase spectrum in at least a first predefined frequency band; exciting a probe using the band excitation signal; obtaining data by measuring a response of the probe in at least a second predefined frequency band; and extracting at least one relevant dynamic parameter of the response of the probe in a predefined range including analyzing the obtained data. The BE signal can be synthesized prior to imaging (static band excitation), or adjusted at each pixel or spectroscopy step to accommodate changes in sample properties (adaptive band excitation). An apparatus includes a band excitation signal generator; a probe coupled to the band excitation signal generator; a detector coupled to the probe; and a relevant dynamic parameter extractor component coupled to the detector, the relevant dynamic parameter extractor including a processor that performs a mathematical transform selected from the group consisting of an integral transform and a discrete transform.

  13. The C2HDM revisited

    NASA Astrophysics Data System (ADS)

    Fontes, Duarte; Mühlleitner, Margarete; Romão, Jorge C.; Santos, Rui; Silva, João P.; Wittbrodt, Jonas

    2018-02-01

    The complex two-Higgs doublet model is one of the simplest ways to extend the scalar sector of the Standard Model to include a new source of CP-violation. The model has been used as a benchmark model to search for CP-violation at the LHC and as a possible explanation for the matter-antimatter asymmetry of the Universe. In this work, we re-analyse in full detail the softly broken ℤ 2 symmetric complex two-Higgs doublet model (C2HDM). We provide the code C2HDM_HDECAY implementing the C2HDM in the well-known HDECAY program which calculates the decay widths including the state-of-the-art higher order QCD corrections and the relevant off-shell decays. Using C2HDM_HDECAY together with the most relevant theoretical and experimental constraints, including electric dipole moments (EDMs), we review the parameter space of the model and discuss its phenomenology. In particular, we find cases where large CP-odd couplings to fermions are still allowed and provide benchmark points for these scenarios. We examine the prospects of discovering CP-violation at the LHC and show how theoretically motivated measures of CP-violation correlate with observables.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowder, Jeff; Cornish, Neil J.; Reddinger, J. Lucas

    This work presents the first application of the method of genetic algorithms (GAs) to data analysis for the Laser Interferometer Space Antenna (LISA). In the low frequency regime of the LISA band there are expected to be tens of thousands of galactic binary systems that will be emitting gravitational waves detectable by LISA. The challenge of parameter extraction of such a large number of sources in the LISA data stream requires a search method that can efficiently explore the large parameter spaces involved. As signals of many of these sources will overlap, a global search method is desired. GAs represent such a global search method for parameter extraction of multiple overlapping sources in the LISA data stream. We find that GAs are able to correctly extract source parameters for overlapping sources. Several optimizations of a basic GA are presented, with results derived from applications of the GA searches to simulated LISA data.
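
A minimal GA of this flavor can be sketched on a toy problem: recovering the frequency and amplitude of a noisy sinusoid. Actual LISA searches involve far richer waveform models and many overlapping sources; the population size, operators and parameter ranges here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy signal: a sinusoid in noise, parameterized by (frequency, amplitude).
t = np.linspace(0.0, 10.0, 500)
f_true, a_true = 1.7, 0.8
data = a_true * np.sin(2 * np.pi * f_true * t) + 0.2 * rng.normal(size=t.size)

def fitness(pop):
    """Negative residual power for each (frequency, amplitude) individual."""
    model = pop[:, 1:2] * np.sin(2 * np.pi * pop[:, 0:1] * t)
    return -np.sum((model - data) ** 2, axis=1)

# Initial population: a coarse frequency grid with random amplitudes.
pop = np.column_stack([np.linspace(0.5, 3.0, 100),
                       rng.uniform(0.1, 2.0, 100)])
for _ in range(200):
    fit = fitness(pop)
    elite = pop[np.argmax(fit)].copy()
    i, j = rng.integers(0, 100, (2, 100))          # tournament selection
    parents = pop[np.where(fit[i] > fit[j], i, j)]
    mates = parents[rng.permutation(100)]
    mask = rng.uniform(size=pop.shape) < 0.5       # uniform crossover
    pop = np.where(mask, parents, mates)
    pop += rng.normal(scale=[0.02, 0.01], size=pop.shape)  # mutation
    pop[0] = elite                                 # elitism

best = pop[np.argmax(fitness(pop))]
print(np.round(best, 2))   # near (1.7, 0.8)
```

Because the fitness surface over frequency has narrow peaks and many sidelobes, the population-based global search matters: a single local optimizer started at random would usually converge to a sidelobe.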

  15. A Global Sensitivity Analysis Method on Maximum Tsunami Wave Heights to Potential Seismic Source Parameters

    NASA Astrophysics Data System (ADS)

    Ren, Luchuan

    2015-04-01

    Luchuan Ren, Jianwei Tian, Mingli Hong (Institute of Disaster Prevention, Sanhe, Hebei Province, 065201, P.R. China). The uncertainties of the maximum tsunami wave heights in offshore areas stem partly from uncertainties in the potential seismic tsunami source parameters. A global sensitivity analysis method for the maximum tsunami wave heights with respect to the potential seismic source parameters is put forward in this paper. The tsunami wave heights are calculated with COMCOT (the Cornell Multi-grid Coupled Tsunami Model), under the assumption that an earthquake of magnitude MW 8.0 occurred at the northern fault segment along the Manila Trench and triggered a tsunami in the South China Sea. We select the simulated maximum tsunami wave heights at specific offshore sites to verify the validity of the method proposed in this paper. To rank the importance of the uncertainties of the potential seismic source parameters (the earthquake magnitude, focal depth, strike angle, dip angle, slip angle, etc.) in generating uncertainties of the maximum tsunami wave heights, we chose the Morris method to analyze the sensitivity of the maximum tsunami wave heights to the aforementioned parameters, and we give several qualitative descriptions of their nonlinear or linear effects on the maximum tsunami wave heights. We then quantitatively analyze the sensitivity of the maximum tsunami wave heights to these parameters, and the interaction effects among them, by means of the extended FAST method. 
The results show that the maximum tsunami wave heights are very sensitive to the earthquake magnitude, followed successively by the epicenter location, the strike angle and the dip angle; the interaction effects among the sensitive parameters are very obvious at specific offshore sites; and the importance order in generating uncertainties of the maximum tsunami wave heights differs, for the same group of parameters, among specific offshore sites. These results are helpful for a deeper understanding of the relationship between tsunami wave heights and seismic tsunami source parameters.
Keywords: Global sensitivity analysis; Tsunami wave height; Potential seismic tsunami source parameter; Morris method; Extended FAST method
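
The Morris screening idea, mean absolute elementary effects from one-at-a-time perturbations along random trajectories, can be sketched on a toy model (not COMCOT); the toy function and counts are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy model: x0 dominates the output, x2 is nearly irrelevant.
def model(x):
    return 10 * x[0] + 2 * x[1] + 0.1 * x[2] + 5 * x[0] * x[1]

k, trajectories, delta = 3, 50, 0.1
ee = [[] for _ in range(k)]
for _ in range(trajectories):
    x = rng.uniform(0, 1 - delta, k)      # random start in the unit cube
    y0 = model(x)
    for i in rng.permutation(k):          # one-at-a-time perturbations
        x2 = x.copy()
        x2[i] += delta
        ee[i].append((model(x2) - y0) / delta)   # elementary effect
        x, y0 = x2, model(x2)

# mu* (mean absolute elementary effect) ranks parameter importance.
mu_star = [np.mean(np.abs(e)) for e in ee]
print(np.argsort(mu_star)[::-1])   # importance order: [0 1 2]
```

The spread of the elementary effects for each input (their standard deviation) flags nonlinearity or interactions, which is what the extended FAST step then quantifies in full.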

  16. The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems.

    PubMed

    White, Andrew; Tolman, Malachi; Thames, Howard D; Withers, Hubert Rodney; Mason, Kathy A; Transtrum, Mark K

    2016-12-01

    We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying the underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model.
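
The sloppiness phenomenon is easy to reproduce numerically: for a sum of two near-degenerate exponentials, the eigenvalues of the Fisher information matrix J^T J span orders of magnitude, so some parameter combinations are far more tightly constrained than others. The decay rates and sampling below are illustrative.

```python
import numpy as np

# Classic sloppy model: y(t) = exp(-a t) + exp(-b t) with a ≈ b.
t = np.linspace(0.0, 5.0, 50)
a, b = 1.0, 1.2                       # nearly degenerate decay rates

# Jacobian of the model with respect to (a, b).
J = np.column_stack([-t * np.exp(-a * t), -t * np.exp(-b * t)])

# Eigenvalues of the Fisher information matrix J^T J.
eigs = np.linalg.eigvalsh(J.T @ J)
print(f"eigenvalue ratio: {eigs[-1] / eigs[0]:.1e}")
```

The stiff direction (roughly a+b) is well determined while the sloppy direction (roughly a-b) is nearly unidentifiable; more parameters with closer rates stretch the spectrum over many more decades.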

  17. The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems

    PubMed Central

    Tolman, Malachi; Thames, Howard D.; Mason, Kathy A.

    2016-01-01

    We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying the underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model. PMID:27923060

  18. Local Jurisdictions and Active Shooters: Building Networks, Building Capacities

    DTIC Science & Technology

    2010-12-01

    coordination will be the foundation for identifying relevant sources and materials on the armed active shooter assault. This research will also benefit...CONCLUSION In summary, the literature review identified relevant sources and materials on the importance of an armed attack. While an armed assault...armed with the following: dozens of explosive devices of varying potency, seven knives, two Savage-Stevens 12 gauge double- barrel shotguns with the

  19. Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.

    PubMed

    Schimpf, Paul H

    2017-09-15

    This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.

  20. Composite laminate failure parameter optimization through four-point flexure experimentation and analysis

    DOE PAGES

    Nelson, Stacy; English, Shawn; Briggs, Timothy

    2016-05-06

    Fiber-reinforced composite materials offer light-weight solutions to many structural challenges. In the development of high-performance composite structures, a thorough understanding is required of the composite materials themselves as well as of methods for the analysis and failure prediction of the relevant composite structures. However, the mechanical properties required for the complete constitutive definition of a composite material can be difficult to determine through experimentation. Therefore, efficient methods are necessary to determine which properties are relevant to the analysis of a specific structure and to establish a structure's response to a material parameter that can only be defined through estimation. The objectives of this paper are to demonstrate the potential value of sensitivity and uncertainty quantification techniques in the failure analysis of loaded composite structures; the proposed methods are applied to the simulation of the four-point flexural characterization of a carbon fiber composite material. Utilizing a recently implemented, phenomenological orthotropic material model that is capable of predicting progressive composite damage and failure, a sensitivity analysis is completed to establish which material parameters are truly relevant to a simulation's outcome. Then, a parameter study is completed to determine the effect of the relevant material properties' expected variations on the simulated four-point flexural behavior, as well as to determine the value of an unknown material property. This process demonstrates the ability to formulate accurate predictions in the absence of a rigorous material characterization effort. Finally, the presented results indicate that a sensitivity analysis and parameter study can be used to streamline the material definition process, as the described flexural characterization was used for model validation.

  1. Source parameters controlling the generation and propagation of potential local tsunamis along the cascadia margin

    USGS Publications Warehouse

    Geist, E.; Yoshioka, S.

    1996-01-01

    The largest uncertainty in assessing hazards from local tsunamis along the Cascadia margin is estimating the possible earthquake source parameters. We investigate which source parameters exert the largest influence on tsunami generation and determine how each parameter affects the amplitude of the local tsunami. The following source parameters were analyzed: (1) type of faulting characteristic of the Cascadia subduction zone, (2) amount of slip during rupture, (3) slip orientation, (4) duration of rupture, (5) physical properties of the accretionary wedge, and (6) influence of secondary faulting. The effect of each of these source parameters on the quasi-static displacement of the ocean floor is determined by using elastic three-dimensional, finite-element models. The propagation of the resulting tsunami is modeled both near the coastline, using the two-dimensional (x-t) Peregrine equations that include the effects of dispersion, and near the source, using the three-dimensional (x-y-t) linear long-wave equations. The source parameters that have the largest influence on local tsunami excitation are the shallowness of rupture and the amount of slip. In addition, the orientation of slip has a large effect on the directivity of the tsunami, especially for shallow dipping faults, which consequently has a direct influence on the length of coastline inundated by the tsunami. Duration of rupture, physical properties of the accretionary wedge, and secondary faulting all affect the excitation of tsunamis, but to a lesser extent than the shallowness of rupture and the amount and orientation of slip. Assessment of the severity of the local tsunami hazard should take into account that relatively large tsunamis can be generated by anomalous 'tsunami earthquakes' that rupture within the accretionary wedge, in comparison to interplate thrust earthquakes of similar magnitude. © 1996 Kluwer Academic Publishers.

  2. About the Modeling of Radio Source Time Series as Linear Splines

    NASA Astrophysics Data System (ADS)

    Karbon, Maria; Heinkelmann, Robert; Mora-Diaz, Julian; Xu, Minghui; Nilsson, Tobias; Schuh, Harald

    2016-12-01

    Many of the time series of radio sources observed in geodetic VLBI show variations, caused mainly by changes in source structure. However, until now it has been common practice to consider source positions as invariant, or to exclude known misbehaving sources from the datum conditions. This may lead to a degradation of the estimated parameters, as unmodeled apparent source position variations can propagate to the other parameters through the least squares adjustment. In this paper we will introduce an automated algorithm capable of parameterizing the radio source coordinates as linear splines.
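    The paper's automated parameterization algorithm is not reproduced here, but the core idea of modeling a source coordinate time series as a linear spline can be sketched as a least-squares fit over a hat-function basis. This is a minimal illustration: the knot epochs are assumed given, whereas choosing them automatically is exactly what the paper's algorithm addresses.

    ```python
    import numpy as np

    def fit_linear_spline(t, y, knots):
        """Least-squares fit of a continuous piecewise-linear function
        (linear spline) with fixed knot epochs to a coordinate time series.

        np.interp of a one-hot vector over the knot epochs yields the
        continuous "tent" basis function for that knot, held constant
        outside the first/last knot.
        """
        B = np.column_stack([np.interp(t, knots, np.eye(len(knots))[j])
                             for j in range(len(knots))])
        coeffs, *_ = np.linalg.lstsq(B, y, rcond=None)
        return coeffs, B @ coeffs  # knot values and fitted series
    ```

    The fitted knot values then replace a single invariant source position in the adjustment, so apparent position variations are absorbed instead of propagating to other parameters.
    
    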

  3. High-Resolution Source Parameter and Site Characteristics Using Near-Field Recordings - Decoding the Trade-off Problems Between Site and Source

    NASA Astrophysics Data System (ADS)

    Chen, X.; Abercrombie, R. E.; Pennington, C.

    2017-12-01

    Recorded seismic waveforms include contributions from earthquake source properties and propagation effects, leading to a long-standing trade-off between site/path effects and source effects. With near-field recordings the path effect is relatively small, so the trade-off reduces to one between source effects and site effects (the latter commonly referred to as the "kappa value"). The problem is especially significant for small earthquakes, whose corner frequencies fall within ranges similar to typical kappa values, so direct spectral fitting often leads to systematic biases that correlate with corner frequency and magnitude. In response to the significantly increased seismicity rate in Oklahoma, several local networks have been deployed following major earthquakes: the Prague, Pawnee, and Fairview earthquakes. Each network provides dense observations within 20 km of the fault zone, recording tens of thousands of aftershocks between M1 and M3. Using near-field recordings in the Prague area, we apply a stacking approach to separate path/site and source effects. The resulting source parameters are consistent with parameters derived from ground-motion and spectral-ratio methods in other studies; they exhibit spatial coherence within the fault zone for different fault patches. We apply these source parameter constraints in an analysis of kappa values for stations within 20 km of the fault zone. The resulting kappa values show significantly reduced variability compared with those from direct spectral fitting without constraints on the source spectrum, and they are not biased by earthquake magnitude. With these improvements, we plan to apply the stacking analysis to other local arrays to analyze source properties and site characteristics. For selected individual earthquakes, we will also use individual-pair empirical Green's function (EGF) analysis to validate the source parameter estimates.
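    The kappa value referred to above is commonly defined through the Anderson-Hough high-frequency spectral decay A(f) ≈ A0 exp(-π κ f). A minimal single-spectrum estimate, i.e. the direct fitting whose biases the study addresses rather than the stacking method itself, can be sketched as follows (the fitting band is an assumed choice):

    ```python
    import numpy as np

    def estimate_kappa(freqs, amp, fmin=10.0, fmax=35.0):
        """Estimate kappa from the high-frequency decay of an amplitude
        spectrum, assuming A(f) = A0 * exp(-pi * kappa * f): log A is
        linear in f with slope -pi * kappa over the fitting band."""
        band = (freqs >= fmin) & (freqs <= fmax)
        slope, _ = np.polyfit(freqs[band], np.log(amp[band]), 1)
        return -slope / np.pi
    ```

    If the earthquake's corner frequency falls inside the fitting band, the additional source-spectrum decay maps into the slope and biases kappa high; that is precisely the trade-off that motivates constraining the source spectrum first via stacking.
    
    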

  4. Automated source term and wind parameter estimation for atmospheric transport and dispersion applications

    NASA Astrophysics Data System (ADS)

    Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.

    2015-12-01

    Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and on the meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and must consequently be based on rudimentary assumptions. This is particularly true of accidental releases and of the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard estimate. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory-based source inversion method, a forward Gaussian puff dispersion model, and a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory-based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first-guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. 
It has been shown to improve the estimated source location by several hundred percent (normalized by the distance from the source to the closest sampler) and to improve mass estimates by several orders of magnitude. Furthermore, it can operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations, adjusting the wind to provide a better match between the hazard prediction and the observations.
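    The forward Gaussian puff model at the heart of VIRSA is not specified in detail in the abstract, but the textbook instantaneous-puff solution it builds on can be sketched as follows. This is a simplified surrogate: dispersion coefficients are assumed to grow linearly with travel time and ground reflection is omitted, whereas operational puff models use stability-dependent growth curves.

    ```python
    import numpy as np

    def gaussian_puff(x, y, z, t, Q, u, sy, sz):
        """Concentration at (x, y, z) and time t from an instantaneous
        release of mass Q at the origin, advected along x with wind
        speed u. sy, sz are assumed linear growth rates of the
        horizontal and vertical dispersion sigmas."""
        sig_y = sy * t  # horizontal spread (shared by x and y)
        sig_z = sz * t  # vertical spread
        return (Q / ((2.0 * np.pi) ** 1.5 * sig_y ** 2 * sig_z)) * np.exp(
            -((x - u * t) ** 2 + y ** 2) / (2.0 * sig_y ** 2)
            - z ** 2 / (2.0 * sig_z ** 2))
    ```

    In a source-term-estimation setting, Q, the release location, and the wind u become the unknowns that the variational refinement adjusts until predicted concentrations match the sensor observations.
    
    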

  5. Integration of relational and textual biomedical sources. A pilot experiment using a semi-automated method for logical schema acquisition.

    PubMed

    García-Remesal, M; Maojo, V; Billhardt, H; Crespo, J

    2010-01-01

    Bringing together structured and text-based sources is an exciting challenge for biomedical informaticians, since most relevant biomedical sources belong to one of these categories. In this paper we evaluate the feasibility of integrating relational and text-based biomedical sources using: i) an original logical schema acquisition method for textual databases developed by the authors, and ii) OntoFusion, a system originally designed by the authors for the integration of relational sources. We conducted an integration experiment involving a test set of seven differently structured sources covering the domain of genetic diseases. We used our logical schema acquisition method to generate schemas for all textual sources. The sources were integrated using the methods and tools provided by OntoFusion. The integration was validated using a test set of 500 queries. A panel of experts answered a questionnaire to evaluate i) the quality of the extracted schemas, ii) the query processing performance of the integrated set of sources, and iii) the relevance of the retrieved results. The results of the survey show that our method extracts coherent and representative logical schemas. Experts' feedback on the performance of the integrated system and the relevance of the retrieved results was also positive. Regarding the validation of the integration, the system successfully provided correct results for all queries in the test set. The results of the experiment suggest that text-based sources that include a logical schema can be regarded as equivalent to structured databases. Using our method, previous research and existing tools designed for the integration of structured databases can be reused - possibly subject to minor modifications - to integrate differently structured sources.

  6. AQUATOX Data Sources Documents

    EPA Pesticide Factsheets

    Contains the data sources for parameter values of the AQUATOX model including: a bibliography for the AQUATOX data libraries and the compendia of parameter values for US Army Corps of Engineers models.

  7. Reconstructing gravitational wave source parameters via direct comparisons to numerical relativity I: Method

    NASA Astrophysics Data System (ADS)

    Lange, Jacob; O'Shaughnessy, Richard; Healy, James; Lousto, Carlos; Shoemaker, Deirdre; Lovelace, Geoffrey; Scheel, Mark; Ossokine, Serguei

    2016-03-01

    In this talk, we describe a procedure to reconstruct the parameters of sufficiently massive coalescing compact binaries via direct comparison with numerical relativity simulations. For sufficiently massive sources, existing numerical relativity simulations are long enough to cover the observationally accessible part of the signal. Due to the signal's brevity, the posterior parameter distribution it implies is broad, simple, and easily reconstructed from information gained by comparing to only the sparse sample of existing numerical relativity simulations. We describe how followup simulations can corroborate and improve our understanding of a detected source. Since our method can include all physics provided by full numerical relativity simulations of coalescing binaries, it provides a valuable complement to alternative techniques which employ approximations to reconstruct source parameters. Supported by NSF Grant PHY-1505629.

  8. The MED-SUV Multidisciplinary Interoperability Infrastructure

    NASA Astrophysics Data System (ADS)

    Mazzetti, Paolo; D'Auria, Luca; Reitano, Danilo; Papeschi, Fabrizio; Roncella, Roberto; Puglisi, Giuseppe; Nativi, Stefano

    2016-04-01

    In accordance with the international Supersite initiative concept, the MED-SUV (MEDiterranean SUpersite Volcanoes) European project (http://med-suv.eu/) aims to enable long-term monitoring experiments in two relevant geologically active regions of Europe prone to natural hazards: Mt. Vesuvio/Campi Flegrei and Mt. Etna. This objective requires the integration of existing components, such as monitoring systems and databases, with novel sensors for the measurement of volcanic parameters. Moreover, MED-SUV is also a direct contribution to the Global Earth Observation System of Systems (GEOSS), as one of the volcano Supersites recognized by the Group on Earth Observations (GEO). To achieve its goal, MED-SUV set up an advanced e-infrastructure allowing the discovery of and access to heterogeneous data for multidisciplinary applications, and the integration with external systems like GEOSS. The MED-SUV overall infrastructure is conceived as a three-layer architecture, with the lower layer (Data level) including the identified relevant data sources, the mid-tier (Supersite level) including components for mediation and harmonization, and the upper tier (Global level) composed of the systems that MED-SUV must serve, such as GEOSS and possibly other global/community systems. The Data level is mostly composed of existing data sources, such as space agencies' satellite data archives, the UNAVCO system, and the INGV-Rome data service. They share data according to different specifications for metadata, data, and service interfaces, and cannot be changed. Thus, the only relevant MED-SUV activity at this level was the creation of a MED-SUV local repository based on Web Accessible Folder (WAF) technology, deployed at the INGV site in Catania, and hosting in-situ data and products collected and generated during the project. 
The Supersite level is at the core of the MED-SUV architecture, since it must mediate between the disparate data sources in the layer below and provide a harmonized view to the layer above. In order to address data and service heterogeneity, the MED-SUV infrastructure is based on the brokered architecture approach, implemented using the GI-suite Brokering Framework for discovery and access. The GI-suite Brokering Framework has been extended and configured to broker all the identified relevant data sources. It is also able to publish data according to several de jure and de facto standards, including OGC CSW and OpenSearch, facilitating the interconnection with external systems. At the Global level, MED-SUV identified the interconnection with GEOSS as the main requirement. Since the MED-SUV Supersite level is implemented with the same technology adopted in the current GEOSS Common Infrastructure (GCI) by the GEO Discovery and Access Broker (GEO DAB), no major interoperability problem is foreseen. The MED-SUV Multidisciplinary Interoperability Infrastructure is complemented by a user portal providing human-to-machine interaction and enabling data discovery and access. The GI-suite Brokering Framework APIs and JavaScript library support machine-to-machine interaction, enabling the creation of mobile and Web applications using information available through the MED-SUV Supersite.

  9. Earthquake source parameters determined by the SAFOD Pilot Hole seismic array

    USGS Publications Warehouse

    Imanishi, K.; Ellsworth, W.L.; Prejean, S.G.

    2004-01-01

    We estimate the source parameters of microearthquakes by jointly analyzing seismograms recorded by the 32-level, three-component seismic array installed in the SAFOD Pilot Hole. We applied an inversion procedure to displacement amplitude spectra to estimate the spectral parameters of the omega-square model (spectral level and corner frequency) together with Q. Because we expect the spectral parameters and Q to vary slowly with depth in the well, we impose a smoothness constraint on those parameters as a function of depth using a linear first-difference operator. This method correctly resolves corner frequency and Q, which leads to a more accurate estimation of source parameters than can be obtained from single sensors. The stress drop of one example of the SAFOD target repeating earthquake falls in the range typical of tectonic earthquakes. Copyright 2004 by the American Geophysical Union.
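    The omega-square spectral model named above has the standard Brune form Ω(f) = Ω0 e^(-π f t*) / (1 + (f/fc)²), with attenuation expressed via t* = T/Q. A minimal fit to a single spectrum might look like the grid search below; this is only a sketch, without the depth-smoothness constraint or joint multi-sensor inversion the study uses.

    ```python
    import numpy as np

    def omega_square(f, omega0, fc, tstar):
        # Brune omega-square spectrum with whole-path attenuation
        return omega0 * np.exp(-np.pi * f * tstar) / (1.0 + (f / fc) ** 2)

    def fit_spectrum(f, obs, fc_grid, tstar_grid):
        """Grid search over (fc, t*); the spectral level Omega0 has a
        closed-form solution in log space at each grid point."""
        best = None
        for fc in fc_grid:
            for ts in tstar_grid:
                shape = np.log(omega_square(f, 1.0, fc, ts))
                log_o0 = np.mean(np.log(obs) - shape)
                resid = np.sum((np.log(obs) - shape - log_o0) ** 2)
                if best is None or resid < best[0]:
                    best = (resid, np.exp(log_o0), fc, ts)
        return best[1], best[2], best[3]  # Omega0, fc, t*
    ```

    Because fc and t* trade off against each other in a single spectrum, the joint array inversion with a smoothness constraint on Q with depth is what allows them to be resolved separately.
    
    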

  10. EDITORIAL: Interrelationship between plasma phenomena in the laboratory and in space

    NASA Astrophysics Data System (ADS)

    Koepke, Mark

    2008-07-01

    The premise of investigating basic plasma phenomena relevant to space is that an alliance exists between basic plasma physicists, using theory, computer modelling and laboratory experiments, and space science experimenters, using different instruments, either flown on different spacecraft in various orbits or stationed on the ground. The intent of this special issue on interrelated phenomena in laboratory and space plasmas is to promote the interpretation of scientific results in a broader context by sharing data, methods, knowledge, perspectives, and reasoning within this alliance. The desired outcomes are practical theories, predictive models, and credible interpretations based on the findings and expertise available. Laboratory-experiment papers that explicitly address a specific space mission or a specific manifestation of a space-plasma phenomenon, space-observation papers that explicitly address a specific laboratory experiment or a specific laboratory result, and theory or modelling papers that explicitly address a connection between laboratory and space investigations were encouraged. Attention was given to the utility of the references for readers who seek further background, examples, and details. With the advent of instrumented spacecraft, the observation of waves (fluctuations), wind (flows), and weather (dynamics) in space plasmas was approached within the framework provided by theory, with intuition provided by laboratory experiments. Ideas on parallel electric fields, magnetic topology, inhomogeneity, and anisotropy have been refined substantially by laboratory experiments. Satellite and rocket observations, theory and simulations, and laboratory experiments have contributed to the revelation of a complex set of processes affecting the acceleration of electrons and ions in the geospace plasma. The processes range from the meso-scale of several thousand kilometers to the micro-scale of a few meters to kilometers. 
Papers included in this special issue serve to synthesise our current understanding of processes related to the coupling and feedback at disparate scales. Categories of topics included here are (1) ionospheric physics and (2) Alfvén-wave physics, both of which are related to the particle acceleration responsible for auroral displays, (3) whistler-mode triggering mechanism, which is relevant to radiation-belt dynamics, (4) plasmoid encountering a barrier, which has applications throughout the realm of space and astrophysical plasmas, and (5) laboratory investigations of the entire magnetosphere or the plasma surrounding the magnetosphere. The papers are ordered from processes that take place nearest the Earth to processes that take place at increasing distances from Earth. Many advances in understanding space plasma phenomena have been linked to insight derived from theoretical modeling and/or laboratory experiments. Observations from space-borne instruments are typically interpreted using theoretical models developed to predict the properties and dynamics of space and astrophysical plasmas. The usefulness of customized laboratory experiments for providing confirmation of theory by identifying, isolating, and studying physical phenomena efficiently, quickly, and economically has been demonstrated in the past. The benefits of laboratory experiments to investigating space-plasma physics are their reproducibility, controllability, diagnosability, reconfigurability, and affordability compared to a satellite mission or rocket campaign. Certainly, the plasma being investigated in a laboratory device is quite different from that being measured by a spaceborne instrument; nevertheless, laboratory experiments discover unexpected phenomena, benchmark theoretical models, develop physical insight, establish observational signatures, and pioneer diagnostic techniques. 
Explicit reference to such beneficial laboratory contributions is occasionally left out of the citations in the space-physics literature in favor of theory-paper counterparts, and thus the scientific support that laboratory results can provide to the development of space-relevant theoretical models is often under-recognized. It is unrealistic to expect the dimensional parameters corresponding to space plasma to be matchable in the laboratory. However, a laboratory experiment is considered well designed if the subset of parameters relevant to a specific process shares the same phenomenological regime as the subset of analogous space parameters, even if less important parameters are mismatched. Regime boundaries are assigned by normalizing a dimensional parameter to an appropriate reference or scale value to make it dimensionless and noting the values at which transitions occur in the physical behavior or approximations. An example of matching regimes for cold-plasma waves is finding a 45° diagonal line on the log-log CMA diagram along which lie both a laboratory-observed wave and a space-observed wave. In such a circumstance, a space plasma and a lab plasma will support the same kind of modes if the dimensionless parameters are scaled properly (Bellan 2006 Fundamentals of Plasma Physics (Cambridge: Cambridge University Press) p 227). The plasma source, configuration geometry, and boundary conditions associated with a specific laboratory experiment are characteristic elements that affect the plasma and plasma processes that are being investigated. Space plasma is not exempt from an analogous set of constraining factors that likewise influence the phenomena that occur. Typically, each morphologically distinct region of space has associated with it plasma that is unique by virtue of the various mechanisms responsible for the plasma's presence there, as if the plasma were produced by a unique source. 
Boundary effects that typically constrain the possible parameter values to lie within one or more restricted ranges are inescapable in laboratory plasma. The goal of a laboratory experiment is to examine the relevant physics within these ranges and extrapolate the results to space conditions that may or may not be subject to any restrictions on the values of the plasma parameters. The interrelationship between laboratory and space plasma experiments has been cultivated at a low level, and the potential scientific benefit in this area has yet to be realized. The few but excellent examples of joint papers, joint experiments, and directly relevant cross-disciplinary citations are a direct result of the emphasis placed on this interrelationship two decades ago. Building on this special issue, Plasma Physics and Controlled Fusion plans to create a dedicated webpage to highlight papers directly relevant to this field published either in the recent past or in the future. It is hoped that this resource will appeal to the readership in the laboratory-experiment and space-plasma communities and improve the cross-fertilization between them.

  11. Macular versus Retinal Nerve Fiber Layer Parameters for Diagnosing Manifest Glaucoma: A Systematic Review of Diagnostic Accuracy Studies.

    PubMed

    Oddone, Francesco; Lucenteforte, Ersilia; Michelessi, Manuele; Rizzo, Stanislao; Donati, Simone; Parravano, Mariacristina; Virgili, Gianni

    2016-05-01

    Macular parameters have been proposed as an alternative to retinal nerve fiber layer (RNFL) parameters to diagnose glaucoma. Comparing the diagnostic accuracy of macular parameters, specifically the ganglion cell complex (GCC) and ganglion cell inner plexiform layer (GCIPL), with the accuracy of RNFL parameters for detecting manifest glaucoma is important to guide clinical practice and future research. Studies using spectral domain optical coherence tomography (SD OCT) and reporting macular parameters were included if they allowed the extraction of accuracy data for diagnosing manifest glaucoma, as confirmed with automated perimetry or a clinician's optic nerve head (ONH) assessment. Cross-sectional cohort studies and case-control studies were included. The QUADAS 2 tool was used to assess methodological quality. Only direct comparisons of macular versus RNFL parameters (i.e., in the same study) were conducted. Summary sensitivity and specificity of each macular or RNFL parameter were reported, and the relative diagnostic odds ratio (DOR) was calculated in hierarchical summary receiver operating characteristic (HSROC) models to compare them. Thirty-four studies investigated macular parameters using RTVue OCT (Optovue Inc., Fremont, CA) (19 studies, 3094 subjects), Cirrus OCT (Carl Zeiss Meditec Inc., Dublin, CA) (14 studies, 2164 subjects), or 3D Topcon OCT (Topcon, Inc., Tokyo, Japan) (4 studies, 522 subjects). Thirty-two of these studies allowed comparisons between macular and RNFL parameters. Studies generally reported sensitivities at fixed specificities, more commonly 0.90 or 0.95, with sensitivities of most best-performing parameters between 0.65 and 0.75. For all OCT devices, compared with RNFL parameters, macular parameters were similarly or slightly less accurate for detecting glaucoma at the highest reported specificity, which was confirmed in analyses at the lowest specificity. 
Included studies suffered from limitations, especially the case-control study design, which is known to overestimate accuracy. However, this flaw is less relevant as a source of bias in direct comparisons conducted within studies. With the use of OCT, RNFL parameters are still preferable to macular parameters for diagnosing manifest glaucoma, but the differences are small. Because of high heterogeneity, direct comparative or randomized studies of OCT devices or OCT parameters and diagnostic strategies are essential. Copyright © 2016 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
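    The diagnostic odds ratio used to compare parameters combines sensitivity and specificity into a single accuracy measure; the relative DOR reported from the HSROC models is simply the ratio of two such values. A minimal sketch (illustrative numbers only, not values from the review):

    ```python
    def diagnostic_odds_ratio(sensitivity, specificity):
        # Odds of a positive test in diseased subjects divided by the
        # odds of a positive test in non-diseased subjects.
        return (sensitivity / (1.0 - sensitivity)) / ((1.0 - specificity) / specificity)

    def relative_dor(sens_a, spec_a, sens_b, spec_b):
        # Relative DOR of parameter A versus parameter B; values near 1
        # mean the two parameters discriminate similarly.
        return diagnostic_odds_ratio(sens_a, spec_a) / diagnostic_odds_ratio(sens_b, spec_b)
    ```

    For example, a sensitivity of 0.70 at a fixed specificity of 0.90, typical of the best-performing parameters above, gives a DOR of 21.
    
    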

  12. Retrieving relevant time-course experiments: a study on Arabidopsis microarrays.

    PubMed

    Şener, Duygu Dede; Oğul, Hasan

    2016-06-01

    Understanding the time-course regulation of genes in response to a stimulus is a major concern in current systems biology. The problem is usually approached with computational methods that model the behaviour of a gene, or its networked interactions with other genes, through a set of latent parameters. The model parameters can be estimated through a meta-analysis of available data obtained from other relevant experiments. The key question is how to find the relevant experiments that are potentially useful in analysing the current data. In this study, the authors address this problem in the context of time-course gene expression experiments from an information retrieval perspective. To this end, they introduce a computational framework that takes a time-course experiment as a query and reports a list of relevant experiments retrieved from a given repository. These retrieved experiments can then be used to associate the environmental factors of the query experiment with findings previously reported. The model is tested using a set of time-course Arabidopsis microarrays. The experimental results show that relevant experiments can be successfully retrieved based on content similarity.
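    The notion of content similarity between time-course experiments can be illustrated with a simple correlation-based ranking. This is an assumed similarity choice for illustration, not the paper's framework; each experiment is represented as a genes-by-timepoints matrix.

    ```python
    import numpy as np

    def rank_experiments(query, repository):
        """Rank repository experiments by similarity to a query experiment.
        Similarity here: mean per-gene Pearson correlation between the
        query's time courses and the candidate's."""
        def similarity(a, b):
            az = (a - a.mean(axis=1, keepdims=True)) / a.std(axis=1, keepdims=True)
            bz = (b - b.mean(axis=1, keepdims=True)) / b.std(axis=1, keepdims=True)
            return float(np.mean(np.sum(az * bz, axis=1) / a.shape[1]))
        scores = [similarity(query, exp) for exp in repository]
        return list(np.argsort(scores)[::-1]), scores  # best-first indices
    ```

    The top-ranked experiments would then supply the prior findings and environmental-factor annotations to interpret the query experiment.
    
    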

  13. Characterisation of the physico-mechanical parameters of MSW.

    PubMed

    Stoltz, Guillaume; Gourc, Jean-Pierre; Oxarango, Laurent

    2010-01-01

    Following the basics of soil mechanics, the physico-mechanical behaviour of municipal solid waste (MSW) can be defined through constitutive relationships expressed with respect to three physical parameters: the dry density, the porosity, and the gravimetric liquid content. In order to take into account the complexity of MSW (a grain size distribution and heterogeneity larger than for conventional soils), a special oedometer was designed for the laboratory experiments. This apparatus allowed a coupled measurement of the physical parameters during MSW settlement under stress. The studied material was a typical sample of fresh MSW from a French landfill. The relevant physical parameters were measured using a gas pycnometer. Moreover, the compressibility of MSW was studied with respect to the initial gravimetric liquid content. The proposed methods for assessing the set of three physical parameters permit a sound understanding of the physico-mechanical behaviour of MSW under compression, specifically the evolution of the limit liquid content. The present method can be extended to any type of MSW. Copyright © 2010 Elsevier Ltd. All rights reserved.

  14. An integral equation-based numerical solver for Taylor states in toroidal geometries

    NASA Astrophysics Data System (ADS)

    O'Neil, Michael; Cerfon, Antoine J.

    2018-04-01

    We present an algorithm for the numerical calculation of Taylor states in toroidal and toroidal-shell geometries using an analytical framework developed for the solution to the time-harmonic Maxwell equations. Taylor states are a special case of what are known as Beltrami fields, or linear force-free fields. The scheme of this work relies on the generalized Debye source representation of Maxwell fields and an integral representation of Beltrami fields which immediately yields a well-conditioned second-kind integral equation. This integral equation has a unique solution whenever the Beltrami parameter λ is not a member of a discrete, countable set of resonances which physically correspond to spontaneous symmetry breaking. Several numerical examples relevant to magnetohydrodynamic equilibria calculations are provided. Lastly, our approach easily generalizes to arbitrary geometries, both bounded and unbounded, and of varying genus.
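    For context, the Taylor/Beltrami condition being solved is the linear force-free field equation in its standard form, with λ the Beltrami parameter whose resonant values the abstract mentions:

    ```latex
    \nabla \times \mathbf{B} = \lambda \mathbf{B},
    \qquad
    \nabla \cdot \mathbf{B} = 0 .
    ```

    Solutions exist for all λ outside a countable set of resonances, which is why the second-kind integral equation remains well conditioned away from those values.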

  15. Systematic cavity design approach for a multi-frequency gyrotron for DEMO and study of its RF behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalaria, P. C., E-mail: parth.kalaria@partner.kit.edu; Avramidis, K. A.; Franck, J.

    High-frequency (>230 GHz) megawatt-class gyrotrons are planned as RF sources for electron cyclotron resonance heating and current drive in DEMOnstration fusion power plants (DEMOs). In this paper, for the first time, a feasibility study of a 236 GHz DEMO gyrotron is presented, considering all relevant design goals and the possible technical limitations. A mode-selection procedure is proposed in order to satisfy the multi-frequency and frequency-step-tunability requirements. An effective systematic design approach for the optimal design of a gradually tapered cavity is presented. The RF behavior of the proposed cavity is verified rigorously, supporting 920 kW of stable output power with an interaction efficiency of 36%, including considerations of realistic beam parameters.

  16. Measuring the jitter of ring oscillators by means of information theory quantifiers

    NASA Astrophysics Data System (ADS)

    Antonelli, M.; De Micco, L.; Larrondo, H. A.

    2017-02-01

    Ring oscillators (ROs) are elementary blocks widely used in digital design. Jitter is unavoidable in ROs; its presence is an undesired behavior in many applications, such as clock generators. On the contrary, jitter may be used as the noise source in RO-based true random number generators (TRNGs). Consequently, measuring jitter is a relevant issue in characterizing an RO, and it is the subject of this paper. The main contribution is the use of Information Theory Quantifiers (ITQ) as measures of RO jitter. It is shown that among the several ITQ evaluated, two emerge as good measures because they are independent of the parameters used for their statistical determination. They turned out to be robust and may be implemented experimentally. We found that a dual entropy plane allows a visual comparison of the results.
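    Information Theory Quantifiers of the kind evaluated in the paper are typically built on the Bandt-Pompe ordinal-pattern distribution. A minimal normalized permutation entropy, one common ITQ shown for illustration only (the paper's specific quantifiers and the dual entropy plane are not reproduced here), can be computed as:

    ```python
    import math
    from itertools import permutations

    import numpy as np

    def permutation_entropy(x, order=3, delay=1):
        """Normalized Bandt-Pompe permutation entropy in [0, 1].
        A jitter-free monotonic/periodic sample sequence scores low;
        a noisy (jittery) one scores high."""
        counts = {p: 0 for p in permutations(range(order))}
        n = len(x) - (order - 1) * delay
        for i in range(n):
            window = x[i:i + order * delay:delay]
            counts[tuple(np.argsort(window))] += 1  # ordinal pattern
        probs = np.array([c for c in counts.values() if c > 0]) / n
        return float(-np.sum(probs * np.log(probs)) / math.log(math.factorial(order)))
    ```

    Applied to a sequence of measured RO periods, higher values indicate a richer ordinal-pattern distribution, i.e. more jitter.
    
    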

  17. Methods for Cloud Cover Estimation

    NASA Technical Reports Server (NTRS)

    Glackin, D. L.; Huning, J. R.; Smith, J. H.; Logan, T. L.

    1984-01-01

    Several methods for cloud cover estimation are described, relevant to assessing the performance of a ground-based network of solar observatories. The methods rely on ground and satellite data sources and provide meteorological or climatological information. One means of acquiring long-term observations of solar oscillations is the establishment of a ground-based network of solar observatories, with station sites selected according to gross cloudiness, accurate transparency information, and seeing. Alternative methods for computing the network's observing duty cycle are discussed. The duty cycle, or alternatively a time history of solar visibility from the network, can then be input to a model to determine the effect of the duty cycle on derived solar seismology parameters. Cloudiness from space is studied to examine various means by which the duty cycle might be computed. Cloudiness, and to some extent transparency, can potentially be estimated from satellite data.

  18. Inference of relativistic electron spectra from measurements of inverse Compton radiation

    NASA Astrophysics Data System (ADS)

    Craig, I. J. D.; Brown, J. C.

    1980-07-01

    The inference of relativistic electron spectra from spectral measurement of inverse Compton radiation is discussed for the case where the background photon spectrum is a Planck function. The problem is formulated in terms of an integral transform that relates the measured spectrum to the unknown electron distribution. A general inversion formula is used to provide a quantitative assessment of the information content of the spectral data. It is shown that the observations must generally be augmented by additional information if anything other than a rudimentary two or three parameter model of the source function is to be derived. It is also pointed out that since a similar equation governs the continuum spectra emitted by a distribution of black-body radiators, the analysis is relevant to the problem of stellar population synthesis from galactic spectra.

  19. Reducing pathogens in combined sewer overflows using ozonation or UV irradiation.

    PubMed

    Tondera, Katharina; Klaer, Kassandra; Gebhardt, Jens; Wingender, Jost; Koch, Christoph; Horstkott, Marina; Strathmann, Martin; Jurzik, Lars; Hamza, Ibrahim Ahmed; Pinnekamp, Johannes

    2015-11-01

    Fecal contamination of water resources is a major public health concern in densely populated areas since these water bodies are used for drinking water production or recreational purposes. A main source of this contamination originates from combined sewer overflows (CSOs) in regions with combined sewer systems. Thus, the treatment of CSO discharges is urgent. In this study, we explored whether ozonation or UV irradiation can efficiently reduce pathogenic bacteria, viruses, and protozoan parasites in CSOs. Experiments were carried out in parallel settings at the outflow of a stormwater settling tank in the Ruhr area, Germany. The results showed that both techniques reduce most hygienically relevant bacteria, parasites and viruses. Under the conditions tested, ozonation yielded lower outflow values for the majority of the tested parameters. Copyright © 2015 Elsevier GmbH. All rights reserved.

  20. Consistent Simulation Framework for Efficient Mass Discharge and Source Depletion Time Predictions of DNAPL Contaminants in Heterogeneous Aquifers Under Uncertainty

    NASA Astrophysics Data System (ADS)

    Nowak, W.; Koch, J.

    2014-12-01

    Predicting DNAPL fate and transport in heterogeneous aquifers is challenging and subject to an uncertainty that needs to be quantified. Models for this task need to be equipped with an accurate source zone description, i.e., the distribution of mass of all partitioning phases (DNAPL, water, and soil) in all possible states ((im)mobile, dissolved, and sorbed), mass-transfer algorithms, and the simulation of transport processes in the groundwater. Such detailed models tend to be computationally cumbersome when used for uncertainty quantification. Therefore, a selective choice of the relevant model states, processes, and scales is both sensible and indispensable. We investigate two questions: what is a meaningful level of model complexity, and how can an efficient model framework be obtained that is still physically and statistically consistent? In our proposed model, aquifer parameters and the contaminant source architecture are conceptualized jointly as random space functions. The governing processes are simulated in a three-dimensional, highly-resolved, stochastic, and coupled model that can predict probability density functions of mass discharge and source depletion times. We apply a stochastic percolation approach as an emulator to simulate the contaminant source formation, a random walk particle tracking method to simulate DNAPL dissolution and solute transport within the aqueous phase, and a quasi-steady-state approach to solve for DNAPL depletion times. Using this novel model framework, we test whether and to which degree the desired model predictions are sensitive to simplifications often found in the literature. With this we identify that aquifer heterogeneity, groundwater flow irregularity, uncertain and physically-based contaminant source zones, and their mutual interlinkages are indispensable components of a sound model framework.

  1. Relevance similarity: an alternative means to monitor information retrieval systems

    PubMed Central

    Dong, Peng; Loh, Marie; Mondry, Adrian

    2005-01-01

    Background Relevance assessment is a major problem in the evaluation of information retrieval systems. The work presented here introduces a new parameter, "Relevance Similarity", for the measurement of the variation of relevance assessment. In a situation where individual assessment can be compared with a gold standard, this parameter is used to study the effect of such variation on the performance of a medical information retrieval system. In such a setting, Relevance Similarity is the ratio of assessors who rank a given document the same as the gold standard over the total number of assessors in the group. Methods The study was carried out on a collection of Critically Appraised Topics (CATs). Twelve volunteers were divided into two groups of people according to their domain knowledge. They assessed the relevance of retrieved topics obtained by querying a meta-search engine with ten keywords related to medical science. Their assessments were compared to the gold standard assessment, and Relevance Similarities were calculated as the ratio of positive concordance with the gold standard for each topic. Results The similarity comparison among groups showed that a higher degree of agreement exists among evaluators with more subject knowledge. The performance of the retrieval system was not significantly different as a result of the variations in relevance assessment in this particular query set. Conclusion In assessment situations where evaluators can be compared to a gold standard, Relevance Similarity provides an alternative evaluation technique to the commonly used kappa scores, which may give paradoxically low scores in highly biased situations such as document repositories containing large quantities of relevant data. PMID:16029513
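    Since the parameter is a simple concordance ratio, it can be computed directly; a minimal Python sketch follows, in which the function name and rating labels are illustrative, not from the paper:

    ```python
    # Illustration of the "Relevance Similarity" parameter: the fraction of
    # assessors whose relevance judgment for a document matches the
    # gold-standard judgment.

    def relevance_similarity(assessments, gold):
        """assessments: ratings (e.g. 'relevant'/'not relevant') from the
        assessor group; gold: the gold-standard rating for the document."""
        if not assessments:
            raise ValueError("need at least one assessment")
        agree = sum(1 for a in assessments if a == gold)
        return agree / len(assessments)

    # Example: 9 of 12 assessors agree with the gold standard.
    ratings = ["relevant"] * 9 + ["not relevant"] * 3
    print(relevance_similarity(ratings, "relevant"))  # 0.75
    ```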

  2. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng Jinchao; Qin Chenghu; Jia Kebin

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. To address these problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation to model bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an ℓ2 data fidelity term plus a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach requires only the computation of the residual and the regularized solution norm. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. 
Simulation experiments were used to illustrate why multispectral data were used rather than monochromatic data. Furthermore, the study conducted using an adaptive regularization parameter demonstrated our ability to accurately localize the bioluminescent source. With the adaptively estimated regularization parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, and the distance to the real source was 0.63 mm. The results of the dual-source experiments further showed that our algorithm could localize the bioluminescent sources accurately. The authors then presented experimental evidence that the proposed algorithm was computationally more efficient than the heuristic method. The effectiveness of the new algorithm was also confirmed by comparing it with the L-curve method. Furthermore, various initial guesses for the regularization parameter were used to illustrate the convergence of our algorithm. Finally, an in vivo mouse experiment further illustrates the effectiveness of the proposed algorithm. Conclusions: Using numerical, physical phantom, and in vivo examples, we demonstrated that the bioluminescent sources could be reconstructed accurately with automatically chosen regularization parameters. The proposed algorithm outperformed both the heuristic regularization parameter choice method and the L-curve method in terms of computational speed and localization error.
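    The reconstruction above rests on an ℓ2-regularized least-squares problem. The paper's model-function rule is not reproduced here, but the residual/solution-norm trade-off that any parameter choice rule must navigate can be seen in a minimal numpy sketch, where the operator A, the data b, and the λ values are illustrative assumptions:

    ```python
    import numpy as np

    # Minimal sketch of the regularized problem underlying the BLT
    # reconstruction: min_x ||Ax - b||^2 + lam * ||x||^2. Increasing lam
    # shrinks the solution norm and raises the data-fidelity residual;
    # a parameter choice rule picks lam along this trade-off curve.

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    x_true = np.zeros(20)
    x_true[3] = 1.0                                   # a single "source"
    b = A @ x_true + 0.05 * rng.standard_normal(50)   # noisy measurements

    def tikhonov(A, b, lam):
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    for lam in (1e-4, 1e-2, 1.0):
        x = tikhonov(A, b, lam)
        res = np.linalg.norm(A @ x - b)   # data-fidelity residual
        nrm = np.linalg.norm(x)           # regularized-solution norm
        print(f"lam={lam:g}  residual={res:.3f}  ||x||={nrm:.3f}")
    ```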

  3. An almost-parameter-free harmony search algorithm for groundwater pollution source identification.

    PubMed

    Jiang, Simin; Zhang, Yali; Wang, Pei; Zheng, Maohui

    2013-01-01

    The spatiotemporal characterization of unknown sources of groundwater pollution is frequently encountered in environmental problems. This study adopts a simulation-optimization approach that combines a contaminant transport simulation model with a heuristic harmony search algorithm to identify unknown pollution sources. In the proposed methodology, an almost-parameter-free harmony search algorithm is developed. The performance of this methodology is evaluated on an illustrative groundwater pollution source identification problem; the results indicate that the proposed almost-parameter-free harmony search-based optimization model can give satisfactory estimates, even when irregular geometry, erroneous monitoring data, and limited prior information on potential source locations are considered.
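    The paper's almost-parameter-free variant is not detailed in the abstract; as context, a hedged sketch of plain harmony search follows, in which the memory size (hms), harmony-memory considering rate (hmcr), and pitch-adjusting rate (par) are exactly the hand-tuned parameters such a variant aims to remove:

    ```python
    import random

    # Basic harmony search: improvise new candidate solutions from a memory
    # of good ones, replacing the worst harmony whenever a better one appears.

    def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=1):
        rnd = random.Random(seed)
        memory = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
        memory.sort(key=f)
        for _ in range(iters):
            new = []
            for j, (lo, hi) in enumerate(bounds):
                if rnd.random() < hmcr:                 # pick from memory...
                    v = memory[rnd.randrange(hms)][j]
                    if rnd.random() < par:              # ...with pitch adjustment
                        v += rnd.uniform(-1, 1) * 0.01 * (hi - lo)
                else:                                   # or improvise randomly
                    v = rnd.uniform(lo, hi)
                new.append(min(max(v, lo), hi))
            if f(new) < f(memory[-1]):                  # replace worst harmony
                memory[-1] = new
                memory.sort(key=f)
        return memory[0]

    # Minimize a simple sphere function on [-5, 5]^3.
    best = harmony_search(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
    print(best)  # coordinates near zero
    ```

    In a source identification setting, f would instead run the contaminant transport simulation and return the misfit between simulated and observed concentrations.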

  4. Analysis of temporal decay of diffuse broadband sound fields in enclosures by decomposition in powers of an absorption parameter

    NASA Astrophysics Data System (ADS)

    Bliss, Donald; Franzoni, Linda; Rouse, Jerry; Manning, Ben

    2005-09-01

    An analysis method for time-dependent broadband diffuse sound fields in enclosures is described. Beginning with a formulation utilizing time-dependent broadband intensity boundary sources, the strength of these wall sources is expanded in a series in powers of an absorption parameter, thereby giving a separate boundary integral problem for each power. The temporal behavior is characterized by a Taylor expansion in the delay time for a source to influence an evaluation point. The lowest-order problem has a uniform interior field proportional to the reciprocal of the absorption parameter, as expected, and exhibits relatively slow exponential decay. The next-order problem gives a mean-square pressure distribution that is independent of the absorption parameter and is primarily responsible for the spatial variation of the reverberant field. This problem, which is driven by input sources and the lowest-order reverberant field, depends on source location and the spatial distribution of absorption. Additional problems proceed at integer powers of the absorption parameter, but are essentially higher-order corrections to the spatial variation. Temporal behavior is expressed in terms of an eigenvalue problem, with boundary source strength distributions expressed as eigenmodes. Solutions exhibit rapid short-time spatial redistribution followed by long-time decay of a predominant spatial mode.

  5. Source characterization and modeling development for monoenergetic-proton radiography experiments on OMEGA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manuel, M. J.-E.; Zylstra, A. B.; Rinderknecht, H. G.

    2012-06-15

    A monoenergetic proton source has been characterized and a modeling tool developed for proton radiography experiments at the OMEGA [T. R. Boehly et al., Opt. Comm. 133, 495 (1997)] laser facility. Multiple diagnostics were fielded to measure global isotropy levels in proton fluence and images of the proton source itself provided information on local uniformity relevant to proton radiography experiments. Global fluence uniformity was assessed by multiple yield diagnostics and deviations were calculated to be ~16% and ~26% of the mean for DD and D³He fusion protons, respectively. From individual fluence images, it was found that angular frequencies ≳50 rad⁻¹ contributed less than a few percent to local nonuniformity levels. A model was constructed using the Geant4 [S. Agostinelli et al., Nuc. Inst. Meth. A 506, 250 (2003)] framework to simulate proton radiography experiments. The simulation implements realistic source parameters and various target geometries. The model was benchmarked with the radiographs of cold-matter targets to within experimental accuracy. To validate the use of this code, the cold-matter approximation for the scattering of fusion protons in plasma is discussed using a typical laser-foil experiment as an example case. It is shown that an analytic cold-matter approximation is accurate to within ≲10% of the analytic plasma model in the example scenario.

  6. Analysis and Sizing for Transient Thermal Heating of Insulated Aerospace Vehicle Structures

    NASA Technical Reports Server (NTRS)

    Blosser, Max L.

    2012-01-01

    An analytical solution was derived for the transient response of an insulated structure subjected to a simplified heat pulse. The solution is solely a function of two nondimensional parameters. Simpler functions of these two parameters were developed to approximate the maximum structural temperature over a wide range of parameter values. Techniques were developed to choose constant, effective thermal properties to represent the relevant temperature and pressure-dependent properties for the insulator and structure. A technique was also developed to map a time-varying surface temperature history to an equivalent square heat pulse. Equations were also developed for the minimum mass required to maintain the inner, unheated surface below a specified temperature. In the course of the derivation, two figures of merit were identified. Required insulation masses calculated using the approximate equation were shown to typically agree with finite element results within 10%-20% over the relevant range of parameters studied.

  7. An Analytical Solution for Transient Thermal Response of an Insulated Structure

    NASA Technical Reports Server (NTRS)

    Blosser, Max L.

    2012-01-01

    An analytical solution was derived for the transient response of an insulated aerospace vehicle structure subjected to a simplified heat pulse. This simplified problem approximates the thermal response of a thermal protection system of an atmospheric entry vehicle. The exact analytical solution is solely a function of two non-dimensional parameters. A simpler function of these two parameters was developed to approximate the maximum structural temperature over a wide range of parameter values. Techniques were developed to choose constant, effective properties to represent the relevant temperature and pressure-dependent properties for the insulator and structure. A technique was also developed to map a time-varying surface temperature history to an equivalent square heat pulse. Using these techniques, the maximum structural temperature rise was calculated using the analytical solutions and shown to typically agree with finite element simulations within 10 to 20 percent over the relevant range of parameters studied.

  8. Quantitative microscopy of the lung: a problem-based approach. Part 2: stereological parameters and study designs in various diseases of the respiratory tract.

    PubMed

    Mühlfeld, Christian; Ochs, Matthias

    2013-08-01

    Design-based stereology provides efficient methods to obtain valuable quantitative information of the respiratory tract in various diseases. However, the choice of the most relevant parameters in a specific disease setting has to be deduced from the present pathobiological knowledge. Often it is difficult to express the pathological alterations by interpretable parameters in terms of volume, surface area, length, or number. In the second part of this companion review article, we analyze the present pathophysiological knowledge about acute lung injury, diffuse parenchymal lung diseases, emphysema, pulmonary hypertension, and asthma to come up with recommendations for the disease-specific application of stereological principles for obtaining relevant parameters. Worked examples with illustrative images are used to demonstrate the work flow, estimation procedure, and calculation and to facilitate the practical performance of equivalent analyses.

  9. Photutils: Photometry tools

    NASA Astrophysics Data System (ADS)

    Bradley, Larry; Sipocz, Brigitta; Robitaille, Thomas; Tollerud, Erik; Deil, Christoph; Vinícius, Zè; Barbary, Kyle; Günther, Hans Moritz; Bostroem, Azalee; Droettboom, Michael; Bray, Erik; Bratholm, Lars Andersen; Pickering, T. E.; Craig, Matt; Pascual, Sergio; Greco, Johnny; Donath, Axel; Kerzendorf, Wolfgang; Littlefair, Stuart; Barentsen, Geert; D'Eugenio, Francesco; Weaver, Benjamin Alan

    2016-09-01

    Photutils provides tools for detecting and performing photometry of astronomical sources. It can estimate the background and background rms in astronomical images, detect sources in astronomical images, estimate morphological parameters of those sources (e.g., centroid and shape parameters), and perform aperture and PSF photometry. Written in Python, it is an affiliated package of Astropy (ascl:1304.002).
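    Photutils' own API is not exercised here; the following numpy-only sketch illustrates the basic aperture-photometry operation the package automates (the image, source position, and background level are toy assumptions):

    ```python
    import numpy as np

    # Aperture photometry in its simplest form: sum the pixels inside a
    # circular aperture around a source, then subtract a background estimate.

    def aperture_sum(image, x0, y0, r):
        yy, xx = np.indices(image.shape)
        mask = (xx - x0) ** 2 + (yy - y0) ** 2 <= r ** 2
        return image[mask].sum()

    img = np.full((50, 50), 2.0)      # flat background of 2 counts/pixel
    img[24, 24] += 100.0              # a point source of 100 counts

    r = 5
    yy, xx = np.indices(img.shape)
    npix = ((xx - 24) ** 2 + (yy - 24) ** 2 <= r ** 2).sum()  # aperture area
    flux = aperture_sum(img, 24, 24, r) - 2.0 * npix          # background-subtracted
    print(flux)  # 100.0
    ```

    Photutils layers source detection, fractional-pixel apertures, background estimation, and PSF fitting on top of this basic idea.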

  10. Application of the Approximate Bayesian Computation methods in the stochastic estimation of atmospheric contamination parameters for mobile sources

    NASA Astrophysics Data System (ADS)

    Kopka, Piotr; Wawrzynczak, Anna; Borysiewicz, Mieczyslaw

    2016-11-01

    In this paper the Bayesian methodology known as Approximate Bayesian Computation (ABC) is applied to the problem of atmospheric contamination source identification. The algorithm input data are on-line arriving concentrations of the released substance registered by the distributed sensor network. This paper presents the Sequential ABC algorithm in detail and tests its efficiency in estimating probability distributions of the atmospheric release parameters of a mobile contamination source. The developed algorithms are tested using the data from the Over-Land Atmospheric Diffusion (OLAD) field tracer experiment. The paper demonstrates estimation of seven parameters characterizing the contamination source, i.e.: contamination source starting position (x,y), the direction of the motion of the source (d), its velocity (v), release rate (q), start time of release (ts) and its duration (td). Newly arriving concentrations dynamically update the probability distributions of the search parameters. The atmospheric dispersion Second-order Closure Integrated PUFF (SCIPUFF) Model is used as the forward model to predict the concentrations at the sensors locations.
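    The Sequential ABC algorithm itself is more elaborate; the core idea can be shown with a hedged rejection-ABC toy in which the forward model, prior, and tolerance are illustrative assumptions rather than the paper's SCIPUFF setup:

    ```python
    import random

    # Rejection ABC: draw candidate source parameters from a prior, simulate
    # sensor concentrations with a forward model, and keep candidates whose
    # simulated data lie within a tolerance of the observations. The toy
    # forward model (concentration = q / (1 + distance)) is an assumption.

    random.seed(0)
    sensors = [1.0, 2.0, 4.0]                      # sensor distances (toy)

    def forward(q):                                # toy dispersion model
        return [q / (1.0 + d) for d in sensors]

    q_true = 5.0
    obs = forward(q_true)                          # "observed" concentrations

    accepted = []
    for _ in range(20000):
        q = random.uniform(0.0, 10.0)              # flat prior on release rate
        sim = forward(q)
        dist = sum((s - o) ** 2 for s, o in zip(sim, obs)) ** 0.5
        if dist < 0.1:                             # tolerance epsilon
            accepted.append(q)

    est = sum(accepted) / len(accepted)            # posterior mean estimate
    print(round(est, 2))  # close to 5.0
    ```

    The sequential variant replaces the single prior draw with a sequence of intermediate distributions and shrinking tolerances, which is what lets the on-line concentration data refine the parameter distributions over time.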

  11. QuakeUp: An advanced tool for a network-based Earthquake Early Warning system

    NASA Astrophysics Data System (ADS)

    Zollo, Aldo; Colombelli, Simona; Caruso, Alessandro; Elia, Luca; Brondi, Piero; Emolo, Antonio; Festa, Gaetano; Martino, Claudio; Picozzi, Matteo

    2017-04-01

    Currently developed and operational regional Earthquake Early Warning systems rest on the assumption of a point-like earthquake source model and on 1-D ground-motion prediction equations to estimate the earthquake impact. Here we propose a new network-based method which allows an alert to be issued based upon the real-time mapping of the Potential Damage Zone (PDZ), i.e. the epicentral area where the peak ground velocity is expected to exceed the damaging or strong shaking levels, with no assumption about the earthquake rupture extent and spatial variability of ground motion. The platform includes the most advanced techniques for a refined estimation of the main source parameters (earthquake location and magnitude) and for an accurate prediction of the expected ground shaking level. The new software platform (QuakeUp) is under development at the Seismological Laboratory (RISSC-Lab) of the Department of Physics at the University of Naples Federico II, in collaboration with the academic spin-off company RISS s.r.l., recently gemmated by the research group. The system processes the 3-component, real-time ground acceleration and velocity data streams at each station. The signal quality is preliminarily assessed by checking the signal-to-noise ratio in acceleration, velocity and displacement and through dedicated filtering algorithms. For stations providing high-quality data, the characteristic P-wave period (τ_c) and the P-wave displacement, velocity and acceleration amplitudes (P_d, P_v and P_a) are jointly measured on a progressively expanded P-wave time window. The evolutionary measurements of the early P-wave amplitude and characteristic period at stations around the source allow prediction of the geometry and extent of the PDZ, as well as of the lower shaking intensity regions at larger epicentral distances. 
This is done by correlating the measured P-wave amplitude with the Peak Ground Velocity (PGV) and Instrumental Intensity (I_MM) and by mapping the measured and predicted P-wave amplitude on a dense spatial grid, including the nodes of the accelerometer/velocimeter array deployed in the earthquake source area. Within times of the order of ten seconds from the earthquake origin, information about the area where moderate to strong ground shaking is expected to occur can be sent to inner and outer sites, allowing the activation of emergency measures to protect people, secure industrial facilities and optimize site resilience after the disaster. Depending on the network density and spatial source coverage, this method naturally accounts for effects related to the earthquake rupture extent (e.g. source directivity) and spatial variability of strong ground motion related to crustal wave propagation and site amplification. In QuakeUp, the P-wave parameters are continuously measured, using progressively expanded P-wave time windows, providing evolutionary and reliable estimates of the ground shaking distribution, especially in the case of very large events. Furthermore, to minimize S-wave contamination of the P-wave signal portion, an efficient algorithm for the automatic detection of the S-wave arrival time, based on real-time polarization analysis of the three-component seismogram, has been included. The final output of QuakeUp will be an automatic alert message that is transmitted to sites to be secured during the earthquake emergency. The message contains all relevant information about the expected potential damage at the site and the time available for security actions (lead-time) after the warning. A global view of the system performance during and after the event (in play-back mode) is obtained through an end-user visual display, where the most relevant pieces of information will be displayed and updated as soon as new data are available. 
The QuakeUp software platform is essentially aimed at improving the reliability and accuracy of parameter estimation, minimizing the uncertainties in the real-time estimates without losing the essential requirements of speed and robustness, which are needed to activate rapid emergency actions.
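    As one concrete example of the P-wave parameters mentioned above, the characteristic period can be computed from the early waveform. The sketch below uses the Kanamori-style definition τ_c = 2π/√(∫v²dt / ∫u²dt), with u the displacement and v the velocity over the early P-wave window; whether QuakeUp uses exactly this estimator is an assumption:

    ```python
    import math

    # Characteristic P-wave period tau_c, a standard real-time EEW proxy
    # for event size, computed from displacement u and velocity v samples.

    def tau_c(u, v, dt):
        num = sum(x * x for x in v) * dt   # integral of velocity squared
        den = sum(x * x for x in u) * dt   # integral of displacement squared
        return 2.0 * math.pi / math.sqrt(num / den)

    # Synthetic monochromatic record with period T = 2 s: tau_c recovers ~T.
    T, dt = 2.0, 0.001
    t = [i * dt for i in range(int(T / dt))]          # one full period
    u = [math.sin(2 * math.pi * x / T) for x in t]
    v = [(2 * math.pi / T) * math.cos(2 * math.pi * x / T) for x in t]
    print(round(tau_c(u, v, dt), 3))  # ~2.0
    ```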

  12. Source parameter inversion of compound earthquakes on GPU/CPU hybrid platform

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Ni, S.; Chen, W.

    2012-12-01

    Source parameter determination is an essential problem in seismology. Accurate and timely determination of earthquake parameters (such as moment, depth, and the strike, dip and rake of fault planes) is significant for both rupture dynamics and ground motion prediction or simulation. Rupture process studies, especially for moderate and large earthquakes, have become routine work for seismologists as detailed kinematic analyses are now standard practice. Among these events, however, some behave very unusually and intrigue seismologists. Such earthquakes usually consist of two similar-size sub-events occurring within a very short time interval, such as the mb 4.5 earthquake of December 9, 2003, in Virginia. Studying these special events, including determining the source parameters of each sub-event, will be helpful for understanding earthquake dynamics. However, the seismic signals of the two distinctive sources are mixed up, making the inversion difficult. For common events, the Cut-and-Paste (CAP) method has proven effective for resolving source parameters; it jointly uses body waves and surface waves with independent time shifts and weights, and resolves fault orientation and focal depth using a grid search algorithm. Based on this method, we developed an algorithm (MUL_CAP) to simultaneously acquire the parameters of two distinctive events. However, the simultaneous inversion of both sub-events makes the computation very time consuming, so we developed a hybrid GPU/CPU version of CAP (HYBRID_CAP) to improve computational efficiency. 
Thanks to the advantages of multi-dimensional storage and processing on the GPU, we obtain excellent performance of the revised code on the combined GPU-CPU architecture, with speedup factors as high as 40x-90x compared to classical CAP on a traditional CPU architecture. As a benchmark, we took synthetics as observations and inverted the source parameters of two given sub-events; the inversion results are very consistent with the true parameters. For the Virginia events of December 9, 2003, we re-inverted the source parameters, and detailed analysis of regional waveforms indicates that the Virginia earthquake included two sub-events of Mw 4.05 and Mw 4.25 at the same depth of 10 km, with a focal mechanism of strike 65/dip 32/rake 135, consistent with previous studies. Moreover, compared to the traditional two-source model method, MUL_CAP is more automatic, requiring no human intervention.
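    The CAP-style grid search with independent time shifts can be illustrated on a deliberately simplified one-parameter toy; the radiation pattern, wavelet, and shift search below are illustrative assumptions, not the MUL_CAP implementation:

    ```python
    import numpy as np

    # Grid-search idea behind CAP-style inversion: scan candidate source
    # parameters, allow an independent time shift per trace, and keep the
    # parameter set with the smallest waveform misfit.

    t = np.linspace(-2, 2, 401)
    wavelet = np.exp(-t**2 / 0.1)                 # toy source pulse

    def synthetic(strike_deg):                    # toy radiation pattern
        return np.cos(2 * np.radians(strike_deg)) * wavelet

    obs = np.roll(synthetic(65.0), 7)             # "observed": shifted trace

    def misfit(strike_deg, max_shift=10):
        syn = synthetic(strike_deg)
        # best misfit over the allowed time shifts (in samples)
        return min(np.sum((np.roll(syn, s) - obs) ** 2)
                   for s in range(-max_shift, max_shift + 1))

    strikes = np.arange(0.0, 90.0, 5.0)
    best = min(strikes, key=misfit)
    print(best)  # 65.0
    ```

    A real CAP implementation does this over strike/dip/rake/depth with Green's-function synthetics and many traces, which is what makes the GPU parallelization worthwhile.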

  13. Fast prediction and evaluation of eccentric inspirals using reduced-order models

    NASA Astrophysics Data System (ADS)

    Barta, Dániel; Vasúth, Mátyás

    2018-06-01

    A large number of theoretically predicted waveforms are required by matched-filtering searches for the gravitational-wave signals produced by compact binary coalescence. In order to substantially alleviate the computational burden in gravitational-wave searches and parameter estimation without degrading signal detectability, we propose a novel reduced-order-model (ROM) approach with applications to adiabatic 3PN-accurate inspiral waveforms of nonspinning sources that evolve on either highly or slightly eccentric orbits. We provide a singular-value-decomposition-based reduced-basis method in the frequency domain to generate reduced-order approximations of any gravitational waves with acceptable accuracy and precision within the parameter range of the model. We construct efficient reduced bases composed of a relatively small number of the most relevant waveforms over the three-dimensional parameter space covered by the template bank (total mass 2.15 M⊙≤M ≤215 M⊙, mass ratio 0.01 ≤q ≤1, and initial orbital eccentricity 0 ≤e0≤0.95). The ROM is designed to predict signals in the frequency band from 10 Hz to 2 kHz for aLIGO and aVirgo design sensitivity. Besides moderating the data reduction, finer sampling of the fiducial templates improves the accuracy of the surrogates. Considerable speedups, from several hundredfold to thousandfold, can be achieved by evaluating surrogates for low-mass systems, especially at high eccentricity.
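    The SVD-based reduced-basis construction can be sketched with toy one-parameter "waveforms" standing in for the 3PN eccentric templates; the training family, truncation threshold, and test signal below are illustrative assumptions:

    ```python
    import numpy as np

    # Reduced-basis idea: stack template waveforms as matrix columns,
    # truncate the SVD at a chosen energy fraction, and represent any
    # waveform in the parameter range by a few projection coefficients.

    t = np.linspace(0, 1, 256)
    # Training set: toy waveforms parametrized by frequency f in [10, 20].
    train = np.column_stack([np.sin(2 * np.pi * f * t)
                             for f in np.linspace(10, 20, 40)])

    U, s, Vt = np.linalg.svd(train, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(energy, 0.9999)) + 1   # basis size at 99.99% energy
    basis = U[:, :k]                               # reduced basis

    h = np.sin(2 * np.pi * 14.3 * t)               # a waveform not in the set
    h_rom = basis @ (basis.T @ h)                  # reduced-order approximation
    err = np.linalg.norm(h - h_rom) / np.linalg.norm(h)
    print(k, err)                                  # k << 40, small error
    ```

    The compression comes from evaluating only k projection coefficients per template instead of the full time or frequency series, which is where the quoted speedups originate.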

  14. A preliminary report of music-based training for adult cochlear implant users: Rationales and development.

    PubMed

    Gfeller, Kate; Guthe, Emily; Driscoll, Virginia; Brown, Carolyn J

    2015-09-01

    This paper provides a preliminary report of a music-based training program for adult cochlear implant (CI) recipients. Included in this report are descriptions of the rationale for music-based training, factors influencing program development, and the resulting program components. Prior studies describing experience-based plasticity in response to music training, auditory training for persons with hearing impairment, and music training for CI recipients were reviewed. These sources revealed rationales for using music to enhance speech, factors associated with successful auditory training, relevant aspects of electric hearing and music perception, and extant evidence regarding limitations and advantages associated with parameters for music training with CI users. This informed the development of a computer-based music training program designed specifically for adult CI users. Principles and parameters for perceptual training of music, such as stimulus choice, rehabilitation approach, and motivational concerns were developed in relation to the unique auditory characteristics of adults with electric hearing. An outline of the resulting program components and the outcome measures for evaluating program effectiveness are presented. Music training can enhance the perceptual accuracy of music, but is also hypothesized to enhance several features of speech with similar processing requirements as music (e.g., pitch and timbre). However, additional evaluation of specific training parameters and the impact of music-based training on speech perception of CI users is required.

  15. Toxicity of biosolids-derived triclosan and triclocarban to six crop species.

    PubMed

    Prosser, Ryan S; Lissemore, Linda; Solomon, Keith R; Sibley, Paul K

    2014-08-01

    Biosolids are an important source of nutrients and organic matter, which are necessary for the productive cultivation of crop plants. Biosolids have been found to contain the personal care products triclosan and triclocarban at high concentrations relative to other pharmaceuticals and personal care products. The present study investigates whether exposure of 6 plant species (radish, carrot, soybean, lettuce, spring wheat, and corn) to triclosan or triclocarban derived from biosolids has an adverse effect on seed emergence and/or plant growth parameters. Plants were grown in soil amended with biosolids at a realistic agronomic rate. Biosolids were spiked with triclosan or triclocarban to produce increasing environmentally relevant exposures. The concentration of triclosan and triclocarban in biosolids-amended soil declined by up to 97% and 57%, respectively, over the course of the experiments. Amendment with biosolids had a positive effect on the majority of growth parameters in radish, carrot, soybean, lettuce, and wheat plants. No consistent triclosan- or triclocarban-dependent trends in seed emergence and plant growth parameters were observed in 5 of 6 plant species. A significant negative trend in shoot mass was observed for lettuce plants exposed to increasing concentrations of triclocarban (p<0.001). If best management practices are followed for biosolids amendment, triclosan and triclocarban pose a negligible risk to seed emergence and growth of crop plants. © 2014 SETAC.

  16. Imprints of a light sterile neutrino at DUNE, T2HK, and T2HKK

    NASA Astrophysics Data System (ADS)

    Choubey, Sandhya; Dutta, Debajyoti; Pramanik, Dipyaman

    2017-09-01

    We evaluate the impact of sterile neutrino oscillations in the so-called 3+1 scenario on the proposed long-baseline experiments in the USA and Japan. There are two proposals for the Japan experiment, called T2HK and T2HKK. We show the impact of sterile neutrino oscillation parameters on the expected sensitivity of T2HK and T2HKK to mass hierarchy, CP violation and the octant of θ23, and compare it against that expected in the case of standard oscillations. We add the expected ten years of data from DUNE and present the combined expected sensitivity of T2HKK+DUNE to the oscillation parameters. We perform a full marginalization over the relevant parameter space and show the effect of the magnitude of the true sterile mixing angles on the physics reach of these experiments. We show that if one assumes that the source of CP violation is the standard CP phase alone in the test case, then the expected CP violation sensitivity appears to decrease due to sterile neutrinos. However, if we give up this assumption, then the CP sensitivity could go in either direction. The impact on the expected octant of θ23 and mass hierarchy sensitivity is shown to depend on the magnitude of the sterile mixing angles in a nontrivial way.

  17. A preliminary report of music-based training for adult cochlear implant users: rationales and development

    PubMed Central

    Gfeller, Kate; Guthe, Emily; Driscoll, Virginia; Brown, Carolyn J.

    2015-01-01

    Objective This paper provides a preliminary report of a music-based training program for adult cochlear implant (CI) recipients. Included in this report are descriptions of the rationale for music-based training, factors influencing program development, and the resulting program components. Methods Prior studies describing experience-based plasticity in response to music training, auditory training for persons with hearing impairment, and music training for cochlear implant recipients were reviewed. These sources revealed rationales for using music to enhance speech, factors associated with successful auditory training, relevant aspects of electric hearing and music perception, and extant evidence regarding limitations and advantages associated with parameters for music training with CI users. This information informed the development of a computer-based music training program designed specifically for adult CI users. Results Principles and parameters for perceptual training of music, such as stimulus choice, rehabilitation approach, and motivational concerns were developed in relation to the unique auditory characteristics of adults with electric hearing. An outline of the resulting program components and the outcome measures for evaluating program effectiveness are presented. Conclusions Music training can enhance the perceptual accuracy of music, but is also hypothesized to enhance several features of speech with similar processing requirements as music (e.g., pitch and timbre). However, additional evaluation of specific training parameters and the impact of music-based training on speech perception of CI users are required. PMID:26561884

  18. Completeness of reporting of patient-relevant clinical trial outcomes: comparison of unpublished clinical study reports with publicly available data.

    PubMed

    Wieseler, Beate; Wolfram, Natalia; McGauran, Natalie; Kerekes, Michaela F; Vervölgyi, Volker; Kohlepp, Petra; Kamphuis, Marloes; Grouven, Ulrich

    2013-10-01

    Access to unpublished clinical study reports (CSRs) is currently being discussed as a means to allow unbiased evaluation of clinical research. The Institute for Quality and Efficiency in Health Care (IQWiG) routinely requests CSRs from manufacturers for its drug assessments. Our objective was to determine the information gain from CSRs compared to publicly available sources (journal publications and registry reports) for patient-relevant outcomes included in IQWiG health technology assessments (HTAs) of drugs. We used a sample of 101 trials with full CSRs received for 16 HTAs of drugs completed by IQWiG between 15 January 2006 and 14 February 2011, and analyzed the CSRs and the publicly available sources of these trials. For each document type we assessed the completeness of information on all patient-relevant outcomes included in the HTAs (benefit outcomes, e.g., mortality, symptoms, and health-related quality of life; harm outcomes, e.g., adverse events). We dichotomized the outcomes as "completely reported" or "incompletely reported." For each document type, we calculated the proportion of outcomes with complete information per outcome category and overall. We analyzed 101 trials with CSRs; 86 had at least one publicly available source, 65 at least one journal publication, and 50 a registry report. The trials included 1,080 patient-relevant outcomes. The CSRs provided complete information on a considerably higher proportion of outcomes (86%) than the combined publicly available sources (39%). With the exception of health-related quality of life (57%), CSRs provided complete information on 78% to 100% of the various benefit outcomes (combined publicly available sources: 20% to 53%). CSRs also provided considerably more information on harms. The differences in completeness of information for patient-relevant outcomes between CSRs and journal publications or registry reports (or a combination of both) were statistically significant for all types of outcomes. 
The main limitation of our study is that our sample is not representative because only CSRs provided voluntarily by pharmaceutical companies upon request could be assessed. In addition, the sample covered only a limited number of therapeutic areas and was restricted to randomized controlled trials investigating drugs. In contrast to CSRs, publicly available sources provide insufficient information on patient-relevant outcomes of clinical trials. CSRs should therefore be made publicly available.

  19. Enabling multi-level relevance feedback on PubMed by integrating rank learning into DBMS.

    PubMed

    Yu, Hwanjo; Kim, Taehoon; Oh, Jinoh; Ko, Ilhwan; Kim, Sungchul; Han, Wook-Shin

    2010-04-16

    Finding relevant articles from PubMed is challenging because it is hard to express the user's specific intention in the given query interface, and a keyword query typically retrieves a large number of results. Researchers have applied machine learning techniques to find relevant articles by ranking them according to a learned relevance function. However, the learning and ranking process is usually done offline, without being integrated with the keyword queries, and users have to provide a large number of training documents to attain reasonable learning accuracy. This paper proposes a novel multi-level relevance feedback system for PubMed, called RefMed, which supports both ad-hoc keyword queries and multi-level relevance feedback in real time on PubMed. RefMed supports multi-level relevance feedback by using RankSVM as the learning method, and thus achieves higher accuracy with less feedback. RefMed "tightly" integrates RankSVM into the RDBMS to support both keyword queries and multi-level relevance feedback in real time; the tight coupling of RankSVM and the DBMS substantially improves the processing time. An efficient parameter selection method for RankSVM is also proposed, which tunes the RankSVM parameter without performing validation. Thereby, RefMed achieves high learning accuracy in real time without a validation process. RefMed is accessible at http://dm.postech.ac.kr/refmed. RefMed is the first multi-level relevance feedback system for PubMed that achieves high accuracy with less feedback. It effectively learns an accurate relevance function from the user's feedback and efficiently evaluates the function to return relevant articles in real time.
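
The pairwise idea behind RankSVM can be sketched as follows (an illustrative NumPy re-implementation, not RefMed's in-DBMS code): graded feedback levels are turned into preference pairs, and a linear scoring function w is fit so that w·x_i exceeds w·x_j by a margin whenever document i received a higher feedback level than document j.

```python
import random

import numpy as np

# Sketch of the pairwise transform behind RankSVM (illustrative only):
# build (preferred, dispreferred) pairs from multi-level feedback and fit a
# linear scorer by subgradient descent on the pairwise hinge loss.

def rank_pairs(levels):
    return [(i, j) for i in range(len(levels)) for j in range(len(levels))
            if levels[i] > levels[j]]

def fit_ranksvm(X, levels, C=1.0, lr=0.01, epochs=200):
    w = np.zeros(X.shape[1])
    pairs = rank_pairs(levels)
    random.seed(0)
    for _ in range(epochs):
        random.shuffle(pairs)
        for i, j in pairs:
            d = X[i] - X[j]
            if w @ d < 1.0:          # margin violated: hinge subgradient step
                w += lr * C * d
        w *= 1.0 - lr                # shrinkage, standing in for the regularizer
    return w

# toy document features: larger first coordinate means "more relevant"
X = np.array([[3.0, 0.1], [2.0, 0.5], [1.0, 0.9], [0.0, 0.2]])
levels = [3, 2, 1, 0]               # multi-level relevance feedback
w = fit_ranksvm(X, levels)
scores = X @ w                      # ranking scores for the documents
```

Because only pairwise orderings are used, graded (multi-level) feedback yields many training pairs per judged document, which is why fewer feedback clicks suffice compared with binary feedback.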

  20. Enabling multi-level relevance feedback on PubMed by integrating rank learning into DBMS

    PubMed Central

    2010-01-01

    Background Finding relevant articles from PubMed is challenging because it is hard to express the user's specific intention in the given query interface, and a keyword query typically retrieves a large number of results. Researchers have applied machine learning techniques to find relevant articles by ranking them according to a learned relevance function. However, the learning and ranking process is usually done offline, without being integrated with the keyword queries, and users have to provide a large number of training documents to attain reasonable learning accuracy. This paper proposes a novel multi-level relevance feedback system for PubMed, called RefMed, which supports both ad-hoc keyword queries and multi-level relevance feedback in real time on PubMed. Results RefMed supports multi-level relevance feedback by using RankSVM as the learning method, and thus achieves higher accuracy with less feedback. RefMed "tightly" integrates RankSVM into the RDBMS to support both keyword queries and multi-level relevance feedback in real time; the tight coupling of RankSVM and the DBMS substantially improves the processing time. An efficient parameter selection method for RankSVM is also proposed, which tunes the RankSVM parameter without performing validation. Thereby, RefMed achieves high learning accuracy in real time without a validation process. RefMed is accessible at http://dm.postech.ac.kr/refmed. Conclusions RefMed is the first multi-level relevance feedback system for PubMed that achieves high accuracy with less feedback. It effectively learns an accurate relevance function from the user's feedback and efficiently evaluates the function to return relevant articles in real time. PMID:20406504

  1. Research on Matching Method of Power Supply Parameters for Dual Energy Source Electric Vehicles

    NASA Astrophysics Data System (ADS)

    Jiang, Q.; Luo, M. J.; Zhang, S. K.; Liao, M. W.

    2018-03-01

    A new type of power source is proposed, based on a traffic-signal matching method for a dual-energy-source power supply composed of batteries and supercapacitors. First, the power characteristics are analyzed to meet the dynamic performance requirements of the EV, the energy characteristics are studied to meet the mileage requirements, and the physical boundary characteristics are examined to meet the physical constraints of the power supply. Secondly, a parameter matching design that maximizes energy efficiency is adopted to select the optimal parameter group using the method of matching deviation. Finally, a vehicle simulation is carried out in MATLAB/Simulink; the mileage and energy efficiency of the dual energy sources are analyzed for different parameter models, and the rationality of the matching method is verified.
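
The selection step can be sketched as below; the candidate parameter groups, target values, and efficiency threshold are hypothetical, standing in for the paper's battery/supercapacitor matching criterion.

```python
# Sketch of picking a parameter group by matching deviation (hypothetical
# candidates and targets; the paper's criterion is maximum energy efficiency
# combined with a matching-deviation check).

# candidate parameter groups: (battery_kwh, supercap_wh, est_efficiency)
candidates = [
    (20.0, 150.0, 0.88),
    (24.0, 120.0, 0.91),
    (28.0, 100.0, 0.90),
]

TARGET = (24.0, 125.0)  # hypothetical matched energy requirements

def matching_deviation(group):
    b, sc, _ = group
    # summed relative deviation of both sources from the matched target
    return abs(b - TARGET[0]) / TARGET[0] + abs(sc - TARGET[1]) / TARGET[1]

# among sufficiently efficient candidates, pick the smallest deviation
best = min((g for g in candidates if g[2] >= 0.90), key=matching_deviation)
```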

  2. 50 CFR 648.141 - Closure.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Bass Monitoring Committee shall identify and review the relevant sources of management uncertainty to... sources of management uncertainty that were considered, technical approaches to mitigating these sources..., DEPARTMENT OF COMMERCE FISHERIES OF THE NORTHEASTERN UNITED STATES Management Measures for the Black Sea Bass...

  3. Influence of source batch S{sub K} dispersion on dosimetry for prostate cancer treatment with permanent implants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuñez-Cumplido, E., E-mail: ejnc-mccg@hotmail.com; Hernandez-Armas, J.; Perez-Calatayud, J.

    2015-08-15

    Purpose: In clinical practice, a specific air kerma strength (S_K) value is used in treatment planning system (TPS) calculations for permanent brachytherapy implants with 125I and 103Pd sources; in fact, commercial TPSs provide only one S_K input value for all implanted sources, and the certified shipment average is typically used. However, the S_K value is dispersed: this dispersion is due not only to the manufacturing process and variation between different source batches but also to the classification of sources into different classes according to their S_K values. The purpose of this work is to examine the impact of S_K dispersion on typical implant parameters used to evaluate the dose volume histogram (DVH) for both the planning target volume (PTV) and organs at risk (OARs). Methods: The authors have developed a new algorithm to compute dose distributions with different S_K values for each source. Three different prostate volumes (20, 30, and 40 cm3) were considered and two typical commercial sources of different radionuclides were used. Using a conventional TPS, clinically accepted calculations were made for 125I sources; for the palladium source, typical implants were simulated. To assess the many different possible S_K values for each source belonging to a class, the authors assigned an S_K value to each source in a randomized process, 1000 times for each source and volume. All the dose distributions generated for each set of simulations were assessed through the DVH distributions, compared with dose distributions obtained using a uniform S_K value for all the implanted sources. The authors analyzed several dose coverage (V100 and D90) and overdosage parameters for the prostate and PTV, as well as the limiting and overdosage parameters for the OARs, urethra and rectum. Results: The parameters analyzed followed a Gaussian distribution over the entire set of computed dosimetries.
PTV and prostate V100 and D90 variations ranged between 0.2% and 1.78% for both sources. Variations in the overdosage parameters V150 and V200, compared to the dose coverage parameters, were observed and, in general, variations were larger for parameters related to 125I sources than to 103Pd sources. For OAR dosimetry, variations with respect to the reference D0.1cm3 were observed for rectum values, ranging from 2% to 3%, compared with urethra values, which ranged from 1% to 2%. Conclusions: Dose coverage for the prostate and PTV was practically unaffected by S_K dispersion, as was the maximum dose deposited in the urethra, owing to the implant technique geometry. However, the authors observed larger variations in the PTV V150, rectum V100, and rectum D0.1cm3 values. The variations in rectum parameters were caused by the specific location of sources whose S_K values differed from the average in the vicinity. Finally, on comparing the two sources, variations were larger for 125I than for 103Pd. This is because for 103Pd a greater number of sources were used to obtain a valid dose distribution than for 125I, resulting in a lower variation for each source's S_K value (the variations average out statistically).
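
Why per-seed dispersion averages out with seed count can be illustrated with a toy model (not a TPS calculation): if a dose metric is driven by the mean of the per-seed S_K values, its relative spread scales like σ/√N, so implants using more seeds, as with 103Pd here, show smaller variations.

```python
import numpy as np

# Toy illustration of the averaging argument in the conclusions: repeat the
# randomized per-seed S_K assignment many times and look at the relative
# spread of a dose proxy driven by the mean seed strength. Seed counts and
# the 3% batch dispersion below are hypothetical.

def relative_spread(n_seeds, rel_sigma=0.03, n_trials=1000, seed=1):
    rng = np.random.default_rng(seed)
    # each trial: draw per-seed S_K values; dose proxy = their mean
    doses = rng.normal(1.0, rel_sigma, size=(n_trials, n_seeds)).mean(axis=1)
    return doses.std() / doses.mean()

spread_i125 = relative_spread(n_seeds=60)    # fewer seeds per implant
spread_pd103 = relative_spread(n_seeds=120)  # more seeds: stronger averaging
```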

  4. Volcanic eruption source parameters from active and passive microwave sensors

    NASA Astrophysics Data System (ADS)

    Montopoli, Mario; Marzano, Frank S.; Cimini, Domenico; Mereu, Luigi

    2016-04-01

    It is well known in the volcanology community that precise information on the source parameters characterising an eruption is of predominant interest for the initialization of Volcanic Transport and Dispersion Models (VTDM). The source parameters of main interest are the top altitude of the volcanic plume; the mass flux at the emission source, which is strictly related to the cloud top altitude; the distribution of volcanic mass concentration along the vertical column; and the duration of the eruption and the erupted volume. Usually, the combination of a-posteriori field and numerical studies allows the eruption source parameters for a given volcanic event to be constrained, thus making possible the forecast of ash dispersion and deposition from future volcanic eruptions. So far, remote sensors working at visible and infrared channels (cameras and radiometers) have mainly been used to detect, track and estimate the concentration and prevailing particle size within ash clouds up to several thousand kilometres from the source, as well as to check, a posteriori, the accuracy of the VTDM outputs, thus testing the initial choice made for the source parameters. Acoustic waves (infrasound) and fixed-scan microwave radar (Voldorad) have also been used to infer source parameters. In this work we focus on the role of sensors operating at microwave wavelengths as complementary tools for the real-time estimation of source parameters. Microwaves benefit from day-and-night operability and a relatively negligible sensitivity to the presence of (non-precipitating) weather clouds, at the cost of limited coverage and coarser spatial resolution compared with infrared sensors.
Thanks to the aforementioned advantages, the products from microwave sensors are expected to be sensitive to the whole path traversed through the tephra cloud, making microwaves particularly appealing for estimates close to the volcanic emission source. Near the source, the cloud optical thickness is expected to be large enough to saturate infrared sensor receivers, thus defeating brightness temperature difference methods for ash cloud identification. In this light, case studies at Eyjafjallajökull (Iceland), Etna (Italy) and Calbuco (Chile), on 5-10 May 2010, 23 November 2013 and 23 April 2015, respectively, are analysed in terms of source parameter estimates (mainly the cloud top and mass flux rate) from a ground-based microwave weather radar (9.6 GHz) and satellite low-Earth-orbit microwave radiometers (50-183 GHz). A special highlight is given to the advantages and limitations of microwave-related products with respect to more conventional tools.

  5. LIBP-Pred: web server for lipid binding proteins using structural network parameters; PDB mining of human cancer biomarkers and drug targets in parasites and bacteria.

    PubMed

    González-Díaz, Humberto; Munteanu, Cristian R; Postelnicu, Lucian; Prado-Prado, Francisco; Gestal, Marcos; Pazos, Alejandro

    2012-03-01

    Lipid-Binding Proteins (LIBPs) or Fatty Acid-Binding Proteins (FABPs) play an important role in many diseases such as different types of cancer, kidney injury, atherosclerosis, diabetes, intestinal ischemia and parasitic infections. Thus, computational methods that can predict LIBPs based on 3D structure parameters have become a goal of major importance for drug-target discovery, vaccine design and biomarker selection. In addition, the Protein Data Bank (PDB) contains 3000+ protein 3D structures with unknown function. This list, as well as new experimental outcomes in proteomics research, is a very interesting source for discovering relevant proteins, including LIBPs. However, to the best of our knowledge, there are no general models to predict new LIBPs based on 3D structures. We developed new Quantitative Structure-Activity Relationship (QSAR) models based on 3D electrostatic parameters of 1801 different proteins, including 801 LIBPs. We calculated these electrostatic parameters with the MARCH-INSIDE software; they correspond to the entire protein or to specific protein regions named core, inner, middle, and surface. We used these parameters as inputs to develop a simple Linear Discriminant Analysis (LDA) classifier to discriminate the 3D structures of LIBPs from those of other proteins. We implemented this predictor in the web server named LIBP-Pred, freely available at , along with other important web servers of the Bio-AIMS portal. Users can carry out an automatic retrieval of protein structures from the PDB or upload custom protein structural models created with the LOMETS server. We demonstrated the PDB mining option by performing a predictive study of 2000+ proteins with unknown function. Interesting results regarding the discovery of new cancer biomarkers in humans and drug targets in parasites are also discussed.
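
The classification step can be sketched with a minimal two-class Fisher LDA on synthetic features (illustrative only; LIBP-Pred's actual inputs are MARCH-INSIDE electrostatic parameters, mimicked here by made-up feature vectors).

```python
import numpy as np

# Minimal two-class Fisher LDA in the spirit of the paper's classifier:
# project onto w = Sw^-1 (mu1 - mu0) and threshold at the midpoint.
# The synthetic features below stand in for electrostatic parameters.

rng = np.random.default_rng(0)
X_libp = rng.normal([1.0, 0.5, -0.2], 0.3, size=(200, 3))   # "LIBP" class
X_other = rng.normal([0.2, -0.1, 0.3], 0.3, size=(200, 3))  # other proteins

def fit_lda(X0, X1):
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0.T) + np.cov(X1.T)        # pooled within-class scatter
    w = np.linalg.solve(Sw, mu1 - mu0)      # discriminant direction
    b = -w @ (mu0 + mu1) / 2.0              # midpoint threshold
    return w, b

w, b = fit_lda(X_other, X_libp)
pred_libp = (X_libp @ w + b) > 0            # True -> classified as LIBP
pred_other = (X_other @ w + b) > 0
accuracy = (pred_libp.mean() + (1 - pred_other.mean())) / 2
```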

  6. Constraining parameters of white-dwarf binaries using gravitational-wave and electromagnetic observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Sweta; Nelemans, Gijs, E-mail: s.shah@astro.ru.nl

    The space-based gravitational wave (GW) detector, the evolved Laser Interferometer Space Antenna (eLISA), is expected to observe millions of compact Galactic binaries that populate our Milky Way. GW measurements obtained from the eLISA detector are in many cases complementary to possible electromagnetic (EM) data. In our previous papers, we have shown that EM data can significantly enhance our knowledge of the astrophysically relevant GW parameters of Galactic binaries, such as the amplitude and inclination. This is possible due to the presence of strong correlations between GW parameters that are measurable by both EM and GW observations, for example, the inclination and sky position. In this paper, we quantify the constraints on the physical parameters of white-dwarf binaries, i.e., the individual masses, chirp mass, and the distance to the source, that can be obtained by combining the full set of EM measurements, such as the inclination, radial velocities, distances, and/or individual masses, with the GW measurements. We find the following 2σ fractional uncertainties in the parameters of interest. EM observations of distance constrain the chirp mass to ∼15%-25%, whereas EM data on a single-lined spectroscopic binary constrain the secondary mass and the distance to within a factor of two to ∼40%. Single-lined spectroscopic data complemented with distance constrain the secondary mass to ∼25%-30%. Finally, EM data on a double-lined spectroscopic binary constrain the distance to ∼30%. All of these constraints depend on the inclination and the signal strength of the binary systems. We also find that EM information on distance and/or radial velocity is the most useful in improving the estimate of the secondary mass, inclination, and/or distance.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Stacy; English, Shawn; Briggs, Timothy

    Fiber-reinforced composite materials offer light-weight solutions to many structural challenges. In the development of high-performance composite structures, a thorough understanding is required of the composite materials themselves as well as methods for the analysis and failure prediction of the relevant composite structures. However, the mechanical properties required for the complete constitutive definition of a composite material can be difficult to determine through experimentation. Therefore, efficient methods are necessary that can be used to determine which properties are relevant to the analysis of a specific structure and to establish a structure's response to a material parameter that can only be defined through estimation. The objectives of this paper deal with demonstrating the potential value of sensitivity and uncertainty quantification techniques during the failure analysis of loaded composite structures; and the proposed methods are applied to the simulation of the four-point flexural characterization of a carbon fiber composite material. Utilizing a recently implemented, phenomenological orthotropic material model that is capable of predicting progressive composite damage and failure, a sensitivity analysis is completed to establish which material parameters are truly relevant to a simulation's outcome. Then, a parameter study is completed to determine the effect of the relevant material properties' expected variations on the simulated four-point flexural behavior as well as to determine the value of an unknown material property. This process demonstrates the ability to formulate accurate predictions in the absence of a rigorous material characterization effort. Finally, the presented results indicate that a sensitivity analysis and parameter study can be used to streamline the material definition process as the described flexural characterization was used for model validation.
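
A one-at-a-time sensitivity pass of the kind described can be sketched as below; the stand-in flexural-response model and its coefficients are hypothetical, not the paper's composite damage model.

```python
# Sketch of a one-at-a-time (OAT) sensitivity analysis: perturb each input
# parameter of a stand-in flexural-response model and rank parameters by
# normalized effect. The model and coefficients are purely illustrative.

def flexural_load(params):
    E1, E2, G12, Xt = params           # hypothetical material property vector
    return 0.8 * E1 + 0.05 * E2 + 0.1 * G12 + 0.3 * Xt

def oat_sensitivity(model, nominal, rel_step=0.05):
    base = model(nominal)
    sens = []
    for k in range(len(nominal)):
        p = list(nominal)
        p[k] *= 1.0 + rel_step
        # normalized sensitivity: relative output change / relative input change
        sens.append(abs(model(p) - base) / abs(base) / rel_step)
    return sens

nominal = [140.0, 10.0, 5.0, 2.0]      # hypothetical nominal properties
s = oat_sensitivity(flexural_load, nominal)
```

Parameters with negligible normalized sensitivity can then be left at estimated values, which is the screening idea the paper exploits to avoid a full characterization campaign.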

  8. Flight parameter estimation using instantaneous frequency and direction of arrival measurements from a single acoustic sensor node.

    PubMed

    Lo, Kam W

    2017-03-01

    When an airborne sound source travels past a stationary ground-based acoustic sensor node in a straight line, at constant altitude and at a constant speed that is not much less than the speed of sound in air, the movement of the source during the propagation of the signal from the source to the sensor node (commonly referred to as the "retardation effect") enables the full set of flight parameters of the source to be estimated by measuring the direction of arrival (DOA) of the signal at the sensor node over a sufficiently long period of time. This paper studies the possibility of using instantaneous frequency (IF) measurements from the sensor node to improve the precision of the flight parameter estimates when the source spectrum contains a harmonic line of constant frequency. A simplified Cramer-Rao lower bound analysis shows that the standard deviations of the flight parameter estimates can be reduced when IF measurements are used together with DOA measurements. Two flight parameter estimation algorithms that utilize both IF and DOA measurements are described, and their performances are evaluated using both simulated and real data.
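
The retardation effect can be sketched numerically (illustrative geometry and values, not the paper's data): the signal received at time t was emitted at an earlier time te satisfying t = te + r(te)/c, and a constant-frequency harmonic is received Doppler-shifted by the source's radial velocity at the emission time.

```python
import math

# Sketch of the level-flight observation model: closest point of approach at
# t = 0, altitude h, speed v. Solve the retardation equation t = te + r(te)/c
# by fixed-point iteration, then Doppler-shift a constant tone f0.
# The flyover geometry (v = 200 m/s, h = 1000 m) is illustrative.

C_AIR = 343.0  # speed of sound in air, m/s

def emission_time(t, v, h, c=C_AIR, iters=30):
    te = t
    for _ in range(iters):                    # fixed-point solve t = te + r(te)/c
        r = math.hypot(v * te, h)
        te = t - r / c
    return te

def observed_if(t, f0, v, h, c=C_AIR):
    te = emission_time(t, v, h, c)
    r = math.hypot(v * te, h)
    vr = v * (v * te) / r                     # radial velocity at emission
    return f0 / (1.0 + vr / c)                # received instantaneous frequency

f_before = observed_if(-5.0, 100.0, v=200.0, h=1000.0)  # approaching: shifted up
f_after = observed_if(+5.0, 100.0, v=200.0, h=1000.0)   # receding: shifted down
```

The shape of this IF curve through the flyover carries the same flight parameters as the DOA track, which is why fusing the two measurement types tightens the Cramer-Rao bound.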

  9. Langlands Parameters of Quivers in the Sato Grassmannian

    NASA Astrophysics Data System (ADS)

    Luu, Martin T.; Penciak, Matej

    2018-01-01

    Motivated by quantum field theoretic partition functions that can be expressed as products of tau functions of the KP hierarchy we attach several types of local geometric Langlands parameters to quivers in the Sato Grassmannian. We study related questions of Virasoro constraints, of moduli spaces of relevant quivers, and of classical limits of the Langlands parameters.

  10. The Use of Logistics in the Quality Parameters Control System of Material Flow

    ERIC Educational Resources Information Center

    Karpova, Natalia P.; Toymentseva, Irina A.; Shvetsova, Elena V.; Chichkina, Vera D.; Chubarkova, Elena V.

    2016-01-01

    The relevance of the research problem stems from the need to justify the use of logistics methodologies in controlling the quality parameters of material flows. The goal of the article is to develop theoretical principles and practical recommendations for logistical control of the quality parameters of material flows. A leading…

  11. Thermoelectric DC conductivities in hyperscaling violating Lifshitz theories

    NASA Astrophysics Data System (ADS)

    Cremonini, Sera; Cvetič, Mirjam; Papadimitriou, Ioannis

    2018-04-01

    We analytically compute the thermoelectric conductivities at zero frequency (DC) in the holographic dual of a four dimensional Einstein-Maxwell-Axion-Dilaton theory that admits a class of asymptotically hyperscaling violating Lifshitz backgrounds with a dynamical exponent z and hyperscaling violating parameter θ. We show that the heat current in the dual Lifshitz theory involves the energy flux, which is an irrelevant operator for z > 1. The linearized fluctuations relevant for computing the thermoelectric conductivities turn on a source for this irrelevant operator, leading to several novel and non-trivial aspects in the holographic renormalization procedure and the identification of the physical observables in the dual theory. Moreover, imposing Dirichlet or Neumann boundary conditions on the spatial components of one of the two Maxwell fields present leads to different thermoelectric conductivities. Dirichlet boundary conditions reproduce the thermoelectric DC conductivities obtained from the near horizon analysis of Donos and Gauntlett, while Neumann boundary conditions result in a new set of DC conductivities. We make preliminary analytical estimates for the temperature behavior of the thermoelectric matrix in appropriate regions of parameter space. In particular, at large temperatures we find that the only case which could lead to a linear resistivity ρ ∼ T corresponds to z = 4/3.

  12. STS-13 (41-C) BET products

    NASA Technical Reports Server (NTRS)

    Findlay, J. T.; Kelly, G. M.; Mcconnell, J. G.; Heck, M. L.

    1984-01-01

    Results from the STS-13 (41-C) Shuttle entry flight are presented. The entry trajectory was reconstructed from an altitude of 700 kft through rollout on Runway 17 at EAFB. The anchor epoch utilized was April 13, 1984, 13h 01m 30.0s (46890.0 s) GMT. The final reconstructed inertial trajectory for this flight is BT13M23 under user catalog 169750N. Trajectory reconstruction and Extended BET development are discussed in Sections 1 and 2, respectively. The NOAA totem-pole atmosphere extracted from the JSC/TRW BET was adopted in the development of the LaRC Extended BET, namely ST13BET/UN=274885C. The Aerodynamic BET was generated on physical nine-track reel NC0728 with a duplicate copy on NC0740 for back-up. Plots of the more relevant parameters from the AEROBET are presented in Section 3. Section 4 discusses the MMLE input files created for STS-13. Appendices are attached which present the spacecraft and physical constants utilized (Appendix A), residuals by station and data type (Appendix B), a two-second-spaced listing of trajectory and air data parameters (Appendix C), and input and output source products for archival (Appendix D).

  13. Inclusion of TCAF model in XSPEC to study accretion flow dynamics around black hole candidates

    NASA Astrophysics Data System (ADS)

    Debnath, Dipak; Chakrabarti, Sandip Kumar; Mondal, Santanu

    The spectral and temporal properties of black hole candidates can be well understood with the Chakrabarti-Titarchuk solution of the two-component advective flow (TCAF). This model requires two accretion rates, namely the Keplerian disk accretion rate and the sub-Keplerian halo accretion rate, the latter being composed of a low-angular-momentum flow which may or may not develop a shock. In this solution, the relevant parameter is the relative importance of the halo rate (which creates the Compton cloud region) with respect to the Keplerian disk rate (the soft photon source). Though this model has been used earlier to manually fit data of several black hole candidates quite satisfactorily, for the first time we are able to create a user-friendly version by implementing an additive table model FITS file in GSFC/NASA's spectral analysis software package XSPEC. This enables any user to extract physical parameters of accretion flows, such as the two accretion rates, shock location, shock strength, etc., for any black hole candidate. Most importantly, unlike any other theoretical model, we show that TCAF is capable of predicting timing properties from spectral fits, since in TCAF a shock is responsible for deciding spectral slopes as well as QPO frequencies.

  14. Optical and positron annihilation spectroscopic studies on PMMA polymer doped by rhodamine B/chloranilic acid charge transfer complex: Special relevance to the effect of γ-ray irradiation.

    PubMed

    Hassan, H E; Refat, Moamen S; Sharshar, T

    2016-04-15

    Polymeric sheets of poly(methyl methacrylate) (PMMA) containing a charge transfer (CT) complex of rhodamine B/chloranilic acid (Rho B/CHA) were synthesized in methanol solvent at room temperature. The systematic analysis of Rho B and its CT complex, in powder or polymeric-sheet form, confirmed their structure and thermal stability. The IR spectra revealed the charge transfer mode of interaction between the CHA central positions and the terminal carboxylic group. The polymer sheets were irradiated with 70 kGy of γ radiation from a (60)Co source to study the induced changes in the structure and optical parameters. The microstructure changes of the PMMA sheets caused by γ-ray irradiation were analyzed using positron annihilation lifetime (PAL) and positron annihilation Doppler broadening (PADB) techniques. The positron lifetime components (τ(i)) and their corresponding intensities (I(i)), as well as the PADB line-shape parameters (S and W), were found to be highly sensitive to the enhanced disorder induced in the organic chains of the polymeric sheets by γ-irradiation.

  15. Image-based spectroscopy for environmental monitoring

    NASA Astrophysics Data System (ADS)

    Bachmakov, Eduard; Molina, Carolyn; Wynne, Rosalind

    2014-03-01

    An image-processing algorithm for use with a nano-featured spectrometer configuration for chemical agent detection is presented. The spectrometer chip, acquired from Nano-Optic Devices™, can reduce the size of the spectrometer to that of a coin. The nanospectrometer chip was aligned with a 635 nm laser source, objective lenses, and a CCD camera. Images from the nanospectrometer chip were collected and compared to reference spectra. Random background noise contributions were isolated and removed from the diffraction pattern image analysis via a threshold filter. Results are provided for the image-based detection of the diffraction pattern produced by the nanospectrometer. The featured PCF spectrometer has the potential to measure optical absorption spectra in order to detect trace amounts of contaminants. MATLAB tools allow for the implementation of intelligent, automatic detection of the relevant sub-patterns in the diffraction patterns and subsequent extraction of the parameters using region-detection algorithms such as the generalized Hough transform, which detects specific shapes within an image. This transform is a method for detecting curves by exploiting the duality between points on a curve and the parameters of that curve. By employing this image-processing technique, future sensor systems will benefit from new applications such as unsupervised environmental monitoring of air or water quality.
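
The point/parameter duality behind the Hough transform can be sketched for straight lines (a toy stand-in; the actual system applies a generalized transform to diffraction sub-patterns): each image point votes for all (θ, ρ) line parameters consistent with it, and collinear points pile their votes into one accumulator cell.

```python
import math

# Toy Hough line transform illustrating the voting idea: each point (x, y)
# votes for every line rho = x*cos(theta) + y*sin(theta) through it; a peak
# in the (theta, rho) accumulator reveals a line shared by many points.

def hough_lines(points, n_theta=180, rho_res=1.0, rho_max=50.0):
    n_rho = int(2 * rho_max / rho_res) + 1
    acc = [[0] * n_rho for _ in range(n_theta)]
    for x, y in points:
        for ti in range(n_theta):
            theta = math.pi * ti / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            ri = int(round((rho + rho_max) / rho_res))
            if 0 <= ri < n_rho:
                acc[ti][ri] += 1
    # return the strongest accumulator cell as (votes, theta, rho)
    votes, ti, ri = max((acc[t][r], t, r)
                        for t in range(n_theta) for r in range(n_rho))
    return votes, math.pi * ti / n_theta, ri * rho_res - rho_max

# 20 points on the horizontal line y = 10, plus one outlier
pts = [(x, 10) for x in range(20)] + [(3, 37)]
votes, theta, rho = hough_lines(pts)
```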

  16. Characterization of a dielectric barrier discharge in controlled atmosphere

    NASA Astrophysics Data System (ADS)

    Kogelheide, Friederike; Offerhaus, Björn; Bibinov, Nikita; Bracht, Vera; Smith, Ryan; Lackmann, Jan-Wilm; Awakowicz, Peter; Stapelmann, Katharina; Bimap Team; Aept Team

    2016-09-01

    Non-thermal atmospheric-pressure plasmas are advantageous for various biomedical applications as they make contactless and painless therapy possible. Due to the potential medical relevance of such plasma sources, further understanding of their chemical and physical impact on biological tissue, regarding both efficacy and health-promoting effects, is necessary. Knowledge of these properties and effects offers the possibility of configuring plasmas that are free of risk for humans. Therefore, tailoring the discharge chemistry with regard to the resulting oxidative and nitrosative effects on biological tissue by adjusting different parameters is of growing interest. In order to ensure stable conditions for the characterization of the discharge, the dielectric barrier discharge used was mounted in a vessel. Absolutely calibrated optical emission spectroscopy was carried out to analyze the electron density and the reduced electric field. The rather oxygen-based discharge was tuned towards a more nitrogen-based discharge by adjusting several parameters, as reactive nitrogen species are known to promote wound healing. Furthermore, the impact of an ozone-free discharge has to be studied. This work was funded by the German Research Foundation (DFG) with the packet grant PAK 816 `Plasma Cell Interaction in Dermatology'.

  17. An Inverse Modeling Plugin for HydroDesktop using the Method of Anchored Distributions (MAD)

    NASA Astrophysics Data System (ADS)

    Ames, D. P.; Osorio, C.; Over, M. W.; Rubin, Y.

    2011-12-01

    The CUAHSI Hydrologic Information System (HIS) software stack is based on an open and extensible architecture that facilitates the addition of new functions and capabilities at both the server side (using HydroServer) and the client side (using HydroDesktop). The HydroDesktop client plugin architecture is used here to expose a new scripting-based plugin that makes use of the R statistics software as a means for conducting inverse modeling using the Method of Anchored Distributions (MAD). MAD is a Bayesian inversion technique for conditioning computational model parameters on relevant field observations, yielding probabilistic distributions of the model parameters, related to the spatial random variable of interest, by assimilating multi-type and multi-scale data. The implementation of a desktop software tool for using the MAD technique is expected to significantly lower the barrier to the use of inverse modeling in education, research, and resource management. The HydroDesktop MAD plugin is being developed following a community-based, open-source approach that will help both its adoption and long-term sustainability as a user tool. This presentation will briefly introduce MAD, HydroDesktop, and the MAD plugin and software development effort.
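    The core Bayesian conditioning step that MAD performs, updating a parameter's prior distribution with a field observation to obtain a posterior, can be sketched with importance resampling. Everything below (the linear forward model, the prior, the observation and its error) is hypothetical; real MAD conditions on anchors and structural parameters rather than this toy link:

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior belief about a model parameter (e.g. log-conductivity): N(0, 1)
prior = rng.normal(0.0, 1.0, 20000)

# Hypothetical forward model linking the parameter to an observable
def forward(theta):
    return 2.0 * theta + 1.0

obs, noise_sd = 2.0, 0.5            # field observation and its error
w = np.exp(-0.5 * ((forward(prior) - obs) / noise_sd) ** 2)
w /= w.sum()                        # normalized likelihood weights

# Posterior by importance resampling: the conditioned distribution
posterior = rng.choice(prior, size=20000, p=w)
```

    Conditioning narrows the distribution around parameter values consistent with the observation, which is exactly the "probabilistic distributions of the model parameters" the plugin delivers.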

  18. Monte Carlo Determination of Dosimetric Parameters of a New (125)I Brachytherapy Source According to AAPM TG-43 (U1) Protocol.

    PubMed

    Baghani, Hamid Reza; Lohrabian, Vahid; Aghamiri, Mahmoud Reza; Robatjazi, Mostafa

    2016-03-01

    (125)I is one of the important sources frequently used in brachytherapy. Up to now, several different commercial models of this source type have been introduced for clinical radiation oncology applications. Recently, a new source model, IrSeed-125, has been added to this list. The aim of the present study is to determine the dosimetric parameters of this new source model based on the recommendations of the TG-43 (U1) protocol using Monte Carlo simulation. The dosimetric characteristics of IrSeed-125, including the dose rate constant, radial dose function, 2D anisotropy function and 1D anisotropy function, were determined inside liquid water using the MCNPX code and compared to those of other commercially available iodine sources. The dose rate constant of this new source was found to be 0.983 ± 0.015 cGy h⁻¹ U⁻¹, which is in good agreement with the TLD-measured value (0.965 cGy h⁻¹ U⁻¹). The 1D anisotropy function values at 3, 5, and 7 cm radial distance were obtained as 0.954, 0.953 and 0.959, respectively. The results of this study showed that the dosimetric characteristics of this new brachytherapy source are comparable with those of other commercially available sources. Furthermore, the simulated parameters were in accordance with the previously measured ones. Therefore, the Monte Carlo calculated dosimetric parameters could be employed to obtain the dose distribution around this new brachytherapy source based on the TG-43 (U1) protocol.
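    In the TG-43 (U1) 1D formalism these tabulated quantities combine, in the point-source approximation, as D(r) = S_K · Λ · (r0/r)² · g(r) · φan(r). A sketch using the dose rate constant and 1D anisotropy values quoted above; the radial dose function values are placeholders, not the published IrSeed-125 data:

```python
import numpy as np

# TG-43 (U1) 1D point-source approximation:
#   D(r) = S_K * Lambda * (r0/r)**2 * g(r) * phi_an(r)
LAMBDA = 0.983          # dose rate constant from the abstract, cGy/h per U
R0 = 1.0                # reference distance, cm

r_tab   = np.array([0.5, 1.0, 3.0, 5.0, 7.0])            # cm
g_tab   = np.array([1.05, 1.00, 0.55, 0.30, 0.16])       # radial dose fn (illustrative)
phi_tab = np.array([0.97, 0.96, 0.954, 0.953, 0.959])    # 1D anisotropy fn

def dose_rate(r, s_k=1.0):
    """Dose rate (cGy/h) at distance r (cm) for air-kerma strength s_k (U),
    interpolating the tabulated g(r) and phi_an(r)."""
    g = np.interp(r, r_tab, g_tab)
    phi = np.interp(r, r_tab, phi_tab)
    return s_k * LAMBDA * (R0 / r) ** 2 * g * phi
```

    By convention g(r0) = 1 at r0 = 1 cm, so the dose rate there reduces to S_K · Λ · φan(r0).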

  19. Comparison of fungal spores concentrations measured with wideband integrated bioaerosol sensor and Hirst methodology

    NASA Astrophysics Data System (ADS)

    Fernández-Rodríguez, S.; Tormo-Molina, R.; Lemonis, N.; Clot, B.; O'Connor, D. J.; Sodeau, John R.

    2018-02-01

    The aim of this work was to provide both a comparison of traditional and novel methodologies for airborne spore detection (i.e. the Hirst Burkard trap and the WIBS-4) and the first quantitative study of airborne fungal concentrations in Payerne (Western Switzerland), as well as their relation to meteorological parameters. From the traditional method (Hirst trap and microscope analysis), sixty-three propagule types (spores, sporangia and hyphae) were identified and the average spore concentration measured over the full period amounted to 4145 ± 263.0 spores/m³. Maximum values were reached on July 19th and on August 6th. Twenty-six spore types reached average levels above 10 spores/m³. Airborne fungal propagules in Payerne showed a clear seasonal pattern, increasing from low values in early spring to maxima in summer. Daily average concentrations above 5000 spores/m³ were almost constant in summer from mid-June onwards. Weather parameters played a relevant role in determining the observed spore concentrations. Coniferous forest, dominant in the surroundings, may be a relevant source of airborne fungal propagules, as their distribution and the predominant wind directions are consistent with this origin. The comparison between the two methodologies used in this campaign showed remarkably consistent patterns throughout the campaign. A correlation coefficient of 0.9 (CI 0.76-0.96) was seen between the two instruments (Hirst trap and WIBS-4) over the time period at daily resolution. This apparent co-linearity was seen to fall away once increased resolution was employed. However, at higher resolutions, upon removal of Cladosporium species from the total fungal concentrations (Hirst trap), an increased correlation coefficient was again noted between the two instruments (R = 0.81, with confidence intervals of 0.74 and 0.86).
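    The daily-resolution comparison reduces to a Pearson correlation between the two instruments' concentration series. A sketch with hypothetical spore counts (the 0.9 coefficient above came from the actual campaign data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two concentration series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Hypothetical daily spore concentrations from the two instruments
hirst = [1200, 3400, 5100, 4900, 2600, 800, 1500]
wibs  = [1100, 3600, 4800, 5200, 2400, 900, 1400]
r = pearson_r(hirst, wibs)
```

    Aggregating to daily means smooths instrument-specific noise, which is one reason the correlation degrades at higher temporal resolution.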

  20. German dentists' websites on periodontitis have low quality of information.

    PubMed

    Schwendicke, Falk; Stange, Jörg; Stange, Claudia; Graetz, Christian

    2017-08-02

    The internet is an increasingly relevant source of health information. We aimed to assess the quality of German dentists' websites on periodontitis, hypothesizing that it was significantly associated with a number of practice-specific parameters. We searched four electronic search engines and included pages which were freely accessible, posted by a dental practice in Germany, and mentioned periodontal disease/therapy. Websites were assessed for (1) technical and functional aspects, (2) generic quality and risk of bias, and (3) disease-specific information. For (1) and (2), validated tools (LIDA/DISCERN) were used for assessment. For (3), we developed a criterion catalogue encompassing items on etiologic and prognostic factors for periodontitis, the diagnostic and treatment process, and the generic chance of tooth retention in periodontitis patients. Inter- and intra-rater reliabilities were largely moderate. Generalized linear modeling was used to assess the association between information quality (measured as % of maximally available scores) and practice-specific characteristics. Seventy-one websites were included. Technical and functional aspects were reported in significantly higher quality (median: 71%, 25/75th percentiles: 67/79%) than all other aspects (p < 0.05). Generic risk of bias and most disease-specific aspects showed significantly lower reporting quality (median range 0-40%), with poorest reporting for prognostic factors (median: 9%, percentiles: 0/27%), the diagnostic process (0%; 0/33%) and chances of tooth retention (0%; 0/2%). We found none of the practice-specific parameters to have a significant impact on the overall quality of the websites. Most German dentists' websites on periodontitis are not fully trustworthy, and relevant information is either absent or insufficiently covered. There is a great need to improve the information quality of such websites, at least with regard to periodontitis.

  1. On Green's function retrieval by iterative substitution of the coupled Marchenko equations

    NASA Astrophysics Data System (ADS)

    van der Neut, Joost; Vasconcelos, Ivan; Wapenaar, Kees

    2015-11-01

    Iterative substitution of the coupled Marchenko equations is a novel methodology to retrieve the Green's functions from a source or receiver array at an acquisition surface to an arbitrary location in an acoustic medium. The methodology requires as input the single-sided reflection response at the acquisition surface and an initial focusing function, being the time-reversed direct wavefield from the acquisition surface to a specified location in the subsurface. We express the iterative scheme that is applied by this methodology explicitly as the successive actions of various linear operators, acting on an initial focusing function. These operators involve multidimensional crosscorrelations with the reflection data and truncations in time. We offer physical interpretations of the multidimensional crosscorrelations by subtracting traveltimes along common ray paths at the stationary points of the underlying integrals. This provides a clear understanding of how individual events are retrieved by the scheme. Our interpretation also exposes some of the scheme's limitations in terms of what can be retrieved in case of a finite recording aperture. Green's function retrieval is only successful if the relevant stationary points are sampled. As a consequence, internal multiples can only be retrieved at a subsurface location with a particular ray parameter if this location is illuminated by the direct wavefield with this specific ray parameter. Several assumptions are required to solve the Marchenko equations. We show that these assumptions are not always satisfied in arbitrary heterogeneous media, which can result in incomplete Green's function retrieval and the emergence of artefacts. Despite these limitations, accurate Green's functions can often be retrieved by the iterative scheme, which is highly relevant for seismic imaging and inversion of internal multiple reflections.
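    The structure of the scheme, repeatedly applying linear operators (crosscorrelation with the reflection data plus time truncation) to an initial focusing function, is that of a fixed-point iteration whose limit is a Neumann series. A toy sketch in which a small matrix stands in for those operators; this illustrates only the form of iterative substitution, not the Marchenko physics:

```python
import numpy as np

# A stands in for the combined crosscorrelation-and-truncation operator,
# f0 for the initial focusing function (time-reversed direct wavefield).
A = np.array([[0.0, 0.3],
              [0.2, 0.1]])
f0 = np.array([1.0, 0.5])

# Iterative substitution: f_{k+1} = f0 + A f_k
f = f0.copy()
for _ in range(50):
    f = f0 + A @ f

# Converges to the fixed point (I - A)^{-1} f0 when A is a contraction
exact = np.linalg.solve(np.eye(2) - A, f0)
```

    Convergence requires the operator to be a contraction; this mirrors the assumptions on the medium that, as noted above, are not always satisfied and can leave the retrieval incomplete.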

  2. Status of MAPA (Modular Accelerator Physics Analysis) and the Tech-X Object-Oriented Accelerator Library

    NASA Astrophysics Data System (ADS)

    Cary, J. R.; Shasharina, S.; Bruhwiler, D. L.

    1998-04-01

    The MAPA code is a fully interactive accelerator modeling and design tool consisting of a GUI and two object-oriented C++ libraries: a general library suitable for treatment of any dynamical system, and an accelerator library including many element types plus an accelerator class. The accelerator library inherits directly from the system library, which uses hash tables to store any relevant parameters or strings. The GUI can access these hash tables in a general way, allowing the user to invoke a window displaying all relevant parameters for a particular element type or for the accelerator class, with the option to change those parameters. The system library can advance an arbitrary number of dynamical variables through an arbitrary mapping. The accelerator class inherits this capability and overloads the relevant functions to advance the phase space variables of a charged particle through a string of elements. Among other things, the GUI makes phase space plots and finds fixed points of the map. We discuss the object hierarchy of the two libraries and use of the code.
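    The design described, a generic system class storing arbitrary parameters in hash tables, with accelerator element classes inheriting it and overloading the advance function, can be sketched as follows. The real library is C++; the class names and the simplified thin-lens optics below are illustrative only:

```python
class DynamicalSystem:
    """Generic system: parameters and strings live in hash tables, so a GUI
    can enumerate and edit them without knowing the element type."""
    def __init__(self):
        self.params = {}     # hash table: parameter name -> float
        self.strings = {}    # hash table: name -> string

    def set_param(self, name, value):
        self.params[name] = value

    def get_param(self, name):
        return self.params[name]

class Quadrupole(DynamicalSystem):
    """Accelerator element: overloads advance() to push phase-space
    variables of a charged particle through the element."""
    def __init__(self, length, gradient):
        super().__init__()
        self.set_param("length", length)
        self.set_param("gradient", gradient)

    def advance(self, x, xp):
        # Thin-lens kick followed by a drift (simplified linear optics)
        f = 1.0 / (self.get_param("gradient") * self.get_param("length"))
        xp = xp - x / f
        x = x + self.get_param("length") * xp
        return x, xp
```

    Because every element exposes the same hash-table interface, a parameter-editing window can be generated generically for any element type, which is the point of the MAPA design.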

  3. How Source Information Shapes Lay Interpretations of Science Conflicts: Interplay between Sourcing, Conflict Explanation, Source Evaluation, and Claim Evaluation

    ERIC Educational Resources Information Center

    Thomm, Eva; Bromme, Rainer

    2016-01-01

    When laypeople read controversial scientific information in order to make a personally relevant decision, information on the source is a valuable resource with which to evaluate multiple, competing claims. Due to their bounded understanding, laypeople rely on the expertise of others and need to identify whether sources are credible. The present…

  4. Influences of meteorological parameters on indoor radon concentrations (222Rn) excluding the effects of forced ventilation and radon exhalation from soil and building materials.

    PubMed

    Schubert, Michael; Musolff, Andreas; Weiss, Holger

    2018-06-13

    Elevated indoor radon concentrations (222Rn) in dwellings generally pose a potential health risk to the inhabitants. During the last decades, a considerable number of studies have discussed both the different sources of indoor radon and the drivers of diurnal and multi-day variations in its concentration. While the potential sources are undisputed, controversial opinions exist regarding their individual relevance and regarding the driving influences that control varying indoor radon concentrations. These drivers include (i) cyclic forced ventilation of dwellings, (ii) the temporal variance of the radon exhalation from soil and building materials due to, e.g., a varying moisture content and (iii) diurnal and multi-day temperature and pressure patterns. The presented study discusses the influence of the last-mentioned meteorological parameters by effectively excluding the influences of forced ventilation and undefined radon exhalation. The results reveal the continuous variation of the indoor/outdoor pressure gradient as the key driver of a constant "breathing" of any interior space, which affects the indoor radon concentration with both diurnal and multi-day patterns. The diurnally recurring variation of the pressure gradient is predominantly triggered by the day/night cycle of the indoor temperature, which is associated with an expansion/contraction of the indoor air volume. Multi-day patterns, on the other hand, are mainly due to periods of negative indoor air pressure triggered by periods of elevated wind speeds, as a result of Bernoulli's principle. Copyright © 2018 Elsevier Ltd. All rights reserved.
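    The expansion/contraction mechanism can be made concrete with the ideal gas law: at roughly constant pressure, the fraction of room air exchanged per temperature cycle is dV/V = dT/T. This back-of-the-envelope illustration is my own, not the paper's model, and the temperatures are hypothetical:

```python
# Day/night indoor temperature cycle expands and contracts the indoor air,
# driving the pressure-gradient "breathing" described above.
def exchanged_air_fraction(t_night_c, t_day_c):
    """Fraction of room air pushed out as temperature rises
    (ideal gas at constant pressure: dV/V = dT/T)."""
    t_night_k = t_night_c + 273.15
    t_day_k = t_day_c + 273.15
    return (t_day_k - t_night_k) / t_night_k

frac = exchanged_air_fraction(18.0, 24.0)
```

    A 6 K swing exchanges on the order of 2% of the room volume per cycle, small, but enough to modulate radon levels when the replacement air differs strongly in radon content.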

  5. Dynamic modelling of five different phytoplankton groups in the River Thames (UK)

    NASA Astrophysics Data System (ADS)

    Bussi, Gianbattista; Whitehead, Paul; Bowes, Michael; Read, Daniel; Dadson, Simon

    2015-04-01

    Phytoplankton play a vital role in fluvial ecosystems, being a major producer of organic carbon, a food source for primary consumers and a relevant source of oxygen for many low-gradient rivers, but also a producer of potentially harmful toxins (e.g. cyanobacteria). For these reasons, the forecast and prevention of algal blooms is fundamental for the safe management of river systems. In this study, we developed a new process-based phytoplankton model for operational management and forecast of algal and cyanobacteria blooms subject to environmental change. The model is based on a mass-balance and it reproduces phytoplankton growth and death, taking into account the controlling effect played by water temperature, solar radiation, self-shading and dissolved phosphorus and silicon concentrations. The model was implemented in five reaches of the River Thames (UK) with a daily time step over a period of three years, and its results were compared to a novel dataset of cytometric data which includes community cell abundance of chlorophytes, diatoms, cyanobacteria, microcystis-like cyanobacteria and picoalgae. The model results were satisfactory in terms of fitting the observed data. A Multi-Objective General Sensitivity Analysis was also carried out in order to quantify model sensitivity to its parameters. It showed that the most influential parameters are phytoplankton growth and death rates, while phosphorus concentration showed little influence on phytoplankton growth, due to the high levels of phosphorus in the River Thames. The model was demonstrated to be a reliable tool to be used in algal bloom forecasting and management.
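    A mass-balance phytoplankton model of the kind described, growth limited by temperature, light and nutrients minus a death term, can be sketched with one daily Euler step. The functional forms and coefficients below are illustrative, not the calibrated Thames values:

```python
# Minimal sketch of a mass-balance phytoplankton model: growth modulated by
# temperature, light and phosphorus limitation, minus a death term.
def step_phytoplankton(p, temp, light, srp, dt=1.0,
                       mu_max=1.2, k_light=80.0, k_srp=0.02, death=0.15):
    """Advance biomass p by one Euler step of length dt (days)."""
    f_temp = 1.066 ** (temp - 20.0)          # Q10-style temperature factor
    f_light = light / (light + k_light)      # light limitation (half-saturation)
    f_srp = srp / (srp + k_srp)              # phosphorus limitation (Monod)
    growth = mu_max * f_temp * f_light * f_srp
    return p + dt * (growth - death) * p
```

    The Monod term explains the sensitivity result above: when phosphorus is far above its half-saturation constant, as in the Thames, f_srp is near 1 and the model output barely responds to phosphorus changes.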

  6. Moving NASA Remote Sensing Data to the GIS Environment for Health Studies

    NASA Technical Reports Server (NTRS)

    Vicente, Gilberto A.; Maynard, Nancy G.

    2003-01-01

    There has been an increasing demand by the health community for improved data on many different environmental factors relevant to the links between the environment and disease occurrence and transmission. These data are important for GIS-based monitoring, risk mapping, and surveillance of epidemiological parameters on a large number of different spatial, temporal, and spectral resolutions. Accordingly, NASA is developing new approaches to data collection and distribution in order to improve access to multiple sources of data streams to increase spatial and temporal coverage. Methods are being developed to incorporate different, scalable capabilities to handle multiple data sources by adding, deleting and replacing components as required as well as associated tools for their management. An approach has been to search for innovative solutions focused on the creation, use and manipulation of data stored in many different archives. These include data transformation and combination as well as data and information tools that can assist the public health and science community to use existing and anticipated products in new and flexible ways. This presentation will provide an inventory of geophysical parameters derived from satellite remote sensing sensors that are useful for GIS-based public health studies. The presentation will also discuss the physical and scientific limitations of access to and use of these data for health applications such as resolution and format differences, lack of software interoperability, data access problems. Finally, there will be a summary of the recent steps the NASA program has taken to bring NASA-generated satellite products to a wider range of users in the GIS community.

  7. Transition to a Source with Modified Physical Parameters by Energy Supply or Using an External Force

    NASA Astrophysics Data System (ADS)

    Kucherov, A. N.

    2017-11-01

    This study examines the possibility of changing the physical parameters of a source/sink, i.e., the enthalpy, temperature, total pressure, maximum velocity, and minimum dimension, at a constant radial Mach number by energy or force action on the gas in a bounded zone. It has been shown that the parameters can be controlled at subsonic, supersonic, and transonic (sonic in the limit) radial Mach numbers. In the updated source/sink, all versions of a vortex-source combination can be implemented: into a vacuum, out of a vacuum, into a submerged space, and out of a submerged space, partially or fully.

  8. Comparative Study of Light Sources for Household

    NASA Astrophysics Data System (ADS)

    Pawlak, Andrzej; Zalesińska, Małgorzata

    2017-03-01

    The article describes test results used to determine and evaluate the basic photometric, colorimetric and electric parameters of selected, widely available light sources marketed as equivalents to a traditional 60 W incandescent light bulb. Overall, one halogen light bulb, three compact fluorescent lamps and eleven LED light sources were tested. In general, it was concluded that in most cases (branded products, in particular) the measured and calculated parameters differ from the values declared by manufacturers only to a small degree. LED sources prove to be the most beneficial substitute for traditional light bulbs, considering both their operational parameters and their price, which is comparable with the price of compact fluorescent lamps or, in some instances, even lower.

  9. Integrated multi-parameters Probabilistic Seismic Landslide Hazard Analysis (PSLHA): the case study of Ischia island, Italy

    NASA Astrophysics Data System (ADS)

    Caccavale, Mauro; Matano, Fabio; Sacchi, Marco; Mazzola, Salvatore; Somma, Renato; Troise, Claudia; De Natale, Giuseppe

    2014-05-01

    The Ischia island is a large, complex, partly submerged, active volcanic field located about 20 km east to the Campi Flegrei, a major active volcano-tectonic area near Naples. The island is morphologically characterized in its central part by the resurgent block of Mt. Epomeo, controlled by NW-SE and NE-SW trending fault systems, by mountain stream basin with high relief energy and by a heterogeneous coastline with alternation of beach and tuff/lava cliffs in a continuous reshape due to the weather and sea erosion. The volcano-tectonic process is a main factor for slope stability, as it produces seismic activity and generated steep slopes in volcanic deposits (lava, tuff, pumice and ash layers) characterized by variable strength. In the Campi Flegrei and surrounding areas the possible occurrence of a moderate/large seismic event represents a serious threat for the inhabitants, for the infrastructures as well as for the environment. The most relevant seismic sources for Ischia are represented by the Campi Flegrei caldera and a 5 km long fault located below the island north coast. However those sources are difficult to constrain. The first one due to the on-shore and off-shore extension not yet completely defined. The second characterized only by few large historical events is difficult to parameterize in the framework of probabilistic hazard approach. The high population density, the presence of many infrastructures and the more relevant archaeological sites associated with the natural and artistic values, makes this area a strategic natural laboratory to develop new methodologies. Moreover Ischia represents the only sector, in the Campi Flegrei area, with documented historical landslides originated by earthquake, allowing for the possibility of testing the adequacy and stability of the method. 
In the framework of the Italian project MON.I.C.A (infrastructural coastline monitoring), an innovative probabilistic methodology has been applied to identify the areas with the highest susceptibility to landslides induced by seismic effects. The PSLHA combines probability-of-exceedance maps for different ground motion (GM) parameters with geological and geomorphological information, in terms of critical acceleration and dynamic stability factor. Such maps are generally evaluated for Peak Ground Acceleration, Velocity or Intensity, which correlate well with damage to anthropic infrastructure (e.g. streets, buildings). Each ground motion parameter represents a different aspect of the hazard and has a different correlation with the generation of possible damage. Many works have pointed out that other GM parameters, such as the Arias and Housner intensities and the absolute displacement, could be a better choice for analysing, for example, cliff stability. The selection of the GM parameter is therefore of crucial importance for obtaining the most useful hazard maps, and in recent decades Ground Motion Prediction Equations for a wider set of GM parameters have been published. Based on this information, a series of landslide hazard maps can be produced, leading to the identification of the areas with the highest probability of earthquake-induced landslides. In a strategic site like Ischia, this new methodology represents an innovative and advanced tool for landslide hazard mitigation.
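    The critical-acceleration concept that links ground motion to slope stability is usually expressed in the Newmark infinite-slope form, a_c = (FS − 1)·g·sin(α). This is the standard screening relation in seismic landslide hazard work; the exact PSLHA implementation described above may differ, and the input values here are hypothetical:

```python
import math

def critical_acceleration(fs, slope_deg, g=9.81):
    """Critical acceleration (m/s^2) of a slope with static factor of
    safety fs and inclination slope_deg (Newmark infinite-slope form)."""
    return (fs - 1.0) * g * math.sin(math.radians(slope_deg))

def is_susceptible(pga, fs, slope_deg):
    """A slope cell is flagged when the expected ground motion exceeds
    its critical acceleration."""
    return pga > critical_acceleration(fs, slope_deg)

ac = critical_acceleration(1.3, 30.0)
```

    Combining a map of a_c with the probability that the chosen GM parameter exceeds it yields the landslide susceptibility maps described above.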

  10. Probabilistic drug connectivity mapping

    PubMed Central

    2014-01-01

    Background The aim of connectivity mapping is to match drugs using drug-treatment gene expression profiles from multiple cell lines. This can be viewed as an information retrieval task, with the goal of finding the most relevant profiles for a given query drug. We infer the relevance for retrieval by data-driven probabilistic modeling of the drug responses, resulting in probabilistic connectivity mapping, and further consider the available cell lines as different data sources. We use a special type of probabilistic model to separate what is shared and specific between the sources, in contrast to earlier connectivity mapping methods that have intentionally aggregated all available data, neglecting information about the differences between the cell lines. Results We show that the probabilistic multi-source connectivity mapping method is superior to alternatives in finding functionally and chemically similar drugs from the Connectivity Map data set. We also demonstrate that an extension of the method is capable of retrieving combinations of drugs that match different relevant parts of the query drug response profile. Conclusions The probabilistic modeling-based connectivity mapping method provides a promising alternative to earlier methods. Principled integration of data from different cell lines helps to identify relevant responses for specific drug repositioning applications. PMID:24742351

  11. Synchronisation, acquisition and tracking for telemetry and data reception

    NASA Astrophysics Data System (ADS)

    Vandoninck, A.

    1992-06-01

    The important parameters of synchronization, acquisition, and tracking are addressed, and each function is highlighted separately. The functions are presented in the order in which they occur in the system and by the type of data to be received, distinguishing between telemetry and data reception, between direct carrier modulation and the use of a subcarrier, and between deep space and normal reception. For telemetry reception, acquisition is described taking into account the difference in performance between geostationary and polar orbits, and the dependencies on the different Doppler offsets and rates are distinguished. The related functions and parameters are covered and the specifications of an average receiver are summarized. The synchronization of the valid data is described, distinguishing between data modulated directly or via a subcarrier, the type of modulation, and the bit rate. The relevant functions and parameters of the average receiver/demodulator are summarized. The tracking of the signal during the operational phase is described and the relevant parameters of an actual system are presented. The reception of real data is then handled, applying the sequence of acquisition, synchronization, and tracking; here higher bit rates and direct modulation schemes play an important role. Equipment on the market and its relevant parameters are discussed. Finally, the three functions are covered for cases where deep space reception is needed, explaining the high-performance receiver/demodulator functions and how acquisition, synchronization, and tracking are handled in such applications.

  12. Machine Learning Techniques for Global Sensitivity Analysis in Climate Models

    NASA Astrophysics Data System (ADS)

    Safta, C.; Sargsyan, K.; Ricciuto, D. M.

    2017-12-01

    Climate model studies are challenged not only by the compute-intensive nature of these models but also by the high dimensionality of the input parameter space. In our previous work with the land model components (Sargsyan et al., 2014) we identified subsets of 10 to 20 parameters relevant for each QoI via Bayesian compressive sensing and variance-based decomposition. Nevertheless, the algorithms were challenged by the nonlinear input-output dependencies for some of the relevant QoIs. In this work we will explore a combination of techniques to extract relevant parameters for each QoI and subsequently construct surrogate models with quantified uncertainty, as needed for future developments, e.g. model calibration and prediction studies. In the first step, we will compare the skill of machine-learning models (e.g. neural networks, support vector machines) to identify the optimal number of classes in selected QoIs and construct robust multi-class classifiers that will partition the parameter space into regions with smooth input-output dependencies. These classifiers will be coupled with techniques aimed at building sparse and/or low-rank surrogate models tailored to each class. Specifically, we will explore and compare sparse learning techniques with low-rank tensor decompositions. These models will be used to identify parameters that are important for each QoI. Surrogate accuracy requirements are higher for subsequent model calibration studies, and we will ascertain the performance of this workflow for multi-site ALM simulation ensembles.
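    The classify-then-fit idea, partition the parameter space into regimes, then build a surrogate per class, can be sketched on a toy one-dimensional QoI with a regime change. The data, the threshold classifier, and the linear surrogates are all illustrative stand-ins for the learned classifiers and sparse/low-rank surrogates described above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy QoI with two regimes: a single global linear surrogate fits poorly,
# so classify first, then fit a surrogate per class.
x = rng.uniform(-1.0, 1.0, (400, 1))
y = np.where(x[:, 0] < 0.0, 0.1 * x[:, 0], 2.0 * x[:, 0])

labels = (x[:, 0] >= 0.0).astype(int)   # stand-in for a learned classifier

def fit_linear(xc, yc):
    """Least-squares linear surrogate: returns (slope, intercept)."""
    a = np.column_stack([xc, np.ones_like(xc)])
    coef, *_ = np.linalg.lstsq(a, yc, rcond=None)
    return coef

surrogates = {c: fit_linear(x[labels == c, 0], y[labels == c]) for c in (0, 1)}

def predict(xq):
    c = int(xq >= 0.0)
    slope, intercept = surrogates[c]
    return slope * xq + intercept
```

    Each per-class surrogate sees only a smooth input-output relation, which is what makes sparse or low-rank approximations tractable within each region.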

  13. Correlation of iodine uptake and perfusion parameters between dual-energy CT imaging and first-pass dual-input perfusion CT in lung cancer.

    PubMed

    Chen, Xiaoliang; Xu, Yanyan; Duan, Jianghui; Li, Chuandong; Sun, Hongliang; Wang, Wu

    2017-07-01

    To investigate the potential relationship between perfusion parameters from first-pass dual-input perfusion computed tomography (DI-PCT) and iodine uptake levels estimated from dual-energy CT (DE-CT). The pre-experimental part of this study included a dynamic DE-CT protocol in 15 patients to evaluate peak arterial enhancement of lung cancer based on time-attenuation curves, from which the scan time of DE-CT was determined. In the prospective part of the study, 28 lung cancer patients underwent whole-volume perfusion CT and single-source DE-CT using 320-row CT. Pulmonary flow (PF, mL/min/100 mL), aortic flow (AF, mL/min/100 mL), and a perfusion index (PI = PF/[PF + AF]) were automatically generated by in-house commercial software using the dual-input maximum slope method for DI-PCT. For the dual-energy CT data, iodine uptake was estimated by the difference (λ) and the slope (λHU). λ was defined as the difference in CT values between 40 and 70 keV monochromatic images of lung lesions. λHU was calculated by the following equation: λHU = |λ/(70 - 40)|. The DI-PCT and DE-CT parameters were analyzed by Pearson and Spearman correlation analysis, respectively. All subjects were pathologically confirmed lung cancer cases (16 squamous cell carcinoma, 8 adenocarcinoma, and 4 small cell lung cancer) by surgery or CT-guided biopsy. Interobserver reproducibility for DI-PCT (PF, AF, PI) and DE-CT (λ, λHU) was good to excellent (intraclass correlation coefficients: 0.8726-0.9255, 0.8179-0.8842; 0.8881-0.9177, 0.9820-0.9970, 0.9780-0.9971, respectively). The correlation coefficients between λ and AF and between λ and PF were 0.589 (P < .01) and 0.383 (P < .05), respectively; between λHU and AF and between λHU and PF they were 0.564 (P < .01) and 0.388 (P < .05). Both the single-source DE-CT and the dual-input CT perfusion analysis method can be applied to assess the blood supply of lung cancer patients.
Preliminary results demonstrated that the iodine uptake relevant parameters derived from DE-CT significantly correlated with perfusion parameters derived from DI-PCT.
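    The two derived quantities being correlated follow directly from the definitions in the abstract, PI = PF/(PF + AF) and λHU = |λ/(70 − 40)|; the example input values below are hypothetical:

```python
def perfusion_index(pf, af):
    """PI from pulmonary flow PF and aortic flow AF (mL/min/100 mL)."""
    return pf / (pf + af)

def iodine_slope(delta_hu):
    """Slope of the CT value between the 40 and 70 keV monochromatic
    images: lambda_HU = |lambda / (70 - 40)|, in HU per keV."""
    return abs(delta_hu / (70 - 40))

pi = perfusion_index(60.0, 40.0)
lam_hu = iodine_slope(-45.0)
```

    PI expresses the pulmonary fraction of the dual (pulmonary plus systemic) blood supply, while λHU normalizes the iodine-driven attenuation difference by the 30 keV energy gap.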

  14. Stratification based on reproductive state reveals contrasting patterns of age-related variation in demographic parameters in the kittiwake

    USGS Publications Warehouse

    Cam, E.; Monnat, J.-Y.

    2000-01-01

    Heterogeneity in individual quality can be a major obstacle when interpreting age-specific variation in life-history traits. Heterogeneity is likely to lead to within-generation selection, and patterns observed at the population level may result from the combination of hidden patterns specific to subpopulations. Population-level patterns are not relevant to hypotheses concerning the evolution of age-specific reproductive strategies if they differ from patterns at the individual level. We addressed the influence of age and a variable used as a surrogate of quality (yearly reproductive state) on survival and breeding probability in the kittiwake. We found evidence of an effect of age and quality on both demographic parameters. Patterns observed in breeders are consistent with the selection hypothesis, which predicts age-related increases in survival and traits positively correlated with survival. Our results also reveal unexpected age effects specific to subgroups: the influence of age on survival and future breeding probability is not the same in nonbreeders and breeders. These patterns are observed in higher-quality breeding habitats, where the influence of extrinsic factors on breeding state is the weakest. Moreover, there is slight evidence of an influence of sex on breeding probability (not on survival), but the same overall pattern is observed in both sexes. Our results support the hypothesis that age-related variation in demographic parameters observed at the population level is partly shaped by heterogeneity among individuals. They also suggest processes specific to subpopulations. Recent theoreticaI developments lay emphasis on integration of sources of heterogeneity in optimization models to account for apparently 'sub-optimal' empirical patterns. Incorporation of sources of heterogeneity is also the key to investigation of age-related reproductive strategies in heterogeneous populations. 
Thwarting 'heterogeneity's ruses' has become a major challenge, both for detecting and understanding natural processes and for a constructive confrontation between empirical and theoretical studies.

  15. aCGH-MAS: Analysis of aCGH by means of Multiagent System

    PubMed Central

    Benito, Rocío; Bajo, Javier; Rodríguez, Ana Eugenia; Abáigar, María

    2015-01-01

There are currently different techniques, such as CGH arrays, to study genetic variations in patients. CGH arrays analyze gains and losses in different regions in the chromosome. Regions with gains or losses in pathologies are important for selecting relevant genes or CNVs (copy-number variations) associated with the variations detected within chromosomes. Information corresponding to mutations, genes, proteins, variations, CNVs, and diseases can be found in different databases, and it would be of interest to combine these different sources to extract relevant knowledge. This work proposes a multiagent system to manage the information of aCGH arrays, with the aim of providing an intuitive and extensible system to analyze and interpret the results. The agent roles integrate statistical techniques to select relevant variations, visualization techniques for the interpretation of the final results, and a CBR system to extract relevant information from the different sources. PMID:25874203

  16. A new Bayesian Earthquake Analysis Tool (BEAT)

    NASA Astrophysics Data System (ADS)

    Vasyura-Bathke, Hannes; Dutta, Rishabh; Jónsson, Sigurjón; Mai, Martin

    2017-04-01

Modern earthquake source estimation studies increasingly use non-linear optimization strategies to estimate kinematic rupture parameters, often considering geodetic and seismic data jointly. However, the optimization process is complex and consists of several steps that need to be followed in the earthquake parameter estimation procedure. These include pre-describing or modeling the fault geometry, calculating the Green's Functions (often assuming a layered elastic half-space), and estimating the distributed final slip and possibly other kinematic source parameters. Recently, Bayesian inference has become popular for estimating posterior distributions of earthquake source model parameters given measured/estimated/assumed data and model uncertainties. For instance, some research groups consider uncertainties of the layered medium and propagate these to the source parameter uncertainties. Other groups make use of informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed that efficiently explore the often high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational demands of these methods are high and estimation codes are rarely distributed along with the published results. Even if codes are made available, it is often difficult to assemble them into a single optimization framework as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible.
In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in earthquake source estimations, we undertook the effort of producing BEAT, a python package that comprises all the above-mentioned features in one single programming environment. The package is built on top of the pyrocko seismological toolbox (www.pyrocko.org) and makes use of the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat) and we encourage and solicit contributions to the project. In this contribution, we present our strategy for developing BEAT, show application examples, and discuss future developments.
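The workflow this abstract describes (a forward model, priors on source parameters, and posterior sampling) can be sketched in miniature. The scalar slip parameter, the Green's function vector, the noise level, and the Metropolis settings below are invented stand-ins, not BEAT's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: surface displacements d = G * slip, with a made-up
# Green's function vector G (in reality computed for a layered half-space).
G = np.array([0.8, 0.5, 0.3, 0.1])
true_slip = 2.0
sigma = 0.05                               # observation noise level
d_obs = G * true_slip + rng.normal(0, sigma, G.size)

def log_posterior(slip):
    if slip < 0 or slip > 10:              # uniform prior on [0, 10] m
        return -np.inf
    r = d_obs - G * slip
    return -0.5 * np.sum(r**2) / sigma**2  # Gaussian log likelihood

# Metropolis sampling of the posterior
samples, slip = [], 1.0
lp = log_posterior(slip)
for _ in range(20000):
    prop = slip + rng.normal(0, 0.1)
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:
        slip, lp = prop, lp_prop
    samples.append(slip)

post = np.array(samples[5000:])            # discard burn-in
print(post.mean(), post.std())             # posterior mean near 2.0, with uncertainty
```

In BEAT itself the forward models, priors and samplers are far richer (pymc3-based), but the estimate-with-uncertainty output has this same shape.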

  17. Evaluation of realistic layouts for next generation on-scalp MEG: spatial information density maps.

    PubMed

    Riaz, Bushra; Pfeiffer, Christoph; Schneiderman, Justin F

    2017-08-01

    While commercial magnetoencephalography (MEG) systems are the functional neuroimaging state-of-the-art in terms of spatio-temporal resolution, MEG sensors have not changed significantly since the 1990s. Interest in newer sensors that operate at less extreme temperatures, e.g., high critical temperature (high-T c ) SQUIDs, optically-pumped magnetometers, etc., is growing because they enable significant reductions in head-to-sensor standoff (on-scalp MEG). Various metrics quantify the advantages of on-scalp MEG, but a single straightforward one is lacking. Previous works have furthermore been limited to arbitrary and/or unrealistic sensor layouts. We introduce spatial information density (SID) maps for quantitative and qualitative evaluations of sensor arrays. SID-maps present the spatial distribution of information a sensor array extracts from a source space while accounting for relevant source and sensor parameters. We use it in a systematic comparison of three practical on-scalp MEG sensor array layouts (based on high-T c SQUIDs) and the standard Elekta Neuromag TRIUX magnetometer array. Results strengthen the case for on-scalp and specifically high-T c SQUID-based MEG while providing a path for the practical design of future MEG systems. SID-maps are furthermore general to arbitrary magnetic sensor technologies and source spaces and can thus be used for design and evaluation of sensor arrays for magnetocardiography, magnetic particle imaging, etc.
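The abstract does not give the SID formula, but a common way to quantify the information an array extracts from each source is the per-source Fisher information computed from the leadfield with white sensor noise. The 1-D geometry and the dipole-like falloff below are invented purely to illustrate why reduced standoff increases extracted information:

```python
import numpy as np

# Toy 1-D geometry: sources on a line at depth, sensors on a parallel line.
# The two arrays differ only in standoff (stand-ins for on-scalp vs. conventional).
sources = np.linspace(-0.05, 0.05, 101)   # source positions (m)
sensors = np.linspace(-0.06, 0.06, 25)    # sensor positions (m)

def leadfield(standoff):
    # made-up dipole-like falloff between each source and each sensor
    dx = sensors[None, :] - sources[:, None]
    r2 = dx**2 + standoff**2
    return standoff / r2                  # shape (n_sources, n_sensors)

noise_var = 1e-4
def info_map(standoff):
    G = leadfield(standoff)
    # per-source Fisher information with white sensor noise: g^T g / var
    return np.sum(G**2, axis=1) / noise_var

on_scalp = info_map(0.01)      # 1 cm standoff
conventional = info_map(0.03)  # 3 cm standoff
print(on_scalp.max() / conventional.max())   # ratio > 1: on-scalp wins
```

A real SID-map would use realistic head geometry, sensor noise spectra and vector leadfields; only the "information per source location" idea carries over.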

  18. Global inverse modeling of CH4 sources and sinks: an overview of methods

    NASA Astrophysics Data System (ADS)

    Houweling, Sander; Bergamaschi, Peter; Chevallier, Frederic; Heimann, Martin; Kaminski, Thomas; Krol, Maarten; Michalak, Anna M.; Patra, Prabir

    2017-01-01

The aim of this paper is to present an overview of inverse modeling methods that have been developed over the years for estimating the global sources and sinks of CH4. It provides insight into how techniques and estimates have evolved over time and what the remaining shortcomings are. As such, it serves the didactic purpose of introducing newcomers to the field, but it also takes stock of developments so far and reflects on promising new directions. The main focus is on methodological aspects that are particularly relevant for CH4, such as its atmospheric oxidation, the use of methane isotopologues, and specific challenges in atmospheric transport modeling of CH4. The use of satellite retrievals receives special attention as it is an active field of methodological development, with special requirements on the sampling of the model and the treatment of data uncertainty. Regional scale flux estimation and attribution is still a grand challenge, which calls for new methods capable of combining information from multiple data streams of different measured parameters. A process model representation of sources and sinks in atmospheric transport inversion schemes allows the integrated use of such data. These new developments are needed not only to improve our understanding of the main processes driving the observed global trend but also to support international efforts to reduce greenhouse gas emissions.
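For the linear-Gaussian case that underlies many of the flux inversions reviewed here, the posterior has a closed form: x̂ = x_a + (HᵀR⁻¹H + B⁻¹)⁻¹HᵀR⁻¹(y − Hx_a). The transport Jacobian, flux values and covariances below are made up for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear synthesis inversion: y = H x + noise, where x are regional CH4
# fluxes and H is a (hypothetical) transport-model Jacobian.
n_obs, n_flux = 50, 4
H = rng.random((n_obs, n_flux))
x_true = np.array([550., 180., 60., 30.])   # illustrative fluxes (Tg CH4/yr)
y = H @ x_true + rng.normal(0, 5.0, n_obs)

x_prior = np.array([500., 200., 50., 40.])
B = np.diag([100.0**2] * n_flux)            # prior (background) covariance
R = np.eye(n_obs) * 5.0**2                  # observation-error covariance

# Analytical Bayesian update (maximum a posteriori estimate)
precision = H.T @ np.linalg.inv(R) @ H + np.linalg.inv(B)
K = np.linalg.solve(precision, H.T @ np.linalg.inv(R))
x_post = x_prior + K @ (y - H @ x_prior)
A_post = np.linalg.inv(precision)           # posterior covariance

print(x_post)                     # fluxes pulled from the prior toward x_true
print(np.sqrt(np.diag(A_post)))   # posterior uncertainties shrink below prior (100)
```

Real CH4 inversions add chemistry (OH sink), isotopologues and satellite averaging kernels, and are usually solved variationally or by ensemble methods rather than with explicit matrix inverses.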

  19. Seismic velocity uncertainties and their effect on geothermal predictions: A case study

    NASA Astrophysics Data System (ADS)

    Rabbel, Wolfgang; Köhn, Daniel; Bahadur Motra, Hem; Niederau, Jan; Thorwart, Martin; Wuttke, Frank; Descramble Working Group

    2017-04-01

Geothermal exploration relies in large part on geophysical subsurface models derived from seismic reflection profiling. These models are the framework of hydro-geothermal modeling, which further requires estimating thermal and hydraulic parameters to be attributed to the seismic strata. All petrophysical and structural properties involved in this process can be determined only with limited accuracy and thus also impose uncertainties on the resulting model predictions of temperature-depth profiles and hydraulic flow. In the present study we analyze sources and effects of uncertainties of the seismic velocity field, which translate directly into depth uncertainties of the hydraulically and thermally relevant horizons. Geological sources of these uncertainties are subsurface heterogeneity and seismic anisotropy; methodical sources are limitations in spread length and physical resolution. We demonstrate these effects using data of the EU-Horizon 2020 project DESCRAMBLE investigating a shallow super-critical geothermal reservoir in the Larderello area. The study is based on 2D- and 3D seismic reflection data and laboratory measurements on representative rock samples under simulated in-situ conditions. The rock samples consistently show P-wave anisotropy values on the order of 10-20%. However, the uncertainty of layer depths induced by anisotropy is likely to be lower, depending on the accuracy with which the spatial orientation of bedding planes can be determined from the seismic reflection images.

  20. The design of a low-cost Thomson Scattering system for use on the ORNL PhIX device

    NASA Astrophysics Data System (ADS)

    Biewer, T. M.; Lore, J.; Goulding, R. H.; Hillis, D. L.; Owen, L.; Rapp, J.

    2012-10-01

Study of the plasma-material interface (PMI) under high power and particle flux on linear plasma devices is an active area of research that is relevant to fusion-grade toroidal devices such as ITER and DEMO. ORNL is assembling a 15 cm diameter, ~3 m long linear machine, called the Physics Integration eXperiment (PhIX), which incorporates a helicon plasma source, electron heating, and a material target. The helicon source has demonstrated coupling of up to 100 kW of rf power, and produced ne >= 4 x 10^19 m^-3 in D- and He-fueled plasmas, measured with interferometry and Langmuir probes (LP). Optical emission spectroscopy was used to confirm LP measurements that Te is about 10 eV in helicon-heated plasmas, which will presumably increase when electron heating is applied. Plasma parameters (ne, Te, n0) of the PhIX device will be measured with a novel, low-cost Thomson Scattering (TS) system. The data will be used to characterize the PMI regime with multiple profile measurements in front of the target. Profiles near the source and target will be used to determine the parallel transport regime via comparison to 2D fluid plasma simulations. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.

  1. Neural population encoding and decoding of sound source location across sound level in the rabbit inferior colliculus

    PubMed Central

    Delgutte, Bertrand

    2015-01-01

    At lower levels of sensory processing, the representation of a stimulus feature in the response of a neural population can vary in complex ways across different stimulus intensities, potentially changing the amount of feature-relevant information in the response. How higher-level neural circuits could implement feature decoding computations that compensate for these intensity-dependent variations remains unclear. Here we focused on neurons in the inferior colliculus (IC) of unanesthetized rabbits, whose firing rates are sensitive to both the azimuthal position of a sound source and its sound level. We found that the azimuth tuning curves of an IC neuron at different sound levels tend to be linear transformations of each other. These transformations could either increase or decrease the mutual information between source azimuth and spike count with increasing level for individual neurons, yet population azimuthal information remained constant across the absolute sound levels tested (35, 50, and 65 dB SPL), as inferred from the performance of a maximum-likelihood neural population decoder. We harnessed evidence of level-dependent linear transformations to reduce the number of free parameters in the creation of an accurate cross-level population decoder of azimuth. Interestingly, this decoder predicts monotonic azimuth tuning curves, broadly sensitive to contralateral azimuths, in neurons at higher levels in the auditory pathway. PMID:26490292
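A maximum-likelihood population decoder of the kind used in this study can be sketched with Poisson spiking and sigmoidal rate-vs-azimuth tuning. The tuning parameters below are invented for illustration, not fitted to IC data:

```python
import numpy as np

rng = np.random.default_rng(2)

azimuths = np.linspace(-90, 90, 13)     # candidate source azimuths (deg)

# Hypothetical population of IC-like neurons with monotonic sigmoidal tuning
# (broadly sensitive to one hemifield); rates in spikes per trial.
n_neurons = 30
slopes = rng.uniform(0.02, 0.08, n_neurons)
midpoints = rng.uniform(-40, 40, n_neurons)

def rates(az):
    return 1.0 + 20.0 / (1.0 + np.exp(-slopes[:, None] * (az - midpoints[:, None])))

tuning = rates(azimuths)                # shape (n_neurons, n_azimuths)

def ml_decode(counts):
    # Poisson log likelihood of the observed spike counts for each azimuth
    ll = counts[:, None] * np.log(tuning) - tuning
    return azimuths[np.argmax(ll.sum(axis=0))]

# Simulate one trial at +30 deg and decode it from the population response
true_az = 30.0
counts = rng.poisson(rates(np.array([true_az]))[:, 0])
print(ml_decode(counts))                # estimate close to +30 deg
```

A cross-level decoder in the spirit of the paper would additionally parameterize how each tuning curve transforms linearly with sound level, sharing parameters across levels.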

  2. Prevalence of Potential and Clinically Relevant Statin-Drug Interactions in Frail and Robust Older Inpatients.

    PubMed

    Thai, Michele; Hilmer, Sarah; Pearson, Sallie-Anne; Reeve, Emily; Gnjidic, Danijela

    2015-10-01

    A significant proportion of older people are prescribed statins and are also exposed to polypharmacy, placing them at increased risk of statin-drug interactions. To describe the prevalence rates of potential and clinically relevant statin-drug interactions in older inpatients according to frailty status. A cross-sectional study of patients aged ≥65 years who were prescribed a statin and were admitted to a teaching hospital between 30 July and 10 October 2014 in Sydney, Australia, was conducted. Data on socio-demographics, comorbidities and medications were collected using a standardized questionnaire. Potential statin-drug interactions were defined if listed in the Australian Medicines Handbook and three international drug information sources: the British National Formulary, Drug Interaction Facts and Drug-Reax(®). Clinically relevant statin-drug interactions were defined as interactions with the highest severity rating in at least two of the three international drug information sources. Frailty was assessed using the Reported Edmonton Frail Scale. A total of 180 participants were recruited (median age 78 years, interquartile range 14), 35.0% frail and 65.0% robust. Potential statin-drug interactions were identified in 10% of participants, 12.7% of frail participants and 8.5% of robust participants. Clinically relevant statin-drug interactions were identified in 7.8% of participants, 9.5% of frail participants and 6.8% of robust participants. Depending on the drug information source used, the prevalence rates of potential and clinically relevant statin-drug interactions ranged between 14.4 and 35.6% and between 14.4 and 20.6%, respectively. In our study of frail and robust older inpatients taking statins, the overall prevalence of potential statin-drug interactions was low and varied significantly according to the drug information source used.
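The paper's definition of a clinically relevant interaction (highest severity rating in at least two of the three international sources) reduces to a simple counting rule. The drug pairs and severity labels below are hypothetical examples, not data from the study:

```python
# Decision rule sketch: an interaction is "clinically relevant" if it carries
# the highest severity rating in >= 2 of the 3 international sources.
SOURCES = ("BNF", "DIF", "Drug-Reax")

def clinically_relevant(ratings):
    """ratings: dict source -> severity, where 'high' is the top rating."""
    return sum(ratings.get(s) == "high" for s in SOURCES) >= 2

# Hypothetical example data (severity labels invented for illustration)
interactions = {
    ("simvastatin", "clarithromycin"): {"BNF": "high", "DIF": "high", "Drug-Reax": "moderate"},
    ("atorvastatin", "amlodipine"): {"BNF": "moderate", "DIF": "low", "Drug-Reax": "moderate"},
}
flagged = [pair for pair, r in interactions.items() if clinically_relevant(r)]
print(flagged)   # [('simvastatin', 'clarithromycin')]
```

The study's observation that prevalence varies with the source consulted corresponds to how `clinically_relevant` changes if computed against any single source instead of the two-of-three rule.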

  3. Sensitivity of a Bayesian atmospheric-transport inversion model to spatio-temporal sensor resolution applied to the 2006 North Korean nuclear test

    NASA Astrophysics Data System (ADS)

    Lundquist, K. A.; Jensen, D. D.; Lucas, D. D.

    2017-12-01

Atmospheric source reconstruction allows for the probabilistic estimate of source characteristics of an atmospheric release using observations of the release. Performance of the inversion depends partially on the temporal frequency and spatial scale of the observations. The objective of this study is to quantify the sensitivity of the source reconstruction method to sparse spatial and temporal observations. To this end, simulations of atmospheric transport of noble gases are created for the 2006 nuclear test at the Punggye-ri nuclear test site. Synthetic observations are collected from the simulation, and are taken as "ground truth". Data denial techniques are used to progressively coarsen the temporal and spatial resolution of the synthetic observations, while the source reconstruction model seeks to recover the true input parameters from the synthetic observations. Reconstructed parameters considered here are source location, source timing and source quantity. Reconstruction is achieved by running an ensemble of thousands of dispersion model runs that sample from a uniform distribution of the input parameters. Machine learning is used to train a computationally-efficient surrogate model from the ensemble simulations. Monte Carlo sampling and Bayesian inversion are then used in conjunction with the surrogate model to quantify the posterior probability density functions of source input parameters. This research seeks to inform decision makers of the tradeoffs between more expensive, high frequency observations and less expensive, low frequency observations.
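The ensemble → surrogate → Monte Carlo pipeline can be sketched in one dimension. The "dispersion model", its single source parameter, and the polynomial surrogate below are placeholders for the thousands of 3-D dispersion runs and the machine-learned emulator the abstract describes:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in "expensive" dispersion model with a single source parameter q
# (release quantity); purely illustrative, not an atmospheric model.
def dispersion_model(q):
    return 0.1 * q**2 + 0.5 * q

# 1) Ensemble: sample the input parameter from its uniform prior range
q_ens = rng.uniform(0.1, 10.0, 200)
c_ens = dispersion_model(q_ens)

# 2) Train a cheap surrogate (here a cubic polynomial fit) on the ensemble
surrogate = np.poly1d(np.polyfit(q_ens, c_ens, 3))

# 3) Bayesian inversion by Monte Carlo sampling over the surrogate
q_true, sigma = 4.0, 0.05
c_obs = dispersion_model(q_true) + rng.normal(0, sigma)
q_mc = rng.uniform(0.1, 10.0, 50000)
log_like = -0.5 * ((c_obs - surrogate(q_mc)) / sigma) ** 2
w = np.exp(log_like - log_like.max())          # importance weights
q_post = np.sum(w * q_mc) / np.sum(w)          # posterior mean of the quantity
print(q_post)                                  # close to the true value 4.0
```

Once the surrogate is trained, steps like the data-denial experiments become cheap: only the likelihood changes as observations are coarsened, not the forward model.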

  4. On-line Bayesian model updating for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Rocchetta, Roberto; Broggi, Matteo; Huchet, Quentin; Patelli, Edoardo

    2018-03-01

Fatigue-induced cracking is a dangerous failure mechanism that affects mechanical components subject to alternating load cycles. System health monitoring should be adopted to identify cracks that can jeopardise the structure. Real-time damage detection may fail to identify cracks due to different sources of uncertainty which have been poorly assessed or even fully neglected. In this paper, a novel efficient and robust procedure is used for the detection of crack locations and lengths in mechanical components. A Bayesian model updating framework is employed, which allows accounting for relevant sources of uncertainty. The idea underpinning the approach is to identify the most probable crack consistent with the experimental measurements. To tackle the computational cost of the Bayesian approach, an emulator is adopted to replace the computationally costly Finite Element model. To improve the overall robustness of the procedure, different numerical likelihoods, measurement noises and imprecision in the value of model parameters are analysed and their effects quantified. The accuracy of the stochastic updating and the efficiency of the numerical procedure are discussed. An experimental aluminium frame and a numerical model of a typical car suspension arm are used to demonstrate the applicability of the approach.

  5. Stability of high-speed boundary layers in oxygen including chemical non-equilibrium effects

    NASA Astrophysics Data System (ADS)

    Klentzman, Jill; Tumin, Anatoli

    2013-11-01

    The stability of high-speed boundary layers in chemical non-equilibrium is examined. A parametric study varying the edge temperature and the wall conditions is conducted for boundary layers in oxygen. The edge Mach number and enthalpy ranges considered are relevant to the flight conditions of reusable hypersonic cruise vehicles. Both viscous and inviscid stability formulations are used and the results compared to gain insight into the effects of viscosity and thermal conductivity on the stability. It is found that viscous effects have a strong impact on the temperature and mass fraction perturbations in the critical layer and in the viscous sublayer near the wall. Outside of these areas, the perturbations closely match in the viscous and inviscid models. The impact of chemical non-equilibrium on the stability is investigated by analyzing the effects of the chemical source term in the stability equations. The chemical source term is found to influence the growth rate of the second Mack mode instability but not have much of an effect on the mass fraction eigenfunction for the flow parameters considered. This work was supported by the AFOSR/NASA/National Center for Hypersonic Laminar-Turbulent Transition Research.

  6. 3D tumor spheroid models for in vitro therapeutic screening: a systematic approach to enhance the biological relevance of data obtained

    PubMed Central

    Zanoni, Michele; Piccinini, Filippo; Arienti, Chiara; Zamagni, Alice; Santi, Spartaco; Polico, Rolando; Bevilacqua, Alessandro; Tesei, Anna

    2016-01-01

The potential of a spheroid tumor model composed of cells in different proliferative and metabolic states for the development of new anticancer strategies has been amply demonstrated. However, there is little or no information in the literature on the problems of reproducibility of data originating from experiments using 3D models. Our analyses, carried out using a novel open source software capable of performing an automatic image analysis of 3D tumor colonies, showed that a number of morphology parameters affect the response of large spheroids to treatment. In particular, we found that both spheroid volume and shape may be a source of variability. We also compared some commercially available viability assays specifically designed for 3D models. In conclusion, our data indicate the need for a pre-selection of tumor spheroids of homogeneous volume and shape to reduce data variability to a minimum before use in a cytotoxicity test. In addition, we identified and validated a cytotoxicity test capable of providing meaningful data on the damage induced in large tumor spheroids of up to 650 μm in diameter by different kinds of treatments. PMID:26752500
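The pre-selection step the authors recommend can be expressed as a simple filter on measured morphology. The thresholds and spheroid measurements below are illustrative, not values from the paper:

```python
import math

# Hypothetical image-analysis output per spheroid: volume and sphericity
# (1.0 = perfect sphere). All numbers are invented for illustration.
spheroids = [
    {"id": 1, "volume_um3": 1.1e8, "sphericity": 0.95},
    {"id": 2, "volume_um3": 0.4e8, "sphericity": 0.97},   # too small
    {"id": 3, "volume_um3": 1.0e8, "sphericity": 0.70},   # too irregular
    {"id": 4, "volume_um3": 1.3e8, "sphericity": 0.92},
]

# Target: a ~650 um diameter sphere -> volume = pi/6 * d^3
target_volume = math.pi / 6 * 650.0**3

# Keep only spheroids of homogeneous volume (within 25% of target, an
# assumed tolerance) and near-spherical shape (assumed cutoff 0.9).
selected = [s["id"] for s in spheroids
            if abs(s["volume_um3"] - target_volume) / target_volume < 0.25
            and s["sphericity"] > 0.9]
print(selected)   # [1, 4]
```

Filtering like this before a cytotoxicity assay removes the volume- and shape-driven variability the study identified.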

  7. Risk assessment for tephra dispersal and sedimentation: the example of four Icelandic volcanoes

    NASA Astrophysics Data System (ADS)

    Biass, Sebastien; Scaini, Chiara; Bonadonna, Costanza; Smith, Kate; Folch, Arnau; Höskuldsson, Armann; Galderisi, Adriana

    2014-05-01

In order to assist the elaboration of proactive measures for the management of future Icelandic volcanic eruptions, we developed a new approach to assess the impact associated with tephra dispersal and sedimentation at various scales and for multiple sources. Target volcanoes are Hekla, Katla, Eyjafjallajökull and Askja, selected for their high probabilities of eruption and/or their high potential impact. We combined stratigraphic studies, probabilistic strategies and numerical modelling to develop comprehensive eruption scenarios and compile hazard maps for local ground deposition and regional atmospheric concentration using both TEPHRA2 and FALL3D models. New algorithms for the identification of comprehensive probability density functions of eruptive source parameters were developed for both short and long-lasting activity scenarios. A vulnerability assessment of socioeconomic and territorial aspects was also performed at both national and continental scales. The identification of relevant vulnerability indicators revealed the most critical areas and territorial nodes. At a national scale, the vulnerability of economic activities and the accessibility of critical infrastructures were assessed. At a continental scale, we assessed the vulnerability of the main airline routes and airports. Resulting impact and risk were finally assessed by combining hazard and vulnerability analysis.

  8. Application of the MCNPX-McStas interface for shielding calculations and guide design at ESS

    NASA Astrophysics Data System (ADS)

    Klinkby, E. B.; Knudsen, E. B.; Willendrup, P. K.; Lauritzen, B.; Nonbøl, E.; Bentley, P.; Filges, U.

    2014-07-01

Recently, an interface between the Monte Carlo code MCNPX and the neutron ray-tracing code McStas was developed [1, 2]. Based on the expected neutronic performance and guide geometries relevant for the ESS, the combined MCNPX-McStas code is used to calculate dose rates along neutron beam guides. The generation and moderation of neutrons is simulated using a full scale MCNPX model of the ESS target monolith. Upon entering the neutron beam extraction region, the individual neutron states are handed to McStas via the MCNPX-McStas interface. McStas transports the neutrons through the beam guide, and by using newly developed event logging capability, the neutron state parameters corresponding to un-reflected neutrons are recorded at each scattering. This information is handed back to MCNPX, where it serves as neutron source input for a second MCNPX simulation. This simulation enables calculation of dose rates in the vicinity of the guide. In addition, the logging mechanism is employed to record the scatterings along the guides; this information is exploited to simulate the supermirror quality requirements (i.e. m-values) needed at different positions along the beam guide to transport neutrons in the same guide/source setup.

  9. Review of laser-driven ion sources and their applications.

    PubMed

    Daido, Hiroyuki; Nishiuchi, Mamiko; Pirozhkov, Alexander S

    2012-05-01

    For many years, laser-driven ion acceleration, mainly proton acceleration, has been proposed and a number of proof-of-principle experiments have been carried out with lasers whose pulse duration was in the nanosecond range. In the 1990s, ion acceleration in a relativistic plasma was demonstrated with ultra-short pulse lasers based on the chirped pulse amplification technique which can provide not only picosecond or femtosecond laser pulse duration, but simultaneously ultra-high peak power of terawatt to petawatt levels. Starting from the year 2000, several groups demonstrated low transverse emittance, tens of MeV proton beams with a conversion efficiency of up to several percent. The laser-accelerated particle beams have a duration of the order of a few picoseconds at the source, an ultra-high peak current and a broad energy spectrum, which make them suitable for many, including several unique, applications. This paper reviews, firstly, the historical background including the early laser-matter interaction studies on energetic ion acceleration relevant to inertial confinement fusion. Secondly, we describe several implemented and proposed mechanisms of proton and/or ion acceleration driven by ultra-short high-intensity lasers. We pay special attention to relatively simple models of several acceleration regimes. The models connect the laser, plasma and proton/ion beam parameters, predicting important features, such as energy spectral shape, optimum conditions and scalings under these conditions for maximum ion energy, conversion efficiency, etc. The models also suggest possible ways to manipulate the proton/ion beams by tailoring the target and irradiation conditions. Thirdly, we review experimental results on proton/ion acceleration, starting with the description of driving lasers. We list experimental results and show general trends of parameter dependences and compare them with the theoretical predictions and simulations. 
The fourth topic includes a review of scientific, industrial and medical applications of laser-driven proton or ion sources, some of which have already been established, while the others are yet to be demonstrated. In most applications, the laser-driven ion sources are complementary to the conventional accelerators, exhibiting significantly different properties. Finally, we summarize the paper.

  10. TH-AB-BRA-09: Stability Analysis of a Novel Dose Calculation Algorithm for MRI Guided Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zelyak, O; Fallone, B; Cross Cancer Institute, Edmonton, AB

    2016-06-15

Purpose: To determine the iterative deterministic solution stability of the Linear Boltzmann Transport Equation (LBTE) in the presence of magnetic fields. Methods: The LBTE with magnetic fields under investigation is derived using a discrete ordinates approach. The stability analysis is performed using analytical and numerical methods. Analytically, the spectral Fourier analysis is used to obtain the convergence rate of the source iteration procedures based on finding the largest eigenvalue of the iterative operator. This eigenvalue is a function of relevant physical parameters, such as magnetic field strength and material properties, and provides essential information about the domain of applicability required for clinically optimal parameter selection and maximum speed of convergence. The analytical results are reinforced by numerical simulations performed using the same discrete ordinates method in angle, and a discontinuous finite element spatial approach. Results: The spectral radius for the source iteration technique of the time independent transport equation with isotropic and anisotropic scattering centers inside an infinite 3D medium is equal to the ratio of differential and total cross sections. The result is confirmed numerically by solving the LBTE and is in full agreement with previously published results. The addition of magnetic field reveals that the convergence becomes dependent on the strength of magnetic field, the energy group discretization, and the order of anisotropic expansion. Conclusion: The source iteration technique for solving the LBTE with magnetic fields with the discrete ordinates method leads to divergent solutions in the limiting cases of small energy discretizations and high magnetic field strengths. Future investigations into non-stationary Krylov subspace techniques as an iterative solver will be performed as this has been shown to produce greater stability than source iteration.
Furthermore, a stability analysis of a discontinuous finite element space-angle approach (which has been shown to provide the greatest stability) will also be investigated. Dr. B Gino Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license Alberta bi-planar linac MR for commercialization).
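The abstract's central analytical result, a source-iteration spectral radius equal to the scattering-to-total cross-section ratio, can be checked on the simplest possible case: an infinite homogeneous one-group medium, where the iteration collapses to a scalar fixed-point map. This is a toy analogue, not the discrete-ordinates LBTE solver itself:

```python
# Infinite homogeneous medium, one energy group: the transport balance
# reduces to phi = (sigma_s * phi + q) / sigma_t, and source iteration
# converges geometrically with spectral radius c = sigma_s / sigma_t.
sigma_t, sigma_s, q = 1.0, 0.8, 1.0
phi_exact = q / (sigma_t - sigma_s)       # fixed point = 5.0

phi, errors = 0.0, []
for _ in range(30):
    phi = (sigma_s * phi + q) / sigma_t   # one source-iteration sweep
    errors.append(abs(phi - phi_exact))

ratios = [errors[i + 1] / errors[i] for i in range(len(errors) - 1)]
print(ratios[0])   # 0.8 = sigma_s / sigma_t, the predicted spectral radius
```

As c approaches 1 (highly scattering media) convergence stalls, which is the classic motivation for the Krylov alternatives mentioned in the conclusion; the magnetic-field-dependent divergence the authors report has no analogue in this scalar toy.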

  11. EpiContactTrace: an R-package for contact tracing during livestock disease outbreaks and for risk-based surveillance.

    PubMed

    Nöremark, Maria; Widgren, Stefan

    2014-03-17

During outbreaks of livestock diseases, contact tracing can be an important part of disease control. Animal movements can also be of relevance for risk-based surveillance and sampling, i.e. when assessing either the consequences or the likelihood of introduction. In many countries, animal movement data are collected with one of the major objectives being to enable contact tracing. However, an analytical step is often needed to retrieve appropriate information for contact tracing or surveillance. In this study, an open source tool was developed to structure livestock movement data to facilitate contact tracing in real time during disease outbreaks and for input in risk-based surveillance and sampling. The tool, EpiContactTrace, was written in the R language and uses the network parameters in-degree, out-degree, ingoing contact chain and outgoing contact chain (also called infection chain), which are relevant for forward and backward tracing respectively. The time-frames for backward and forward tracing can be specified independently, and the search can be done on one farm at a time or for all farms within the dataset. Different outputs are available: datasets with network measures, contacts visualised on a map, and automatically generated reports for each farm in either HTML or PDF format intended for the end-users, i.e. the veterinary authorities, regional disease control officers and field veterinarians. EpiContactTrace is available as an R package at the R-project website (http://cran.r-project.org/web/packages/EpiContactTrace/). We believe this tool can help in disease control since it can rapidly structure essential contact information from large datasets. The reproducible reports make this tool robust and independent of manual compilation of data. The open source code makes it accessible and easily adaptable for different needs.
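EpiContactTrace itself is an R package; as a language-neutral sketch, the outgoing contact chain of a holding is the set of holdings reachable through movement sequences whose dates never decrease (time-respecting paths). The movement records below are invented:

```python
# Hypothetical movement records (from_farm, to_farm, day).
movements = [("A", "B", 1), ("B", "C", 3), ("C", "D", 2), ("A", "E", 5)]

def outgoing_contact_chain(root, t0, t1):
    """Holdings reachable from `root` via date-respecting movements in [t0, t1]."""
    reachable, frontier = set(), {root: t0}
    while frontier:
        nxt = {}
        for src, earliest in frontier.items():
            for f, t, day in movements:
                if f == src and earliest <= day <= t1 and t != root and t not in reachable:
                    reachable.add(t)
                    nxt[t] = min(nxt.get(t, day), day)
        frontier = nxt
    return reachable

# D is NOT in the chain: the only route A->B (day 1) -> C (day 3) -> D (day 2)
# would require the last movement to happen before the one preceding it.
print(outgoing_contact_chain("A", 0, 10))
```

The ingoing contact chain (for backward tracing) is the mirror image: follow movements backward in time with non-increasing dates.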

  12. Enhancing Information Awareness Through Directed Qualification of Semantic Relevancy Scoring Operations

    DTIC Science & Technology

    2014-06-01

Analytics to evaluate document relevancy and order query results. Background: information environment complexity; relevancy solutions for big data. Primary Topic: Data, Information and Knowledge. Alternatives: Organizational Concepts and Approaches; Experimentation, Metrics, and Analysis.

  13. Grid-search Moment Tensor Estimation: Implementation and CTBT-related Application

    NASA Astrophysics Data System (ADS)

    Stachnik, J. C.; Baker, B. I.; Rozhkov, M.; Friberg, P. A.; Leifer, J. M.

    2017-12-01

This abstract presents a review work related to moment tensor estimation for Expert Technical Analysis at the Comprehensive Test Ban Treaty Organization. In this context of event characterization, estimates of key source parameters provide important insights into the nature of failure in the earth. For example, if the recovered source parameters are indicative of a shallow source with a large isotropic component, then one conclusion is that it is a human-triggered explosive event. However, an important follow-up question in this application is: does an alternative hypothesis, like a deeper source with a large double couple component, explain the data approximately as well as the best solution? Here we address the issue of both finding a most likely source and assessing its uncertainty. Using the uniform moment tensor discretization of Tape and Tape (2015), we exhaustively interrogate and tabulate the source eigenvalue distribution (i.e., the source characterization), tensor orientation, magnitude, and source depth. The benefit of the grid-search is that we can quantitatively assess the extent to which model parameters are resolved. This provides a valuable opportunity during the assessment phase to focus interpretation on source parameters that are well resolved. Another benefit of the grid-search is that it proves to be a flexible framework where different pieces of information can be easily incorporated. To this end, this work is particularly interested in fitting teleseismic body waves and regional surface waves as well as incorporating teleseismic first motions when available. Since the moment tensor search methodology is well established, we primarily focus on the implementation and application. We present a highly scalable strategy for systematically inspecting the entire model parameter space.
We then focus on application to regional and teleseismic data recorded during a handful of natural and anthropogenic events, report on the grid-search optimum, and discuss the resolution of interesting and/or important recovered source properties.
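The exhaustive tabulation strategy described above can be sketched in a few lines. This is a toy illustration, not the authors' implementation: the forward model, the two-parameter grid (depth and an isotropic fraction), and all names below are invented, whereas the real search runs over the full Tape and Tape (2015) moment tensor discretization and waveform misfits.

```python
import numpy as np

def synthetic(depth, iso_frac, t):
    # Invented toy forward model standing in for real waveform synthetics.
    return iso_frac * np.exp(-t / depth) + (1.0 - iso_frac) * np.sin(t / depth)

def grid_search(observed, t, depths, iso_fracs):
    """Exhaustively tabulate the misfit over the whole grid, so the
    flatness or sharpness of the minimum can be inspected afterwards."""
    misfit = np.empty((len(depths), len(iso_fracs)))
    for i, depth in enumerate(depths):
        for j, frac in enumerate(iso_fracs):
            misfit[i, j] = np.sum((synthetic(depth, frac, t) - observed) ** 2)
    i_best, j_best = np.unravel_index(np.argmin(misfit), misfit.shape)
    return depths[i_best], iso_fracs[j_best], misfit

t = np.linspace(0.1, 10.0, 200)
observed = synthetic(5.0, 0.7, t)        # noise-free target "data"
depths = np.linspace(1.0, 10.0, 19)      # grid includes the true depth 5.0
iso_fracs = np.linspace(0.0, 1.0, 21)    # grid includes the true fraction 0.7
best_depth, best_iso, misfit = grid_search(observed, t, depths, iso_fracs)
```

Because the whole misfit table is kept, resolution can be assessed afterwards by inspecting how sharply the minimum stands out along each parameter axis.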

  14. Laser-ablation-based ion source characterization and manipulation for laser-driven ion acceleration

    NASA Astrophysics Data System (ADS)

    Sommer, P.; Metzkes-Ng, J.; Brack, F.-E.; Cowan, T. E.; Kraft, S. D.; Obst, L.; Rehwald, M.; Schlenvoigt, H.-P.; Schramm, U.; Zeil, K.

    2018-05-01

For laser-driven ion acceleration from thin foils (∼10 μm–100 nm) in the target normal sheath acceleration regime, the hydrocarbon contaminant layer at the target surface generally serves as the ion source and hence determines the accelerated ion species, i.e. mainly protons, carbon and oxygen ions. The specific characteristics of the source layer—thickness and relevant lateral extent—as well as its manipulation have been investigated since the first experiments on laser-driven ion acceleration, using a variety of techniques from direct source imaging to knife-edge or mesh imaging. In this publication, we present an experimental study in which laser ablation in two fluence regimes (low: F ∼ 0.6 J cm‑2, high: F ∼ 4 J cm‑2) was applied to characterize and manipulate the hydrocarbon source layer. The high-fluence ablation, in combination with a timed laser pulse for particle acceleration, allowed for an estimation of the relevant source-layer thickness for proton acceleration. Moreover, from these data and independently from the low-fluence regime, the lateral extent of the ion source layer became accessible.

  15. Influence of source parameters on the growth of metal nanoparticles by sputter-gas-aggregation

    NASA Astrophysics Data System (ADS)

    Khojasteh, Malak; Kresin, Vitaly V.

    2017-11-01

    We describe the production of size-selected manganese nanoclusters using a magnetron sputtering/aggregation source. Since nanoparticle production is sensitive to a range of overlapping operating parameters (in particular, the sputtering discharge power, the inert gas flow rates, and the aggregation length), we focus on a detailed map of the influence of each parameter on the average nanocluster size. In this way, it is possible to identify the main contribution of each parameter to the physical processes taking place within the source. The discharge power and argon flow supply the metal vapor, and argon also plays a crucial role in the formation of condensation nuclei via three-body collisions. However, the argon flow and the discharge power have a relatively weak effect on the average nanocluster size in the exiting beam. Here the defining role is played by the source residence time, governed by the helium supply (which raises the pressure and density of the gas column inside the source, resulting in more efficient transport of nanoparticles to the exit) and by the aggregation path length.

  16. Accuracy of assessing the level of impulse sound from distant sources.

    PubMed

    Wszołek, Tadeusz; Kłaczyński, Maciej

    2007-01-01

Impulse sound events are characterised by very high peak pressures and low frequencies. Lower-frequency sounds are generally less attenuated over a given distance in the atmosphere than higher frequencies; thus, impulse sounds can be heard over greater distances and are more strongly affected by the environment. To calculate a long-term average immission level, it is necessary to apply weighting factors such as the probability of occurrence of each weather condition during the relevant time period. This means that when measuring impulse noise at a long distance it is necessary to monitor environmental parameters at many points along the sound propagation path, and also to maintain a long-term database of sound transfer functions. The paper analyses the uncertainty of immission measurement results for impulse sound from the cladding and destruction of explosive materials. The influence of environmental conditions along the propagation path is the focus of this paper.
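The weather-weighted long-term average level referred to above is, in outline, an energetic mean of the per-condition immission levels. A minimal sketch under that assumption follows; the weather classes, levels, and probabilities are invented for illustration.

```python
import math

def long_term_level(levels_db, probabilities):
    """Energy-average immission levels over weather classes, weighting
    each class by its probability of occurrence (probabilities sum to 1)."""
    mean_power = sum(p * 10 ** (level / 10)
                     for level, p in zip(levels_db, probabilities))
    return 10 * math.log10(mean_power)

# Hypothetical example: three weather classes with measured levels (dB).
L_long = long_term_level([70.0, 65.0, 60.0], [0.2, 0.5, 0.3])
```

Averaging in the power domain rather than directly in decibels is what makes the rare favourable-propagation conditions dominate the long-term result.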

  17. Processing methods, characteristics and adsorption behavior of tire derived carbons: a review.

    PubMed

    Saleh, Tawfik A; Gupta, Vinod Kumar

    2014-09-01

The remarkable increase in the number of vehicles worldwide and the lack of technically and economically viable disposal mechanisms make waste tires a serious source of pollution. One potential recycling route is pyrolysis followed by chemical activation to produce porous activated carbons. Many researchers have recently demonstrated the capability of such carbons as adsorbents for removing various types of pollutants, including organic and inorganic species. This review attempts to compile the relevant knowledge about methods for producing carbon from waste rubber tires. The effects of various process parameters are reviewed, including temperature and heating rate in the pyrolysis stage, and activation temperature, activation time, activating agent and activating gas in the activation stage. This review also highlights the use of waste-tire-derived carbon to remove various types of pollutants, such as heavy metals, dyes and pesticides, from aqueous media. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Gas Core Reactor Numerical Simulation Using a Coupled MHD-MCNP Model

    NASA Technical Reports Server (NTRS)

    Kazeminezhad, F.; Anghaie, S.

    2008-01-01

This report provides an analysis of using two head-on magnetohydrodynamic (MHD) shocks to achieve supercritical nuclear fission in an axially elongated cylinder filled with UF4 gas, as an energy source for deep space missions. The motivation for each aspect of the design is explained and supported by theory and numerical simulations. A subsequent report will provide detail on relevant experimental work to validate the concept. Here the focus is on the theory of, and simulations for, the proposed gas core reactor conceptual design, from the onset of shock generation to the supercritical state achieved when the shocks collide. The MHD model is coupled to a standard nuclear code (MCNP) to observe the neutron flux and fission power attributed to the supercritical state brought about by the shock collisions. Throughout the modeling, realistic parameters are used for the initial ambient gaseous state and currents to ensure a resulting supercritical state upon shock collision.

  19. Modeling Electric Field Influences on Plasmaspheric Refilling

    NASA Technical Reports Server (NTRS)

    Liemohn, M. W.; Kozyra, J. U.; Khazanov, G. V.; Craven, Paul D.

    1998-01-01

We present a new model of ion transport, which we have applied to the problem of plasmaspheric flux tube refilling after a geomagnetic disturbance. The model solves the Fokker-Planck kinetic equation by applying discrete-difference numerical schemes to the various operators. Features of the model include a time-varying ionospheric source, self-consistent Coulomb collisions, a field-aligned electric field, hot plasma interactions, and ion cyclotron wave heating. We see refilling rates similar to those of earlier observations and models, except when the electric field is included; in this case, the refilling rates can be quite different from those previously predicted. Depending on the populations included and the values of the relevant parameters, trap-zone densities can increase or decrease. In particular, the inclusion of hot populations near the equatorial region (specifically, warm pancake distributions and ring current ions) can dramatically alter the refilling rate. Results are compared with observations as well as with previous hydrodynamic and kinetic particle model simulations.

  20. Development and Evaluation of a Social Media Health Intervention to Improve Adolescents’ Knowledge About and Vaccination Against the Human Papillomavirus

    PubMed Central

    Shafer, Autumn; Cates, Joan; Coyne-Beasley, Tamera

    2018-01-01

    This study describes the formative research, execution, and evaluation of a social media health intervention to improve adolescents’ knowledge about and vaccination against human papillomavirus (HPV). Based on the results from formative focus groups with adolescents (N = 38) to determine intervention feasibility, parameters, and message preferences, we developed and conducted a pretest/posttest evaluation of a 3-month social media health intervention for adolescents who had not completed the HPV vaccine series (N = 108). Results revealed that adolescents who fully engaged with the intervention improved in their knowledge compared with a control group, and many were also likely to have interpersonal discussions with others about what they learned. Adolescents are generally interested in receiving information about HPV and the vaccine, along with other relevant health information, through social media channels if messages are considered interesting, their privacy is protected, and the source is credible. PMID:29872667

  1. Science Concierge: A Fast Content-Based Recommendation System for Scientific Publications.

    PubMed

    Achakulvisut, Titipat; Acuna, Daniel E; Ruangrong, Tulakan; Kording, Konrad

    2016-01-01

Finding relevant publications is important for scientists who must cope with an exponentially increasing volume of scholarly material. Algorithms can help with this task, as they do for music, movie, and product recommendations. However, we know little about the performance of these algorithms on scholarly material. Here, we develop an algorithm, and an accompanying Python library, that implements a recommendation system based on the content of articles. The design principles are to adapt to new content, provide near-real-time suggestions, and be open source. We tested the library on 15K posters from the Society for Neuroscience Conference 2015. Human-curated topics are used to cross-validate parameters in the algorithm and to produce a similarity metric that maximally correlates with human judgments. We show that our algorithm significantly outperformed suggestions based on keywords. The work presented here promises to make the exploration of scholarly material faster and more accurate.
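The content-based idea can be sketched with plain bag-of-words cosine similarity. This is a deliberately simplified stand-in, not the published library's pipeline (which uses a more elaborate vectorization of article content); the miniature corpus and query below are invented.

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(count * b.get(term, 0) for term, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend(liked, corpus, k=2):
    """Rank corpus documents by content similarity to a liked document."""
    query = vectorize(liked)
    ranked = sorted(corpus, key=lambda doc: cosine(query, vectorize(doc)),
                    reverse=True)
    return ranked[:k]

# Invented miniature corpus standing in for poster abstracts.
corpus = [
    "neural decoding of motor cortex spikes",
    "bayesian inference for spiking neurons",
    "pop music recommendation with collaborative filtering",
]
top = recommend("spiking neural activity in motor cortex", corpus, k=1)
```

A real system would replace raw counts with TF-IDF or topic vectors and index the corpus for near-real-time lookup, but the ranking step is the same shape.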

  2. Observations of potassium in the tenuous lunar atmosphere

    NASA Technical Reports Server (NTRS)

    Kozlowski, Richard W. H.; Sprague, Ann L.; Hunten, Donald M.

    1990-01-01

Observations of neutral potassium (K) in the tenuous lunar atmosphere are described. An echelle spectrograph, CCD, and data acquisition system were used to obtain emission spectra of neutral K atoms in the lunar atmosphere, observed with the 1.54 m telescope at the Catalina Observatory at first quarter on the night of April 29, 1989. A table of relevant lunar atmosphere parameters summarizes the results of the investigation. It is found that the number density at the surface is 9.5 ± 1 atoms per cu cm, and that there is a large nonthermal component and a deficiency of atoms equilibrated to the surface temperature. The calculated thermalization rate of the nonthermal component through encounters with the lunar surface gives a source strength for the thermal component a factor of 7 greater than the loss by photoionization. Possible explanations for the low thermalized population observed are considered.

  3. A feasibility study of X-ray phase-contrast mammographic tomography at the Imaging and Medical beamline of the Australian Synchrotron.

    PubMed

    Nesterets, Yakov I; Gureyev, Timur E; Mayo, Sheridan C; Stevenson, Andrew W; Thompson, Darren; Brown, Jeremy M C; Kitchen, Marcus J; Pavlov, Konstantin M; Lockie, Darren; Brun, Francesco; Tromba, Giuliana

    2015-11-01

    Results are presented of a recent experiment at the Imaging and Medical beamline of the Australian Synchrotron intended to contribute to the implementation of low-dose high-sensitivity three-dimensional mammographic phase-contrast imaging, initially at synchrotrons and subsequently in hospitals and medical imaging clinics. The effect of such imaging parameters as X-ray energy, source size, detector resolution, sample-to-detector distance, scanning and data processing strategies in the case of propagation-based phase-contrast computed tomography (CT) have been tested, quantified, evaluated and optimized using a plastic phantom simulating relevant breast-tissue characteristics. Analysis of the data collected using a Hamamatsu CMOS Flat Panel Sensor, with a pixel size of 100 µm, revealed the presence of propagation-based phase contrast and demonstrated significant improvement of the quality of phase-contrast CT imaging compared with conventional (absorption-based) CT, at medically acceptable radiation doses.

  4. Science Concierge: A Fast Content-Based Recommendation System for Scientific Publications

    PubMed Central

    Achakulvisut, Titipat; Acuna, Daniel E.; Ruangrong, Tulakan; Kording, Konrad

    2016-01-01

Finding relevant publications is important for scientists who must cope with an exponentially increasing volume of scholarly material. Algorithms can help with this task, as they do for music, movie, and product recommendations. However, we know little about the performance of these algorithms on scholarly material. Here, we develop an algorithm, and an accompanying Python library, that implements a recommendation system based on the content of articles. The design principles are to adapt to new content, provide near-real-time suggestions, and be open source. We tested the library on 15K posters from the Society for Neuroscience Conference 2015. Human-curated topics are used to cross-validate parameters in the algorithm and to produce a similarity metric that maximally correlates with human judgments. We show that our algorithm significantly outperformed suggestions based on keywords. The work presented here promises to make the exploration of scholarly material faster and more accurate. PMID:27383424

  5. Kinematic source inversions of teleseismic data based on the QUESO library for uncertainty quantification and prediction

    NASA Astrophysics Data System (ADS)

    Zielke, O.; McDougall, D.; Mai, P. M.; Babuska, I.

    2014-12-01

One fundamental aspect of seismic hazard mitigation is gaining a better understanding of the rupture process. Because direct observation of the relevant parameters and properties is not possible, other means such as kinematic source inversions are used instead. By constraining the spatial and temporal evolution of fault slip during an earthquake, these inversion approaches may enable valuable insights into the physics of the rupture process. However, due to the underdetermined nature of this inversion problem (i.e., inverting a kinematic source model for an extended fault based on seismic data), the provided solutions are generally non-unique. Here we present a statistical (Bayesian) inversion approach based on QUESO, an open-source library for uncertainty quantification (UQ) developed at ICES (UT Austin). The approach has advantages over deterministic inversion approaches, as it provides not only a single (non-unique) solution but also the associated uncertainty bounds. These uncertainty bounds help to judge, qualitatively and quantitatively, how well constrained an inversion solution is and how much rupture complexity the data reliably resolve. The presented inversion scheme uses only teleseismically recorded body waves, but future developments may lead us towards joint inversion schemes. After describing the inversion scheme itself (based on delayed rejection adaptive Metropolis, DRAM), we explore the method's resolution potential. For that, we synthetically generate teleseismic data, add, for example, different levels of noise and/or change the fault plane parameterization, and then apply our inversion scheme in an attempt to extract the (known) kinematic rupture model. We conclude by inverting, as an example, real teleseismic data of a recent large earthquake and comparing the results with deterministically derived kinematic source models provided by other research groups.
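In its simplest form, a sampler of the DRAM family reduces to random-walk Metropolis. The sketch below is that bare-bones version on a one-parameter toy problem (a single slip value fit to scalar data); the forward model, noise level, and data are invented, and a real kinematic inversion samples many correlated parameters with delayed rejection and adaptation on top.

```python
import math
import random
import statistics

random.seed(0)

def log_post(m, data, sigma=0.5):
    # Toy forward model: each datum directly observes the slip parameter m.
    return -sum((d - m) ** 2 for d in data) / (2 * sigma ** 2)

def metropolis(data, n_steps=5000, step=0.3, burn_in=1000):
    """Random-walk Metropolis: propose, then accept with the posterior ratio."""
    m = 0.0
    log_p = log_post(m, data)
    samples = []
    for _ in range(n_steps):
        proposal = m + random.gauss(0.0, step)
        log_p_prop = log_post(proposal, data)
        if math.log(random.random()) < log_p_prop - log_p:
            m, log_p = proposal, log_p_prop
        samples.append(m)
    return samples[burn_in:]

data = [2.1, 1.9, 2.0, 2.2, 1.8]
samples = metropolis(data)
post_mean = statistics.fmean(samples)
post_sd = statistics.pstdev(samples)
```

The spread of the retained samples is exactly the uncertainty bound the abstract emphasizes: instead of one best-fit slip value, the chain delivers a distribution around it.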

  6. Assessment of the content, structure, and source of soil dissolved organic matter in the coastal wetlands of Jiaozhou Bay, China

    NASA Astrophysics Data System (ADS)

    Xi, Min; Zi, Yuanyuan; Wang, Qinggai; Wang, Sen; Cui, Guolu; Kong, Fanlong

    2018-02-01

The contents and spectral characteristics of dissolved organic matter (DOM) were analysed in the soils of four typical wetlands (naked tidal flat, Suaeda salsa, reed, and Spartina) to investigate the content, structure, and source of DOM in coastal wetland soil. The soil samples were obtained from Jiaozhou Bay in January, April, July, and October of 2014. Results showed that the soil DOM contents of the four typical wetlands followed the order Spartina wetland > naked tidal flat > Suaeda salsa wetland > reed wetland in the horizontal direction, and decreased with increasing soil depth on the vertical section. In addition, the DOM contents changed with the seasons, in the order spring > summer > autumn > winter. The structural characteristics of DOM in the Jiaozhou Bay wetlands, such as aromaticity, hydrophobicity, molecular weight, and the degree of polymerization of the benzene-ring carbon skeleton, followed the order Spartina wetland > naked tidal flat > Suaeda salsa wetland > reed wetland in the horizontal direction; in the vertical direction, they showed a decreasing trend with increasing soil depth. The results of three-dimensional fluorescence spectra and fluorescence spectral parameters (FI, HIX, and BIX) indicated that the DOM in Jiaozhou Bay was mainly derived from biological activity. The content and structure of DOM were correlated to some extent, but neither the content and source nor the structure and source of DOM showed a significant correlation. External pollution, including domestic sewage, industrial wastewater, and aquaculture sewage, affected the correlation among the content, structure, and source of DOM by influencing the percentage of non-fluorescent substances in DOM and disturbing the determination of protein-like fluorescence.
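The three fluorescence indices mentioned (FI, HIX, BIX) are simple ratios computed from emission spectra at fixed excitation wavelengths. The sketch below uses common literature definitions (FI: emission 470/520 nm at 370 nm excitation; HIX: emission area 435–480 nm over 300–345 nm at 254 nm excitation; BIX: emission 380/430 nm at 310 nm excitation) on entirely synthetic spectra; exact wavelength conventions vary between studies, and the spectra are invented.

```python
import numpy as np

em = np.arange(250, 601)  # emission wavelengths, nm, 1 nm grid

def gauss(center, width, amp=1.0):
    """Synthetic Gaussian emission band (arbitrary units)."""
    return amp * np.exp(-((em - center) ** 2) / (2 * width ** 2))

# Invented emission spectra, one per excitation wavelength.
spec_ex370 = gauss(450, 40)                       # excitation 370 nm
spec_ex254 = gauss(460, 50) + gauss(320, 30, 0.3)  # excitation 254 nm
spec_ex310 = gauss(400, 45)                       # excitation 310 nm

fi = spec_ex370[em == 470][0] / spec_ex370[em == 520][0]
hix = (spec_ex254[(em >= 435) & (em <= 480)].sum()
       / spec_ex254[(em >= 300) & (em <= 345)].sum())
bix = spec_ex310[em == 380][0] / spec_ex310[em == 430][0]
```

On real excitation-emission matrices these ratios are read off measured spectra; higher FI and BIX point toward microbial/autochthonous sources, and higher HIX toward more humified, terrestrial material.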

  7. AtomPy: an open atomic-data curation environment

    NASA Astrophysics Data System (ADS)

    Bautista, Manuel; Mendoza, Claudio; Boswell, Josiah S; Ajoku, Chukwuemeka

    2014-06-01

    We present a cloud-computing environment for atomic data curation, networking among atomic data providers and users, teaching-and-learning, and interfacing with spectral modeling software. The system is based on Google-Drive Sheets, Pandas (Python Data Analysis Library) DataFrames, and IPython Notebooks for open community-driven curation of atomic data for scientific and technological applications. The atomic model for each ionic species is contained in a multi-sheet Google-Drive workbook, where the atomic parameters from all known public sources are progressively stored. Metadata (provenance, community discussion, etc.) accompanying every entry in the database are stored through Notebooks. Education tools on the physics of atomic processes as well as their relevance to plasma and spectral modeling are based on IPython Notebooks that integrate written material, images, videos, and active computer-tool workflows. Data processing workflows and collaborative software developments are encouraged and managed through the GitHub social network. Relevant issues this platform intends to address are: (i) data quality by allowing open access to both data producers and users in order to attain completeness, accuracy, consistency, provenance and currentness; (ii) comparisons of different datasets to facilitate accuracy assessment; (iii) downloading to local data structures (i.e. Pandas DataFrames) for further manipulation and analysis by prospective users; and (iv) data preservation by avoiding the discard of outdated sets.

  8. Antioxidant and Angiotensin-Converting Enzyme Inhibitory Activity of Eucalyptus camaldulensis and Litsea glaucescens Infusions Fermented with Kombucha Consortium.

    PubMed

Gamboa-Gómez, Claudia I; González-Laredo, Rubén F; Gallegos-Infante, José Alberto; Pérez, Mª del Mar Larrosa; Moreno-Jiménez, Martha R; Flores-Rueda, Ana G; Rocha-Guzmán, Nuria E

    2016-09-01

Physicochemical properties, consumer acceptance, antioxidant and angiotensin-converting enzyme (ACE) inhibitory activities of infusions and fermented beverages of Eucalyptus camaldulensis and Litsea glaucescens were compared. Among physicochemical parameters, only the pH of fermented beverages decreased compared with the unfermented infusions. No relevant changes were reported in consumer preference between infusions and fermented beverages. The phenolic profile measured by UPLC MS/MS analysis demonstrated significant concentration changes of these compounds in plant infusions and fermented beverages. Fermentation induced a decrease in the concentration required to stabilize 50% of the DPPH radical (i.e. a lower IC50). Additionally, it enhanced the antioxidant activity measured by the nitric oxide scavenging assay (14% of E. camaldulensis and 49% of L. glaucescens); whereas relevant improvements in the fermented beverage were not observed in the lipid oxidation assay compared with unfermented infusions. The same behaviour was observed in the inhibitory activity of ACE; however, both infusions and fermented beverages had a lower IC50 than the positive control (captopril). The present study demonstrated that fermentation has an influence on the concentration of phenolics and their potential bioactivity. E. camaldulensis and L. glaucescens can be considered natural sources of biocompounds with antihypertensive potential, used either as infusions or fermented beverages.

  9. Antioxidant and Angiotensin-Converting Enzyme Inhibitory Activity of Eucalyptus camaldulensis and Litsea glaucescens Infusions Fermented with Kombucha Consortium

    PubMed Central

Gamboa-Gómez, Claudia I.; González-Laredo, Rubén F.; Gallegos-Infante, José Alberto; Pérez, Mª del Mar Larrosa; Moreno-Jiménez, Martha R.; Flores-Rueda, Ana G.

    2016-01-01

Physicochemical properties, consumer acceptance, antioxidant and angiotensin-converting enzyme (ACE) inhibitory activities of infusions and fermented beverages of Eucalyptus camaldulensis and Litsea glaucescens were compared. Among physicochemical parameters, only the pH of fermented beverages decreased compared with the unfermented infusions. No relevant changes were reported in consumer preference between infusions and fermented beverages. Phenolic profile measured by UPLC MS/MS analysis demonstrated significant concentration changes of these compounds in plant infusions and fermented beverages. Fermentation induced a decrease in the concentration required to stabilize 50% of DPPH radical (i.e. lower IC50). Additionally, it enhanced the antioxidant activity measured by the nitric oxide scavenging assay (14% of E. camaldulensis and 49% of L. glaucescens); whereas relevant improvements in the fermented beverage were not observed in the lipid oxidation assay compared with unfermented infusions. The same behaviour was observed in the inhibitory activity of ACE; however, both infusions and fermented beverages had lower IC50 than positive control (captopril). The present study demonstrated that fermentation has an influence on the concentration of phenolics and their potential bioactivity. E. camaldulensis and L. glaucescens can be considered as natural sources of biocompounds with antihypertensive potential used either as infusions or fermented beverages. PMID:27956869

  10. An open source Bayesian Monte Carlo isotope mixing model with applications in Earth surface processes

    NASA Astrophysics Data System (ADS)

    Arendt, Carli A.; Aciego, Sarah M.; Hetland, Eric A.

    2015-05-01

The implementation of isotopic tracers as constraints on source contributions has become increasingly relevant to understanding Earth surface processes. Interpretation of these isotopic tracers has become more accessible with the development of Bayesian Monte Carlo (BMC) mixing models, which allow for uncertainty in mixing end-members and provide a methodology for systems with multicomponent mixing. This study presents an open source multiple-isotope BMC mixing model that is applicable to Earth surface environments with sources exhibiting distinct end-member isotopic signatures. Our model is first applied to new δ18O and δD measurements from the Athabasca Glacier, where it reproduced the expected seasonal melt evolution trends and allowed a rigorous assessment of the statistical significance of the resulting fraction estimates. To highlight the broad applicability of our model to a variety of Earth surface environments and relevant isotopic systems, we extend it to two additional case studies: deriving melt sources from δ18O, δD, and 222Rn measurements of Greenland Ice Sheet bulk water samples, and assessing nutrient sources from εNd and 87Sr/86Sr measurements of Hawaiian soil cores. The model produces results for the Greenland Ice Sheet and Hawaiian soil data sets that are consistent with the originally published fractional contribution estimates. The advantage of this method is that it quantifies the error induced by variability in the end-member compositions, which was not captured by the models previously applied to the above case studies. Results from all three case studies demonstrate the broad applicability of this statistical BMC isotopic mixing model for estimating source contribution fractions in a variety of Earth surface systems.
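A two-end-member version of such a BMC mixing model fits in a few lines: draw end-member signatures from their uncertainty distributions, solve the mass balance for the mixing fraction, and keep the physical solutions. The δ18O end-member values below are invented, and the published model handles more components and multiple isotope systems jointly.

```python
import random

random.seed(42)

def bmc_mixing(sample, em1, em2, n=20000):
    """Propagate end-member uncertainty into the source-fraction estimate.
    em1 and em2 are (mean, sd) of each end-member's isotopic signature."""
    fractions = []
    for _ in range(n):
        a = random.gauss(*em1)
        b = random.gauss(*em2)
        if a == b:
            continue
        f = (sample - b) / (a - b)      # two-component mass balance
        if 0.0 <= f <= 1.0:             # keep only physical mixtures
            fractions.append(f)
    return sorted(fractions)

# Hypothetical d18O end-members (permil): snow melt vs. ice melt.
fracs = bmc_mixing(sample=-16.0, em1=(-20.0, 0.5), em2=(-12.0, 0.5))
f_median = fracs[len(fracs) // 2]
lo, hi = fracs[int(0.025 * len(fracs))], fracs[int(0.975 * len(fracs))]
```

The width of the (lo, hi) interval is exactly the error induced by end-member variability that the abstract highlights; a deterministic solve of the same mass balance would return only the central value.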

  11. Bayesian source tracking via focalization and marginalization in an uncertain Mediterranean Sea environment.

    PubMed

    Dosso, Stan E; Wilmut, Michael J; Nielsen, Peter L

    2010-07-01

This paper applies Bayesian source tracking in an uncertain environment to Mediterranean Sea data, and investigates the resulting tracks and track uncertainties as a function of data information content (number of data time-segments, number of frequencies, and signal-to-noise ratio) and of prior information (environmental uncertainties and source-velocity constraints). To track low-level sources, acoustic data recorded for multiple time segments (corresponding to multiple source positions along the track) are inverted simultaneously. Environmental uncertainty is addressed by including unknown water-column and seabed properties as nuisance parameters in an augmented inversion. Two approaches are considered: Focalization-tracking maximizes the posterior probability density (PPD) over the unknown source and environmental parameters. Marginalization-tracking integrates the PPD over environmental parameters to obtain a sequence of joint marginal probability distributions over source coordinates, from which the most-probable track and track uncertainties can be extracted. Both approaches apply track constraints on the maximum allowable vertical and radial source velocity. The two approaches are applied to towed-source acoustic data recorded at a vertical line array at a shallow-water test site in the Mediterranean Sea, where previous geoacoustic studies have been carried out.
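The difference between the two strategies can be illustrated on a toy posterior surface over one source coordinate and one environmental nuisance parameter. The surface below is an invented correlated Gaussian, not a matched-field processor output; all names and values are for illustration only.

```python
import numpy as np

# Toy PPD over source range r and one environmental nuisance parameter c
# (e.g. a scaled sound-speed perturbation), with correlation between them.
r = np.linspace(0.0, 10.0, 201)      # candidate source ranges (km)
c = np.linspace(-3.0, 3.0, 121)      # scaled nuisance parameter
R, C = np.meshgrid(r, c, indexing="ij")
logp = -2.0 * (R - 5.0 - 0.5 * C) ** 2 - 0.5 * C ** 2
ppd = np.exp(logp - logp.max())

# Focalization: maximize the PPD jointly over source and environment.
i_foc, j_foc = np.unravel_index(np.argmax(ppd), ppd.shape)
r_focal = r[i_foc]

# Marginalization: integrate the PPD over the environment, then read off
# the most probable source coordinate and its uncertainty.
marginal = ppd.sum(axis=1)
r_marg = r[np.argmax(marginal)]

def spread(weights):
    """Posterior standard deviation of r under the given weights."""
    w = weights / weights.sum()
    mu = (r * w).sum()
    return np.sqrt(((r - mu) ** 2 * w).sum())

# Environmental uncertainty inflates the source-coordinate uncertainty:
# the marginal is broader than the slice at the focalized environment.
sd_marginal = spread(marginal)
sd_conditional = spread(ppd[:, j_foc])
```

Both strategies locate the same source here, but the marginalized uncertainty is honest about the environment, which is the point of the marginalization-tracking approach.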

  12. Ground Truth Events with Source Geometry in Eurasia and the Middle East

    DTIC Science & Technology

    2016-06-02

… source properties, including seismic moment, corner frequency, radiated energy, and stress drop, have been obtained using spectra for S waves following … Other source parameters, including radiated energy, corner frequency, seismic moment, and static stress drop, were calculated using a spectral technique (Richardson & Jordan, 2002; Andrews, 1986). The process entails separating event and station spectra and median-stacking each event's …
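The event/station spectral separation named in the excerpt can be sketched as follows: working in log-spectral amplitudes, each observation is modeled as an event term plus a station term, and median stacking over one index recovers the other. Everything here (array sizes, noise level, the centering convention that resolves the additive trade-off) is invented for illustration and is not the cited technique's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_events, n_stations = 6, 8

# Synthetic log-spectral amplitudes: event term + station term + noise.
event = rng.normal(0.0, 1.0, n_events)
station = rng.normal(0.0, 1.0, n_stations)
station -= np.median(station)    # fix the additive trade-off by convention
obs = (event[:, None] + station[None, :]
       + rng.normal(0.0, 0.05, (n_events, n_stations)))

# Median-stack across stations to estimate each event term, then stack
# the residuals across events to estimate each station term.
event_est = np.median(obs, axis=1)
station_est = np.median(obs - event_est[:, None], axis=0)
```

The median (rather than mean) stack makes the separation robust to a few anomalous station spectra, which is presumably why it is used for real data.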

  13. Mining large heterogeneous data sets in drug discovery.

    PubMed

    Wild, David J

    2009-10-01

Increasingly, effective drug discovery involves searching and mining large volumes of information from many sources covering the domains of chemistry, biology and pharmacology, amongst others. This has led to a proliferation of databases and data sources relevant to drug discovery. This paper provides a review of the publicly available large-scale databases relevant to drug discovery, describes the kinds of data mining approaches that can be applied to them, and discusses recent work in integrative data mining that looks for associations spanning multiple sources, including the use of Semantic Web techniques. The future of mining large data sets for drug discovery requires intelligent, semantic aggregation of information from all of the data sources described in this review, along with the application of advanced methods such as intelligent agents and inference engines in client applications.

  14. Method for acquiring, storing and analyzing crystal images

    NASA Technical Reports Server (NTRS)

    Gester, Thomas E. (Inventor); Rosenblum, William M. (Inventor); Christopher, Gayle K. (Inventor); Hamrick, David T. (Inventor); Delucas, Lawrence J. (Inventor); Tillotson, Brian (Inventor)

    2003-01-01

    A system utilizing a digital computer for acquiring, storing and evaluating crystal images. The system includes a video camera (12) which produces a digital output signal representative of a crystal specimen positioned within its focal window (16). The digitized output from the camera (12) is then stored on data storage media (32) together with other parameters inputted by a technician and relevant to the crystal specimen. Preferably, the digitized images are stored on removable media (32) while the parameters for different crystal specimens are maintained in a database (40) with indices to the digitized optical images on the other data storage media (32). Computer software is then utilized to identify not only the presence and number of crystals and the edges of the crystal specimens from the optical image, but to also rate the crystal specimens by various parameters, such as edge straightness, polygon formation, aspect ratio, surface clarity, crystal cracks and other defects or lack thereof, and other parameters relevant to the quality of the crystals.

  15. Kernel machines for epilepsy diagnosis via EEG signal classification: a comparative study.

    PubMed

    Lima, Clodoaldo A M; Coelho, André L V

    2011-10-01

    We carry out a systematic assessment of a suite of kernel-based learning machines on the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the extracted features; four wavelet basis functions were considered in this study. Then, we provide the average accuracy values (estimated via cross-validation) delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their sensitivity to the type of feature and to the kernel function/parameter value. Overall, the results indicate that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions, although the choice of wavelet family seems less relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged across all types of machines, involving regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). Copyright © 2011 Elsevier B.V. All rights reserved.
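The kernel-sensitivity study above revolves around how the radius parameter shapes the Gaussian and exponential radial basis functions. A minimal sketch of the two kernels and a radius sweep follows; the radius values and test points are illustrative only (the study used 26 radius values), and the exact normalization of the exponential RBF varies between references, so the form below is an assumption:

```python
import math

def gaussian_rbf(x, y, radius):
    """Gaussian RBF: K(x, y) = exp(-||x - y||^2 / (2 * radius^2))."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2.0 * radius ** 2))

def exponential_rbf(x, y, radius):
    """Exponential RBF (one common convention): exp(-||x - y|| / (2 * radius^2))."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return math.exp(-d / (2.0 * radius ** 2))

# Sweep the kernel radius for a fixed pair of (hypothetical) feature vectors;
# small radii drive the kernel toward 0, large radii toward 1.
x, y = [1.0, 0.0, 2.0], [0.5, 1.0, 1.5]
for radius in (0.1, 1.0, 10.0):  # illustrative values only
    kg = gaussian_rbf(x, y, radius)
    ke = exponential_rbf(x, y, radius)
    print(f"radius={radius:5.1f}  gaussian={kg:.4f}  exponential={ke:.4f}")
```

The sharp transition from near-0 to near-1 kernel values as the radius grows is exactly the kind of "zone of sharp variation" the sensitivity profiles in the abstract describe.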

  16. Parameter estimation for compact binary coalescence signals with the first generation gravitational-wave detector network

    NASA Astrophysics Data System (ADS)

    Aasi, J.; Abadie, J.; Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M.; Accadia, T.; Acernese, F.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Ajith, P.; Allen, B.; Allocca, A.; Amador Ceron, E.; Amariutei, D.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Ast, S.; Aston, S. M.; Astone, P.; Atkinson, D.; Aufmuth, P.; Aulbert, C.; Aylott, B. E.; Babak, S.; Baker, P.; Ballardin, G.; Ballmer, S.; Bao, Y.; Barayoga, J. C. B.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Bastarrika, M.; Basti, A.; Batch, J.; Bauchrowitz, J.; Bauer, Th. S.; Bebronne, M.; Beck, D.; Behnke, B.; Bejger, M.; Beker, M. G.; Bell, A. S.; Bell, C.; Belopolski, I.; Benacquista, M.; Berliner, J. M.; Bertolini, A.; Betzwieser, J.; Beveridge, N.; Beyersdorf, P. T.; Bhadbade, T.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Biswas, R.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bland, B.; Blom, M.; Bock, O.; Bodiya, T. P.; Bogan, C.; Bond, C.; Bondarescu, R.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Boschi, V.; Bose, S.; Bosi, L.; Bouhou, B.; Braccini, S.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Breyer, J.; Briant, T.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Burguet–Castell, J.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Calloni, E.; Camp, J. B.; Campsie, P.; Cannon, K.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Carbone, L.; Caride, S.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chalermsongsak, T.; Charlton, P.; Chassande-Mottin, E.; Chen, W.; Chen, X.; Chen, Y.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Chow, J.; Christensen, N.; Chua, S. S. Y.; Chung, C. T. Y.; Chung, S.; Ciani, G.; Clara, F.; Clark, D. 
E.; Clark, J. A.; Clayton, J. H.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colacino, C. N.; Colla, A.; Colombini, M.; Conte, A.; Conte, R.; Cook, D.; Corbitt, T. R.; Cordier, M.; Cornish, N.; Corsi, A.; Costa, C. A.; Coughlin, M.; Coulon, J.-P.; Couvares, P.; Coward, D. M.; Cowart, M.; Coyne, D. C.; Creighton, J. D. E.; Creighton, T. D.; Cruise, A. M.; Cumming, A.; Cunningham, L.; Cuoco, E.; Cutler, R. M.; Dahl, K.; Damjanic, M.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daudert, B.; Daveloza, H.; Davier, M.; Daw, E. J.; Dayanga, T.; De Rosa, R.; DeBra, D.; Debreczeni, G.; Degallaix, J.; Del Pozzo, W.; Dent, T.; Dergachev, V.; DeRosa, R.; Dhurandhar, S.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Di Paolo Emilio, M.; Di Virgilio, A.; Díaz, M.; Dietz, A.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorsher, S.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Dumas, J.-C.; Dwyer, S.; Eberle, T.; Edgar, M.; Edwards, M.; Effler, A.; Ehrens, P.; Endrőczi, G.; Engel, R.; Etzel, T.; Evans, K.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Farr, B. F.; Farr, W. M.; Favata, M.; Fazi, D.; Fehrmann, H.; Feldbaum, D.; Feroz, F.; Ferrante, I.; Ferrini, F.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Fisher, R. P.; Flaminio, R.; Foley, S.; Forsi, E.; Forte, L. A.; Fotopoulos, N.; Fournier, J.-D.; Franc, J.; Franco, S.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M. A.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Friedrich, D.; Fritschel, P.; Frolov, V. V.; Fujimoto, M.-K.; Fulda, P. J.; Fyffe, M.; Gair, J.; Galimberti, M.; Gammaitoni, L.; Garcia, J.; Garufi, F.; Gáspár, M. E.; Gelencser, G.; Gemme, G.; Genin, E.; Gennai, A.; Gergely, L. Á.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Giazotto, A.; Gil-Casanova, S.; Gill, C.; Gleason, J.; Goetz, E.; González, G.; Gorodetsky, M. L.; Goßler, S.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. 
M.; Griffo, C.; Grote, H.; Grover, K.; Grunewald, S.; Guidi, G. M.; Guido, C.; Gupta, R.; Gustafson, E. K.; Gustafson, R.; Hallam, J. M.; Hammer, D.; Hammond, G.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Hayama, K.; Hayau, J.-F.; Heefner, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M. A.; Heng, I. S.; Heptonstall, A. W.; Herrera, V.; Heurs, M.; Hewitson, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Holt, K.; Holtrop, M.; Hong, T.; Hooper, S.; Hough, J.; Howell, E. J.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Izumi, K.; Jacobson, M.; James, E.; Jang, Y. J.; Jaranowski, P.; Jesse, E.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kasprzack, M.; Kasturi, R.; Katsavounidis, E.; Katzman, W.; Kaufer, H.; Kaufman, K.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Keitel, D.; Kelley, D.; Kells, W.; Keppel, D. G.; Keresztes, Z.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, B. K.; Kim, C.; Kim, H.; Kim, K.; Kim, N.; Kim, Y. M.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kline, J.; Kokeyama, K.; Kondrashov, V.; Koranda, S.; Korth, W. Z.; Kowalska, I.; Kozak, D.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kurdyumov, R.; Kwee, P.; Lam, P. K.; Landry, M.; Langley, A.; Lantz, B.; Lastzka, N.; Lawrie, C.; Lazzarini, A.; Le Roux, A.; Leaci, P.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Leong, J. R.; Leonor, I.; Leroy, N.; Letendre, N.; Lhuillier, V.; Li, J.; Li, T. G. F.; Lindquist, P. E.; Litvine, V.; Liu, Y.; Liu, Z.; Lockerbie, N. A.; Lodhia, D.; Logue, J.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J.; Lubinski, M.; Lück, H.; Lundgren, A. P.; Macarthur, J.; Macdonald, E.; Machenschalk, B.; MacInnis, M.; Macleod, D. 
M.; Mageswaran, M.; Mailand, K.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Marx, J. N.; Mason, K.; Masserot, A.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McIver, J.; Meadors, G. D.; Mehmet, M.; Meier, T.; Melatos, A.; Melissinos, A. C.; Mendell, G.; Menéndez, D. F.; Mercer, R. A.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Miao, H.; Michel, C.; Milano, L.; Miller, J.; Minenkov, Y.; Mingarelli, C. M. F.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moe, B.; Mohan, M.; Mohapatra, S. R. P.; Moraru, D.; Moreno, G.; Morgado, N.; Morgia, A.; Mori, T.; Morriss, S. R.; Mosca, S.; Mossavi, K.; Mours, B.; Mow–Lowry, C. M.; Mueller, C. L.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Müller-Ebhardt, H.; Munch, J.; Murphy, D.; Murray, P. G.; Mytidis, A.; Nash, T.; Naticchioni, L.; Necula, V.; Nelson, J.; Neri, I.; Newton, G.; Nguyen, T.; Nishizawa, A.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E.; Nuttall, L.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Oldenberg, R. G.; O'Reilly, B.; O'Shaughnessy, R.; Osthelder, C.; Ott, C. D.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Page, A.; Palladino, L.; Palomba, C.; Pan, Y.; Pankow, C.; Paoletti, F.; Paoletti, R.; Papa, M. A.; Parisi, M.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Pedraza, M.; Penn, S.; Perreca, A.; Persichetti, G.; Phelps, M.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pierro, V.; Pihlaja, M.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Pletsch, H. J.; Plissi, M. V.; Poggiani, R.; Pöld, J.; Postiglione, F.; Poux, C.; Prato, M.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. 
G.; Puncken, O.; Punturo, M.; Puppo, P.; Quetschke, V.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Rácz, I.; Radkins, H.; Raffai, P.; Rakhmanov, M.; Ramet, C.; Rankins, B.; Rapagnani, P.; Raymond, V.; Re, V.; Reed, C. M.; Reed, T.; Regimbau, T.; Reid, S.; Reitze, D. H.; Ricci, F.; Riesen, R.; Riles, K.; Roberts, M.; Robertson, N. A.; Robinet, F.; Robinson, C.; Robinson, E. L.; Rocchi, A.; Roddy, S.; Rodriguez, C.; Rodruck, M.; Rolland, L.; Rollins, J. G.; Romano, R.; Romie, J. H.; Rosińska, D.; Röver, C.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Salemi, F.; Sammut, L.; Sandberg, V.; Sankar, S.; Sannibale, V.; Santamaría, L.; Santiago-Prieto, I.; Santostasi, G.; Saracco, E.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Savage, R. L.; Schilling, R.; Schnabel, R.; Schofield, R. M. S.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Seifert, F.; Sellers, D.; Sentenac, D.; Sergeev, A.; Shaddock, D. A.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sidery, T. L.; Siemens, X.; Sigg, D.; Simakov, D.; Singer, A.; Singer, L.; Sintes, A. M.; Skelton, G. R.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, R. J. E.; Smith-Lefebvre, N. D.; Somiya, K.; Sorazu, B.; Speirits, F. C.; Sperandio, L.; Stefszky, M.; Steinert, E.; Steinlechner, J.; Steinlechner, S.; Steplewski, S.; Stochino, A.; Stone, R.; Strain, K. A.; Strigin, S. E.; Stroeer, A. S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sung, M.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Szeifert, G.; Tacca, M.; Taffarello, L.; Talukder, D.; Tanner, D. B.; Tarabrin, S. P.; Taylor, R.; ter Braack, A. P. M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Thüring, A.; Titsler, C.; Tokmakov, K. V.; Tomlinson, C.; Toncelli, A.; Tonelli, M.; Torre, O.; Torres, C. V.; Torrie, C. I.; Tournefier, E.; Travasso, F.; Traylor, G.; Tse, M.; Ugolini, D.; Vahlbruch, H.; Vajente, G.; van den Brand, J. F. J.; Van Den Broeck, C.; van der Putten, S.; van Veggel, A. 
A.; Vass, S.; Vasuth, M.; Vaulin, R.; Vavoulidis, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Villar, A. E.; Vinet, J.-Y.; Vitale, S.; Vocca, H.; Vorvick, C.; Vyatchanin, S. P.; Wade, A.; Wade, L.; Wade, M.; Waldman, S. J.; Wallace, L.; Wan, Y.; Wang, M.; Wang, X.; Wanner, A.; Ward, R. L.; Was, M.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D. J.; Whiting, B. F.; Wiesner, K.; Wilkinson, C.; Willems, P. A.; Williams, L.; Williams, R.; Willke, B.; Wimmer, M.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Wittel, H.; Woan, G.; Wooley, R.; Worden, J.; Yablon, J.; Yakushin, I.; Yamamoto, H.; Yamamoto, K.; Yancey, C. C.; Yang, H.; Yeaton-Massey, D.; Yoshida, S.; Yvert, M.; Zadrożny, A.; Zanolin, M.; Zendri, J.-P.; Zhang, F.; Zhang, L.; Zhao, C.; Zotov, N.; Zucker, M. E.; Zweizig, J.

    2013-09-01

    Compact binary systems with neutron stars or black holes are one of the most promising sources for ground-based gravitational-wave detectors. Gravitational radiation encodes rich information about source physics; thus parameter estimation and model selection are crucial analysis steps for any detection candidate events. Detailed models of the anticipated waveforms enable inference on several parameters, such as component masses, spins, sky location and distance, that are essential for new astrophysical studies of these sources. However, accurate measurements of these parameters and discrimination of models describing the underlying physics are complicated by artifacts in the data, uncertainties in the waveform models and in the calibration of the detectors. Here we report such measurements on a selection of simulated signals added either in hardware or software to the data collected by the two LIGO instruments and the Virgo detector during their most recent joint science run, including a “blind injection” where the signal was not initially revealed to the collaboration. We exemplify the ability to extract information about the source physics on signals that cover the neutron-star and black-hole binary parameter space over the component mass range 1M⊙-25M⊙ and the full range of spin parameters. The cases reported in this study provide a snapshot of the status of parameter estimation in preparation for the operation of advanced detectors.

  17. Jewish Studies: A Guide to Reference Sources.

    ERIC Educational Resources Information Center

    McGill Univ., Montreal (Quebec). McLennan Library.

    An annotated bibliography to the reference sources for Jewish Studies in the McLennan Library of McGill University (Canada) is presented. Any titles in Hebrew characters are listed by their transliterated equivalents. There is also a list of relevant Library of Congress Subject Headings. General reference sources listed are: encyclopedias,…

  18. Facilitating Internet-Scale Code Retrieval

    ERIC Educational Resources Information Center

    Bajracharya, Sushil Krishna

    2010-01-01

    Internet-Scale code retrieval deals with the representation, storage, and access of relevant source code from a large amount of source code available on the Internet. Internet-Scale code retrieval systems support common emerging practices among software developers related to finding and reusing source code. In this dissertation we focus on some…

  19. Characterizing CO and NOy Sources and Relative Ambient Ratios in the Baltimore Area Using Ambient Measurements and Source Attribution Modeling

    EPA Science Inventory

    Modeled source attribution information from the Community Multiscale Air Quality model was coupled with ambient data from the 2011 Deriving Information on Surface conditions from Column and Vertically Resolved Observations Relevant to Air Quality Baltimore field study. We assess ...

  20. Inventory of Data Sources in Science and Technology. A Preliminary Survey.

    ERIC Educational Resources Information Center

    International Council of Scientific Unions, Paris (France).

    Provided in this inventory are sources of numerical or factual data in selected fields of basic science and applied science/technology. The objective of the inventory is to provide organizations and individuals (scientists, engineers, and information specialists), particularly those in developing countries, with basic data sources relevant to…

  1. Completeness of Reporting of Patient-Relevant Clinical Trial Outcomes: Comparison of Unpublished Clinical Study Reports with Publicly Available Data

    PubMed Central

    Wieseler, Beate; Wolfram, Natalia; McGauran, Natalie; Kerekes, Michaela F.; Vervölgyi, Volker; Kohlepp, Petra; Kamphuis, Marloes; Grouven, Ulrich

    2013-01-01

    Background: Access to unpublished clinical study reports (CSRs) is currently being discussed as a means to allow unbiased evaluation of clinical research. The Institute for Quality and Efficiency in Health Care (IQWiG) routinely requests CSRs from manufacturers for its drug assessments. Our objective was to determine the information gain from CSRs compared to publicly available sources (journal publications and registry reports) for patient-relevant outcomes included in IQWiG health technology assessments (HTAs) of drugs. Methods and Findings: We used a sample of 101 trials with full CSRs received for 16 HTAs of drugs completed by IQWiG between 15 January 2006 and 14 February 2011, and analyzed the CSRs and the publicly available sources of these trials. For each document type we assessed the completeness of information on all patient-relevant outcomes included in the HTAs (benefit outcomes, e.g., mortality, symptoms, and health-related quality of life; harm outcomes, e.g., adverse events). We dichotomized the outcomes as "completely reported" or "incompletely reported." For each document type, we calculated the proportion of outcomes with complete information per outcome category and overall. Of the 101 trials with CSRs, 86 had at least one publicly available source, 65 at least one journal publication, and 50 a registry report. The trials included 1,080 patient-relevant outcomes. The CSRs provided complete information on a considerably higher proportion of outcomes (86%) than the combined publicly available sources (39%). With the exception of health-related quality of life (57%), CSRs provided complete information on 78% to 100% of the various benefit outcomes (combined publicly available sources: 20% to 53%). CSRs also provided considerably more information on harms. The differences in completeness of information for patient-relevant outcomes between CSRs and journal publications or registry reports (or a combination of both) were statistically significant for all types of outcomes. The main limitation of our study is that our sample is not representative, because only CSRs provided voluntarily by pharmaceutical companies upon request could be assessed. In addition, the sample covered only a limited number of therapeutic areas and was restricted to randomized controlled trials investigating drugs. Conclusions: In contrast to CSRs, publicly available sources provide insufficient information on patient-relevant outcomes of clinical trials. CSRs should therefore be made publicly available. PMID:24115912

  2. Multi-scale comparison of source parameter estimation using empirical Green's function approach

    NASA Astrophysics Data System (ADS)

    Chen, X.; Cheng, Y.

    2015-12-01

    Analysis of earthquake source parameters requires correction for path effects, site responses, and instrument responses. The empirical Green's function (EGF) method is one of the most effective ways to remove path effects and station responses, by taking the spectral ratio between a larger and a smaller event. The traditional EGF method requires identifying suitable event pairs and analyzing each event individually. This allows high-quality estimates for strictly selected events; however, the quantity of resolvable source parameters is limited, which challenges the interpretation of spatial-temporal coherency. On the other hand, methods that exploit the redundancy of event-station pairs have been proposed, which utilize stacking techniques to obtain systematic source parameter estimates for a large number of events at once. This allows us to examine a large number of events systematically, facilitating analysis of spatial-temporal patterns and scaling relationships. However, it is unclear how much resolution is sacrificed in this process. In addition to the empirical Green's function calculation, the choice of model parameters and fitting methods also leads to biases. Here, using two regionally focused arrays, the OBS array in the Mendocino region and the borehole array in the Salton Sea geothermal field, we compare the results from large-scale stacking analysis, small-scale cluster analysis, and single event-pair analysis with different fitting methods across completely different tectonic environments, in order to quantify the consistency and inconsistency in source parameter estimates and the associated problems.
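The spectral-ratio step of the EGF method described above can be sketched compactly. Assuming a Brune omega-squared source model (a standard choice, though not stated in the abstract), the ratio of a larger event's spectrum to a co-located smaller event's spectrum cancels path and site terms, and the target corner frequency can be recovered by a simple grid search; all numerical values here are synthetic:

```python
import math

def brune_spectrum(f, moment, fc):
    """Brune omega-squared source spectrum: Omega(f) = M0 / (1 + (f/fc)^2)."""
    return moment / (1.0 + (f / fc) ** 2)

def spectral_ratio(f, m1, fc1, m2, fc2):
    """EGF spectral ratio of the larger event (1) over the smaller event (2).

    Path and site terms are common to both events and cancel in the ratio,
    leaving only the two source spectra."""
    return brune_spectrum(f, m1, fc1) / brune_spectrum(f, m2, fc2)

# Synthetic "observed" ratio: target event fc1 = 2 Hz, EGF event fc2 = 12 Hz.
freqs = [0.5 * (i + 1) for i in range(100)]          # 0.5 .. 50 Hz
observed = [spectral_ratio(f, 1e17, 2.0, 1e15, 12.0) for f in freqs]

# Grid search for the target corner frequency (moment ratio held fixed),
# fitting in log space as is common for spectral amplitudes.
best_fc, best_misfit = None, float("inf")
for k in range(1, 200):
    fc_trial = 0.05 * k                               # 0.05 .. ~10 Hz
    misfit = sum(
        (math.log(spectral_ratio(f, 1e17, fc_trial, 1e15, 12.0)) - math.log(o)) ** 2
        for f, o in zip(freqs, observed)
    )
    if misfit < best_misfit:
        best_fc, best_misfit = fc_trial, misfit

print(f"recovered corner frequency: {best_fc:.2f} Hz")
```

The "choice of model parameters and fitting methods" caveat in the abstract maps directly onto the grid bounds, the assumed spectral model, and the log-domain misfit chosen here.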

  3. PHOTOMETRIC ORBITS OF EXTRASOLAR PLANETS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Robert A.

    We define and analyze the photometric orbit (PhO) of an extrasolar planet observed in reflected light. In our definition, the PhO is a Keplerian entity with six parameters: semimajor axis, eccentricity, mean anomaly at some particular time, argument of periastron, inclination angle, and effective radius, which is the square root of the geometric albedo times the planetary radius. Preliminarily, we assume a Lambertian phase function. We study in detail the case of short-period giant planets (SPGPs) and observational parameters relevant to the Kepler mission: 20 ppm photometry with normal errors, 6.5 hr cadence, and three-year duration. We define a relevant 'planetary population of interest' in terms of probability distributions of the PhO parameters. We perform Monte Carlo experiments to estimate the ability to detect planets and to recover PhO parameters from light curves. We calibrate the completeness of a periodogram search technique, and find structure caused by degeneracy. We recover full orbital solutions from synthetic Kepler data sets and estimate the median errors in recovered PhO parameters. We treat in depth a case of a Jupiter body-double. For the stated assumptions, we find that Kepler should obtain orbital solutions for many of the 100-760 SPGPs that Jenkins and Doyle estimate Kepler will discover. Because most or all of these discoveries will be followed up by ground-based radial velocity observations, the estimates of inclination angle from the PhO may enable the calculation of true companion masses: Kepler photometry may break the 'm sin i' degeneracy. PhO observations may be difficult. There is uncertainty about how low the albedos of SPGPs actually are, about their phase functions, and about a possible noise floor due to systematic errors from instrumental and stellar sources.
    Nevertheless, simple detection of SPGPs in reflected light should be robust in the regime of Kepler photometry, and estimates of all six orbital parameters may be feasible in at least a subset of cases.
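The Lambertian phase function assumed in the abstract has a standard closed form, and the reflected planet/star flux ratio follows directly from the effective radius defined there. A minimal sketch with illustrative (not the paper's) numbers:

```python
import math

def lambert_phase(alpha):
    """Lambert-sphere phase function; alpha is the star-planet-observer
    angle in radians. Phi(0) = 1 (full phase), Phi(pi) = 0 (new phase)."""
    return (math.sin(alpha) + (math.pi - alpha) * math.cos(alpha)) / math.pi

def reflected_flux_ratio(geometric_albedo, planet_radius, separation, alpha):
    """Planet/star flux ratio: Ag * (Rp / r)^2 * Phi(alpha).

    With the abstract's effective radius R_eff = sqrt(Ag) * Rp, this is
    equivalently (R_eff / r)^2 * Phi(alpha)."""
    return geometric_albedo * (planet_radius / separation) ** 2 * lambert_phase(alpha)

# Illustrative short-period giant planet: Rp ~ 7e7 m at 0.05 AU, Ag = 0.3.
au = 1.496e11  # meters
ratio = reflected_flux_ratio(0.3, 7.0e7, 0.05 * au, math.radians(60.0))
print(f"flux ratio near quadrature: {ratio:.2e}")
```

The resulting ratio of a few times 10^-5 shows why 20 ppm photometry is the relevant regime for detecting such planets in reflected light.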

  4. The Magnetar Model for Type I Superluminous Supernovae. I. Bayesian Analysis of the Full Multicolor Light-curve Sample with MOSFiT

    NASA Astrophysics Data System (ADS)

    Nicholl, Matt; Guillochon, James; Berger, Edo

    2017-11-01

    We use the new Modular Open Source Fitter for Transients to model 38 hydrogen-poor superluminous supernovae (SLSNe). We fit their multicolor light curves with a magnetar spin-down model and present posterior distributions of magnetar and ejecta parameters. The color evolution can be fit with a simple absorbed blackbody. The medians (1σ ranges) for key parameters are spin period 2.4 ms (1.2-4 ms), magnetic field 0.8×10^14 G (0.2-1.8×10^14 G), ejecta mass 4.8 M⊙ (2.2-12.9 M⊙), and kinetic energy 3.9×10^51 erg (1.9-9.8×10^51 erg). This significantly narrows the parameter space compared to our uninformed priors, showing that although the magnetar model is flexible, the parameter space relevant to SLSNe is well constrained by existing data. The requirement that the instantaneous engine power is ~10^44 erg s^-1 at the light-curve peak necessitates either large rotational energy (P < 2 ms) or, more commonly, that the spin-down and diffusion timescales be well matched. We find no evidence for separate populations of fast- and slow-declining SLSNe, which instead form a continuum in light-curve widths and inferred parameters. Variations in the spectra are explained through differences in spin-down power and photospheric radii at maximum light. We find no significant correlations between model parameters and host galaxy properties. Comparing our posteriors to stellar evolution models, we show that SLSNe require rapidly rotating (fastest 10%) massive stars (≳20 M⊙), which is consistent with their observed rate. High mass, low metallicity, and likely binary interaction all serve to maintain the rapid rotation essential for magnetar formation. By reproducing the full set of light curves, our posteriors can inform photometric searches for SLSNe in future surveys.
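The magnetar spin-down engine behind these fits can be sketched from first principles. This is a generic vacuum-dipole sketch, not MOSFiT's actual implementation: the moment of inertia (10^45 g cm^2) and neutron-star radius (10 km) are conventional fiducial values, and the median posterior values from the abstract are plugged in only for illustration:

```python
import math

# Fiducial neutron-star constants (CGS); assumptions, not fitted quantities.
I_NS = 1.0e45      # moment of inertia, g cm^2
R_NS = 1.0e6       # radius, cm
C = 3.0e10         # speed of light, cm/s

def magnetar_engine(p_ms, b14):
    """Rotational energy, initial spin-down luminosity, and spin-down time
    for spin period p_ms (ms) and dipole field b14 (units of 10^14 G),
    using L(t) = L0 / (1 + t/tau)^2 with tau = E_rot / L0."""
    omega = 2.0 * math.pi / (p_ms * 1.0e-3)                 # rad/s
    e_rot = 0.5 * I_NS * omega ** 2                         # erg
    b = b14 * 1.0e14                                        # G
    l0 = b ** 2 * R_NS ** 6 * omega ** 4 / (6.0 * C ** 3)   # erg/s
    tau = e_rot / l0                                        # s; L0 * tau = E_rot
    return e_rot, l0, tau

def luminosity(t, l0, tau):
    """Dipole spin-down luminosity at time t (s)."""
    return l0 / (1.0 + t / tau) ** 2

# Median posterior values from the abstract: P = 2.4 ms, B = 0.8e14 G.
e_rot, l0, tau = magnetar_engine(2.4, 0.8)
print(f"E_rot ~ {e_rot:.1e} erg, L0 ~ {l0:.1e} erg/s, tau ~ {tau/86400:.1f} d")
```

With these medians, E_rot comes out a few times 10^51 erg and the spin-down time a few weeks, so the engine power around a typical SLSN peak (weeks after explosion) lands near the ~10^44 erg s^-1 the abstract requires, illustrating the "well matched timescales" argument.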

  5. Monitoring of conditions inside gas aggregation cluster source during production of Ti/TiOx nanoparticles

    NASA Astrophysics Data System (ADS)

    Kousal, J.; Kolpaková, A.; Shelemin, A.; Kudrna, P.; Tichý, M.; Kylián, O.; Hanuš, J.; Choukourov, A.; Biederman, H.

    2017-10-01

    Gas aggregation sources are nowadays rather widely used in the research community for producing nanoparticles. However, the direct diagnostics of conditions inside the source are relatively scarce. In this work, we focused on monitoring the plasma parameters and the composition of the gas during the production of the TiOx nanoparticles. We studied the role of oxygen in the aggregation process and the influence of the presence of the particles on the plasma. The construction of the source allowed us to make a 2D map of the plasma parameters inside the source.

  6. Added-value joint source modelling of seismic and geodetic data

    NASA Astrophysics Data System (ADS)

    Sudhaus, Henriette; Heimann, Sebastian; Walter, Thomas R.; Krueger, Frank

    2013-04-01

    In tectonically active regions, earthquake source studies strongly support the analysis of current faulting processes, as they reveal the location and geometry of active faults, the average slip released, and more. For source modelling of shallow, moderate to large earthquakes, a combination of geodetic (GPS, InSAR) and seismic data is often used. A truly joint use of these data, however, usually takes place only at a higher modelling level, where some of the first-order characteristics (time, centroid location, fault orientation, moment) have already been fixed. These required basis model parameters have to be given, assumed, or inferred in a previous, separate, and highly non-linear modelling step using one of these data sets alone. We present a new earthquake rupture model implementation that realizes a fully combined integration of surface displacement measurements and seismic data in a non-linear optimization of simple but extended planar ruptures. The implementation allows for fast forward calculations of full seismograms and surface deformation and therefore enables us to use Monte Carlo global search algorithms. Furthermore, we benefit from the complementary character of seismic and geodetic data, e.g. the high definition of the source location from geodetic data and the sensitivity of seismic data to moment release at greater depth. These increased constraints from the combined dataset make optimizations efficient, even for larger model parameter spaces and with a very limited number of a priori assumptions on the source. A vital part of our approach is rigorous data weighting based on empirically estimated data errors. We construct full data error variance-covariance matrices for the geodetic data to account for correlated data noise, and we also weight the seismic data based on their signal-to-noise ratio. The estimation of the data errors and the fast forward modelling open the door for Bayesian inference of the source model parameters. The resulting source model features parameter uncertainty estimates and reveals parameter trade-offs that arise from imperfect data coverage and data errors. We applied our new source modelling approach to the 2010 Haiti earthquake, for which a number of apparently different seismic, geodetic, and joint source models have already been reported, mostly without any model parameter uncertainty estimates. We show that the variability of all these source models seems to arise from inherent model parameter trade-offs and mostly has little statistical significance; e.g., even using a large dataset comprising seismic and geodetic data, the confidence interval of the fault dip remains as wide as about 20 degrees.
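The covariance-weighted data fit at the heart of the weighting scheme above can be sketched in a few lines. This is a generic illustration of the misfit chi^2 = r^T C^-1 r with a full variance-covariance matrix, not the authors' code; the residual values and covariances are hypothetical:

```python
def solve(C, r):
    """Solve C x = r by Gaussian elimination with partial pivoting
    (adequate for the small illustrative systems used here)."""
    n = len(r)
    A = [row[:] + [r[i]] for i, row in enumerate(C)]
    for col in range(n):
        piv = max(range(col, n), key=lambda k: abs(A[k][col]))
        A[col], A[piv] = A[piv], A[col]
        for k in range(col + 1, n):
            f = A[k][col] / A[col][col]
            for j in range(col, n + 1):
                A[k][j] -= f * A[col][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

def chi_square(residuals, covariance):
    """Covariance-weighted misfit chi^2 = r^T C^-1 r. Off-diagonal terms
    encode correlated noise and downweight redundant observations."""
    x = solve(covariance, residuals)
    return sum(r * xi for r, xi in zip(residuals, x))

# Hypothetical InSAR residuals (m): correlated vs. diagonal-only weighting.
r_geo = [0.01, 0.012]
C_corr = [[1e-4, 8e-5], [8e-5, 1e-4]]   # strongly correlated noise
C_diag = [[1e-4, 0.0], [0.0, 1e-4]]     # same variances, no correlation
print(chi_square(r_geo, C_corr), chi_square(r_geo, C_diag))
```

With correlated residuals of the same sign, the full-covariance misfit is smaller than the diagonal one, which is precisely why ignoring noise correlation (as a diagonal weighting does) overstates the information content of InSAR data.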

  7. Spectroscopic characterization of low dose rate brachytherapy sources

    NASA Astrophysics Data System (ADS)

    Beach, Stephen M.

    The low dose rate (LDR) brachytherapy seeds employed in permanent radioactive-source implant treatments usually use one of two radionuclides, 125I or 103Pd. The theoretically expected spectroscopic output from these sources can be obtained via Monte Carlo calculation based upon the seed dimensions and materials as well as the bare-source photon emissions for the specific radionuclide. However, the discrepancies resulting from inconsistent manufacturing of sources within model groups, together with simplified Monte Carlo calculational geometries, ultimately result in undesirably large uncertainties in the Monte Carlo calculated values. This dissertation describes experimentally attained spectroscopic outputs of the clinically used brachytherapy sources in air and in liquid water. Such knowledge can then be applied to characterize these sources by a more fundamental and metrologically pure classification, that of energy-based dosimetry. The spectroscopic results contained within this dissertation can be utilized in the verification and benchmarking of Monte Carlo calculational models of these brachytherapy sources. This body of work was undertaken to establish a usable spectroscopy system and analysis methods for the meaningful study of LDR brachytherapy seeds. The development of a correction algorithm and the analysis of the resultant spectroscopic measurements are presented. The characterization of the spectrometer, and the subsequent deconvolution of the measured spectrum to obtain the true spectrum free of any perturbations caused by the spectrometer itself, is an important contribution of this work. The approach of spectroscopic deconvolution applied in this work is derived in detail and applied to the physical measurements.
In addition, the spectroscopically based analogs to the LDR dosimetry parameters that are currently employed are detailed, as well as the development of the theory and measurement methods to arrive at these analogs. Several dosimetrically-relevant water-equivalent plastics were also investigated for their transmission properties within a liquid water environment, as well as in air. The framework for the accurate spectrometry of LDR sources is established as a result of this dissertation work. In addition to the measurement and analysis methods, this work presents the basic measured spectroscopic characteristics of each LDR seed currently in use in the clinic today.

  8. Individual differences in spontaneous analogical transfer.

    PubMed

    Kubricht, James R; Lu, Hongjing; Holyoak, Keith J

    2017-05-01

    Research on analogical problem solving has shown that people often fail to spontaneously notice the relevance of a semantically remote source analog when solving a target problem, although they are able to form mappings and derive inferences when given a hint to recall the source. Relatively little work has investigated possible individual differences that predict spontaneous transfer, or how such differences may interact with interventions that facilitate transfer. In this study, fluid intelligence was measured for participants in an analogical problem-solving task, using an abridged version of the Raven's Progressive Matrices (RPM) test. In two experiments, we systematically compared the effect of augmenting verbal descriptions of the source with animations or static diagrams. Solution rates to Duncker's radiation problem were measured across varying source presentation conditions, and participants' understanding of the relevant source material was assessed. The pattern of transfer was best fit by a moderated mediation model: the positive impact of fluid intelligence on spontaneous transfer was mediated by its influence on source comprehension; however, this path was in turn modulated by provision of a supplemental animation via its influence on comprehension of the source. Animated source depictions were most beneficial in facilitating spontaneous transfer for those participants with low scores on the fluid intelligence measure.

  9. Markov Chain Monte Carlo Used in Parameter Inference of Magnetic Resonance Spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hock, Kiel; Earle, Keith

    2016-02-06

    In this paper, we use Boltzmann statistics and the maximum likelihood distribution derived from Bayes’ theorem to infer parameter values for a Pake doublet spectrum, a lineshape of historical significance and contemporary relevance for determining distances between interacting magnetic dipoles. A Metropolis-Hastings Markov chain Monte Carlo algorithm is implemented and designed to find the optimum parameter set and to estimate parameter uncertainties. Finally, the posterior distribution allows us to define a metric on parameter space that induces a geometry with negative curvature, which affects the parameter uncertainty estimates, particularly for spectra with a low signal-to-noise ratio.
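The Metropolis-Hastings procedure this record describes can be sketched in a few lines. Everything concrete below is an illustrative assumption: a simple Gaussian lineshape stands in for the Pake doublet, and the noise level, proposal scale and chain length are arbitrary choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "spectrum": a Gaussian line with unknown center and width,
# standing in for the Pake doublet lineshape.
def lineshape(x, center, width):
    return np.exp(-0.5 * ((x - center) / width) ** 2)

x = np.linspace(-5, 5, 200)
true_theta = (0.5, 1.2)
data = lineshape(x, *true_theta) + rng.normal(0, 0.05, x.size)

def log_likelihood(theta, sigma=0.05):
    center, width = theta
    if width <= 0:
        return -np.inf                      # reject unphysical widths
    resid = data - lineshape(x, center, width)
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis_hastings(n_steps=5000, step=0.05):
    theta = np.array([0.0, 1.0])            # initial guess
    logp = log_likelihood(theta)
    chain = []
    for _ in range(n_steps):
        proposal = theta + rng.normal(0, step, 2)   # symmetric random walk
        logp_new = log_likelihood(proposal)
        if np.log(rng.random()) < logp_new - logp:  # accept/reject
            theta, logp = proposal, logp_new
        chain.append(theta.copy())
    return np.array(chain)

chain = metropolis_hastings()
est = chain[1000:].mean(axis=0)             # discard burn-in, posterior mean
```

Because the random-walk proposal is symmetric, the Metropolis-Hastings acceptance ratio reduces to the likelihood ratio; the retained samples approximate the posterior, whose spread gives the parameter uncertainty estimates.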

  10. Progress in characterizing the multidimensional color quality properties of white LED light sources

    NASA Astrophysics Data System (ADS)

    Teunissen, Kees; Hoelen, Christoph

    2016-03-01

    With the introduction of solid state light sources, the variety in emission spectra is almost unlimited. However, the set of standardized parameters to characterize a white LED light source, such as correlated color temperature (CCT) and CIE general color rendering index (Ra), is known to be limited and insufficient for describing perceived differences between light sources. Several characterization methods have been proposed over the past decades, but their contribution to perceived color quality has not always been validated. To gain more insight into the relevant characteristics of the emission spectra for specific applications, we have conducted a perception experiment to rate the attractiveness of three sets of objects, including fresh food, packaging materials and skin tones. The objects were illuminated with seven different combinations of Red, Green, Blue, Amber and White LEDs, all with the same CCT and illumination level, but with differences in Ra and color saturation. The results show that, in general, object attractiveness does not correlate well with Ra, but shows a positive correlation with saturation increase for two out of three applications. There is no clear relation between saturation and skin tone attractiveness, partly due to differences in preference between males and females. A relative gamut area index (Ga) represents the average change in saturation and a complementary color vector graphic shows the direction and magnitude of chromatic differences for the eight CIE-1974 test-color samples. Together with the CIE general color rendering index (Ra) they provide useful information for designing and optimizing application specific emission spectra.

  11. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks.

    PubMed

    Ma, Junjie; Meng, Fansheng; Zhou, Yuexi; Wang, Yeyao; Shi, Ping

    2018-02-16

    Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths.
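The global-search half of the Dual-PSO scheme, which treats mobile nodes as particles, can be illustrated with a standard single-objective PSO. The Gaussian concentration field, search bounds and swarm constants below are hypothetical stand-ins, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pollutant concentration field: peaks at the (unknown) source
# and decays with distance, standing in for the UV-visible-derived readings.
SOURCE = np.array([3.0, -2.0])
def concentration(pos):
    return np.exp(-np.sum((pos - SOURCE) ** 2, axis=-1))

def pso(n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(-10, 10, (n_particles, 2))   # mobile-node positions
    vel = np.zeros_like(pos)
    pbest = pos.copy()                             # per-particle best
    pbest_val = concentration(pos)
    gbest = pbest[np.argmax(pbest_val)].copy()     # swarm best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        # inertia + attraction to personal and global bests
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        val = concentration(pos)
        improved = val > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest

est = pso()   # estimated source position
```

In the paper's Dual-PSO the particles are physical unmanned surface vehicles and a second PSO fits the water quality parameters from absorption spectra on each node; this sketch shows only the optimization skeleton of the position search.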

  12. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks

    PubMed Central

    Zhou, Yuexi; Wang, Yeyao; Shi, Ping

    2018-01-01

    Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths. PMID:29462929

  13. SU-D-19A-05: The Dosimetric Impact of Using Xoft Axxent® Electronic Brachytherapy Source TG-43 Dosimetry Parameters for Treatment with the Xoft 30 Mm Diameter Vaginal Applicator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simiele, S; Micka, J; Culberson, W

    2014-06-01

    Purpose: A full TG-43 dosimetric characterization has not been performed for the Xoft Axxent ® electronic brachytherapy source (Xoft, a subsidiary of iCAD, San Jose, CA) within the Xoft 30 mm diameter vaginal applicator. Currently, dose calculations are performed using the bare-source TG-43 parameters and do not account for the presence of the applicator. This work focuses on determining the difference between the bare-source and source-in-applicator TG-43 parameters. Both the radial dose function (RDF) and polar anisotropy function (PAF) were computationally determined for the source-in-applicator and bare-source models to determine the impact of using the bare-source dosimetry data. Methods: MCNP5 was used to model the source and the Xoft 30 mm diameter vaginal applicator. All simulations were performed using 0.84p and 0.03e cross section libraries. All models were developed based on specifications provided by Xoft. The applicator is made of a proprietary polymer material and simulations were performed using the most conservative chemical composition. An F6 collision-kerma tally was used to determine the RDF and PAF values in water at various dwell positions. The RDF values were normalized to 2.0 cm from the source to accommodate the applicator radius. Source-in-applicator results were compared with bare-source results from this work as well as published bare-source results. Results: For a 0 mm source pullback distance, the updated bare-source model and source-in-applicator RDF values differ by 2% at 3 cm and 4% at 5 cm. The largest PAF disagreements were observed at the distal end of the source and applicator, with up to 17% disagreement at 2 cm and 8% at 8 cm. The bare-source model had RDF values within 2.6% of the published TG-43 data and PAF results within 7.2% at 2 cm. Conclusion: Results indicate that notable differences exist between the bare-source and source-in-applicator TG-43 simulated parameters. Xoft Inc. provided partial funding for this work.

  14. Role of the Environment in the Transmission of Antimicrobial Resistance to Humans: A Review.

    PubMed

    Huijbers, Patricia M C; Blaak, Hetty; de Jong, Mart C M; Graat, Elisabeth A M; Vandenbroucke-Grauls, Christina M J E; de Roda Husman, Ana Maria

    2015-10-20

    To establish a possible role for the natural environment in the transmission of clinically relevant antimicrobial-resistant (AMR) bacteria to humans, a literature review was conducted to systematically collect and categorize evidence for human exposure to extended-spectrum β-lactamase-producing Enterobacteriaceae, methicillin-resistant Staphylococcus aureus, and vancomycin-resistant Enterococcus spp. in the environment. In total, 239 datasets adhered to inclusion criteria. AMR bacteria were detected at exposure-relevant sites (35/38), including recreational areas, drinking water, ambient air, and shellfish, and in fresh produce (8/16). More datasets were available for environmental compartments (139/157), including wildlife, water, soil, and air/dust. Quantitative data from exposure-relevant sites (6/35) and environmental compartments (11/139) were scarce. AMR bacteria were detected in the contamination sources (66/66) wastewater and manure, and molecular data supporting their transmission from wastewater to the environment (1/66) were found. The abundance of AMR bacteria at exposure-relevant sites suggests risk for human exposure. Of publications pertaining to both environmental and human isolates, however, only one compared isolates from samples that had a clear spatial and temporal relationship, and no direct evidence was found for transmission to humans through the environment. To what extent the environment, compared to the clinical and veterinary domains, contributes to human exposure needs to be quantified. AMR bacteria in the environment, including sites relevant for human exposure, originate from contamination sources. Intervention strategies targeted at these sources could therefore limit emission of AMR bacteria to the environment.

  15. Identifying key sources of uncertainty in the modelling of greenhouse gas emissions from wastewater treatment.

    PubMed

    Sweetapple, Christine; Fu, Guangtao; Butler, David

    2013-09-01

    This study investigates sources of uncertainty in the modelling of greenhouse gas emissions from wastewater treatment through the use of local and global sensitivity analysis tools, and contributes to an in-depth understanding of wastewater treatment modelling by revealing critical parameters and parameter interactions. One-factor-at-a-time sensitivity analysis is used to screen model parameters and identify those with significant individual effects on three performance indicators: total greenhouse gas emissions, effluent quality and operational cost. Sobol's method enables identification of parameters with significant higher-order effects and of particular parameter pairs to which model outputs are sensitive. Use of a variance-based global sensitivity analysis tool to investigate parameter interactions enables identification of important parameters not revealed by one-factor-at-a-time sensitivity analysis. These interaction effects have not been considered in previous studies and thus provide a better understanding of wastewater treatment plant model characterisation. It was found that uncertainty in modelled nitrous oxide emissions is the primary contributor to uncertainty in total greenhouse gas emissions, due largely to the interaction effects of three nitrogen conversion modelling parameters. The higher-order effects of these parameters are also shown to be a key source of uncertainty in effluent quality. Copyright © 2013 Elsevier Ltd. All rights reserved.
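The contrast between one-factor-at-a-time screening and variance-based (Sobol-type) analysis can be sketched on a toy model with an interaction term. The model, the parameter ranges and the crude Jansen-style Monte Carlo total-effect estimator below are illustrative assumptions, not the study's plant model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model (hypothetical): output depends on x0 and x1 individually and on
# an x1*x2 interaction that one-factor-at-a-time screening cannot see.
def model(x):
    return 2.0 * x[..., 0] + x[..., 1] + 5.0 * x[..., 1] * x[..., 2]

# One-factor-at-a-time: perturb each parameter around a nominal point.
nominal = np.zeros(3)
oat_effect = []
for i in range(3):
    lo, hi = nominal.copy(), nominal.copy()
    lo[i], hi[i] = -1.0, 1.0
    oat_effect.append(abs(model(hi) - model(lo)))
# At nominal = 0, x2's OAT effect is 0: its influence is purely interactive.

# Variance-based total-effect estimate (Jansen-style Monte Carlo).
def total_effect(i, n=20000):
    a = rng.uniform(-1, 1, (n, 3))
    b = a.copy()
    b[:, i] = rng.uniform(-1, 1, n)     # resample only parameter i
    fa, fb = model(a), model(b)
    return 0.5 * np.mean((fa - fb) ** 2) / np.var(fa)

st = [total_effect(i) for i in range(3)]
# Analytically, the total effect of x2 here is (25/9)/(40/9) = 0.625,
# even though its OAT effect at the nominal point is exactly zero.
```

This is the pattern the abstract reports: parameters whose individual effects look negligible in screening can still dominate output variance through interactions, which only a variance-based method exposes.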

  16. Identification of Absorption, Distribution, Metabolism, and Excretion (ADME) Genes Relevant to Steatosis Using a Differential Gene Expression Approach

    EPA Science Inventory

    Absorption, distribution, metabolism, and excretion (ADME) parameters represent important connections between exposure to chemicals and the activation of molecular initiating events of Adverse Outcome Pathways (AOPs) in cellular, tissue, and organ level targets. ADME parameters u...

  17. Using HEC-HMS: Application to Karkheh river basin

    USDA-ARS?s Scientific Manuscript database

    This paper aims to facilitate the use of HEC-HMS model using a systematic event-based technique for manual calibration of soil moisture accounting and snowmelt degree-day parameters. Manual calibration, which helps ensure the HEC-HMS parameter values are physically-relevant, is often a time-consumin...

  18. Shallow seismic source parameter determination using intermediate-period surface wave amplitude spectra

    NASA Astrophysics Data System (ADS)

    Fox, Benjamin D.; Selby, Neil D.; Heyburn, Ross; Woodhouse, John H.

    2012-09-01

    Estimating reliable depths for shallow seismic sources is important in both seismo-tectonic studies and in seismic discrimination studies. Surface wave excitation is sensitive to source depth, especially at intermediate and short periods, owing to the approximately exponential decay of surface wave displacements with depth. A new method is presented here to retrieve earthquake source parameters from regional and teleseismic intermediate-period (100-15 s) fundamental-mode surface wave recordings. This method makes use of advances in mapping global dispersion to allow higher-frequency surface wave recordings at regional and teleseismic distances to be used with more confidence than in previous studies, and hence to improve the resolution of depth estimates. Synthetic amplitude spectra are generated using surface wave theory combined with a great circle path approximation, and a grid of double-couple sources is compared with the data. Source parameters producing the best-fitting amplitude spectra are identified by minimizing the least-squares misfit in logarithmic amplitude space. The F-test is used to search the solution space for statistically acceptable parameters, and the ranges of these variables are used to place constraints on the best-fitting source. Estimates of focal mechanism, depth and scalar seismic moment are determined for 20 small-to-moderate-sized (4.3 ≤ Mw ≤ 6.4) earthquakes. These earthquakes are situated across a wide range of geographic and tectonic locations and describe a range of faulting styles over the depth range 4-29 km. For the larger earthquakes, comparisons with other studies are favourable; however, existing source determination procedures, such as the CMT technique, cannot be performed for the smaller events.
By reducing the magnitude threshold at which robust source parameters can be determined, the accuracy, especially at shallow depths, of seismo-tectonic studies, seismic hazard assessments, and seismic discrimination investigations can be improved by the application of this methodology.
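The depth grid search over log-amplitude misfit can be sketched with a deliberately simplified forward model. The single-mode exponential depth decay, the assumed phase velocity and the noiseless "observed" spectrum below are illustrative assumptions, not the study's full surface-wave theory (which also searches over double-couple mechanism and scalar moment):

```python
import numpy as np

# Toy forward model (hypothetical): fundamental-mode surface-wave amplitude
# at angular frequency w for a source at depth z, using the approximately
# exponential decay of mode displacement with depth, exp(-w * z / c).
C = 3.5                                      # assumed phase velocity, km/s
periods = np.linspace(15.0, 100.0, 40)       # s, the band used in the study
w = 2.0 * np.pi / periods

def spectrum(depth_km, moment=1.0):
    return moment * np.exp(-w * depth_km / C)

# Synthetic "observed" spectrum from a 12 km deep source (noiseless here).
observed = spectrum(12.0)

# Grid search minimizing least-squares misfit in logarithmic amplitude space.
depths = np.arange(1.0, 30.0, 0.5)
misfit = [np.sum((np.log(observed) - np.log(spectrum(z))) ** 2)
          for z in depths]
best = depths[int(np.argmin(misfit))]        # best-fitting depth
```

In the actual method the misfit surface is then probed with the F-test to delimit the statistically acceptable parameter ranges rather than reporting only the single best grid point.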

  19. Uncertainties in the 2004 Sumatra–Andaman source through nonlinear stochastic inversion of tsunami waves

    PubMed Central

    Venugopal, M.; Roy, D.; Rajendran, K.; Guillas, S.; Dias, F.

    2017-01-01

    Numerical inversions for earthquake source parameters from tsunami wave data usually incorporate subjective elements to stabilize the search. In addition, noisy and possibly insufficient data result in instability and non-uniqueness in most deterministic inversions, which are barely acknowledged. Here, we employ the satellite altimetry data for the 2004 Sumatra–Andaman tsunami event to invert the source parameters. We also include kinematic parameters that improve the description of tsunami generation and propagation, especially near the source. Using a finite fault model that represents the extent of rupture and the geometry of the trench, we perform a new type of nonlinear joint inversion of the slips, rupture velocities and rise times with minimal a priori constraints. Despite persistently good waveform fits, large uncertainties in the joint parameter distribution constitute a remarkable feature of the inversion. These uncertainties suggest that objective inversion strategies should incorporate more sophisticated physical models of seabed deformation in order to significantly improve the performance of early warning systems. PMID:28989311

  20. Uncertainties in the 2004 Sumatra-Andaman source through nonlinear stochastic inversion of tsunami waves.

    PubMed

    Gopinathan, D; Venugopal, M; Roy, D; Rajendran, K; Guillas, S; Dias, F

    2017-09-01

    Numerical inversions for earthquake source parameters from tsunami wave data usually incorporate subjective elements to stabilize the search. In addition, noisy and possibly insufficient data result in instability and non-uniqueness in most deterministic inversions, which are barely acknowledged. Here, we employ the satellite altimetry data for the 2004 Sumatra-Andaman tsunami event to invert the source parameters. We also include kinematic parameters that improve the description of tsunami generation and propagation, especially near the source. Using a finite fault model that represents the extent of rupture and the geometry of the trench, we perform a new type of nonlinear joint inversion of the slips, rupture velocities and rise times with minimal a priori constraints. Despite persistently good waveform fits, large uncertainties in the joint parameter distribution constitute a remarkable feature of the inversion. These uncertainties suggest that objective inversion strategies should incorporate more sophisticated physical models of seabed deformation in order to significantly improve the performance of early warning systems.

  1. MHODE: a local-homogeneity theory for improved source-parameter estimation of potential fields

    NASA Astrophysics Data System (ADS)

    Fedi, Maurizio; Florio, Giovanni; Paoletti, Valeria

    2015-08-01

    We describe a multihomogeneity theory for source-parameter estimation of potential fields. Similar to what happens for random source models, where the monofractal scaling law has been generalized into a multifractal law, we propose to generalize the homogeneity law into a multihomogeneity law. This allows a theoretically correct approach to studying real-world potential fields, which are inhomogeneous and so do not show scale invariance, except in the asymptotic regions (very near to or very far from their sources). Since the scaling properties of inhomogeneous fields change with the scale of observation, we show that they may be better studied at a set of scales than at a single scale, and that a multihomogeneous model is needed to explain their complex scaling behaviour. In order to perform this task, we first introduce fractional-degree homogeneous fields, to show that: (i) homogeneous potential fields may have fractional or integer degree; (ii) the source distributions for a fractional degree are not confined to a bounded region, similarly to some integer-degree models, such as the infinite line mass; and (iii) differently from the integer-degree case, the fractional-degree source distributions are no longer uniform density functions. Using this enlarged set of homogeneous fields, real-world anomaly fields are studied at different scales by a simple search, in any local window W, for the best homogeneous field of either integer or fractional degree, yielding a multiscale set of local homogeneity degrees and depth estimates which we call a multihomogeneous model. This defines a new source-parameter estimation technique (Multi-HOmogeneity Depth Estimation, MHODE) that permits retrieval of the source parameters of complex sources. We test the method with inhomogeneous fields of finite sources, such as faults or cylinders, and show its effectiveness also in a real-case example. These applications show the usefulness of the new concepts, multihomogeneity and fractional homogeneity degree, for obtaining valid estimates of the source parameters in a consistent theoretical framework, thus overcoming the limitations that global homogeneity imposes on widespread methods such as Euler deconvolution.

  2. Crystal viscoplasticity model for the creep-fatigue interactions in single-crystal Ni-base superalloy CMSX-8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estrada Rodas, Ernesto A.; Neu, Richard W.

    A crystal viscoplasticity (CVP) model for the creep-fatigue interactions of nickel-base superalloy CMSX-8 is proposed. At the microstructure scale of relevance, the superalloys are a composite material comprised of a γ phase and a γ' strengthening phase with unique deformation mechanisms that are highly dependent on temperature. Considering the differences in the deformation of the individual material phases is paramount to predicting the deformation behavior of superalloys at a wide range of temperatures. In this work, we account for the relevant deformation mechanisms that take place in both material phases by utilizing two additive strain rates to model the deformation on each material phase. The model is capable of representing the creep-fatigue interactions in single-crystal superalloys for realistic 3-dimensional components in an Abaqus User Material Subroutine (UMAT). Using a set of material parameters calibrated to superalloy CMSX-8, the model predicts creep-fatigue, fatigue and thermomechanical fatigue behavior of this single-crystal superalloy. In conclusion, a sensitivity study of the material parameters is done to explore the effect on the deformation due to changes in the material parameters relevant to the microstructure.

  3. Crystal viscoplasticity model for the creep-fatigue interactions in single-crystal Ni-base superalloy CMSX-8

    DOE PAGES

    Estrada Rodas, Ernesto A.; Neu, Richard W.

    2017-09-11

    A crystal viscoplasticity (CVP) model for the creep-fatigue interactions of nickel-base superalloy CMSX-8 is proposed. At the microstructure scale of relevance, the superalloys are a composite material comprised of a γ phase and a γ' strengthening phase with unique deformation mechanisms that are highly dependent on temperature. Considering the differences in the deformation of the individual material phases is paramount to predicting the deformation behavior of superalloys at a wide range of temperatures. In this work, we account for the relevant deformation mechanisms that take place in both material phases by utilizing two additive strain rates to model the deformation on each material phase. The model is capable of representing the creep-fatigue interactions in single-crystal superalloys for realistic 3-dimensional components in an Abaqus User Material Subroutine (UMAT). Using a set of material parameters calibrated to superalloy CMSX-8, the model predicts creep-fatigue, fatigue and thermomechanical fatigue behavior of this single-crystal superalloy. In conclusion, a sensitivity study of the material parameters is done to explore the effect on the deformation due to changes in the material parameters relevant to the microstructure.

  4. Laser cutting: industrial relevance, process optimization, and laser safety

    NASA Astrophysics Data System (ADS)

    Haferkamp, Heinz; Goede, Martin; von Busse, Alexander; Thuerk, Oliver

    1998-09-01

    Compared to other technologically relevant laser machining processes, laser cutting is to date the most frequently used application. With respect to the large number of possible fields of application and the variety of different materials that can be machined, this technology has reached a stable position within the world market of material processing. The machining quality achievable with laser beam cutting is influenced by various laser and process parameters. Process-integrated quality techniques have to be applied to ensure high-quality products and a cost-effective use of the laser manufacturing plant. Therefore, rugged and versatile online process-monitoring techniques at an affordable price would be desirable. Methods for the characterization of single plant components (e.g. laser source and optical path) have to be replaced by an omnivalent control system, capable of process data acquisition and analysis as well as the automatic adaptation of machining and laser parameters to changes in process and ambient conditions. At the Laser Zentrum Hannover eV, locally highly resolved thermographic measurements of the temperature distribution within the processing zone are performed using cost-effective measuring devices. Characteristic values for cutting quality and plunge control, as well as for the optimization of the surface roughness at the cutting edges, can be deduced from the spatial distribution of the temperature field and the measured temperature gradients. The main parameters influencing the temperature characteristic within the cutting zone are the laser beam intensity and the pulse duration in pulsed operation mode. For continuous operation mode, the temperature distribution is mainly determined by the laser output power relative to the cutting velocity. With higher cutting velocities, temperatures at the cutting front increase, reaching their maximum at the optimum cutting velocity, where absorption of the incident laser radiation is drastically increased due to the angle between the normal of the cutting front and the laser beam axis. Beyond process optimization and control, further work is focused on the characterization of particulate and gaseous laser-generated air contaminants and on adequate safety precautions such as exhaust and filter systems.

  5. A simulation study of spectral Čerenkov luminescence imaging for tumour margin estimation

    NASA Astrophysics Data System (ADS)

    Calvert, Nick; Helo, Yusef; Mertzanidou, Thomy; Tuch, David S.; Arridge, Simon R.; Stoyanov, Danail

    2017-03-01

    Breast cancer is the most common cancer in women in the world. Breast-conserving surgery (BCS) is a standard surgical treatment for breast cancer with the key objective of removing breast tissue, maintaining a negative surgical margin and providing a good cosmetic outcome. A positive surgical margin, meaning the presence of cancerous tissues on the surface of the breast specimen after surgery, is associated with local recurrence after therapy. In this study, we investigate a new imaging modality based on Cerenkov luminescence imaging (CLI) for the purpose of detecting positive surgical margins during BCS. We develop Monte Carlo (MC) simulations using the Geant4 nuclear physics simulation toolbox to study the spectrum of photons emitted given 18F-FDG and breast tissue properties. The resulting simulation spectra show that the CLI signal contains information that may be used to estimate whether the cancerous cells are at a depth of less than 1 mm or greater than 1 mm given appropriate imaging system design and sensitivity. The simulation spectra also show that when the source is located within 1 mm of the surface, the tissue parameters are not relevant to the model as the spectra do not vary significantly. At larger depths, however, the spectral information varies significantly with breast optical parameters, having implications for further studies and system design. While promising, further studies are needed to quantify the CLI response to more accurately incorporate tissue specific parameters and patient specific anatomical details.

  6. A strategy to establish Food Safety Model Repositories.

    PubMed

    Plaza-Rodríguez, C; Thoens, C; Falenski, A; Weiser, A A; Appel, B; Kaesbohrer, A; Filter, M

    2015-07-02

    Transferring the knowledge of predictive microbiology into real world food manufacturing applications is still a major challenge for the whole food safety modelling community. To facilitate this process, a strategy for creating open, community driven and web-based predictive microbial model repositories is proposed. These collaborative model resources could significantly improve the transfer of knowledge from research into commercial and governmental applications and also increase efficiency, transparency and usability of predictive models. To demonstrate the feasibility, predictive models of Salmonella in beef previously published in the scientific literature were re-implemented using an open source software tool called PMM-Lab. The models were made publicly available in a Food Safety Model Repository within the OpenML for Predictive Modelling in Food community project. Three different approaches were used to create new models in the model repositories: (1) all information relevant for model re-implementation is available in a scientific publication, (2) model parameters can be imported from tabular parameter collections and (3) models have to be generated from experimental data or primary model parameters. All three approaches were demonstrated in the paper. The sample Food Safety Model Repository is available via: http://sourceforge.net/projects/microbialmodelingexchange/files/models and the PMM-Lab software can be downloaded from http://sourceforge.net/projects/pmmlab/. This work also illustrates that a standardized information exchange format for predictive microbial models, as the key component of this strategy, could be established by adoption of resources from the Systems Biology domain. Copyright © 2015. Published by Elsevier B.V.

  7. Review of relationship between indoor and outdoor particles: I/O ratio, infiltration factor and penetration factor

    NASA Astrophysics Data System (ADS)

    Chen, Chun; Zhao, Bin

    2011-01-01

    Epidemiologic evidence indicates a relationship between outdoor particle exposure and adverse health effects, while most people spend 85-90% of their time indoors; understanding the relationship between indoor and outdoor particles is therefore quite important. This paper aims to provide an up-to-date review of both experiments and modelling of the relationship between indoor and outdoor particles. The use of three different parameters to assess this relationship was reviewed: the indoor/outdoor (I/O) ratio, the infiltration factor and the penetration factor. The experimental data for the three parameters, measured both in real houses and in laboratories, were summarized and analyzed. The I/O ratios vary considerably due to differences in size-dependent indoor particle emission rates, in the geometry of the cracks in building envelopes, and in the air exchange rates. It is therefore difficult to draw uniform conclusions without detailed information, which makes the I/O ratio of little help for understanding the indoor-outdoor relationship. The infiltration factor represents the equilibrium fraction of ambient particles that penetrates indoors and remains suspended, which excludes the contribution of indoor particle sources. The penetration factor is the most relevant parameter for the particle penetration mechanism through cracks and leaks in the building envelope. We investigate the methods used in previously published studies to both measure and model the infiltration and penetration factors. We also discuss the application of the penetration factor models and provide recommendations for improvement.
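The infiltration factor mentioned above follows from a steady-state single-zone mass balance with no indoor sources. The sketch below states that standard closed form; the parameter names are illustrative, and this is the common textbook relationship rather than a result unique to this review:

```python
def infiltration_factor(p, a, k):
    """Equilibrium indoor/outdoor particle ratio without indoor sources.

    p: penetration factor (dimensionless)
    a: air exchange rate (1/h)
    k: particle deposition rate (1/h)

    Steady-state single-zone mass balance:
        a * p * C_out = (a + k) * C_in
    =>  F_inf = C_in / C_out = p * a / (a + k)
    """
    return p * a / (a + k)

# With perfect penetration and no deposition, all outdoor particles persist:
print(infiltration_factor(1.0, 1.0, 0.0))   # 1.0
```

The formula makes the review's point concrete: unlike the raw I/O ratio, F_inf is defined entirely by building and particle parameters, so it characterizes the dwelling rather than whatever happened to be emitted indoors during a measurement.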

  8. Technical Guidance from the International Safety Framework for Nuclear Power Source Applications in Outer Space for Design and Development Phases

    NASA Astrophysics Data System (ADS)

    Summerer, Leopold

    2014-08-01

    In 2009, the International Safety Framework for Nuclear Power Source Applications in Outer Space [1] was adopted, following a multi-year process that involved all major space-faring nations in the frame of the International Atomic Energy Agency and the UN Committee on the Peaceful Uses of Outer Space. The Safety Framework reflects an international consensus on best practices. After the older 1992 Principles Relevant to the Use of Nuclear Power Sources in Outer Space, it is the second document at UN level dedicated entirely to space nuclear power sources. This paper analyses the aspects of the Safety Framework relevant to the design and development phases of space nuclear power sources. While early publications have begun analysing the legal aspects of the Safety Framework, its technical guidance has not yet been the subject of scholarly articles. The present paper therefore focuses on the technical guidance provided in the Safety Framework, in an attempt to help engineers and practitioners benefit from it.

  9. Systematic and heuristic processing of majority and minority-endorsed messages: the effects of varying outcome relevance and levels of orientation on attitude and message processing.

    PubMed

    Martin, Robin; Hewstone, Miles; Martin, Pearl Y

    2007-01-01

    Two experiments investigated the conditions under which majority and minority sources instigate systematic processing of their messages. Both experiments crossed source status (majority vs. minority) with message quality (strong vs. weak arguments). In each experiment, message elaboration was manipulated by varying either motivational (outcome relevance, Experiment 1) or cognitive (orientating tasks, Experiment 2) factors. The results showed that when either motivational or cognitive factors encouraged low message elaboration, there was heuristic acceptance of the majority position without detailed message processing. When the level of message elaboration was intermediate, there was message processing only for the minority source. Finally, when message elaboration was high, there was message processing for both source conditions. These results show that majority and minority influence are sensitive to motivational and cognitive factors that constrain or enhance message elaboration and that both sources can lead to systematic processing under specific circumstances.

  10. Applying an information literacy rubric to first-year health sciences student research posters.

    PubMed

    Goodman, Xan; Watts, John; Arenas, Rogelio; Weigel, Rachelle; Terrell, Tony

    2018-01-01

    This article describes the collection and analysis of annotated bibliographies created by first-year health sciences students to support their final poster projects. The authors examined the students' abilities to select relevant and authoritative sources, summarize the content of those sources, and correctly cite those sources. We collected images of 1,253 posters, of which 120 were sampled for analysis, and scored the posters using a 4-point rubric to evaluate the students' information literacy skills. We found that 52% of students were proficient at selecting relevant sources that directly contributed to the themes, topics, or debates presented in their final poster projects, and 64% of students did well with selecting authoritative peer-reviewed scholarly sources related to their topics. However, 45% of students showed difficulty in correctly applying American Psychological Association (APA) citation style. Our findings demonstrate a need for instructors and librarians to provide strategies for reading and comprehending scholarly articles in addition to properly using APA citation style.

  11. Applying an information literacy rubric to first-year health sciences student research posters*

    PubMed Central

    Goodman, Xan; Watts, John; Arenas, Rogelio; Weigel, Rachelle; Terrell, Tony

    2018-01-01

    Objective This article describes the collection and analysis of annotated bibliographies created by first-year health sciences students to support their final poster projects. The authors examined the students’ abilities to select relevant and authoritative sources, summarize the content of those sources, and correctly cite those sources. Methods We collected images of 1,253 posters, of which 120 were sampled for analysis, and scored the posters using a 4-point rubric to evaluate the students’ information literacy skills. Results We found that 52% of students were proficient at selecting relevant sources that directly contributed to the themes, topics, or debates presented in their final poster projects, and 64% of students did well with selecting authoritative peer-reviewed scholarly sources related to their topics. However, 45% of students showed difficulty in correctly applying American Psychological Association (APA) citation style. Conclusion Our findings demonstrate a need for instructors and librarians to provide strategies for reading and comprehending scholarly articles in addition to properly using APA citation style. PMID:29339940

  12. Evaluating the Impact of Contaminant Dilution and Biodegradation in Uncertainty Quantification of Human Health Risk

    NASA Astrophysics Data System (ADS)

    Zarlenga, Antonio; de Barros, Felipe; Fiori, Aldo

    2016-04-01

    We present a probabilistic framework for assessing human health risk due to groundwater contamination. Our goal is to quantify how physical hydrogeological and biochemical parameters control the magnitude and uncertainty of human health risk. Our methodology captures the whole risk chain, from aquifer contamination to tap-water consumption by the human population. The contaminant concentration, the key parameter for risk estimation, is governed by the interplay between large-scale advection, caused by heterogeneity, and the degradation processes strictly related to local-scale dispersion. The core of the hazard identification, and of the methodology, is the reactive transport model: the erratic displacement of contaminant in groundwater, due to the spatial variability of hydraulic conductivity (K), is characterized by a first-order Lagrangian stochastic model; different dynamics are considered as possible pathways of biodegradation under aerobic and anaerobic conditions. With the goal of quantifying uncertainty, a Beta distribution is assumed for the concentration probability density function (pdf), while different levels of approximation are explored for the estimation of the one-point concentration moments. The information pertaining to flow and transport is connected with a dose-response assessment, which generally involves the estimation of physiological parameters of the exposed population. Human health response depends on the metabolism of the exposed individuals and is therefore subject to uncertainty; the health parameters are intrinsically stochastic. As a consequence, we provide an integrated, global probabilistic human health risk framework which allows the propagation of uncertainty from multiple sources. The final result, the health risk pdf, is expressed as a function of a few relevant, physically based parameters, such as the size of the injection area, the Péclet number, the K structure metrics and covariance shape, the reaction parameters pertaining to aerobic and anaerobic degradation processes, and the dose-response parameters. Even though the final result assumes a relatively simple form, a few numerical quadratures are required to evaluate the trajectory moments of the solute plume. To perform a sensitivity analysis, we apply the methodology to a hypothetical case study: an aquifer that constitutes a water supply for a population, in which a continuous source of NAPL contaminant feeds a steady plume. The risk analysis is limited to carcinogenic compounds, for which the well-known linear dose-response relation for human risk is assumed. The analysis yields a few interesting findings: the risk distribution depends strongly on the pore-scale dynamics that trigger dilution and mixing, and biodegradation may lead to a significant reduction of the risk.
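The propagation of uncertainty through such a risk chain can be sketched with a small Monte Carlo calculation: a Beta-distributed concentration feeding the linear carcinogenic dose-response model. All parameter values below are hypothetical stand-ins, not the paper's:

```python
import random
import statistics

def simulate_risk(n=10_000, alpha=2.0, beta=5.0, c_max=0.05,
                  IR=2.0, BW=70.0, SF=0.1, seed=1):
    """Monte Carlo propagation of concentration uncertainty into
    carcinogenic risk. Concentration C (mg/L) is Beta(alpha, beta)
    scaled to [0, c_max]; risk uses the linear dose-response model
        risk = SF * C * IR / BW
    with ingestion rate IR (L/day), body weight BW (kg) and slope
    factor SF ((mg/kg/day)^-1). All values are illustrative."""
    rng = random.Random(seed)
    risks = [SF * (c_max * rng.betavariate(alpha, beta)) * IR / BW
             for _ in range(n)]
    # Return the mean risk and an approximate 95th percentile.
    return statistics.mean(risks), statistics.quantiles(risks, n=20)[-1]
```

In the paper's framework the health parameters (IR, BW, SF) would themselves be random; here only the concentration is, to keep the sketch short.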

  13. A longitudinal study on the information needs and preferences of patients after an acute coronary syndrome.

    PubMed

    Greco, Andrea; Cappelletti, Erika Rosa; Monzani, Dario; Pancani, Luca; D'Addario, Marco; Magrin, Maria Elena; Miglioretti, Massimo; Sarini, Marcello; Scrignaro, Marta; Vecchio, Luca; Fattirolli, Francesco; Steca, Patrizia

    2016-09-20

    Research has shown that the provision of pertinent health information to patients with cardiovascular disease is associated with better adherence to medical prescriptions, behavioral changes, and enhanced perception of control over the disease. Yet there is no clear knowledge on how to improve information pertinence. Identifying and meeting the information needs of patients and their preferences for sources of information is pivotal to developing patient-led services. This prospective, observational study was aimed at exploring the information needs and perceived relevance of different information sources for patients during the twenty-four months following an acute coronary syndrome. Two hundred and seventeen newly diagnosed patients with acute coronary syndrome were enrolled in the study. The patients were primarily men (83.41 %) with a mean age of 57.28 years (range 35-75; SD = 7.98). Patients' needs for information and the perceived relevance of information sources were evaluated between 2 and 8 weeks after hospitalization (baseline) and during three follow-ups at 6, 12 and 24 months after baseline. Repeated measures ANOVA, Bonferroni post hoc tests and Cochran's Q Test were performed to test differences in variables of interest over time. Results showed a reduction in information needs, but this decrease was significant only for topics related to daily activities, behavioral habits, risks and complications. At baseline, the primary sources of information were specialists and general practitioners, followed by family members and information leaflets given by physicians. The relevance of other sources changed differently over time. The present longitudinal study is an original contribution to the investigation of changes in information needs and preferences for sources of information among patients who are diagnosed with acute coronary syndrome. One of the main results of this study is that information on self-disease management is perceived as a minor theme by patients even two years after the event. Knowledge of how patients' information needs and perceived relevance of information sources change over time could enhance the quality of chronic disease management, leading health-care systems to move toward more patient-tailored care.

  14. The International Safety Framework for nuclear power source applications in outer space-Useful and substantial guidance

    NASA Astrophysics Data System (ADS)

    Summerer, L.; Wilcox, R. E.; Bechtel, R.; Harbison, S.

    2015-06-01

    In 2009, the International Safety Framework for Nuclear Power Source Applications in Outer Space was adopted, following a multi-year process that involved all major space faring nations under the auspices of a partnership between the UN Committee on the Peaceful Uses of Outer Space and the International Atomic Energy Agency. The Safety Framework reflects an international consensus on best practices to achieve safety. Following the 1992 UN Principles Relevant to the Use of Nuclear Power Sources in Outer Space, it is the second attempt by the international community to draft guidance promoting the safety of applications of nuclear power sources in space missions. NPS applications in space have unique safety considerations compared with terrestrial applications. Mission launch and outer space operational requirements impose size, mass and other space environment limitations not present for many terrestrial nuclear facilities. Potential accident conditions could expose nuclear power sources to extreme physical conditions. The Safety Framework is structured to provide guidance for both the programmatic and technical aspects of safety. In addition to sections containing specific guidance for governments and for management, it contains technical guidance pertinent to the design, development and all mission phases of space NPS applications. All sections of the Safety Framework contain elements directly relevant to engineers and space mission designers for missions involving space nuclear power sources. The challenge for organisations and engineers involved in the design and development processes of space nuclear power sources and applications is to implement the guidance provided in the Safety Framework by integrating it into the existing standard space mission infrastructure of design, development and operational requirements, practices and processes. This adds complexity to the standard space mission and launch approval processes. 
The Safety Framework is deliberately generic so as to remain largely independent of technological progress, of national organisational setups and of space mission types. Implementing its guidance therefore leaves room for interpretation and adaptation. Relying on reported practices, we analyse the guidance particularly relevant to engineers and space mission designers.

  15. Determination of Destress Blasting Effectiveness Using Seismic Source Parameters

    NASA Astrophysics Data System (ADS)

    Wojtecki, Łukasz; Mendecki, Maciej J.; Zuberek, Wacław M.

    2017-12-01

    Underground mining of coal seams in the Upper Silesian Coal Basin is currently performed under difficult geological and mining conditions. The mining depth, dislocations (faults and folds) and mining remnants are responsible for the rockburst hazard to the highest degree. This hazard can be minimized by active rockburst prevention, in which destress blastings play an important role. Destress blastings in coal seams aim to relieve local stress concentrations. These blastings are usually performed from the longwall face to decrease the stress level ahead of the longwall. An accurate estimation of the effectiveness of active rockburst prevention is important when mining under disadvantageous geological and mining conditions that increase the risk of rockburst. Seismic source parameters characterize the focus of a tremor and may be useful in estimating the effects of destress blasting. The investigated destress blastings were performed in coal seam no. 507 during its longwall mining in one of the coal mines in the Upper Silesian Coal Basin under difficult geological and mining conditions. The seismic source parameters of the provoked tremors were calculated. The presented preliminary investigations enable a rapid estimation of destress blasting effectiveness using seismic source parameters, but further analysis under other geological and mining conditions and with other blasting parameters is required.
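Two of the standard source-parameter relations used in this context can be sketched as follows. The Hanks-Kanamori moment magnitude is standard; using the apparent stress of blasting-provoked tremors as a destressing indicator is an illustrative assumption here, not necessarily the paper's exact criterion:

```python
import math

def moment_magnitude(M0):
    """Moment magnitude from seismic moment M0 (N*m), via the
    Hanks-Kanamori relation: Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(M0) - 9.1)

def apparent_stress(E, M0, mu=3e10):
    """Apparent stress (Pa) from radiated seismic energy E (J),
    seismic moment M0 (N*m) and shear modulus mu (Pa). A drop in the
    apparent stress of tremors provoked by successive blasts is one
    possible sign that the destressing is effective."""
    return mu * E / M0
```

Both quantities are routinely derived from the same spectral parameters (low-frequency level and corner frequency) that mine seismology networks already estimate.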

  16. Constructing Ebola transmission chains from West Africa and estimating model parameters using internet sources.

    PubMed

    Pettey, W B P; Carter, M E; Toth, D J A; Samore, M H; Gundlapalli, A V

    2017-07-01

    During the recent Ebola crisis in West Africa, individual person-level details of disease onset, transmissions, and outcomes such as survival or death were reported in online news media. We set out to document disease transmission chains for Ebola, with the goal of generating a timely account that could be used for surveillance, mathematical modeling, and public health decision-making. By accessing public web pages only, such as locally produced newspapers and blogs, we created a transmission chain involving two Ebola clusters in West Africa that compared favorably with other published transmission chains, and derived parameters for a mathematical model of Ebola disease transmission that were not statistically different from those derived from published sources. We present a protocol for responsibly gleaning epidemiological facts, transmission model parameters, and useful details from affected communities using mostly indigenously produced sources. After comparing our transmission parameters to published parameters, we discuss additional benefits of our method, such as gaining practical information about the affected community, its infrastructure, politics, and culture. We also briefly compare our method to similar efforts that used mostly non-indigenous online sources to generate epidemiological information.
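A transmission chain assembled from such reports directly supports simple parameter estimates; for instance, a crude reproduction-number estimate is the mean number of secondary cases per known case. The chain below is hypothetical, not from the paper's clusters:

```python
from collections import Counter
from statistics import mean

def secondary_case_counts(chain):
    """Given a transmission chain as (infector, infectee) pairs gleaned
    from news reports, count secondary cases per known case."""
    counts = Counter(infector for infector, _ in chain)
    cases = {person for pair in chain for person in pair}
    return {case: counts.get(case, 0) for case in cases}

def effective_R(chain):
    """Crude reproduction-number estimate: the mean number of
    secondary cases over all cases appearing in the chain."""
    return mean(secondary_case_counts(chain).values())

# Hypothetical chain: A infects B and C; B infects D; C infects E and F.
chain = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "E"), ("C", "F")]
```

Such a naive estimate is biased downward by unresolved terminal cases, which is one reason the authors compare their derived parameters against published values.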

  17. A Citation Analysis of Psychology Students' Use of Sources in Online Distance Learning

    ERIC Educational Resources Information Center

    Weaver, Nancy Evans; Barnard, Estelle

    2015-01-01

    Reference lists from two assignments of psychology students in university-level online distance learning (ODL) were analyzed for number and type of sources and mark achieved. Most referenced were resources relevant to the assignment and provided by instructors. Use changed across assignments: Instructor sources were used more on the first…

  18. Controlling Nonpoint-Source Water Pollution: A Citizen's Handbook.

    ERIC Educational Resources Information Center

    Hansen, Nancy Richardson; And Others

    Citizens can play an important role in helping their states develop pollution control programs and spurring effective efforts to deal with nonpoint-source pollution. This guide takes the reader step-by-step through the process that states must follow to comply with water quality legislation relevant to nonpoint-source pollution. Part I provides…

  19. Evaluating Internet and Scholarly Sources across the Disciplines: Two Case Studies

    ERIC Educational Resources Information Center

    Calkins, Susanna; Kelley, Matthew R.

    2007-01-01

    Although most college faculty expect their students to analyze Internet and scholarly sources in a critical and responsible manner, recent research suggests that many undergraduates are unable to discriminate between credible and noncredible sources, in part because they lack the proper training and relevant experiences. The authors describe two…

  20. Enamel surface topography analysis for diet discrimination. A methodology to enhance and select discriminative parameters

    NASA Astrophysics Data System (ADS)

    Francisco, Arthur; Blondel, Cécile; Brunetière, Noël; Ramdarshan, Anusha; Merceron, Gildas

    2018-03-01

    Tooth wear and, more specifically, dental microwear texture is a dietary proxy that has been used for years in vertebrate paleoecology and ecology. Dental microwear texture analysis (DMTA) relies on a few parameters related to the complexity, anisotropy and heterogeneity of the enamel facet surfaces at the micrometric scale. Working with few but physically meaningful parameters helps in comparing published results and in defining levels for classification purposes. Other dental microwear approaches are based on ISO parameters coupled with statistical tests to find the most relevant ones. The present study makes use of most of the aforementioned parameters, in more or less modified form. But beyond the parameters themselves, we propose a new approach: instead of a single parameter characterizing the whole surface, we sample the surface and thus generate 9 derived parameters, in order to broaden the parameter set. The identification of the most discriminative parameters is performed with an automated procedure which is an extended and refined version of the workflows encountered in some studies. The procedure in its initial form includes the most common tools, like ANOVA and correlation analysis, along with the required mathematical tests. The discrimination results show that a simplified form of the procedure identifies the desired number of discriminative parameters more efficiently. Also highlighted are some trends, like the relevance of working with both height and spatial parameters, as well as the potential benefits of dimensionless surfaces. On a set of 45 surfaces from 45 specimens of three modern ruminants with differences in feeding preferences (grazing, leaf-browsing and fruit-eating), it is clearly shown that the level of wear discrimination is improved with the new methodology compared to the previous ones.
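The sampling idea can be sketched as follows: compute a texture parameter per cell of a grid laid over the surface, then derive summary statistics across the cells. The RMS-height parameter, the 3 × 3 grid and the particular statistics below are illustrative assumptions, not the paper's exact set of 9 derived parameters:

```python
from statistics import mean, median, pstdev

def rms_roughness(z):
    """Sq-like RMS height parameter of a height map z (list of rows),
    after removing the mean height (a stand-in for plane removal)."""
    flat = [v for row in z for v in row]
    m = mean(flat)
    return mean([(v - m) ** 2 for v in flat]) ** 0.5

def cellwise_parameters(z, n=3):
    """Split the height map into an n x n grid of cells, compute the
    parameter per cell, and derive summary statistics across cells,
    mimicking the idea of broadening the parameter set by sampling."""
    rows, cols = len(z), len(z[0])
    vals = []
    for i in range(n):
        for j in range(n):
            cell = [row[j * cols // n:(j + 1) * cols // n]
                    for row in z[i * rows // n:(i + 1) * rows // n]]
            vals.append(rms_roughness(cell))
    return {"median": median(vals), "spread": pstdev(vals)}
```

The spread across cells captures surface heterogeneity, which a single whole-surface parameter cannot express.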

  1. Evaluation for relationship among source parameters of underground nuclear tests in Northern Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Kim, G.; Che, I. Y.

    2017-12-01

    We evaluated the relationships among source parameters of underground nuclear tests in the northern Korean Peninsula using regional seismic data. Dense global and regional seismic networks were incorporated to measure locations and origin times precisely. The location analyses show that the distances among the test locations are tiny on a regional scale. These tiny location differences validate a linear model assumption. We estimated source spectral ratios by cancelling path effects through spectral ratios of the observed seismograms. We then estimated empirical relationships among the depths of burial and the yields based on theoretical source models.
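The spectral-ratio step rests on the events being nearly co-located: writing each observed spectrum as source times path, the shared path term cancels in the ratio. A minimal sketch, with spectra represented as frequency-amplitude maps (purely illustrative):

```python
def source_spectral_ratio(spec_a, spec_b, eps=1e-12):
    """Amplitude-spectrum ratio of two nearly co-located events
    recorded at the same station. Writing each observed spectrum as
    O(f) = S(f) * P(f) (source term times path/site term), the shared
    path term P(f) cancels: O_a(f) / O_b(f) = S_a(f) / S_b(f)."""
    return {f: spec_a[f] / max(spec_b[f], eps) for f in spec_a}
```

The low-frequency level of such a ratio approximates the ratio of seismic moments, which is what ties the spectral ratios to relative yield estimates.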

  2. Post-Newtonian parameters γ and β of scalar-tensor gravity for a homogeneous gravitating sphere

    NASA Astrophysics Data System (ADS)

    Hohmann, Manuel; Schärer, Andreas

    2017-11-01

    We calculate the parameters γ and β in the parametrized post-Newtonian (PPN) formalism for scalar-tensor gravity (STG) with an arbitrary potential, under the assumption that the source matter is given by a nonrotating sphere of constant density, pressure, and internal energy. For our calculation we write the STG field equations in a form which is manifestly invariant under conformal transformations of the metric and redefinitions of the scalar field. This readily shows that the obtained PPN parameters are likewise invariant under such transformations. Our result is consistent with the expectation that STG is a fully conservative theory, i.e., only γ and β differ from their general relativity values γ = β = 1, which indicates the absence of preferred-frame and preferred-location effects. We find that the values of the PPN parameters depend on both the radius of the gravitating mass source and the distance between the source and the observer. Most interestingly, we find that even at large distances from the source β does not approach β = 1, but receives corrections due to a modified gravitational self-energy of the source. Finally, we compare our result to a number of measurements of γ and β in the Solar System. We find that in particular measurements of β improve the previously obtained bounds on the theory parameters, due to the aforementioned long-distance corrections.
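For reference, γ and β enter the standard PPN expansion of the metric around a mass source; in the textbook static, single-potential form (not specific to the scalar-tensor result of this paper):

```latex
g_{00} = -1 + 2U - 2\beta U^{2}, \qquad
g_{ij} = \left(1 + 2\gamma U\right)\delta_{ij},
```

where U is the Newtonian potential of the source. General relativity corresponds to γ = β = 1, so any measured deviation of γ or β from unity directly constrains the theory parameters.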

  3. Electrochemical Power Sources for Electric Highway Vehicles

    DOT National Transportation Integrated Search

    1972-06-01

    The report summarizes an assessment of electrochemical power sources (batteries and fuel cells) relevant to electric vehicle propulsion. A very brief description of each type of cell is given, along with its present state of research.

  4. Dynamic Source Inversion of a M6.5 Intraslab Earthquake in Mexico: Application of a New Parallel Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Díaz-Mojica, J. J.; Cruz-Atienza, V. M.; Madariaga, R.; Singh, S. K.; Iglesias, A.

    2013-05-01

    We introduce a novel approach for imaging earthquake rupture dynamics from ground motion records, based on a parallel genetic algorithm (GA). The method follows the elliptical dynamic-rupture-patch approach introduced by Di Carli et al. (2010) and has been carefully verified through different numerical tests (Díaz-Mojica et al., 2012). Apart from the five model parameters defining the patch geometry, our dynamic source description has four more parameters: the stress drops inside the nucleation and elliptical patches, and two friction parameters, the slip-weakening distance and the change of the friction coefficient. These parameters are constant within the rupture surface. The forward dynamic source problem, embedded in the GA inverse method, uses a highly accurate computational solver, namely the staggered-grid split-node method. The synthetic inversion presented here shows that the source model parameterization is suitable for the GA, and that short-scale source dynamic features are well resolved in spite of low-pass filtering of the data for periods comparable to the source duration. Since there is always uncertainty in the propagation medium as well as in the source location and the focal mechanism, we have introduced a statistical approach to generate a set of solution models, so that the envelope of the corresponding synthetic waveforms explains the observed data as much as possible. We applied the method to the 2012 Mw 6.5 intraslab Zumpango, Mexico earthquake and determined several fundamental source parameters that are in accordance with different and completely independent estimates for Mexican and worldwide earthquakes. Our weighted-average final model satisfactorily explains the eastward rupture directivity observed in the recorded data.
Some parameters found for the Zumpango earthquake are: Δτ = 30.2 ± 6.2 MPa, Er = 0.68 ± 0.36 × 10^15 J, G = 1.74 ± 0.44 × 10^15 J, η = 0.27 ± 0.11, Vr/Vs = 0.52 ± 0.09 and Mw = 6.64 ± 0.07, for the stress drop, radiated energy, fracture energy, radiation efficiency, rupture velocity and moment magnitude, respectively. (Figure: location of the Mw 6.5 intraslab Zumpango earthquake, station locations, and tectonic setting in central Mexico.)
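As a consistency check on the reported values, the radiation efficiency follows directly from the radiated and fracture energies:

```python
def radiation_efficiency(Er, G):
    """Radiation efficiency eta = Er / (Er + G): the radiated seismic
    energy Er over the sum of radiated and fracture energy G."""
    return Er / (Er + G)
```

With the reported Er = 0.68 × 10^15 J and G = 1.74 × 10^15 J this gives η ≈ 0.28, consistent with the stated 0.27 ± 0.11.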

  5. FUB at TREC 2008 Relevance Feedback Track: Extending Rocchio with Distributional Term Analysis

    DTIC Science & Technology

    2008-11-01

    starting point is the improved version [Salton and Buckley 1990] of the original Rocchio formula [Rocchio 1971]: newQ = α·origQ + (β/|R|)·Σ_{r∈R} r − (γ/|NR|)·Σ_{s∈NR} s ... earlier studies about the low effect of the main relevance feedback parameters on retrieval performance (e.g., Salton and Buckley 1990), while they seem ... Relevance feedback in information retrieval. In The SMART Retrieval System: Experiments in Automatic Document Processing, Salton, G., Ed., Prentice Hall
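The Rocchio update can be sketched over sparse term-weight vectors; clipping negative weights to zero is a common practical choice here, not necessarily what the FUB system does:

```python
def rocchio(orig_q, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio relevance feedback on term-weight vectors (dicts):
        newQ = alpha*origQ + (beta/|R|)*sum(R) - (gamma/|NR|)*sum(NR)
    where R and NR are the relevant and non-relevant document sets.
    Terms whose updated weight is not positive are dropped."""
    terms = set(orig_q) | {t for d in relevant for t in d} \
                        | {t for d in nonrelevant for t in d}
    new_q = {}
    for t in terms:
        w = alpha * orig_q.get(t, 0.0)
        if relevant:
            w += beta * sum(d.get(t, 0.0) for d in relevant) / len(relevant)
        if nonrelevant:
            w -= gamma * sum(d.get(t, 0.0) for d in nonrelevant) / len(nonrelevant)
        if w > 0:
            new_q[t] = w
    return new_q
```

The α, β, γ defaults above are conventional textbook settings, chosen only to make the sketch concrete.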

  6. Stochastic or statistic? Comparing flow duration curve models in ungauged basins and changing climates

    NASA Astrophysics Data System (ADS)

    Müller, M. F.; Thompson, S. E.

    2015-09-01

    The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by a strong wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are strongly favored over statistical models.
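For context, an empirical FDC assigns each observed flow an exceedance probability. A minimal sketch using the Weibull plotting position (one common convention among several):

```python
def flow_duration_curve(flows):
    """Empirical flow duration curve: sort observed flows in
    decreasing order and attach the Weibull plotting position
    p = rank / (n + 1) as the exceedance probability of each flow."""
    ordered = sorted(flows, reverse=True)
    n = len(ordered)
    return [(rank / (n + 1), q) for rank, q in enumerate(ordered, start=1)]
```

Both the statistical and process-based models discussed above are, in effect, different ways of predicting this curve where no flow record exists.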

  7. Comparing statistical and process-based flow duration curve models in ungauged basins and changing rain regimes

    NASA Astrophysics Data System (ADS)

    Müller, M. F.; Thompson, S. E.

    2016-02-01

    The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by frequent wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are favored over statistical models.

  8. Psychopathy-related traits and the use of reward and social information: a computational approach

    PubMed Central

    Brazil, Inti A.; Hunt, Laurence T.; Bulten, Berend H.; Kessels, Roy P. C.; de Bruijn, Ellen R. A.; Mars, Rogier B.

    2013-01-01

    Psychopathy is often linked to disturbed reinforcement-guided adaptation of behavior in both clinical and non-clinical populations. Recent work suggests that these disturbances might be due to a deficit in actively using information to guide changes in behavior. However, how much information is actually used to guide behavior is difficult to observe directly. Therefore, we used a computational model to estimate the use of information during learning. Thirty-six female subjects were recruited based on their total scores on the Psychopathic Personality Inventory (PPI), a self-report psychopathy measure, and performed a task involving simultaneous learning of reward-based and social information. A Bayesian reinforcement-learning model was used to parameterize the use of each source of information during learning. Subsequently, we used the subscales of the PPI to assess psychopathy-related traits, and the traits that were strongly related to the model's parameters were isolated through a formal variable selection procedure. Finally, we assessed how these covaried with model parameters. We succeeded in isolating key personality traits believed to be relevant for psychopathy that can be related to model-based descriptions of subject behavior. Use of reward-history information was negatively related to levels of trait anxiety and fearlessness, whereas use of social advice decreased as the perceived ability to manipulate others and lack of anxiety increased. These results corroborate previous findings suggesting that sub-optimal use of different types of information might be implicated in psychopathy. They also further highlight the importance of considering the potential of computational modeling to understand the role of latent variables, such as the weight people give to various sources of information during goal-directed behavior, when conducting research on psychopathy-related traits and in the field of forensic psychiatry. PMID:24391615
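A toy version of learning from two information sources can be sketched with a Rescorla-Wagner reward update mixed with binary social advice. This is a simplified stand-in for the paper's Bayesian model; the mixing weights play the role of the fitted use-of-information parameters:

```python
def simulate_learner(outcomes, advice, w_reward=0.5, w_social=0.5, lr=0.2):
    """Toy learner combining two information sources: a running reward
    estimate v for option A (Rescorla-Wagner update with rate lr) and
    binary social advice a in {0, 1}, mixed with weights w_reward and
    w_social. Returns the choice sequence and the final value estimate.
    All parameter values are illustrative."""
    v = 0.5  # initial reward expectation for choosing option A
    choices = []
    for r, a in zip(outcomes, advice):
        p_a = w_reward * v + w_social * a  # combined evidence for A
        choices.append(1 if p_a >= 0.5 else 0)
        v += lr * (r - v)  # reward prediction-error update
    return choices, v
```

Fitting w_reward and w_social to observed choices, as the paper does with its Bayesian model, turns the unobservable "use of information" into estimable quantities that can be correlated with trait scores.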

  9. Alternate energy source usage methods for in situ heat treatment processes

    DOEpatents

    Stone, Jr., Francis Marion; Goodwin, Charles R; Richard, Jr., James E

    2014-10-14

Systems, methods, and heaters for treating a subsurface formation are described herein. At least one method for providing power to one or more subsurface heaters from an intermittent power source is described herein. The method may include monitoring one or more operating parameters of the heaters, the intermittent power source, and a transformer coupled to the intermittent power source that transforms power from the intermittent power source to power with appropriate operating parameters for the heaters; and controlling the power output of the transformer so that a constant voltage is provided to the heaters regardless of the load of the heaters and the power output provided by the intermittent power source.

  10. Parameter identification of process simulation models as a means for knowledge acquisition and technology transfer

    NASA Astrophysics Data System (ADS)

    Batzias, Dimitris F.; Ifanti, Konstantina

    2012-12-01

Process simulation models are usually empirical, and therefore have inherent difficulty serving as carriers for knowledge acquisition and technology transfer, since their parameters have no physical meaning that would facilitate verification of their dependence on the production conditions; in such a case, a 'black box' regression model or a neural network might be used to simply connect input-output characteristics. In several cases, scientific/mechanismic models may prove valid, in which case parameter identification is required to determine the independent/explanatory variables and parameters on which each model parameter depends. This is a difficult task, since the phenomenological level at which each parameter is defined is different. In this paper, we have developed a methodological framework, in the form of an algorithmic procedure, to solve this problem. The main parts of this procedure are: (i) stratification of relevant knowledge into discrete layers immediately adjacent to the layer to which the initial model under investigation belongs, (ii) design of the ontology corresponding to these layers, (iii) elimination of the less relevant parts of the ontology by thinning, (iv) retrieval of the stronger interrelations between the remaining nodes within the revised ontological network, and (v) parameter identification taking into account the most influential interrelations revealed in (iv). The functionality of this methodology is demonstrated by means of two representative case examples on wastewater treatment.

  11. REVIEW OF INDOOR EMISSION SOURCE MODELS: PART 2. PARAMETER ESTIMATION

    EPA Science Inventory

    This review consists of two sections. Part I provides an overview of 46 indoor emission source models. Part 2 (this paper) focuses on parameter estimation, a topic that is critical to modelers but has never been systematically discussed. A perfectly valid model may not be a usefu...

  12. Estimation of biological parameters of marine organisms using linear and nonlinear acoustic scattering model-based inversion methods.

    PubMed

    Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H

    2016-05-01

The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate, first, the abundance and, second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a nonlinear inversion involving a scattering model-based kernel.
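The general strategy — a search over the nonlinear kernel parameter with a linear abundance solve at each candidate — can be sketched as follows; the cross-section model below is an invented high-pass stand-in, not the physics-based models used in the paper:

```python
import numpy as np

def sigma_bs(freq_khz, length_m):
    """Toy backscattering cross-section, nonlinear in animal length
    (high-pass fluid-sphere flavour; illustrative only)."""
    k = 2 * np.pi * freq_khz * 1e3 / 1500.0   # acoustic wavenumber, c = 1500 m/s
    kl = k * length_m
    return 1e-6 * length_m**2 * kl**4 / (1 + kl**4)

def invert(freqs, sv_meas, lengths):
    """Grid-search the nonlinear size parameter; abundance follows from a
    linear least-squares solve at each candidate size."""
    best = None
    for L in lengths:
        g = sigma_bs(freqs, L)                 # kernel column for this size
        n = max(g @ sv_meas / (g @ g), 0.0)    # 1-D least squares, n >= 0
        misfit = np.sum((sv_meas - n * g) ** 2)
        if best is None or misfit < best[0]:
            best = (misfit, L, n)
    return best[1], best[2]                    # size, abundance
```

With multi-frequency data, the misfit surface over size is well constrained, which is the essence of the multi-frequency argument in the abstract.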

13. Pinatubo Emulation in Multiple Models (POEMs): co-ordinated experiments in the ISA-MIP model intercomparison activity component of the SPARC Stratospheric Sulphur and its Role in Climate initiative (SSiRC)

    NASA Astrophysics Data System (ADS)

    Lee, Lindsay; Mann, Graham; Carslaw, Ken; Toohey, Matthew; Aquila, Valentina

    2016-04-01

The World Climate Research Program's SPARC initiative has a new international activity "Stratospheric Sulphur and its Role in Climate" (SSiRC) to better understand changes in stratospheric aerosol and precursor gaseous sulphur species. One component of SSiRC involves an intercomparison "ISA-MIP" of composition-climate models that simulate the stratospheric aerosol layer interactively. Within PoEMS, each modelling group will run a "perturbed physics ensemble" (PPE) of interactive stratospheric aerosol (ISA) simulations of the Pinatubo eruption, varying several uncertain parameters associated with the eruption's SO2 emissions and model processes. A powerful new technique to quantify and attribute sources of uncertainty in complex global models is described by Lee et al. (2011, ACP). The analysis uses Gaussian emulation to derive a probability density function (pdf) of predicted quantities, essentially interpolating the PPE results in multi-dimensional parameter space. Once the emulator is trained on the ensemble, a Monte Carlo simulation with the fast Gaussian emulator enables a full variance-based sensitivity analysis. The approach has already been used effectively by Carslaw et al. (2013, Nature) to quantify the uncertainty in the cloud albedo effect forcing from a 3D global aerosol-microphysics model, making it possible to compare the sensitivity of different predicted quantities to uncertainties in natural and anthropogenic emissions types and in structural parameters of the models. Within ISA-MIP, each group will carry out a PPE of runs, with the subsequent emulator analysis assessing the uncertainty in the volcanic forcings predicted by each model. In this poster presentation we will give an outline of the "PoEMS" analysis, describing the uncertain parameters to be varied and the relevance to further understanding differences identified in previous international stratospheric aerosol assessments.
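A minimal sketch of the emulate-then-sample workflow, using a hand-rolled RBF-kernel Gaussian process on an invented two-parameter "ensemble" (illustrative only; the actual analysis follows the Lee et al., 2011 methodology):

```python
import numpy as np

def gp_fit_predict(X, y, Xs, length=0.3, noise=1e-6):
    """Minimal RBF-kernel Gaussian-process emulator (posterior mean only).
    A sketch, not the Lee et al. implementation."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length**2)
    K = k(X, X) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    return k(Xs, X) @ alpha

rng = np.random.default_rng(1)
X = rng.random((40, 2))                    # toy "PPE" design over 2 parameters
y = np.sin(3 * X[:, 0]) + 0.2 * X[:, 1]    # toy model output (stand-in for forcing)
Xmc = rng.random((5000, 2))                # cheap Monte Carlo on the emulator
ymc = gp_fit_predict(X, y, Xmc)            # emulated pdf samples of the output

# crude first-order sensitivity of parameter 0: variance of binned conditional means
bins = np.digitize(Xmc[:, 0], np.linspace(0, 1, 11))
cond = np.array([ymc[bins == b].mean() for b in range(1, 11)])
S1 = cond.var() / ymc.var()
```

In this toy, parameter 0 dominates the output variance, so `S1` comes out close to 1; a full variance-based analysis would compute such indices for every uncertain parameter.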

  14. Zebrafish as a possible bioindicator of organic pollutants with effects on reproduction in drinking waters.

    PubMed

    Martínez-Sales, M; García-Ximénez, F; Espinós, F J

    2015-07-01

    Organic contaminants can be detected at low concentrations in drinking water, raising concerns for human health, particularly in reproduction. In this respect, we attempted to use the zebrafish as a bioindicator to detect the possible presence of these substances in drinking water, aiming to define the most relevant parameters to detect these substances, which particularly affect the development and reproduction of zebrafish. To this end, batches of 30 embryos with the chorion intact were cultured in drinking waters from different sources, throughout their full life-cycle up to 5 months, in 20 L tanks. Six replicates were performed in all water groups, with a total of 24 aquariums. Two generations (F0 and F1) were studied and the following parameters were tested: in the F0 generation, survival and abnormality rates evaluated at 5 dpf (days post-fertilization) and at 5 mpf (months post-fertilization), the onset of spawning and the fertility rate from 3 mpf to 5 mpf, and the sex ratio and underdeveloped specimens at 5 mpf. Furthermore, in the F0 offspring (F1), survival and abnormality rates were evaluated at 5 dpf and the hatching rate at 72 hpf. These results revealed that the hatching rate is the most sensitive parameter to distinguish different levels of effects between waters during the early life stages, whereas the rate of underdeveloped specimens is more suitable at later life stages. Regarding adult reproduction, fertility rate was the most sensitive parameter. The possible reversibility or accumulative nature of such effects will be studied in future work. Copyright © 2015. Published by Elsevier B.V.

  15. Modified method for estimating petroleum source-rock potential using wireline logs, with application to the Kingak Shale, Alaska North Slope

    USGS Publications Warehouse

    Rouse, William A.; Houseknecht, David W.

    2016-02-11

In 2012, the U.S. Geological Survey completed an assessment of undiscovered, technically recoverable oil and gas resources in three source rocks of the Alaska North Slope, including the lower part of the Jurassic to Lower Cretaceous Kingak Shale. In order to identify organic shale potential in the absence of a robust geochemical dataset from the lower Kingak Shale, we introduce two quantitative parameters, ΔDT_x̄ and ΔDT_z, estimated from wireline logs from exploration wells and based in part on the commonly used delta-log-resistivity (Δ log R) technique. Calculation of ΔDT_x̄ and ΔDT_z is intended to produce objective parameters that may be proportional to the quality and volume, respectively, of potential source rocks penetrated by a well and can be used as mapping parameters to convey the spatial distribution of source-rock potential. Both the ΔDT_x̄ and ΔDT_z mapping parameters show increased source-rock potential from north to south across the North Slope, with the largest values at the toe of clinoforms in the lower Kingak Shale. Because thermal maturity is not considered in the calculation of ΔDT_x̄ or ΔDT_z, total organic carbon values for individual wells cannot be calculated on the basis of ΔDT_x̄ or ΔDT_z alone. Therefore, the ΔDT_x̄ and ΔDT_z mapping parameters should be viewed as first-step reconnaissance tools for identifying source-rock potential.
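For context, the delta-log-R curve separation that the new parameters build on can be computed as in the classic Passey et al. (1990) formulation; the baseline values in the example are arbitrary, and the paper's own ΔDT parameters are not reproduced here:

```python
import math

def delta_log_r(resistivity, sonic_dt, res_baseline, dt_baseline):
    """Classic delta-log-R separation (Passey et al., 1990): log-resistivity
    separation plus a scaled sonic (DT) separation relative to a
    non-source baseline interval."""
    return math.log10(resistivity / res_baseline) + 0.02 * (sonic_dt - dt_baseline)
```

For example, a reading of 20 ohm-m and 140 µs/ft against a 2 ohm-m, 90 µs/ft baseline gives a separation of 2.0, flagging a potential organic-rich interval.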

  16. Clinico-pathological and biological prognostic variables in squamous cell carcinoma of the vulva.

    PubMed

    Gadducci, Angiolo; Tana, Roberta; Barsotti, Cecilia; Guerrieri, Maria Elena; Genazzani, Andrea Riccardo

    2012-07-01

Several clinical-pathological parameters have been related to survival of patients with invasive squamous cell carcinoma of the vulva, whereas few studies have investigated the ability of biological variables to predict the clinical outcome of these patients. The present paper reviews the literature data on the prognostic relevance of lymph node-related parameters, primary tumor-related parameters, FIGO stage, blood variables, and tissue biological variables. Regarding the latter, the paper takes into account the analysis of DNA content, cell cycle-regulatory proteins, apoptosis-related proteins, epidermal growth factor receptor [EGFR], and proteins that are involved in tumor invasiveness, metastasis, and angiogenesis. At present, the lymph node status and FIGO stage according to the new 2009 classification system are the main predictors for vulvar squamous cell carcinoma, whereas biological variables do not yet have clinical relevance and their role is still investigational. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  17. Women's Later Life Career Development: Looking through the Lens of the Kaleidoscope Career Model

    ERIC Educational Resources Information Center

    August, Rachel A.

    2011-01-01

    This study explores the relevance of the Kaleidoscope Career Model (KCM) to women's later life career development. Qualitative interview data were gathered from 14 women in both the "truly" late career and bridge employment periods using a longitudinal design. The relevance of authenticity, balance, and challenge--central parameters in the KCM--is…

  18. Relevance and Rigor in International Business Teaching: Using the CSA-FSA Matrix

    ERIC Educational Resources Information Center

    Collinson, Simon C.; Rugman, Alan M.

    2011-01-01

    We advance three propositions in this paper. First, teaching international business (IB) at any level needs to be theoretically driven, using mainstream frameworks to organize thinking. Second, these frameworks need to be made relevant to the experiences of the students; for example, by using them in case studies. Third, these parameters of rigor…

  19. Intraspecific variation in the use of water sources by the circum-Mediterranean conifer Pinus halepensis.

    PubMed

    Voltas, Jordi; Lucabaugh, Devon; Chambel, Maria Regina; Ferrio, Juan Pedro

    2015-12-01

    The relevance of interspecific variation in the use of plant water sources has been recognized in drought-prone environments. By contrast, the characterization of intraspecific differences in water uptake patterns remains elusive, although preferential access to particular soil layers may be an important adaptive response for species along aridity gradients. Stable water isotopes were analysed in soil and xylem samples of 56 populations of the drought-avoidant conifer Pinus halepensis grown in a common garden test. We found that most populations reverted to deep soil layers as the main plant water source during seasonal summer droughts. More specifically, we detected a clear geographical differentiation among populations in water uptake patterns even under relatively mild drought conditions (early autumn), with populations originating from more arid regions taking up more water from deep soil layers. However, the preferential access to deep soil water was largely independent of aboveground growth. Our findings highlight the high plasticity and adaptive relevance of the differential access to soil water pools among Aleppo pine populations. The observed ecotypic patterns point to the adaptive relevance of resource investment in deep roots as a strategy towards securing a source of water in dry environments for P. halepensis. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.

  20. Systematic literature searching in policy relevant, inter-disciplinary reviews: an example from culture and sport.

    PubMed

    Schucan Bird, Karen; Tripney, Janice

    2011-09-01

Within the systematic review process, the searching phase is critical to the final synthesis product, its use and value. Yet, relatively little is known about the utility of different search strategies for reviews of complex, inter-disciplinary evidence. This article used a recently completed programme of work on cultural and sporting engagement to conduct an empirical evaluation of a comprehensive search strategy. Ten different types of search source were evaluated, according to three dimensions: (i) effectiveness in identifying relevant studies; (ii) efficiency in identifying studies; and (iii) adding value by locating studies that were not identified by any other sources. The study found that general bibliographic databases and specialist databases ranked the highest on all three dimensions. Overall, websites and journals were the next most valuable types of source. For reviewers, these findings highlight that general and specialist databases should remain a core component of the comprehensive search strategy, supplemented with other types of sources that can efficiently identify unique or grey literature. For policy makers and other research commissioners, this study highlights the value of methodological analysis for improving the understanding of, and practice in, policy relevant, inter-disciplinary systematic reviews. Copyright © 2011 John Wiley & Sons, Ltd.

  1. Anthropological Methods Relevant for Journalists.

    ERIC Educational Resources Information Center

    Bird, S. Elizabeth

    1987-01-01

    Discusses the relevance of the anthropological or ethnographic approach to journalism. Suggests ways that an appreciation of this methodology can help journalism students become more effective and perceptive in their future careers by nudging them out of the commonsense work perspective and requiring greater empathy and involvement with sources.…

  2. The Prevailing Catalytic Role of Meteorites in Formamide Prebiotic Processes.

    PubMed

    Saladino, Raffaele; Botta, Lorenzo; Di Mauro, Ernesto

    2018-02-22

    Meteorites are consensually considered to be involved in the origin of life on this Planet for several functions and at different levels: (i) as providers of impact energy during their passage through the atmosphere; (ii) as agents of geodynamics, intended both as starters of the Earth's tectonics and as activators of local hydrothermal systems upon their fall; (iii) as sources of organic materials, at varying levels of limited complexity; and (iv) as catalysts. The consensus about the relevance of these functions differs. We focus on the catalytic activities of the various types of meteorites in reactions relevant for prebiotic chemistry. Formamide was selected as the chemical precursor and various sources of energy were analyzed. The results show that all the meteorites and all the different energy sources tested actively afford complex mixtures of biologically-relevant compounds, indicating the robustness of the formamide-based prebiotic chemistry involved. Although in some cases the yields of products are quite small, the diversity of the detected compounds of biochemical significance underlines the prebiotic importance of meteorite-catalyzed condensation of formamide.

  3. Accurate estimation of seismic source parameters of induced seismicity by a combined approach of generalized inversion and genetic algorithm: Application to The Geysers geothermal area, California

    NASA Astrophysics Data System (ADS)

    Picozzi, M.; Oth, A.; Parolai, S.; Bindi, D.; De Landro, G.; Amoroso, O.

    2017-05-01

The accurate determination of stress drop, seismic efficiency, and how source parameters scale with earthquake size is an important issue for seismic hazard assessment of induced seismicity. We propose an improved nonparametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique together with genetic algorithms. In the first step of the analysis the generalized inversion technique allows for an effective correction of waveforms for attenuation and site contributions. Then, the retrieved source spectra are inverted by a nonlinear sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We therefore investigate the earthquake source characteristics of 633 induced earthquakes (Mw 2-3.8) recorded at The Geysers geothermal field (California) by a dense seismic network (i.e., 32 stations, more than 17,000 velocity records). We find non-self-similar behavior, empirical source spectra that require an ω^(-γ) source model with γ > 2 to be well fit, and small radiation efficiency η_SW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes, and that the proportion of high-frequency energy radiation and the amount of energy required to overcome friction or to create new fracture surfaces change with earthquake size. Furthermore, we also observe two distinct families of events with peculiar source parameters: one suggests the reactivation of deep structures linked to the regional tectonics, while the other supports the idea of an important role of steeply dipping faults in the fluid pressure diffusion.

  4. Comparison of TG-43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes.

    PubMed

    Zaker, Neda; Zehtabian, Mehdi; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S

    2016-03-08

Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies in some of the cross-section files have led to errors in ¹²⁵I and ¹⁰³Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources calculated with three different versions of the MCNP code - MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as ¹²⁵I and ¹⁰³Pd there are discrepancies in g_L(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and the other codes at a distance of 6 cm for ¹⁰³Pd and 10 cm for ¹²⁵I from the source, respectively. However, for higher energy sources, the discrepancies in g_L(r) values are less than 1.1% for ¹⁹²Ir and less than 1.2% for ¹³⁷Cs between the three codes.

  5. Monte Carlo calculated TG-60 dosimetry parameters for the β⁻ emitter ¹⁵³Sm brachytherapy source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadeghi, Mahdi; Taghdiri, Fatemeh; Hamed Hosseini, S.

Purpose: The formalism recommended by Task Group 60 (TG-60) of the American Association of Physicists in Medicine (AAPM) is applicable for β sources. The radioactive, biocompatible, and biodegradable ¹⁵³Sm glass seed without encapsulation is a β⁻ emitter radionuclide with a short half-life and delivers a high dose rate to the tumor in the millimeter range. This study presents the results of Monte Carlo calculations of the dosimetric parameters for the ¹⁵³Sm brachytherapy source. Methods: Version 5 of the MCNP Monte Carlo radiation transport code was used to calculate two-dimensional dose distributions around the source. The dosimetric parameters of the AAPM TG-60 recommendations, including the reference dose rate, the radial dose function, the anisotropy function, and the one-dimensional anisotropy function, were obtained. Results: The dose rate value at the reference point was estimated to be 9.21 ± 0.6 cGy h⁻¹ µCi⁻¹. Due to the low energy of the betas emitted from ¹⁵³Sm sources, the dose fall-off profile is sharper than that of other beta emitter sources. The calculated dosimetric parameters in this study are compared to several beta and photon emitting seeds. Conclusions: The results show the advantage of the ¹⁵³Sm source in comparison with the other sources because of the rapid dose fall-off of beta rays and the high dose rate at short distances from the seed. The results would be helpful in the development of radioactive implants using ¹⁵³Sm seeds for brachytherapy treatment.

  6. 50 CFR 648.231 - Closures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Dogfish Monitoring Committee shall identify and review the relevant sources of management uncertainty to... management uncertainty that were considered, technical approaches to mitigating these sources of uncertainty..., DEPARTMENT OF COMMERCE FISHERIES OF THE NORTHEASTERN UNITED STATES Management Measures for the Spiny Dogfish...

  7. A summary report on the search for current technologies and developers to develop depth profiling/physical parameter end effectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Q.H.

    1994-09-12

This report documents the search strategies and results for available technologies and developers to develop tank waste depth profiling/physical parameter sensors. Sources searched include worldwide research reports, technical papers, journals, private industries, and work at Westinghouse Hanford Company (WHC) at the Richland site. Tank waste physical parameters of interest are: abrasiveness, compressive strength, corrosiveness, density, pH, particle size/shape, porosity, radiation, settling velocity, shear strength, shear wave velocity, tensile strength, temperature, viscosity, and viscoelasticity. A list of related articles or sources for each physical parameter is provided.

  8. The Exponent of High-frequency Source Spectral Falloff and Contribution to Source Parameter Estimates

    NASA Astrophysics Data System (ADS)

    Kiuchi, R.; Mori, J. J.

    2015-12-01

As a way to understand the characteristics of the earthquake source, studies of source parameters (such as radiated energy and stress drop) and their scaling are important. To estimate source parameters reliably, we must often use an appropriate source spectrum model; the omega-square model is the most frequently used. In this model, the spectrum is flat at lower frequencies and the falloff is proportional to the angular frequency squared. However, some studies (e.g. Allmann and Shearer, 2009; Yagi et al., 2012) reported that the exponent of the high-frequency falloff is other than -2. Therefore, in this study we estimate the source parameters using a spectral model for which the falloff exponent is not fixed. We analyze the mainshock and larger aftershocks of the 2008 Iwate-Miyagi Nairiku earthquake. Firstly, we calculate the P wave and SH wave spectra using empirical Green functions (EGF) to remove the path effects (such as attenuation) and site effects. For the EGF event, we select a smaller earthquake that is highly correlated with the target event. To obtain stable results, we calculate the spectral ratios using multitaper spectral analysis (Prieto et al., 2009) and then take the geometric mean over multiple stations. Finally, using the obtained spectral ratios, we perform a grid search to determine the high-frequency falloff exponents as well as the corner frequencies of both events. Our results indicate the high-frequency falloff exponent is often less than 2.0. We do not observe any regional, focal-mechanism, or depth dependence for the falloff exponent. In addition, our estimated corner frequencies and falloff exponents are consistent between the P wave and SH wave analyses. In our presentation, we show differences in estimated source parameters using a fixed omega-square model and a model allowing variable high-frequency falloff.
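The grid search over corner frequencies and a variable falloff exponent can be sketched on a synthetic spectral ratio (a simplified single-exponent ratio model with invented parameter grids; not the authors' code):

```python
import numpy as np

def fit_spectral_ratio(freqs, ratio, moment_ratio):
    """Grid search over the corner frequencies of the target and EGF events
    and a shared high-frequency falloff exponent gamma (not fixed at 2).
    Sketch of the general approach described in the abstract."""
    fcs = np.logspace(-1, 1, 40)        # candidate corner frequencies, Hz
    gammas = np.arange(1.0, 3.01, 0.1)  # candidate falloff exponents
    best = (np.inf, None, None, None)
    for g in gammas:
        for fc1 in fcs:                 # target-event corner frequency
            for fc2 in fcs:             # EGF-event corner frequency
                model = moment_ratio * (1 + (freqs / fc2) ** g) / (1 + (freqs / fc1) ** g)
                mis = np.sum((np.log(ratio) - np.log(model)) ** 2)
                if mis < best[0]:
                    best = (mis, fc1, fc2, g)
    return best[1:]                     # fc_target, fc_egf, gamma
```

Taking the misfit in log-spectral space, as here, weights all frequency decades evenly, which is common practice for spectral-ratio fitting.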

  9. Longitudinal Evaluation of Cornea With Swept-Source Optical Coherence Tomography and Scheimpflug Imaging Before and After Lasik

    PubMed Central

    Chan, Tommy C.Y.; Biswas, Sayantan; Yu, Marco; Jhanji, Vishal

    2015-01-01

Swept-source optical coherence tomography (OCT) is the latest advancement in anterior segment imaging. There are limited data regarding its performance after laser in situ keratomileusis (LASIK). We compared the reliability of swept-source OCT and Scheimpflug imaging for evaluation of corneal parameters in refractive surgery candidates with myopia or myopic astigmatism. Three consecutive measurements were obtained preoperatively and 1 year postoperatively using swept-source OCT and Scheimpflug imaging. The study parameters included central corneal thickness (CCT), thinnest corneal thickness (TCT), keratometry at steep (Ks) and flat (Kf) axes, mean keratometry (Km), and anterior and posterior best fit spheres (Ant and Post BFS). The main outcome measure was the reliability of measurements before and after LASIK, evaluated using the intraclass correlation coefficient (ICC) and reproducibility coefficients (RC). Associations of the mean values of the corneal parameters, and of their variance heterogeneity, with age, spherical equivalent (SEQ), and residual bed thickness (RBT) were analyzed. Twenty-six right eyes of 26 participants (mean age, 32.7 ± 6.9 yrs; mean SEQ, −6.27 ± 1.67 D) were included. Preoperatively, swept-source OCT demonstrated significantly higher ICC for Ks, CCT, TCT, and Post BFS (P ≤ 0.016), compared with Scheimpflug imaging. Swept-source OCT demonstrated significantly smaller RC values for CCT, TCT, and Post BFS (P ≤ 0.001). After LASIK, both devices had significant differences in measurements for all corneal parameters (P ≤ 0.015). Swept-source OCT demonstrated a significantly higher ICC and smaller RC for all measurements, compared with Scheimpflug imaging (P ≤ 0.001). Association of variance heterogeneity was found only in pre-LASIK Ant BFS and post-LASIK Post BFS for swept-source OCT, whereas significant association of variance heterogeneity was noted for all measurements except Ks and Km for Scheimpflug imaging. This study reported higher reliability of swept-source OCT for post-LASIK corneal measurements, as compared with Scheimpflug imaging. The reliability of corneal parameters measured with Scheimpflug imaging after LASIK was not consistent across different age, SEQ, and RBT measurements. These factors need to be considered during follow-up and evaluation of post-LASIK patients for further surgical procedures. PMID:26222852

  10. FRUIT: An operational tool for multisphere neutron spectrometry in workplaces

    NASA Astrophysics Data System (ADS)

    Bedogni, Roberto; Domingo, Carles; Esposito, Adolfo; Fernández, Francisco

    2007-10-01

FRUIT (Frascati Unfolding Interactive Tool) is an unfolding code for Bonner sphere spectrometers (BSS) developed, under the Labview environment, at the INFN-Frascati National Laboratory. It models a generic neutron spectrum as the superposition of up to four components (thermal, epithermal, fast and high energy), fully defined by up to seven positive parameters. Different physical models are available to unfold the sphere counts, covering the majority of the neutron spectra encountered in workplaces. The iterative algorithm uses Monte Carlo methods to vary the parameters and derive the final spectrum as the limit of a succession of spectra fulfilling the established convergence criteria. Uncertainties in the final results are evaluated taking into consideration the different sources of uncertainty affecting the input data. Relevant features of FRUIT are (1) a high level of interactivity, allowing the user to follow the convergence process, (2) the possibility to modify the convergence tolerances during the run, allowing a rapid achievement of meaningful solutions, and (3) the reduced dependence of the results on the initial hypothesis. This provides a useful instrument for spectrometric measurements in workplaces, where detailed a priori information is usually unavailable. This paper describes the characteristics of the code and presents the results of performance tests over a significant variety of reference and workplace neutron spectra, ranging from thermal up to hundreds of MeV.
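The accept-if-better Monte Carlo variation at the core of such an unfolding can be sketched with a toy two-parameter spectrum model (the response matrix and parameterization here are invented; FRUIT itself uses up to seven parameters and physically motivated spectrum models):

```python
import numpy as np

def toy_spectrum(p, n_bins=30):
    """Invented two-component spectrum: thermal-like decay plus a fast-like bump,
    parameterized by two positive amplitudes."""
    x = np.linspace(0.0, 1.0, n_bins)
    return p[0] * np.exp(-x / 0.2) + p[1] * np.exp(-(x - 0.8) ** 2 / 0.02)

def unfold(counts, response, n_iter=3000, seed=0):
    """FRUIT-style iterative unfolding sketch: randomly perturb the few spectrum
    parameters and keep any perturbation that reduces the chi-square between
    measured and folded sphere counts."""
    rng = np.random.default_rng(seed)
    params = np.ones(2)
    def chi2(p):
        return np.sum((counts - response @ toy_spectrum(p, response.shape[1])) ** 2)
    best = chi2(params)
    for _ in range(n_iter):
        trial = params * np.exp(0.1 * rng.standard_normal(2))  # keeps params positive
        c = chi2(trial)
        if c < best:
            params, best = trial, c
    return params
```

Multiplicative log-normal perturbations enforce the positivity constraint on the parameters that the abstract mentions, without any explicit bounds handling.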

  11. Implementation of two-component advective flow solution in XSPEC

    NASA Astrophysics Data System (ADS)

    Debnath, Dipak; Chakrabarti, Sandip K.; Mondal, Santanu

    2014-05-01

Spectral and temporal properties of black hole candidates can be explained reasonably well using the Chakrabarti-Titarchuk solution of two-component advective flow (TCAF). This model requires two accretion rates, namely the Keplerian disc accretion rate and the halo accretion rate, the latter being composed of a sub-Keplerian, low-angular-momentum flow which may or may not develop a shock. In this solution, the relevant parameter is the relative importance of the halo rate (which creates the Compton cloud region) with respect to the Keplerian disc rate (the soft photon source). Though this model has been used earlier to manually fit data of several black hole candidates quite satisfactorily, we have now, for the first time, made it user friendly by implementing it in the XSPEC software of the Goddard Space Flight Center (GSFC)/NASA. This enables any user to extract physical parameters of the accretion flows, such as the two accretion rates, the shock location, the shock strength, etc., for any black hole candidate. We provide some examples of fitting a few cases using this model. Most importantly, unlike any other model, we show that TCAF is capable of predicting timing properties from the spectral fits, since in TCAF a shock is responsible for deciding spectral slopes as well as quasi-periodic oscillation frequencies.

  12. Perceptually relevant parameters for virtual listening simulation of small room acoustics

    PubMed Central

    Zahorik, Pavel

    2009-01-01

    Various physical aspects of room-acoustic simulation techniques have been extensively studied and refined, yet the perceptual attributes of the simulations have received relatively little attention. Here a method of evaluating the perceptual similarity between rooms is described and tested using 15 small-room simulations based on binaural room impulse responses (BRIRs) either measured from a real room or estimated using simple geometrical acoustic modeling techniques. Room size and surface absorption properties were varied, along with aspects of the virtual simulation including the use of individualized head-related transfer function (HRTF) measurements for spatial rendering. Although differences between BRIRs were evident in a variety of physical parameters, a multidimensional scaling analysis revealed that when at-the-ear signal levels were held constant, the rooms differed along just two perceptual dimensions: one related to reverberation time (T60) and one related to interaural coherence (IACC). Modeled rooms were found to differ from measured rooms in this perceptual space, but the differences were relatively small and should be easily correctable through adjustment of T60 and IACC in the model outputs. Results further suggest that spatial rendering using individualized HRTFs offers little benefit over nonindividualized HRTF rendering for room simulation applications where source direction is fixed. PMID:19640043
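    One of the two perceptual dimensions found above is tied to interaural coherence. A minimal discrete-time sketch of an IACC-style measure (the peak of the normalized cross-correlation between the two ear signals of a BRIR over a small lag window; a simplification of the standard room-acoustics definition, with made-up signals) is:

```python
import math

def iacc(left, right, max_lag=5):
    """Interaural cross-correlation coefficient: peak of the normalized
    cross-correlation of the two ear signals over a small lag window.
    A simplified discrete-time sketch, not the full standard definition."""
    norm = math.sqrt(sum(x * x for x in left) * sum(y * y for y in right))
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        s = sum(x * right[i + lag]
                for i, x in enumerate(left)
                if 0 <= i + lag < len(right))
        best = max(best, abs(s) / norm)
    return best

# Identical ear signals are perfectly coherent (IACC = 1).
sig = [0.0, 1.0, 0.5, 0.25]
coherent = iacc(sig, sig)
```

    In practice the correlation is evaluated over a lag range of about +/-1 ms and often per octave band; the sketch keeps only the core normalized-peak idea.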

  13. Practice parameters facilitating adoption of advanced technologies for enhancing neuropsychological assessment paradigms.

    PubMed

    Parsons, Thomas D; McMahan, Timothy; Kane, Robert

    2018-01-01

    Clinical neuropsychologists have long underutilized computer technologies for neuropsychological assessment. Given the rapid advances in technology (e.g. virtual reality; tablets; iPhones) and the increased accessibility in the past decade, there is an ongoing need to identify optimal specifications for advanced technologies while minimizing potential sources of error. Herein, we discuss concerns raised by a joint American Academy of Clinical Neuropsychology/National Academy of Neuropsychology position paper. Moreover, we proffer parameters for the development and use of advanced technologies in neuropsychological assessments. We aim to first describe software and hardware configurations that can impact a computerized neuropsychological assessment. This is followed by a description of best practices for developers and practicing neuropsychologists to minimize error in neuropsychological assessments using advanced technologies. We also discuss the relevance of weighing potential computer error in light of possible errors associated with traditional testing. Throughout there is an emphasis on the need for developers to provide bench test results for their software's performance on various devices and minimum specifications (documented in manuals) for the hardware (e.g. computer, monitor, input devices) in the neuropsychologist's practice. Advances in computerized assessment platforms offer both opportunities and challenges. The challenges can appear daunting, but they are manageable and require informed consumers who can appreciate the issues and ask pertinent questions when evaluating their options.

  14. Translating Extreme Precipitation Data from Climate Change Projections into Resilient Engineering Applications

    NASA Astrophysics Data System (ADS)

    Cook, L. M.; Samaras, C.; Anderson, C.

    2016-12-01

    Engineers generally use historical precipitation trends to inform assumptions and parameters for long-lived infrastructure designs. However, resilient design calls for the adjustment of current engineering practice to incorporate a range of future climate conditions that are likely to be different than the past. Despite the availability of future projections from downscaled climate models, there remains a considerable mismatch between climate model outputs and the inputs needed in the engineering community to incorporate climate resiliency. These mismatches include differences in temporal and spatial scales, model uncertainties, and a lack of criteria for selection of an ensemble of models. This research addresses the limitations of working with climate data by providing a framework for the use of publicly available downscaled climate projections to inform engineering resiliency. The framework consists of five steps: 1) selecting the data source based on the engineering application, 2) extracting the data at a specific location, 3) validating for performance against observed data, 4) post-processing for bias or scale, and 5) selecting the ensemble and calculating statistics. The framework is illustrated with an example application to extreme precipitation-frequency statistics, the 25-year daily precipitation depth, using four publicly available climate data sources: NARCCAP, USGS, Reclamation, and MACA. The attached figure presents the results for step 5 from the framework, analyzing how the 24H25Y depth changes when the model ensemble is culled based on model performance against observed data, for both post-processing techniques: bias-correction and change factor. Culling the model ensemble increases both the mean and median values for all data sources, and reduces the range for the NARCCAP and MACA ensembles due to the elimination of poorer-performing models and, in some cases, those that predict a decrease in future 24H25Y precipitation volumes.
This result is especially relevant to engineers who wish to reduce the range of the ensemble and remove contradicting models; however, this result is not generalizable for all cases. Finally, this research highlights the need for the formation of an intermediate entity that is able to translate climate projections into relevant engineering information.
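    Steps 4 and 5 of the framework can be illustrated with a minimal sketch of the "change factor" post-processing applied to a 25-year return level. The empirical quantile estimator and all series below are illustrative stand-ins, not the study's data or method:

```python
def return_level(annual_maxima, return_period):
    """Empirical T-year return level: the (1 - 1/T) quantile of the
    sorted annual-maximum series (simple plotting-position estimate)."""
    xs = sorted(annual_maxima)
    p = 1.0 - 1.0 / return_period
    idx = p * (len(xs) - 1)
    lo = int(idx)
    hi = min(lo + 1, len(xs) - 1)
    frac = idx - lo
    return xs[lo] * (1.0 - frac) + xs[hi] * frac

def change_factor_adjust(observed, model_hist, model_future, T=25):
    """'Change factor' post-processing: scale the observed T-year depth
    by the ratio of future-model to historical-model quantiles."""
    cf = return_level(model_future, T) / return_level(model_hist, T)
    return cf * return_level(observed, T)

# Synthetic annual maxima: the future model is the historical model
# scaled up 20%, so the change factor is exactly 1.2.
observed = [float(v) for v in range(50, 100)]
model_hist = [float(v) for v in range(40, 90)]
model_future = [1.2 * v for v in model_hist]
adjusted = change_factor_adjust(observed, model_hist, model_future, T=25)
```

    Bias-correction works the other way around: the model's future series is corrected toward the observed distribution before the return level is computed.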

  15. Does a research article's country of origin affect perception of its quality and relevance? A national trial of US public health researchers.

    PubMed

    Harris, M; Macinko, J; Jimenez, G; Mahfoud, M; Anderson, C

    2015-12-30

    The source of research may influence one's interpretation of it in either negative or positive ways; however, there are no robust experiments to determine how source impacts one's judgment of a research article. We determine the impact of source on respondents' assessment of the quality and relevance of selected research abstracts. Web-based survey design using four healthcare research abstracts previously published and included in Cochrane Reviews. All Council on Education for Public Health-accredited Schools and Programmes of Public Health in the USA; 899 core faculty members (full, associate and assistant professors). Each of the four abstracts appeared with a high-income source half of the time and a low-income source half of the time. Participants each reviewed the same four abstracts but were randomly allocated to receive two abstracts with a high-income source and two with a low-income source, allowing for within-abstract comparison of quality and relevance. The outcome was a within-abstract comparison of participants' rating scores on two measures: strength of the evidence, and likelihood of referral to a peer (1-10 rating scale). ORs were calculated using a generalised ordered logit model adjusting for sociodemographic covariates. Participants who received high-income country source abstracts were equal in all known characteristics to the participants who received the abstracts with low-income country sources. For one of the four abstracts (a randomised, controlled trial of a pharmaceutical intervention), likelihood of referral to a peer was greater if the source was a high-income country (OR 1.28, 1.02 to 1.62, p<0.05). All things being equal, in one of the four abstracts the respondents were influenced by a high-income source in their rating of research abstracts. More research may be needed to explore how the origin of a research article may lead to stereotype activation and application in research evaluation.

  16. Correlation between mass transfer coefficient kLa and relevant operating parameters in cylindrical disposable shaken bioreactors on a bench-to-pilot scale

    PubMed Central

    2013-01-01

    Background Among disposable bioreactor systems, cylindrical orbitally shaken bioreactors show important advantages. They provide a well-defined hydrodynamic flow combined with excellent mixing and oxygen transfer for mammalian and plant cell cultivations. Since there is no known universal correlation between the volumetric mass transfer coefficient for oxygen kLa and relevant operating parameters in such bioreactor systems, the aim of this current study is to experimentally determine a universal kLa correlation. Results A Respiration Activity Monitoring System (RAMOS) was used to measure kLa values in cylindrical disposable shaken bioreactors and Buckingham’s π-Theorem was applied to define a dimensionless equation for kLa. In this way, a scale- and volume-independent kLa correlation was developed and validated in bioreactors with volumes from 2 L to 200 L. The final correlation was used to calculate cultivation parameters at different scales to allow a sufficient oxygen supply of tobacco BY-2 cell suspension cultures. Conclusion The resulting equation can be universally applied to calculate the mass transfer coefficient for any of seven relevant cultivation parameters such as the reactor diameter, the shaking frequency, the filling volume, the viscosity, the oxygen diffusion coefficient, the gravitational acceleration or the shaking diameter within an accuracy range of +/− 30%. To our knowledge, this is the first kLa correlation that has been defined and validated for the cited bioreactor system on a bench-to-pilot scale. PMID:24289110

  17. Correlation between mass transfer coefficient kLa and relevant operating parameters in cylindrical disposable shaken bioreactors on a bench-to-pilot scale.

    PubMed

    Klöckner, Wolf; Gacem, Riad; Anderlei, Tibor; Raven, Nicole; Schillberg, Stefan; Lattermann, Clemens; Büchs, Jochen

    2013-12-02

    Among disposable bioreactor systems, cylindrical orbitally shaken bioreactors show important advantages. They provide a well-defined hydrodynamic flow combined with excellent mixing and oxygen transfer for mammalian and plant cell cultivations. Since there is no known universal correlation between the volumetric mass transfer coefficient for oxygen kLa and relevant operating parameters in such bioreactor systems, the aim of this current study is to experimentally determine a universal kLa correlation. A Respiration Activity Monitoring System (RAMOS) was used to measure kLa values in cylindrical disposable shaken bioreactors and Buckingham's π-Theorem was applied to define a dimensionless equation for kLa. In this way, a scale- and volume-independent kLa correlation was developed and validated in bioreactors with volumes from 2 L to 200 L. The final correlation was used to calculate cultivation parameters at different scales to allow a sufficient oxygen supply of tobacco BY-2 cell suspension cultures. The resulting equation can be universally applied to calculate the mass transfer coefficient for any of seven relevant cultivation parameters such as the reactor diameter, the shaking frequency, the filling volume, the viscosity, the oxygen diffusion coefficient, the gravitational acceleration or the shaking diameter within an accuracy range of +/- 30%. To our knowledge, this is the first kLa correlation that has been defined and validated for the cited bioreactor system on a bench-to-pilot scale.
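    The form of such a dimensionless correlation can be sketched as below. The dimensionless groups are plausible choices built from the seven listed operating parameters, and the coefficient and exponents are placeholders, not the values fitted in the paper:

```python
import math

def kla_correlation(d, n, V_L, nu, D, g, d_0,
                    C=0.5, a=1.0, b=0.5, c=0.25, e=-0.4):
    """Buckingham-pi style kLa correlation built from the seven operating
    parameters named in the abstract: reactor diameter d (m), shaking
    frequency n (1/s), filling volume V_L (m^3), kinematic viscosity nu
    (m^2/s), oxygen diffusion coefficient D (m^2/s), gravitational
    acceleration g (m/s^2) and shaking diameter d_0 (m).  C and the
    exponents a, b, c, e are illustrative placeholders."""
    Re = n * d ** 2 / nu                      # shaking Reynolds number
    Sc = nu / D                               # Schmidt number
    Fr = (2.0 * math.pi * n) ** 2 * d_0 / g   # shaking Froude number
    V_star = V_L / d ** 3                     # dimensionless filling volume
    # dimensionless mass-transfer group converted back to kLa (1/s) via D/d^2
    return C * Re ** a * Sc ** b * Fr ** c * V_star ** e * D / d ** 2

# Illustrative bench-scale numbers (SI units).
kla = kla_correlation(d=0.15, n=2.0, V_L=2e-3, nu=1e-6, D=2e-9, g=9.81, d_0=0.05)
```

    The point of the dimensionless form is scale independence: once C and the exponents are fitted at one scale, the same equation predicts kLa at 2 L or 200 L, as validated in the study.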

  18. Outcome quality of in-patient cardiac rehabilitation in elderly patients--identification of relevant parameters.

    PubMed

    Salzwedel, Annett; Nosper, Manfred; Röhrig, Bernd; Linck-Eleftheriadis, Sigrid; Strandt, Gert; Völler, Heinz

    2014-02-01

    Outcome quality management requires the consecutive registration of defined variables. The aim was to identify relevant parameters in order to objectively assess the in-patient rehabilitation outcome. From February 2009 to June 2010 1253 patients (70.9 ± 7.0 years, 78.1% men) at 12 rehabilitation clinics were enrolled. Items concerning sociodemographic data, the impairment group (surgery, conservative/interventional treatment), cardiovascular risk factors, structural and functional parameters and subjective health were tested in respect of their measurability, sensitivity to change and their propensity to be influenced by rehabilitation. The majority of patients (61.1%) were referred for rehabilitation after cardiac surgery, 38.9% after conservative or interventional treatment for an acute coronary syndrome. Functionally relevant comorbidities were seen in 49.2% (diabetes mellitus, stroke, peripheral artery disease, chronic obstructive lung disease). In three key areas 13 parameters were identified as being sensitive to change and subject to modification by rehabilitation: cardiovascular risk factors (blood pressure, low-density lipoprotein cholesterol, triglycerides), exercise capacity (resting heart rate, maximal exercise capacity, maximal walking distance, heart failure, angina pectoris) and subjective health (IRES-24 (indicators of rehabilitation status): pain, somatic health, psychological well-being and depression as well as anxiety on the Hospital Anxiety and Depression Scale). The outcome of in-patient rehabilitation in elderly patients can be comprehensively assessed by the identification of appropriate key areas, that is, cardiovascular risk factors, exercise capacity and subjective health. This may well serve as a benchmark for internal and external quality management.

  19. Progress in Mirror-Based Fusion Neutron Source Development.

    PubMed

    Anikeev, A V; Bagryansky, P A; Beklemishev, A D; Ivanov, A A; Kolesnikov, E Yu; Korzhavina, M S; Korobeinikova, O A; Lizunov, A A; Maximov, V V; Murakhtin, S V; Pinzhenin, E I; Prikhodko, V V; Soldatkina, E I; Solomakhin, A L; Tsidulko, Yu A; Yakovlev, D V; Yurov, D V

    2015-12-04

    The Budker Institute of Nuclear Physics, in worldwide collaboration, has developed a project of a 14 MeV neutron source for fusion material studies and other applications. The projected neutron source of the plasma type is based on the gas dynamic trap (GDT), which is a special magnetic mirror system for plasma confinement. Essential progress in plasma parameters has been achieved in recent experiments at the GDT facility in the Budker Institute, which is a hydrogen (deuterium) prototype of the source. Stable confinement of hot-ion plasmas with a relative pressure exceeding 0.5 was demonstrated. The electron temperature was increased up to 0.9 keV in the regime with additional electron cyclotron resonance heating (ECRH) of moderate power. These parameters are record values for axisymmetric open mirror traps. These achievements raise GDT-based neutron source projects to a higher level of competitiveness and make it possible to construct a source with parameters suitable for materials testing today. The paper presents the progress in experimental studies and numerical simulations of the mirror-based fusion neutron source and its possible applications, including a fusion material test facility and a fusion-fission hybrid system.

  20. Final Report (OO-ERD-056) MEDIOS: Modeling Earth Deformation Using Interferometric Observations from Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vincent, P; Walter, B; Zucca, J

    2002-01-29

    This final report summarizes the accomplishments of the 2-year LDRD-ER project ''MEDIOS: Modeling Earth Deformation using Interferometric Observations from Space'' (00-ERD-056), which began in FY00 and ended in FY01. The structure of this report consists of this summary part plus two separate journal papers, each having its own UCRL number, which document in more detail the major results in two (of three) major categories of this study. The two categories and their corresponding paper titles are (1) Seismic Hazard Mitigation (''Aseismic Creep Events along the Southern San Andreas Fault System'') and (2) Ground-based Nuclear Explosion Monitoring, or GNEM (''New Signatures of Underground Nuclear Tests Revealed by Satellite Radar Interferometry''). The third category, Energy Exploitation Applications, does not have a separate journal article associated with it but is described briefly. The purpose of this project was to develop a capability within the Geophysics and Global Security Division to process and analyze InSAR data for the purposes of constructing more accurate ground deformation source models relevant to Hazards, Energy, and NAI applications. Once this was accomplished, an inversion tool was to be created that could be applied to many different types (sources) of surface deformation so that accurate source parameters could be determined for a variety of subsurface processes of interest to customers of the GGS Division. This new capability was desired to help attract new project funding for the division.

  1. Development of cost-effective media to increase the economic potential for larger-scale bioproduction of natural food additives by Lactobacillus rhamnosus , Debaryomyces hansenii , and Aspergillus niger.

    PubMed

    Salgado, José Manuel; Rodríguez, Noelia; Cortés, Sandra; Domínguez, José Manuel

    2009-11-11

    Yeast extract (YE) is the most common nitrogen source in a variety of bioprocesses in spite of its high cost. Therefore, the use of YE in culture media is one of the major technical hurdles to be overcome for the development of low-cost fermentation routes, making the search for alternative, cheaper nitrogen sources particularly desirable. The aim of the current study is to develop cost-effective media based on corn steep liquor (CSL) and locally available vinasses in order to increase the economic potential for larger-scale bioproduction. Three microorganisms were evaluated: Lactobacillus rhamnosus, Debaryomyces hansenii, and Aspergillus niger. The amino acid profile and protein concentration were relevant for xylitol and citric acid production by D. hansenii and A. niger, respectively. Metals also played an important role in citric acid production; meanwhile, D. hansenii showed a strong dependence on the initial amount of Mg(2+). Under the best conditions, 28.8 g lactic acid/L (Q(LA) = 0.800 g/L.h, Y(LA/S) = 0.95 g/g), 35.3 g xylitol/L (Q(xylitol) = 0.380 g/L.h, Y(xylitol/S) = 0.69 g/g), and 13.9 g citric acid/L (Q(CA) = 0.146 g/L.h, Y(CA/S) = 0.63 g/g) were obtained. The economic efficiency (E(p/euro)) parameter identifies vinasses as a lower-cost and more effective nutrient source in comparison to CSL.

  2. A Parameter Identification Method for Helicopter Noise Source Identification and Physics-Based Semi-Empirical Modeling

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric, II; Schmitz, Fredric H.

    2010-01-01

    A new physics-based parameter identification method for rotor harmonic noise sources is developed using an acoustic inverse simulation technique. This method allows individual rotor harmonic noise sources to be identified and characterized in terms of their individual non-dimensional governing parameters. The method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. It is shown to match the parametric trends of main rotor Blade-Vortex Interaction (BVI) noise, allowing accurate estimates of BVI noise to be made for operating conditions based on a small number of measurements taken at different operating conditions.

  3. Search for sterile neutrinos in gallium experiments with artificial neutrino sources

    NASA Astrophysics Data System (ADS)

    Gavrin, V. N.; Cleveland, B. T.; Gorbachev, V. V.; Ibragimova, T. V.; Kalikhov, A. V.; Kozlova, Yu. P.; Mirmov, I. N.; Shikhin, A. A.; Veretenkin, E. P.

    2017-11-01

    The possibility of the BEST experiment on electron-neutrino disappearance with an intense artificial 51Cr source of electron neutrinos is considered. BEST has great potential to search for transitions of active neutrinos to sterile states with Δm² ˜ 1 eV² and to set limits on short-baseline electron-neutrino disappearance oscillation parameters. The possibility of further constraining the oscillation parameter region using a 65Zn source is also discussed.
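    The short-baseline disappearance search rests on the standard two-flavor survival probability. A minimal sketch, with illustrative parameter values rather than BEST's projected sensitivity, is:

```python
import math

def survival_probability(delta_m2_eV2, sin2_2theta, L_m, E_MeV):
    """Two-flavor electron-neutrino survival probability,
        P_ee = 1 - sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E),
    with dm^2 in eV^2, baseline L in metres and energy E in MeV."""
    return 1.0 - sin2_2theta * math.sin(1.27 * delta_m2_eV2 * L_m / E_MeV) ** 2

# 51Cr emits ~0.75 MeV neutrinos; with dm^2 ~ 1 eV^2 the oscillation
# length is of order a metre, which is why a compact gallium target
# around the source can resolve the effect (mixing value is made up).
p = survival_probability(1.0, 0.1, L_m=1.0, E_MeV=0.75)
```

    A deficit of captures in an inner versus an outer gallium volume, compared with this prediction, is what would signal active-to-sterile transitions.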

  4. Effect of Different Solar Radiation Data Sources on the Variation of Techno-Economic Feasibility of PV Power System

    NASA Astrophysics Data System (ADS)

    Alghoul, M. A.; Ali, Amer; Kannanaikal, F. V.; Amin, N.; Aljaafar, A. A.; Kadhim, Mohammed; Sopian, K.

    2017-11-01

    The aim of this study is to evaluate the variation in techno-economic feasibility of a PV power system under different data sources of solar radiation. The HOMER simulation tool is used to predict the techno-economic feasibility parameters of a PV power system in Baghdad city, Iraq, located at (33.3128° N, 44.3615° E), as a case study. Four data sources of solar radiation, different annual capacity shortage percentages (0, 2.5, 5, and 7.5), and a wide range of daily load profiles (10-100 kWh/day) are implemented. The analyzed parameters of the techno-economic feasibility are COE (/kWh), PV array power capacity (kW), PV electrical production (kWh/year), number of batteries and battery lifetime (year). The main results of the study revealed the following: (1) solar radiation from different data sources led to significant variation in the values of the techno-economic feasibility parameters; therefore, careful attention must be paid to ensure the use of accurate solar input data; (2) average solar radiation from different data sources can be recommended as a reasonable input; (3) as the size of the PV power system increases, the effect of different data sources of solar radiation increases and causes significant variation in the values of the techno-economic feasibility parameters.
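    The COE figure that HOMER reports can be approximated by a simple levelized-cost calculation. The sketch below uses the usual capital-recovery-factor formula with illustrative inputs, not the study's system costs:

```python
def cost_of_energy(capital, om_per_year, lifetime_years, rate, kwh_per_day):
    """Levelized cost of energy in the spirit of a COE output: capital
    annualized with the capital-recovery factor CRF = i(1+i)^N/((1+i)^N - 1),
    plus yearly O&M, divided by the energy served per year.  All the
    numbers used below are illustrative, not from the study."""
    crf = rate * (1.0 + rate) ** lifetime_years / ((1.0 + rate) ** lifetime_years - 1.0)
    return (capital * crf + om_per_year) / (kwh_per_day * 365.0)

# A small PV/battery system sized for a 20 kWh/day load.
coe = cost_of_energy(capital=30000.0, om_per_year=500.0,
                     lifetime_years=25, rate=0.06, kwh_per_day=20.0)
```

    Because the numerator is fixed cost while the denominator is energy served, a different solar-radiation input changes the PV array size needed for the same load, which is how the choice of data source propagates into COE.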

  5. Hydrogen isotopes transport parameters in fusion reactor materials

    NASA Astrophysics Data System (ADS)

    Serra, E.; Benamati, G.; Ogorodnikova, O. V.

    1998-06-01

    This work presents a review of hydrogen isotopes-materials interactions in various materials of interest for fusion reactors. The relevant parameters cover mainly diffusivity, solubility, trap concentration and energy difference between trap and solution sites. The list of materials includes the martensitic steels (MANET, Batman and F82H-mod.), beryllium, aluminium, beryllium oxide, aluminium oxide, copper, tungsten and molybdenum. Some experimental work on the parameters that describe the surface effects is also mentioned.

  6. Modulating Wnt Signaling Pathway to Enhance Allograft Integration in Orthopedic Trauma Treatment

    DTIC Science & Technology

    2013-10-01

    presented below. Quantitative output provides an extensive set of data but we have chosen to present the most relevant parameters that are reflected in... multiple parameters. Most samples have been mechanically tested and data extracted for multiple parameters. Histological evaluation of subset of... Sumner, D. R. Saline Irrigation Does Not Affect Bone Formation or Fixation Strength of Hydroxyapatite/Tricalcium Phosphate-Coated Implants in a Rat Model

  7. Infrared line parameters at low temperatures relevant to planetary atmospheres

    NASA Technical Reports Server (NTRS)

    Varanasi, Prasad

    1990-01-01

    Employing the techniques described in several publications for measuring infrared lineshifts, linewidths and line intensities with a tunable diode laser, these parameters were measured for lines in the important infrared bands of several molecules of interest to the planetary astronomer, at low temperatures relevant to planetary atmospheres, using He, Ne, Ar, H2, N2, O2, and air as the perturbers. In addition to providing many original data on the temperature dependence of the intensities and linewidths, this work included the first measurement of the temperature dependence of the collision-induced lineshift of an infrared line, and showed that it is markedly different from that of the corresponding collision-broadened linewidth.

  8. Broadening sources of Dignity and Affirmation in Work and Relationship

    PubMed Central

    Byars-Winston, Angela

    2012-01-01

    This article builds on assertions in Richardson’s (2012, this issue) Major Contribution on counseling for work and relationship. In this reaction, I expand on the relevance and potential of the counseling for work and relationship perspective to enrich the field of counseling psychology. My comments focus on three considerations to further extend the cultural relevance of Richardson’s work and relationship perspective: (1) broadening sources of dignity, (2) centering knowledge of marginalized communities, and (3) promoting psychologists’ critical consciousness. Richardson’s perspective holds great promise for being a guiding heuristic to inform counseling psychology research, theory, and practice. PMID:22563131

  9. Fecal Pollution of Water

    EPA Science Inventory

    Fecal pollution of water from a health point of view is the contamination of water with disease-causing organisms (pathogens) that may inhabit the gastrointestinal tract of mammals, but with particular attention to human fecal sources as the most relevant source of human illnesse...

  10. Fecal Pollution of Water.

    EPA Science Inventory

    Fecal pollution of water from a health point of view is the contamination of water with disease-causing organisms (pathogens) that may inhabit the gastrointestinal tract of mammals, but with particular attention to human fecal sources as the most relevant source of human illnesse...

  11. The Cancer Genome Atlas Clinical Explorer: a web and mobile interface for identifying clinical-genomic driver associations.

    PubMed

    Lee, HoJoon; Palm, Jennifer; Grimes, Susan M; Ji, Hanlee P

    2015-10-27

    The Cancer Genome Atlas (TCGA) project has generated genomic data sets covering over 20 malignancies. These data provide valuable insights into the underlying genetic and genomic basis of cancer. However, exploring the relationship among TCGA genomic results and clinical phenotype remains a challenge, particularly for individuals lacking formal bioinformatics training. Overcoming this hurdle is an important step toward the wider clinical translation of cancer genomic/proteomic data and implementation of precision cancer medicine. Several websites such as the cBio portal or University of California Santa Cruz genome browser make TCGA data accessible but lack interactive features for querying clinically relevant phenotypic associations with cancer drivers. To enable exploration of the clinical-genomic driver associations from TCGA data, we developed the Cancer Genome Atlas Clinical Explorer. The Cancer Genome Atlas Clinical Explorer interface provides a straightforward platform to query TCGA data using one of the following methods: (1) searching for clinically relevant genes, micro RNAs, and proteins by name, cancer types, or clinical parameters; (2) searching for genomic/proteomic profile changes by clinical parameters in a cancer type; or (3) testing two-hit hypotheses. SQL queries run in the background and results are displayed on our portal in an easy-to-navigate interface according to user's input. To derive these associations, we relied on elastic-net estimates of optimal multiple linear regularized regression and clinical parameters in the space of multiple genomic/proteomic features provided by TCGA data. Moreover, we identified and ranked gene/micro RNA/protein predictors of each clinical parameter for each cancer. The robustness of the results was estimated by bootstrapping. 
Overall, we identify associations of potential clinical relevance among genes/micro RNAs/proteins using our statistical analysis from 25 cancer types and 18 clinical parameters that include clinical stage or smoking history. The Cancer Genome Atlas Clinical Explorer enables the cancer research community and others to explore clinically relevant associations inferred from TCGA data. With its accessible web and mobile interface, users can run queries and test hypotheses regarding genomic/proteomic alterations across a broad spectrum of malignancies.
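    The elastic-net machinery mentioned above can be illustrated with a bare coordinate-descent implementation. This is a generic sketch with illustrative data and the standard soft-thresholding update, not the Clinical Explorer's pipeline:

```python
def elastic_net(X, y, lam=0.1, alpha=0.5, iters=200):
    """Plain coordinate-descent elastic net minimizing
        (1/2n)||y - Xb||^2 + lam*(alpha*||b||_1 + (1-alpha)/2*||b||^2).
    X is a list of rows, y a list of targets; a teaching sketch, not a
    production solver."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    soft = lambda r, t: (abs(r) - t) * (1 if r > 0 else -1) if abs(r) > t else 0.0
    for _ in range(iters):
        for j in range(p):
            # correlation of feature j with the partial residual (j excluded)
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * b[k] for k in range(p))
                                 + X[i][j] * b[j]) for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            b[j] = soft(rho, lam * alpha) / (z + lam * (1 - alpha))
    return b

# Toy data: y depends only on the first feature, so the L1 part should
# drive the second coefficient to zero.
X = [[1.0, 1.0], [-1.0, 1.0], [2.0, -1.0], [-2.0, -1.0]]
y = [2.0, -2.0, 4.0, -4.0]
b = elastic_net(X, y, lam=0.1, alpha=0.5)
```

    The L1 term performs the variable selection that ranks gene/micro RNA/protein predictors, while the L2 term stabilizes the fit when predictors are correlated, which is why the combination suits high-dimensional genomic data.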

  12. Formation of manganese nanoclusters in a sputtering/aggregation source and the roles of individual operating parameters

    NASA Astrophysics Data System (ADS)

    Khojasteh, Malak; Kresin, Vitaly V.

    2016-12-01

    We describe the production of size-selected manganese nanoclusters using a dc magnetron sputtering/aggregation source. Since nanoparticle production is sensitive to a range of overlapping operating parameters (in particular, the sputtering discharge power, the inert gas flow rates, and the aggregation length), we focus on a detailed map of the influence of each parameter on the average nanocluster size. In this way it is possible to identify the main contribution of each parameter to the physical processes taking place within the source. The discharge power and argon flow supply the atomic vapor, and argon also plays the crucial role in the formation of condensation nuclei via three-body collisions. However, neither the argon flow nor the discharge power has a strong effect on the average nanocluster size in the exiting beam. Here the defining role is played by the source residence time, which is governed by the helium supply and the aggregation path length. The size of mass-selected nanoclusters was verified by atomic force microscopy of deposited particles.

  13. Noise disturbance in open-plan study environments: a field study on noise sources, student tasks and room acoustic parameters.

    PubMed

    Braat-Eggen, P Ella; van Heijst, Anne; Hornikx, Maarten; Kohlrausch, Armin

    2017-09-01

    The aim of this study is to gain more insight in the assessment of noise in open-plan study environments and to reveal correlations between noise disturbance experienced by students and the noise sources they perceive, the tasks they perform and the acoustic parameters of the open-plan study environment they work in. Data were collected in five open-plan study environments at universities in the Netherlands. A questionnaire was used to investigate student tasks, perceived sound sources and their perceived disturbance, and sound measurements were performed to determine the room acoustic parameters. This study shows that 38% of the surveyed students are disturbed by background noise in an open-plan study environment. Students are mostly disturbed by speech when performing complex cognitive tasks like studying for an exam, reading and writing. Significant but weak correlations were found between the room acoustic parameters and noise disturbance of students. Practitioner Summary: A field study was conducted to gain more insight in the assessment of noise in open-plan study environments at universities in the Netherlands. More than one third of the students was disturbed by noise. An interaction effect was found for task type, source type and room acoustic parameters.

  14. Determining dynamical parameters of the Milky Way Galaxy based on high-accuracy radio astrometry

    NASA Astrophysics Data System (ADS)

    Honma, Mareki; Nagayama, Takumi; Sakai, Nobuyuki

    2015-08-01

    In this paper we evaluate how the dynamical structure of the Galaxy can be constrained by high-accuracy VLBI (Very Long Baseline Interferometry) astrometry such as VERA (VLBI Exploration of Radio Astrometry). We generate simulated samples of maser sources which follow the gas motion caused by a spiral or bar potential, with their distribution similar to those currently observed with VERA and the VLBA (Very Long Baseline Array). We apply Markov chain Monte Carlo analyses to the simulated sample sources to determine the dynamical parameters of the models. We show that one can successfully recover the initial model parameters if astrometric results are obtained for a few hundred sources with currently achieved astrometric accuracy. If astrometric data are available for 500 sources, the expected accuracy of R0 and Θ0 is ˜1% or better, and parameters related to the spiral structure can be constrained to within 10% or better. We also show that the parameter determination accuracy is basically independent of the locations of resonances such as corotation and/or the inner/outer Lindblad resonances. We also discuss the possibility of model selection based on the Bayesian information criterion (BIC), and demonstrate that BIC can be used to discriminate between different dynamical models of the Galaxy.
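    The Markov chain Monte Carlo fitting can be illustrated with a minimal Metropolis sampler applied to a toy problem: recovering a flat rotation speed Θ0 from simulated noisy circular velocities. All numbers are made up and this is not the authors' code:

```python
import math
import random

def metropolis(loglike, init, steps=8000, scale=1.0, seed=7):
    """Minimal Metropolis sampler: propose a Gaussian step, accept with
    probability min(1, exp(delta log-likelihood)).  A sketch of the MCMC
    machinery, using flat (improper) priors."""
    rng = random.Random(seed)
    x, lx = init, loglike(init)
    chain = []
    for _ in range(steps):
        y = x + rng.gauss(0.0, scale)
        ly = loglike(y)
        if math.log(rng.random()) < ly - lx:
            x, lx = y, ly
        chain.append(x)
    return chain

# Toy problem: recover a flat rotation speed Theta0 (km/s) from 100
# noisy circular-velocity "measurements".
rng = random.Random(1)
true_theta0, sigma = 240.0, 5.0
data = [true_theta0 + rng.gauss(0.0, sigma) for _ in range(100)]
loglike = lambda t: -sum((v - t) ** 2 for v in data) / (2.0 * sigma ** 2)
chain = metropolis(loglike, init=200.0)
estimate = sum(chain[2000:]) / len(chain[2000:])  # posterior mean after burn-in
```

    The real analysis samples many parameters at once (R0, Θ0, spiral amplitude and pitch angle, etc.) against hundreds of maser parallaxes and proper motions, but the accept/reject core is the same.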

  15. Assessing and reporting uncertainties in dietary exposure analysis - Part II: Application of the uncertainty template to a practical example of exposure assessment.

    PubMed

    Tennant, David; Bánáti, Diána; Kennedy, Marc; König, Jürgen; O'Mahony, Cian; Kettler, Susanne

    2017-11-01

    A previous publication described methods for assessing and reporting uncertainty in dietary exposure assessments. This follow-up publication uses a case study to develop proposals for representing and communicating uncertainty to risk managers. The food ingredient aspartame is used as the case study, in a simple deterministic model (the EFSA FAIM template) and with more sophisticated probabilistic exposure assessment software (FACET). Parameter and model uncertainties are identified for each modelling approach and tabulated. The relative importance of each source of uncertainty is then evaluated using a semi-quantitative scale and the results are expressed using two different forms of graphical summary. The value of this approach in expressing uncertainties in a manner that is relevant to the exposure assessment and useful to risk managers is then discussed. It was observed that the majority of uncertainties are often associated with the data sources rather than with the model itself. However, differences in modelling methods can have the greatest impact on uncertainties overall, particularly when the underlying data are the same. It was concluded that improved methods for communicating uncertainties to risk managers should be the main focus of future research effort. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
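One way to picture the tabulation step described in this record is a small table of uncertainty sources scored on a semi-quantitative scale. The sources, categories and scores below are invented placeholders in the spirit of the approach, not the paper's actual ratings.

```python
# Semi-quantitative uncertainty tabulation sketch (hypothetical entries).
# Score: estimated impact on the exposure estimate, from -2 (strong
# underestimate) to +2 (strong overestimate); importance is abs(score).
sources = [
    ("Food consumption survey coverage",   "data",  -1),
    ("Ingredient concentration data",      "data",  +2),
    ("Body-weight assumptions",            "data",  -1),
    ("Deterministic model simplification", "model", +1),
]

def dominant(entries):
    """Return the source whose score has the largest magnitude."""
    return max(entries, key=lambda s: abs(s[2]))

def by_category(entries):
    """Group (name, score) pairs by category, e.g. 'data' vs 'model'."""
    grouped = {}
    for name, category, score in entries:
        grouped.setdefault(category, []).append((name, score))
    return grouped

dom = dominant(sources)
tally = by_category(sources)
```

Grouping by category makes the record's observation concrete: here, as in the case study, most listed uncertainties attach to the data sources rather than to the model.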

  16. Biophysical and physiological origins of blood oxygenation level-dependent fMRI signals.

    PubMed

    Kim, Seong-Gi; Ogawa, Seiji

    2012-07-01

    After its discovery in 1990, blood oxygenation level-dependent (BOLD) contrast in functional magnetic resonance imaging (fMRI) has been widely used to map brain activation in humans and animals. Since fMRI relies on signal changes induced by neural activity, its signal source can be complex and is also dependent on imaging parameters and techniques. In this review, we identify and describe the origins of BOLD fMRI signals, including the topics of (1) effects of spin density, volume fraction, inflow, perfusion, and susceptibility as potential contributors to BOLD fMRI, (2) intravascular and extravascular contributions to conventional gradient-echo and spin-echo BOLD fMRI, (3) spatial specificity of hemodynamic-based fMRI related to vascular architecture and intrinsic hemodynamic responses, (4) BOLD signal contributions from functional changes in cerebral blood flow (CBF), cerebral blood volume (CBV), and cerebral metabolic rate of O2 utilization (CMRO2), (5) dynamic responses of BOLD, CBF, CMRO2, and arterial and venous CBV, (6) potential sources of initial BOLD dips, poststimulus BOLD undershoots, and prolonged negative BOLD fMRI signals, (7) dependence of stimulus-evoked BOLD signals on baseline physiology, and (8) the basis of resting-state BOLD fluctuations. These discussions are highly relevant to interpreting BOLD fMRI signals in physiological terms.

  17. Progress of the ELISE test facility: towards one hour pulses in hydrogen

    NASA Astrophysics Data System (ADS)

    Wünderlich, D.; Fantz, U.; Heinemann, B.; Kraus, W.; Riedl, R.; Wimmer, C.; the NNBI Team

    2016-10-01

    In order to fulfil the ITER requirements, the negative hydrogen ion source used for NBI has to deliver a high source performance, i.e. a high extracted negative ion current and simultaneously a low co-extracted electron current, over a pulse length of up to 1 h. Negative ions will be generated by the surface process in a low-temperature, low-pressure hydrogen or deuterium plasma. Therefore, a certain amount of caesium has to be deposited on the plasma grid in order to obtain a low surface work function and consequently a high negative ion production yield. This caesium is re-distributed under the influence of the plasma, resulting in temporal instabilities of the extracted negative ion current and the co-extracted electrons over long pulses. This paper describes experiments performed in hydrogen operation at the half-ITER-size NNBI test facility ELISE in order to develop a caesium conditioning technique for more stable long pulses at an ITER-relevant filling pressure of 0.3 Pa. A significant improvement of the long-pulse stability is achieved. Using different plasma diagnostics, it is demonstrated that this improvement is correlated with the interplay of very small variations in parameters such as the electrostatic potential and the particle densities close to the extraction system.

  18. Application of the DG-1199 methodology to the ESBWR and ABWR.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalinich, Donald A.; Gauntt, Randall O.; Walton, Fotini

    2010-09-01

    Appendix A-5 of Draft Regulatory Guide DG-1199, 'Alternative Radiological Source Term for Evaluating Design Basis Accidents at Nuclear Power Reactors', provides guidance - applicable to RADTRAD MSIV leakage models - for scaling containment aerosol concentration to the expected steam dome concentration in order to preserve the simplified use of the Accident Source Term (AST) in assessing containment performance under assumed design basis accident (DBA) conditions. In this study Economic and Safe Boiling Water Reactor (ESBWR) and Advanced Boiling Water Reactor (ABWR) RADTRAD models are developed using the DG-1199, Appendix A-5 guidance. The models were run using RADTRAD v3.03. Low Population Zone (LPZ), control room (CR), and worst-case 2-hr Exclusion Area Boundary (EAB) doses were calculated and compared to the relevant accident dose criteria in 10 CFR 50.67. For the ESBWR, the dose results were all lower than the MSIV leakage doses calculated by General Electric/Hitachi (GEH) in their licensing technical report. There are no comparable ABWR MSIV leakage doses; however, it should be noted that the ABWR doses are lower than the ESBWR doses. In addition, sensitivity cases were evaluated to ascertain the influence/importance of key input parameters/features of the models.

  20. Osmosis-Based Pressure Generation: Dynamics and Application

    PubMed Central

    Li, Suyi; Billeh, Yazan N.; Wang, K. W.; Mayer, Michael

    2014-01-01

    This paper describes osmotically-driven pressure generation in a membrane-bound compartment while taking into account volume expansion, solute dilution, surface area to volume ratio, membrane hydraulic permeability, and changes in osmotic gradient, bulk modulus, and degree of membrane fouling. The emphasis lies on the dynamics of pressure generation; these dynamics have not previously been described in detail. Experimental results are compared to and supported by numerical simulations, which we make accessible as an open source tool. This approach reveals unintuitive results about the quantitative dependence of the speed of pressure generation on the relevant and interdependent parameters that will be encountered in most osmotically-driven pressure generators. For instance, restricting the volume expansion of a compartment allows it to generate its first 5 kPa of pressure seven times faster than without a restraint. In addition, this dynamics study shows that plants are near-ideal osmotic pressure generators, as they are composed of many small compartments with large surface area to volume ratios and strong cell wall reinforcements. Finally, we demonstrate two applications of an osmosis-based pressure generator: actuation of a soft robot and continuous volume delivery over long periods of time. Neither application needs an external power source; both instead take advantage of the energy released upon watering the pressure generators. PMID:24614529
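The dynamics this record summarises can be sketched with an explicit-Euler integration of a minimal model: water influx driven by (Δπ − P) via the hydraulic permeability, an effective compartment stiffness K converting volume gain into pressure, and van 't Hoff dilution of the osmotic gradient as volume grows. The parameter values below are invented for illustration and are not the paper's; the stiff-vs-soft comparison only reproduces the qualitative effect that restraining volume expansion speeds up early pressure generation.

```python
# Minimal osmotic pressure-generation dynamics (assumed parameters, not the paper's).

R_GAS, T = 8.314, 298.0                # J/(mol*K), K

def simulate(K, hours=10.0, dt=1.0):
    """Return the pressure trace (Pa), sampled every dt seconds."""
    Lp = 1e-12                         # hydraulic permeability, m/(s*Pa) (assumed)
    A = 1e-4                           # membrane area, m^2 (assumed)
    V0 = 1e-6                          # initial compartment volume, m^3 (1 mL)
    n = 1e-3                           # moles of impermeant solute (1 M initially)
    V, trace = V0, []
    for _ in range(int(hours * 3600 / dt)):
        P = K * (V - V0) / V0          # elastic restoring pressure from expansion
        delta_pi = n * R_GAS * T / V   # van 't Hoff osmotic pressure, diluted by inflow
        V += Lp * A * max(delta_pi - P, 0.0) * dt   # water influx step
        trace.append(P)
    return trace

def first_above(trace, p_pa):
    """Index (here: seconds) of the first sample exceeding p_pa."""
    return next(i for i, v in enumerate(trace) if v > p_pa)

stiff = simulate(K=5e7)                # strongly restrained compartment
soft = simulate(K=5e5)                 # weakly restrained compartment
```

In this toy model the restrained (stiff) compartment needs far less volume gain per unit pressure, so it crosses the 5 kPa mark much earlier, mirroring the paper's observation about restraints.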
