Sample records for empirical line method

  1. Feasibility of quasi-random band model in evaluating atmospheric radiance

    NASA Technical Reports Server (NTRS)

    Tiwari, S. N.; Mirakhur, N.

    1980-01-01

    The use of the quasi-random band model in evaluating upwelling atmospheric radiation is investigated. The spectral transmittance and total band absorptance are evaluated for selected molecular bands using the line-by-line model, the quasi-random band model, the exponential sum-fit method, and empirical correlations, and the results are compared with available experimental data. The atmospheric transmittance and upwelling radiance were calculated using the line-by-line and quasi-random band models and compared with the results of an existing program, LOWTRAN. The results obtained by the exponential sum fit and the empirical relations were not in good agreement with experiment, so their use cannot be justified for atmospheric studies. The line-by-line model was found to be the most accurate for atmospheric applications, but its high computational cost makes it impractical. The results of the quasi-random band model compare well with the line-by-line and experimental results, and its use is recommended for evaluating atmospheric radiation.

  2. The integral line-beam method for gamma skyshine analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J.K.; Faw, R.E.; Bassett, M.S.

    1991-03-01

    This paper presents a refinement of a simplified method, based on line-beam response functions, for performing skyshine calculations for shielded and collimated gamma-ray sources. New coefficients for an empirical fit to the line-beam response function are provided, and a prescription for making the response function continuous in energy and emission direction is introduced. For a shielded source, exponential attenuation and a buildup-factor correction for scattered photons in the shield are used. Results of the new integral line-beam method are compared with a variety of benchmark experimental data and calculations and are found to give generally excellent agreement at a small fraction of the computational expense required by other skyshine methods.
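
    A minimal sketch of the shielded-source treatment described in this abstract: uncollided exponential attenuation times a buildup factor for shield-scattered photons. The Taylor two-exponential buildup form and all coefficients below are illustrative assumptions, not the paper's fitted values.

```python
import math

def shielded_intensity(phi0, mu, t, A=10.0, a1=-0.1, a2=0.05):
    """Photon intensity behind a shield: uncollided attenuation times a
    Taylor-form buildup factor B(x) = A*exp(-a1*x) + (1-A)*exp(-a2*x).
    A, a1, a2 are illustrative placeholders, not fitted coefficients."""
    x = mu * t  # shield thickness in mean free paths
    buildup = A * math.exp(-a1 * x) + (1.0 - A) * math.exp(-a2 * x)
    return phi0 * buildup * math.exp(-x)
```

    With no shield (t = 0) the buildup factor reduces to 1 and the source intensity is unchanged; for a finite shield the result exceeds the uncollided value alone, as expected for scattered photons.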

  3. Evaluation of an empirical monitor output estimation in carbon ion radiotherapy.

    PubMed

    Matsumura, Akihiko; Yusa, Ken; Kanai, Tatsuaki; Mizota, Manabu; Ohno, Tatsuya; Nakano, Takashi

    2015-09-01

    A conventional broad-beam method is applied to carbon ion radiotherapy at Gunma University Heavy Ion Medical Center. In this method, accelerated carbon ions are scattered by various beam line devices to form a 3D dose distribution. The physical dose per monitor unit (d/MU) at the isocenter therefore depends on the beam line parameters and must be calibrated by measurement in clinical practice. This study aims to develop a calculation algorithm for d/MU from the beam line parameters. Two major factors, the range shifter dependence and the field aperture effect, are measured with a PinPoint chamber in a water phantom, in a setup identical to that used for monitor calibration in clinical practice. An empirical monitor calibration method based on these measurements is developed, using a simple algorithm that expresses the range shifter dependence with a linear function and the field aperture effect with a double-Gaussian pencil beam distribution. The range shifter dependence and the field aperture effect are evaluated to have errors of 0.2% and 0.5%, respectively. The proposed method estimates d/MU to within 1% of the measured values. Given the measurement deviation of about 0.3%, this accuracy is sufficient for clinical applications. An empirical procedure to estimate d/MU with a simple algorithm is thus established, freeing beam time for more treatments, quality assurance, and other research.
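
    The two-factor model described above can be sketched as follows, assuming (as the abstract states) a linear range-shifter term and a double-Gaussian lateral kernel integrated over a square aperture; all weights, sigmas, and coefficients here are hypothetical placeholders, not the calibrated clinical values.

```python
import math

def field_factor(side_mm, w1=0.9, s1=5.0, s2=30.0):
    """Central-axis dose fraction for a square field of width side_mm,
    from a double-Gaussian lateral pencil-beam kernel (weights and
    sigmas in mm are hypothetical)."""
    def frac(sigma):
        e = math.erf(side_mm / (2.0 * math.sqrt(2.0) * sigma))
        return e * e
    return w1 * frac(s1) + (1.0 - w1) * frac(s2)

def dose_per_mu(rs_mm, side_mm, c0=1.0, c1=-0.002):
    """d/MU model: linear range-shifter dependence times aperture factor."""
    return (c0 + c1 * rs_mm) * field_factor(side_mm)
```

    A very wide field recovers the full central-axis dose (field factor near 1), while small apertures cut off the tails of the wide Gaussian component, reproducing the field aperture effect qualitatively.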

  4. Skyshine line-beam response functions for 20- to 100-MeV photons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brockhoff, R.C.; Shultis, J.K.; Faw, R.E.

    1996-06-01

    The line-beam response function, needed for skyshine analyses based on the integral line-beam method, was evaluated with the MCNP Monte Carlo code for photon energies from 20 to 100 MeV and for source-to-detector distances out to 1,000 m. These results are compared with point-kernel results, and the effects of bremsstrahlung and positron transport in the air are found to be important in this energy range. The three-parameter empirical formula used in the integral line-beam skyshine method was fit to the MCNP results, and values of these parameters are reported for various source energies and angles.
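
    A sketch of fitting a three-parameter empirical curve to Monte Carlo response data, in the spirit of the fit described above. The quadratic-in-log form ln R = ln k + a·x + b·x² is a generic stand-in chosen for illustration; the published response-function formula may differ.

```python
import math

def fit_three_param(x, r):
    """Least-squares fit of ln R = ln k + a*x + b*x^2 (a generic
    three-parameter form, not necessarily the published one)."""
    y = [math.log(v) for v in r]
    n = len(x)
    s = lambda p: sum(v ** p for v in x)
    t = lambda p: sum((v ** p) * w for v, w in zip(x, y))
    # Normal equations for the quadratic, solved by Gauss-Jordan elimination.
    m = [[n,    s(1), s(2), t(0)],
         [s(1), s(2), s(3), t(1)],
         [s(2), s(3), s(4), t(2)]]
    for i in range(3):
        piv = m[i][i]
        m[i] = [v / piv for v in m[i]]
        for j in range(3):
            if j == i:
                continue
            f = m[j][i]
            m[j] = [vj - f * vi for vj, vi in zip(m[j], m[i])]
    lnk, a, b = m[0][3], m[1][3], m[2][3]
    return math.exp(lnk), a, b
```

    On noise-free synthetic data the three parameters are recovered exactly; with Monte Carlo data the same normal equations give the least-squares estimate.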

  5. Systematic review: third-line susceptibility-guided treatment for Helicobacter pylori infection

    PubMed Central

    Puig, Ignasi; López-Góngora, Sheila; Calvet, Xavier; Villoria, Albert; Baylina, Mireia; Sanchez-Delgado, Jordi; Suarez, David; García-Hernando, Victor; Gisbert, Javier P.

    2015-01-01

    Background: Susceptibility-guided therapies (SGTs) have been proposed as preferable to empirical rescue treatments after two treatment failures. The aim of this study was to perform a systematic review and meta-analysis evaluating the effectiveness and efficacy of SGT as third-line therapy. Methods: A systematic search was performed in multiple databases. Studies reporting cure rates of Helicobacter pylori with SGT as third-line therapy were selected. A qualitative analysis describing the current evidence and a pooled mean analysis summarizing the cure rates of SGT in third-line therapy were performed. Results: No randomized controlled trials or comparative studies were found. Four observational studies reported cure rates with SGT in third-line treatment, and three studies that mixed patients receiving second- and third-line treatment also reported cure rates with SGT. Most of the studies included patients only after a culture had already been obtained, so the effectiveness of SGT and empirical therapy has never been compared directly. A pooled mean analysis including four observational studies (283 patients) showed intention-to-treat and per-protocol eradication rates with SGT of 72% (95% confidence interval 56–87%; I2: 92%) and 80% (95% confidence interval 71–90%; I2: 80%), respectively. Conclusions: SGT may be an acceptable option as rescue treatment. However, cure rates are, at best, moderate, and this approach has never been compared with a well-devised empirical therapy. The evidence in favor of SGT as rescue therapy is currently insufficient to recommend its use. PMID:27366212

  6. Comparison of the lifting-line free vortex wake method and the blade-element-momentum theory regarding the simulated loads of multi-MW wind turbines

    NASA Astrophysics Data System (ADS)

    Hauptmann, S.; Bülk, M.; Schön, L.; Erbslöh, S.; Boorsma, K.; Grasso, F.; Kühn, M.; Cheng, P. W.

    2014-12-01

    Design load simulations for wind turbines are traditionally based on the blade-element-momentum theory (BEM). The BEM approach is derived from a simplified representation of the rotor aerodynamics together with several semi-empirical correction models. A more sophisticated approach to account for the complex flow phenomena on wind turbine rotors is the lifting-line free vortex wake method. It is based on a more physics-based representation, especially of the global flow effects, and relies on empirical correction models only for the local flow effects associated with the boundary layer of the rotor blades. In this paper the lifting-line free vortex wake method is compared to a state-of-the-art BEM formulation with regard to aerodynamic and aeroelastic load simulations of the 5-MW UpWind reference wind turbine. Different aerodynamic load situations as well as standardised design load cases that are sensitive to the aeroelastic modelling are evaluated in detail. This benchmark makes use of the AeroModule developed by ECN, which has been coupled to the multibody simulation code SIMPACK.

  7. Establishing Clonal Cell Lines with Endothelial-Like Potential from CD9hi, SSEA-1− Cells in Embryonic Stem Cell-Derived Embryoid Bodies

    PubMed Central

    Lian, Qizhou; Yeo, KengSuan; Que, Jianwen; Tan, EileenKhiaWay; Yu, Fenggang; Yin, Yijun; Salto-Tellez, Manuel; Oakley, Reida Menshawe El; Lim, Sai-Kiang

    2006-01-01

    Background Differentiation of embryonic stem cells (ESCs) into specific cell types with minimal risk of teratoma formation could be efficiently directed by first reducing the differentiation potential of ESCs through the generation of clonal, self-renewing lineage-restricted stem cell lines. Efforts to isolate these stem cells are, however, mired in an impasse where the lack of purified lineage-restricted stem cells has hindered the identification of defining markers for these rare stem cells and, in turn, their isolation. Methodology/Principal Findings We describe here a method for the isolation of clonal lineage-restricted cell lines with endothelial potential from ESCs through a combination of empirical and rational evidence-based methods. Using an empirical protocol that we have previously developed to generate embryo-derived RoSH lines with endothelial potential, we first generated E-RoSH lines from mouse ESC-derived embryoid bodies (EBs). Despite originating from different mouse strains, RoSH and E-RoSH lines have similar gene expression profiles (r2 = 0.93), while the correlation between E-RoSH and ESC profiles was 0.83. In silico gene expression analysis predicted that, like RoSH cells, E-RoSH cells have an increased propensity to differentiate into vasculature. Unlike their parental ESCs, E-RoSH cells did not form teratomas and differentiated efficiently into endothelial-like cells in vivo and in vitro. Gene expression and FACS analysis revealed that RoSH and E-RoSH cells are CD9hi, SSEA-1− while ESCs are CD9lo, SSEA-1+. Isolation of CD9hi, SSEA-1− cells, which constituted 1%–10% of EB-derived cultures, generated an E-RoSH-like culture with an identical E-RoSH-like gene expression profile (r2 = 0.95) and a propensity to differentiate into endothelial-like cells.
Conclusions By combining empirical and rational evidence-based methods, we identified definitive selectable surface antigens for the isolation and propagation of lineage-restricted stem cells with endothelial-like potential from mouse ESCs. PMID:17183690

  8. The detailed balance requirement and general empirical formalisms for continuum absorption

    NASA Technical Reports Server (NTRS)

    Ma, Q.; Tipping, R. H.

    1994-01-01

    Two general empirical formalisms are presented for the spectral density that take into account deviations from the Lorentz line shape in the wing regions of resonance lines. Both formalisms satisfy the detailed balance requirement. Empirical line shape functions, which are essential for providing the continuum absorption at different temperatures in various frequency regions for atmospheric transmission codes, can be obtained by fitting to experimental data.

  9. Method and Apparatus for the Portable Identification Of Material Thickness And Defects Along Uneven Surfaces Using Spatially Controlled Heat Application

    NASA Technical Reports Server (NTRS)

    Reilly, Thomas L. (Inventor); Jacobstein, A. Ronald (Inventor); Cramer, K. Elliott (Inventor)

    2006-01-01

    A method and apparatus for testing a material, such as the water-wall tubes in boilers, includes a portable thermal line heater with radiation shields that control the amount of thermal radiation reaching a thermal imager. A procedure corrects for variations in the initial temperature of the material being inspected. A calibration method determines an equation relating the thickness of the material to the temperatures created by the thermal line heater, using empirical data from tests on specimens for each material type, geometry, density, specific heat, heater traverse speed, and heat intensity.

  10. On the Deduction of Galactic Abundances with Evolutionary Neural Networks

    NASA Astrophysics Data System (ADS)

    Taylor, M.; Diaz, A. I.

    2007-12-01

    A growing number of indicators are now being used with some confidence to measure the metallicity (Z) of photoionisation regions in planetary nebulae, galactic HII regions (GHIIRs), extragalactic HII regions (EGHIIRs) and HII galaxies (HIIGs). However, a universal indicator valid also at high metallicities has yet to be found. Here, we report on a new artificial-intelligence-based approach to determining metallicity indicators that shows promise for providing improved empirical fits. The method hinges on the application of an evolutionary neural network to observational emission line data. The network's DNA, encoded in its architecture, weights and neuron transfer functions, is evolved with a genetic algorithm. Furthermore, selection, operating on a set of 10 distinct neuron transfer functions, means that the empirical relation encoded in the network solution architecture is in functional rather than numerical form. Thus the network solutions provide an equation for the metallicity in terms of line ratios without a priori assumptions. Tapping into the mathematical power offered by this approach, we applied the network to detailed observations of both nebular and auroral emission lines from 0.33 μm to 1 μm for a sample of 96 HII-type regions, and we obtained an empirical relation between Z and S_{23} with a dispersion of only 0.16 dex. We show how the method can be used to identify new diagnostics, as well as to probe the nonlinear relationship thought to exist between the metallicity Z, the ionisation parameter U and the effective (or equivalent) temperature T*.

  11. Quantum chemical calculations for polymers and organic compounds

    NASA Technical Reports Server (NTRS)

    Lopez, J.; Yang, C.

    1982-01-01

    The relativistic effects of the orbiting electrons on a model compound were calculated. The computational method used was based on Modified Neglect of Differential Overlap (MNDO). The compound tetracyanoplatinate was chosen because empirical measurements, and calculations along "classical" (non-relativistic) lines, had already yielded many of its properties. The purpose was to show that for large molecules relativistic effects cannot be ignored, and that including them yields results in closer agreement with empirical measurements. Both the energy band structure and the molecular orbitals are depicted.

  12. Sensitivity analysis for simulating pesticide impacts on honey bee colonies

    EPA Science Inventory

    Background/Question/Methods Regulatory agencies assess risks to honey bees from pesticides through a tiered process that includes predictive modeling with empirical toxicity and chemical data of pesticides as a line of evidence. We evaluate the Varroapop colony model, proposed by...

  13. Short-Circuit Fault Detection and Classification Using Empirical Wavelet Transform and Local Energy for Electric Transmission Line.

    PubMed

    Huang, Nantian; Qi, Jiajin; Li, Fuqing; Yang, Dongfeng; Cai, Guowei; Huang, Guilin; Zheng, Jian; Li, Zhenxin

    2017-09-16

    In order to improve the classification accuracy of recognizing short-circuit faults in electric transmission lines, a novel detection and diagnosis method based on the empirical wavelet transform (EWT) and local energy (LE) is proposed. First, EWT is applied to the original short-circuit fault signals from photoelectric voltage transformers, and the amplitude-modulated-frequency-modulated (AM-FM) mode with a compactly supported Fourier spectrum is extracted. Next, the fault occurrence time is detected from the modulus maxima of the second intrinsic mode function (IMF2) of the three-phase voltage signals processed by EWT. Feature vectors are then constructed by calculating the LE of the fundamental frequency over one period of the three-phase voltage signals after fault onset. Finally, a support vector machine (SVM) classifier trained on the LE feature vectors is used to classify 10 types of short-circuit fault signals. Compared with complementary ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and improved CEEMDAN, EWT localizes frequency content in time more accurately. The LE feature vectors capture the differences in the time-domain energy distribution between the fault types. Experiments on both simulated and real signals demonstrate the validity and effectiveness of the new approach.
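
    The local-energy feature step can be sketched as a single-bin DFT of each phase voltage at the fundamental frequency over one post-fault period; the resulting three-element vectors would then be fed to an SVM. The sampling rate and signals below are synthetic stand-ins, not the paper's data or its exact LE definition.

```python
import cmath, math

def local_energy(signal, fs, f0):
    """LE of the fundamental: squared magnitude of the normalized
    single-bin DFT of the signal at frequency f0 (Goertzel-style)."""
    n = len(signal)
    acc = sum(x * cmath.exp(-2j * math.pi * f0 * k / fs)
              for k, x in enumerate(signal))
    return (abs(acc) / n) ** 2

# Feature vector for one post-fault period of a 50 Hz three-phase system
# (synthetic unit-amplitude voltages; a real pipeline would use the
# EWT-filtered signals and pass these features to an SVM classifier).
fs, f0 = 5000.0, 50.0
n = int(fs / f0)
phases = [[math.sin(2 * math.pi * f0 * k / fs + p) for k in range(n)]
          for p in (0.0, -2 * math.pi / 3, 2 * math.pi / 3)]
features = [local_energy(v, fs, f0) for v in phases]
```

    For a unit-amplitude sinusoid sampled over a whole period the normalized single-bin magnitude is 1/2, so each feature equals 0.25 regardless of the phase offset.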

  14. Short-Circuit Fault Detection and Classification Using Empirical Wavelet Transform and Local Energy for Electric Transmission Line

    PubMed Central

    Huang, Nantian; Qi, Jiajin; Li, Fuqing; Yang, Dongfeng; Cai, Guowei; Huang, Guilin; Zheng, Jian; Li, Zhenxin

    2017-01-01

    In order to improve the classification accuracy of recognizing short-circuit faults in electric transmission lines, a novel detection and diagnosis method based on the empirical wavelet transform (EWT) and local energy (LE) is proposed. First, EWT is applied to the original short-circuit fault signals from photoelectric voltage transformers, and the amplitude-modulated-frequency-modulated (AM-FM) mode with a compactly supported Fourier spectrum is extracted. Next, the fault occurrence time is detected from the modulus maxima of the second intrinsic mode function (IMF2) of the three-phase voltage signals processed by EWT. Feature vectors are then constructed by calculating the LE of the fundamental frequency over one period of the three-phase voltage signals after fault onset. Finally, a support vector machine (SVM) classifier trained on the LE feature vectors is used to classify 10 types of short-circuit fault signals. Compared with complementary ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and improved CEEMDAN, EWT localizes frequency content in time more accurately. The LE feature vectors capture the differences in the time-domain energy distribution between the fault types. Experiments on both simulated and real signals demonstrate the validity and effectiveness of the new approach. PMID:28926953

  15. An experimental system for spectral line ratio measurements in the TJ-II stellarator.

    PubMed

    Zurro, B; Baciero, A; Fontdecaba, J M; Peláez, R; Jiménez-Rey, D

    2008-10-01

    The chord-integrated emissions of spectral lines have been monitored in the TJ-II stellarator using a spectral system with time- and space-scanning capabilities and relative calibration over the entire UV-visible spectral range. This system has been used to study the ratio of lines from different ionization stages of carbon (C(5+) 5290 A and C(4+) 2271 A) for plasma diagnostic purposes. The local emissivity of these ions has been reconstructed, for quasistationary profiles, by means of the Fisher inversion method described previously. The experimental line ratio is being studied empirically, and in parallel a simple spectroscopic model has been developed to account for it. We are investigating whether charge exchange with neutrals and the non-Maxwellian electrons intrinsic to Electron Cyclotron Resonance Heating (ECRH) leave any distinguishable mark on this diagnostic method.

  16. Stage line diagram: an age-conditional reference diagram for tracking development.

    PubMed

    van Buuren, Stef; Ooms, Jeroen C L

    2009-05-15

    This paper presents a method for calculating stage line diagrams, a novel type of reference diagram useful for tracking developmental processes over time. Potential fields of application include dentistry (tooth eruption), oncology (tumor grading, cancer staging), virology (HIV infection and disease staging), psychology (stages of cognitive development), human development (pubertal stages) and chronic diseases (stages of dementia). Transition probabilities between successive stages are modeled as smoothly varying functions of age. Age-conditional references are calculated from the modeled probabilities by the mid-P value. The influence of age can be eliminated by calculating standard deviation scores (SDS). The method is applied to empirical data to produce reference charts on secondary sexual maturation. The mean of the empirical SDS in the reference population is close to zero, whereas the variance depends on age. The stage line diagram provides quick insight into both the status (in SDS) and the tempo (in SDS/year) of an individual child's development. Other measures (e.g. height SDS, body mass index SDS) from the same child can be added to the chart. Diagrams for sexual maturation are available as a web application at http://vps.stefvanbuuren.nl/puberty. The stage line diagram expresses the status and tempo of discrete changes on a continuous scale. Wider application of these scores opens up new analytic possibilities. (c) 2009 John Wiley & Sons, Ltd.
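
    The mid-P step described above can be sketched as follows: given modeled stage probabilities at a child's age, the mid-P value is P(stage < s) + 0.5·P(stage = s), converted to an SDS through the standard normal quantile. The stage probabilities below are made-up examples, not modeled transition probabilities.

```python
from statistics import NormalDist

def midp_sds(stage_probs, observed_stage):
    """Age-conditional SDS: mid-P = P(stage < s) + 0.5*P(stage = s),
    mapped through the standard normal inverse CDF."""
    below = sum(p for s, p in stage_probs.items() if s < observed_stage)
    midp = below + 0.5 * stage_probs[observed_stage]
    return NormalDist().inv_cdf(midp)

# Hypothetical stage probabilities at one age: stages 2-4 occur with
# probabilities 0.2 / 0.5 / 0.3.
probs = {2: 0.2, 3: 0.5, 4: 0.3}
sds = midp_sds(probs, 3)  # mid-P = 0.45, so slightly below the median
```

    Tracking the same child over time and differencing successive SDS values gives the tempo (SDS/year) the abstract refers to.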

  17. A new method to determine the interstellar reddening towards WN stars

    NASA Technical Reports Server (NTRS)

    Conti, Peter S.; Morris, Patrick W.

    1990-01-01

    An empirical approach to determining the reddening of WN stars is presented, in which the measured strengths of the He II emission lines at 1640 and 4686 A are used to estimate the extinction. The He II emission lines at these wavelengths are compared for a number of WN stars in the Galaxy and the LMC. It is shown that the equivalent width ratios are single valued and independent of spectral subtype. The reddening for stars in the Galaxy is derived using a Galactic extinction law and the observed line flux ratios, in good agreement with previous determinations. The possible application of the method to studying the absorption properties of the interstellar medium in more distant galaxies is discussed.
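
    A sketch of the reddening step, assuming a fixed intrinsic He II 1640/4686 ratio and an extinction law expressed as k(λ) = A(λ)/E(B-V); the k values below are illustrative round numbers, not the paper's adopted Galactic-law coefficients.

```python
import math

def ebv_from_heii(ratio_obs, ratio_intr, k1640=8.1, k4686=3.6):
    """E(B-V) from the observed vs. intrinsic He II 1640/4686 flux ratio:
    ratio_obs/ratio_intr = 10**(-0.4 * E(B-V) * (k1640 - k4686)).
    The k coefficients A(lambda)/E(B-V) are illustrative assumptions."""
    return -2.5 * math.log10(ratio_obs / ratio_intr) / (k1640 - k4686)
```

    Because the UV line suffers more extinction than the optical one, an observed ratio below the intrinsic value yields a positive E(B-V), and the relation inverts exactly.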

  18. 4. VIEW OF EMPIRE, STONE CABIN AND TIP TOP MINES. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. VIEW OF EMPIRE, STONE CABIN AND TIP TOP MINES. EMPIRE TAILING PILE IS VISIBLE IN LOWER CENTER (SLOPE WITH ORE CHUTE IS HIDDEN BY TREES ABOVE TAILINGS), TIP TOP IS VISIBLE IN RIGHT THIRD AND SLIGHTLY UPHILL IN ELEVATION FROM UPPER EMPIRE TAILINGS,(TO LOCATE, FIND THE V-SHAPED SPOT OF SNOW JUST BELOW THE RIDGE LINE ON FAR RIGHT OF IMAGE. TIP TOP BUILDING IS VISIBLE IN THE LIGHT AREA BELOW AND SLIGHTLY LEFT OF V-SHAPED SNOW SPOT), AND STONE CABIN II IS ALSO VISIBLE, (TO LOCATE, USE A STRAIGHT EDGE AND ALIGN WITH EMPIRE TAILINGS. THIS WILL DIRECT ONE THROUGH THE EDGE OF STONE CABIN II, WHICH IS THE DARK SPOT JUST BELOW THE POINT WHERE THE RIDGE LINE TREES STOP). STONE CABIN I IS LOCATED IN GENERAL VICINITY OF THE LONE TREE ON FAR LEFT RIDGE LINE. ... - Florida Mountain Mining Sites, Silver City, Owyhee County, ID

  19. Empirical gradient threshold technique for automated segmentation across image modalities and cell lines.

    PubMed

    Chalfoun, J; Majurski, M; Peskin, A; Breen, C; Bajcsy, P; Brady, M

    2015-10-01

    New microscopy technologies are enabling the acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. To retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21,000×21,000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy; (2) is applicable to multiple cell lines, with various densities of cells and cell colonies, and several imaging modalities; (3) can process large data sets in a timely manner; (4) has a low memory footprint; and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meet all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2), and all fail the requirement for robust parameters that do not need re-adjustment over time (requirement 5). We present a novel, empirically derived image gradient threshold selection method for separating foreground and background pixels in an image that meets all the requirements listed above. We quantify the difference between our approach and existing ones in terms of accuracy, execution speed, memory usage and number of adjustable parameters on a reference data set. This reference data set consists of 501 validation images with manually determined segmentations and image sizes ranging from 0.36 Megapixels to 850 Megapixels. It includes four different cell lines and two image modalities: phase contrast and fluorescent.
    Our new technique, called Empirical Gradient Threshold (EGT), is derived from this reference data set with a 10-fold cross-validation method. EGT segments cells or colonies with Dice accuracy indices above 0.92 for all cross-validation data sets. EGT results have also been visually verified on a much larger data set that includes bright field and Differential Interference Contrast (DIC) images, 16 cell lines and 61 time-sequence data sets, for a total of 17,479 images. The method is implemented as an open-source ImageJ plugin and as a standalone executable that can be downloaded from https://isg.nist.gov/. © 2015 The Authors. Journal of Microscopy © 2015 Royal Microscopical Society.
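
    A toy version of gradient-threshold segmentation on a small image: finite-difference gradient magnitude thresholded at a histogram percentile. The percentile rule is a placeholder; the actual EGT derives its threshold empirically from the reference data set rather than from a fixed percentile.

```python
def gradient_threshold_mask(img, percentile=70):
    """Foreground mask: finite-difference gradient magnitude thresholded
    at the given histogram percentile (illustrative selection rule, not
    the published EGT criterion)."""
    h, w = len(img), len(img[0])
    grad = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]
            gy = img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]
            grad[y][x] = (gx * gx + gy * gy) ** 0.5
    flat = sorted(v for row in grad for v in row)
    thr = flat[min(len(flat) - 1, int(len(flat) * percentile / 100))]
    return [[v > thr for v in row] for row in grad]
```

    On a flat background with one bright square, only the pixels bordering the square have nonzero gradient, so the mask traces the object boundary.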

  20. Absorption line indices in the UV. I. Empirical and theoretical stellar population models

    NASA Astrophysics Data System (ADS)

    Maraston, C.; Nieves Colmenárez, L.; Bender, R.; Thomas, D.

    2009-01-01

    Aims: Stellar absorption lines in the optical (e.g. the Lick system) have been extensively studied and constitute an important stellar population diagnostic for galaxies in the local universe and up to moderate redshifts. Proceeding towards higher look-back times, galaxies are younger and the ultraviolet becomes the relevant spectral region, where the dominant stellar populations shine. A comprehensive study of ultraviolet absorption lines in stellar population models is however still lacking. With this in mind, we study absorption line indices in the far and mid ultraviolet in order to determine age and metallicity indicators for UV-bright stellar populations in the local universe as well as at high redshift. Methods: We explore empirical and theoretical spectral libraries and use evolutionary population synthesis to compute synthetic line indices of stellar population models. On the empirical side, we exploit the IUE low-resolution library of stellar spectra and its system of absorption lines, from which we derive analytical functions (fitting functions) describing the strength of stellar line indices as a function of gravity, temperature and metallicity. The fitting functions are entered into an evolutionary population synthesis code in order to compute the integrated line indices of stellar population models. The same line indices are also evaluated directly on theoretical spectral energy distributions of stellar population models based on Kurucz high-resolution synthetic spectra. In order to select indices that can be used as age and/or metallicity indicators for distant galaxies and globular clusters, we compare the models to data for template globular clusters in the Magellanic Clouds with independently known ages and metallicities.
    Results: We provide synthetic line indices in the wavelength range ~1200 Å to ~3000 Å for stellar populations of various ages and metallicities. This adds several new indices to the already well-studied CIV and SiIV absorptions. Based on the comparison with globular cluster data, we select a set of 11 indices blueward of 2000 Å rest-frame that allows us to recover the ages and metallicities of the clusters well. These indices are ideal for studying the ages and metallicities of young galaxies at high redshift. We also provide the synthetic high-resolution stellar population SEDs.
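
    An absorption-line index of the kind discussed above can be sketched as an equivalent width measured against a linear pseudo-continuum defined by two sidebands; the band limits in the example are invented for illustration, not the paper's index definitions.

```python
def line_index(wl, flux, blue, band, red):
    """Equivalent-width style index (in the wavelength units of wl):
    linear pseudo-continuum through the mean points of two sidebands,
    then EW = sum (1 - F/Fc) * dlam over the central bandpass."""
    def mean_in(lo, hi):
        pts = [(w, f) for w, f in zip(wl, flux) if lo <= w <= hi]
        return (sum(w for w, _ in pts) / len(pts),
                sum(f for _, f in pts) / len(pts))
    (wb, fb), (wr, fr) = mean_in(*blue), mean_in(*red)
    slope = (fr - fb) / (wr - wb)
    dlam = wl[1] - wl[0]  # assumes uniform wavelength sampling
    return sum((1.0 - f / (fb + slope * (w - wb))) * dlam
               for w, f in zip(wl, flux) if band[0] <= w <= band[1])
```

    A rectangular dip of depth 0.5 spanning 21 one-Angstrom pixels against a flat unit continuum yields an index of 10.5 Angstrom, as expected from the definition.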

  1. Microfluidic platform for optimization of crystallization conditions

    NASA Astrophysics Data System (ADS)

    Zhang, Shuheng; Gerard, Charline J. J.; Ikni, Aziza; Ferry, Gilles; Vuillard, Laurent M.; Boutin, Jean A.; Ferte, Nathalie; Grossier, Romain; Candoni, Nadine; Veesler, Stéphane

    2017-08-01

    We describe a universal, high-throughput, droplet-based microfluidic platform for crystallization. It is suitable for a multitude of applications owing to its flexibility, ease of use, compatibility with all solvents and low cost. The platform offers four modular functions: droplet formation, on-line characterization, incubation and observation. We use it to generate droplet arrays with a concentration gradient in continuous long tubing, without using surfactant. We control droplet properties (size, frequency and spacing) in long tubing by using hydrodynamic empirical relations. We measure droplet chemical composition using both an off-line and a real-time on-line method. Applying this platform to a challenging chemical system, membrane proteins, we successfully carried out crystallization, suggesting that the platform is likely to perform well in other settings. We validate the platform for fine-gradient screening and optimization of crystallization conditions. Additional on-line detection methods, for instance an on-line diffraction technique, may well be integrated into this platform in the future. We believe this method could find applications in fields such as fluid interaction engineering, live-cell studies and enzyme kinetics.

  2. Accurate ab initio dipole moment surfaces of ozone: First principle intensity predictions for rotationally resolved spectra in a large range of overtone and combination bands.

    PubMed

    Tyuterev, Vladimir G; Kochanov, Roman V; Tashkun, Sergey A

    2017-02-14

    Ab initio dipole moment surfaces (DMSs) of the ozone molecule are computed using the MRCI-SD method with AVQZ, AV5Z, and VQZ-F12 basis sets on a dense grid of about 1950 geometrical configurations. The analytical DMS representation used for the fit of ab initio points behaves better for large nuclear displacements than those of previous studies. Various DMS models were derived and tested. Vibration-rotation line intensities of 16O3 were calculated from these ab initio surfaces by the variational method using two different potential functions determined in our previous works. For the first time, very good agreement of first-principles calculations with experiment was obtained for line-by-line intensities in rotationally resolved ozone spectra over a large far- and mid-infrared range. This includes high overtone and combination bands up to ΔV = 6. A particular challenge was a correct description of the B-type bands (even ΔV3 values), which represented major difficulties for previous ab initio investigations and for the empirical spectroscopic models. The major patterns of various B-type bands were correctly described without empirically adjusted dipole moment parameters. For the 10 μm range, which is of key importance for atmospheric ozone retrievals, our ab initio intensity results are within the experimental error margins. The theoretical values for the strongest lines of the ν3 band lie in general between two successive versions of the HITRAN (HIgh-resolution molecular TRANsmission) empirical database, which correspond to the most extended available sets of observations. The overall qualitative agreement over a large wavenumber range for rotationally resolved cold and hot ozone bands up to about 6000 cm-1 is achieved here for the first time. These calculations reveal that several weak bands are still missing from available spectroscopic databases.

  3. Post-Qualitative Line of Flight and the Confabulative Conversation: A Methodological Ethnography

    ERIC Educational Resources Information Center

    Johansson, Lotta

    2016-01-01

    This paper is a methodological ethnography aiming to highlight the difficulties in using conventional methods in connection with an explorative philosophy: Deleuze and Guattari's. Taking an empirical point of departure in conversations about the future with students in upper secondary school, the struggle to find a scientifically valid label…

  4. Fringe-projection profilometry based on two-dimensional empirical mode decomposition.

    PubMed

    Zheng, Suzhen; Cao, Yiping

    2013-11-01

In 3D shape measurement, deformed fringes often contain low-frequency information degraded by random noise and background intensity, so a new fringe-projection profilometry method is proposed based on 2D empirical mode decomposition (2D-EMD). The fringe pattern is first decomposed into a number of intrinsic mode functions by 2D-EMD. Because the method provides partial noise reduction, the background components can be removed to obtain the fundamental components needed to perform the Hilbert transformation that retrieves the phase information. The 2D-EMD can effectively extract the modulation phase of a single-direction fringe and an inclined fringe pattern because it is a fully 2D analysis method and considers the relationship between adjacent lines of a fringe pattern. In addition, because the method does not add noise repeatedly, as ensemble EMD does, the data processing time is shortened. Computer simulations and experiments prove the feasibility of this method.
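The phase-retrieval step the abstract refers to, a Hilbert transform applied to the background-free fundamental component, can be sketched one fringe line at a time. This is a minimal FFT-based version with illustrative fringe parameters, not the authors' implementation:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform: zero out the negative frequencies."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

def retrieve_phase(fringe_line):
    """Continuous phase of one background-free fringe line."""
    return np.unwrap(np.angle(analytic_signal(fringe_line)))
```

Away from the edges (where the FFT's implicit periodicity leaks in), the unwrapped angle of the analytic signal recovers the modulated carrier phase.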

  5. The methane absorption spectrum near 1.73 μm (5695-5850 cm-1): Empirical line lists at 80 K and 296 K and rovibrational assignments

    NASA Astrophysics Data System (ADS)

    Ghysels, M.; Mondelain, D.; Kassi, S.; Nikitin, A. V.; Rey, M.; Campargue, A.

    2018-07-01

The methane absorption spectrum is studied at 297 K and 80 K in the center of the Tetradecad between 5695 and 5850 cm-1. The spectra are recorded by differential absorption spectroscopy (DAS) with a noise-equivalent absorption of about αmin ≈ 1.5 × 10-7 cm-1. Two empirical line lists are constructed, including about 4000 and 2300 lines at 297 K and 80 K, respectively. Lines due to 13CH4 present in natural abundance were identified by comparison with a spectrum of pure 13CH4 recorded under the same temperature conditions. About 1700 empirical values of the lower-state energy level, Eemp, were derived from the ratios of the line intensities at 80 K and 296 K. They provide an accurate temperature dependence for most of the absorption in the region (93% and 82% at 80 K and 296 K, respectively). The quality of the derived empirical values is illustrated by the clear propensity of the corresponding lower-state rotational quantum number, Jemp, to be close to integer values. Using an effective Hamiltonian model derived from a previously published ab initio potential energy surface, about 2060 lines are rovibrationally assigned, adding about 1660 new assignments to those provided in the HITRAN database for 12CH4 in the region.
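The two-temperature method described above follows from the Boltzmann factor in the intensity ratio. The sketch below is a generic illustration, not the authors' code: it neglects the stimulated-emission correction, and the partition-sum values used in the check are hypothetical.

```python
import math

C2 = 1.4387769  # second radiation constant hc/k, in cm*K

def lower_state_energy(ratio, t_cold, t_warm, q_cold, q_warm):
    """Empirical lower-state energy E'' (cm-1) from the ratio of a line's
    intensities S(t_cold)/S(t_warm), neglecting stimulated emission.
    q_cold, q_warm are the total internal partition sums at each temperature.
    """
    # S(T) ~ exp(-C2 * E'' / T) / Q(T)  =>  solve the log-ratio for E''
    return (math.log(q_warm / q_cold) - math.log(ratio)) / (
        C2 * (1.0 / t_cold - 1.0 / t_warm))
```

Lines with low E″ strengthen on cooling while high-E″ lines fade, which is what makes the 80 K / 296 K intensity ratio sensitive to the lower-state energy.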

  6. Observations and NLTE modeling of Ellerman bombs

    NASA Astrophysics Data System (ADS)

    Berlicki, A.; Heinzel, P.

    2014-07-01

Context. Ellerman bombs (EBs) are short-lived, compact, and spatially well-localized emission structures that are well observed in the wings of the hydrogen Hα line. EBs are also observed in the chromospheric CaII lines and in UV continua as bright points located within active regions. Hα line profiles of EBs show a deep absorption at the line center and enhanced emission in the line wings, with maxima around ±1 Å from the line center. Similar line-profile shapes are observed for the CaII IR line at 8542 Å. In the CaII H and K lines the emission peaks are much stronger, and EB emission is also enhanced in the line center. Aims: It is generally accepted that EBs may be considered compact microflares located in the lower solar atmosphere that contribute to the heating of these low-lying regions, close to the temperature minimum of the atmosphere. However, it is still not clear where exactly the emission of EBs is formed in the solar atmosphere. High-resolution spectrophotometric observations of EBs were used to determine their physical parameters and to construct semi-empirical models. The obtained models allow us to determine the position of EBs in the solar atmosphere, as well as the vertical structure of the activated EB atmosphere. Methods: In our analysis we used observations of EBs obtained in the Hα and CaII H lines with the Dutch Open Telescope (DOT). These one-hour-long simultaneous sequences, obtained with high temporal and spatial resolution, were used to determine the line emissions. To analyze them, we used NLTE numerical codes to construct a grid of 243 semi-empirical models simulating EB structures. In this way, the observed emission could be compared with the synthetic line spectra calculated for all such models. Results: For a specific model we found reasonable agreement between the observed and theoretical emission, and thus we consider such a model a good approximation to EB atmospheres.
This model is characterized by an enhanced temperature in the lower chromosphere and can be considered a compact structure (hot spot) responsible for the emission observed in the wings of chromospheric lines, in particular the Hα and CaII H lines. Conclusions: For the first time, the pair of lines Hα and CaII H was used to construct semi-empirical models of EBs. Our analysis shows that EBs can be described by a "hot spot" model, with a temperature and/or density increase through a few hundred kilometers of atmospheric structure. We confirmed that EBs are located close to the temperature minimum or in the lower chromosphere. Two spectral features (lines in our case), observed simultaneously, significantly strengthen the constraints on a realistic model.

  7. Noise radiation directivity from a wind-tunnel inlet with inlet vanes and duct wall linings

    NASA Technical Reports Server (NTRS)

    Soderman, P. T.; Phillips, J. D.

    1986-01-01

The acoustic radiation patterns from a 1/15th scale model of the Ames 80- by 120-Ft Wind Tunnel test section and inlet have been measured with a noise source installed in the test section. Data were acquired without airflow in the duct. Sound-absorbent inlet vanes oriented parallel to each other, or splayed with a variable incidence relative to the duct long axis, were evaluated along with duct wall linings. Results show that splayed vanes tend to spread the sound to greater angles than those measured with the open inlet. Parallel vanes narrowed the high-frequency radiation pattern. Duct wall linings had a strong effect on acoustic directivity by attenuating wall reflections. Vane insertion loss was measured. Directivity results are compared with existing data from square ducts. Two prediction methods for duct radiation directivity are described: one is an empirical method based on the test data, and the other is an analytical method based on ray acoustics.

  8. BOND: A quantum of solace for nebular abundance determinations

    NASA Astrophysics Data System (ADS)

    Vale Asari, N.; Stasińska, G.; Morisset, C.; Cid Fernandes, R.

    2017-11-01

The abundances of chemical elements other than hydrogen and helium in a galaxy are the fossil record of its star formation history. Empirical relations such as the mass-metallicity relation are thus seen as guides for studies of the history and chemical evolution of galaxies. Those relations usually rely on nebular metallicities measured with strong-line methods, which assume that H II regions are a one- (or at most two-) parameter family in which the oxygen abundance is the driving quantity. Nature is, however, much more complex than that, and metallicities from strong lines may be strongly biased. We have developed the method BOND (Bayesian Oxygen and Nitrogen abundance Determinations) to simultaneously derive oxygen and nitrogen abundances in giant H II regions by comparing strong and semi-strong observed emission lines to a carefully defined, finely meshed grid of photoionization models. Our code and results are public and available at http://bond.ufsc.br.

  9. Image fusion method based on regional feature and improved bidimensional empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Qin, Xinqiang; Hu, Gang; Hu, Kai

    2018-01-01

    The decomposition of multiple source images using bidimensional empirical mode decomposition (BEMD) often produces mismatched bidimensional intrinsic mode functions, either by their number or their frequency, making image fusion difficult. A solution to this problem is proposed using a fixed number of iterations and a union operation in the sifting process. By combining the local regional features of the images, an image fusion method has been developed. First, the source images are decomposed using the proposed BEMD to produce the first intrinsic mode function (IMF) and residue component. Second, for the IMF component, a selection and weighted average strategy based on local area energy is used to obtain a high-frequency fusion component. Third, for the residue component, a selection and weighted average strategy based on local average gray difference is used to obtain a low-frequency fusion component. Finally, the fused image is obtained by applying the inverse BEMD transform. Experimental results show that the proposed algorithm provides superior performance over methods based on wavelet transform, line and column-based EMD, and complex empirical mode decomposition, both in terms of visual quality and objective evaluation criteria.
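A selection/weighted-average fusion rule of the kind described can be sketched as follows; the window size, similarity threshold, and weighting scheme are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def local_energy(img, r=1):
    """Sum of squared values over a (2r+1) x (2r+1) window at each pixel."""
    p = np.pad(img.astype(float) ** 2, r, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def fuse_imf(a, b, r=1, sim_thresh=0.7):
    """Select the higher-energy source where regions differ strongly;
    blend by relative energy where the regions are similar."""
    ea, eb = local_energy(a, r), local_energy(b, r)
    match = 2.0 * np.sqrt(ea * eb) / (ea + eb + 1e-12)  # similarity in [0, 1]
    select = np.where(ea >= eb, 1.0, 0.0)
    w = np.where(match > sim_thresh,
                 0.5 + 0.5 * (ea - eb) / (ea + eb + 1e-12),
                 select)
    return w * a + (1.0 - w) * b
```

Applied to each high-frequency IMF pair, this keeps the sharper source in regions where only one image carries detail and averages where both agree.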

  10. Semi-empirical studies of atomic structure. Progress report, 1 July 1982-1 February 1983

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis, L.J.

    1983-01-01

A program of studies of the properties of the heavy and highly ionized atomic systems which often occur as contaminants in controlled fusion devices is continuing. The project combines experimental measurements by fast-ion-beam excitation with semi-empirical data parametrizations to identify and exploit regularities in the properties of these very heavy and very highly ionized systems. The increasing use of spectroscopic line intensities as diagnostics for determining thermonuclear plasma temperatures and densities requires laboratory observation and analysis of such spectra, often to accuracies that exceed the capabilities of ab initio theoretical methods for these highly relativistic many-electron systems. Through the acquisition and systematization of empirical data, remarkably precise methods for predicting excitation energies, transition wavelengths, transition probabilities, level lifetimes, ionization potentials, core polarizabilities, and core penetrabilities are being developed and applied. Although the data base for heavy, highly ionized atoms is still sparse, parametrized extrapolations and interpolations along isoelectronic, homologous, and Rydberg sequences are providing predictions for large classes of quantities, with a precision that is sharpened by subsequent measurements.

  11. Moral Stress, Moral Practice, and Ethical Climate in Community-Based Drug-Use Research: Views From the Front Line

    PubMed Central

    Fisher, Celia B.; True, Gala; Alexander, Leslie; Fried, Adam L.

    2016-01-01

    Background The role of front-line researchers, those whose responsibilities include face-to-face contact with participants, is critical to ensuring the responsible conduct of community-based drug use research. To date, there has been little empirical examination of how front-line researchers perceive the effectiveness of ethical procedures in their real-world application and the moral stress they may experience when adherence to scientific procedures appears to conflict with participant protections. Methods This study represents a first step in applying psychological science to examine the work-related attitudes, ethics climate, and moral dilemmas experienced by a national sample of 275 front-line staff members whose responsibilities include face-to-face interaction with participants in community-based drug-use research. Using an anonymous Web-based survey we psychometrically evaluated and examined relationships among six new scales tapping moral stress (frustration in response to perceived barriers to conducting research in a morally appropriate manner); organizational ethics climate; staff support; moral practice dilemmas (perceived conflicts between scientific integrity and participant welfare); research commitment; and research mistrust. Results As predicted, front-line researchers who evidence a strong commitment to their role in the research process and who perceive their organizations as committed to research ethics and staff support experienced lower levels of moral stress. Front-line researchers who were distrustful of the research enterprise and frequently grappled with moral practice dilemmas reported higher levels of moral stress. 
Conclusion Applying psychometrically reliable scales to empirically examine research ethics challenges can illuminate specific threats to scientific integrity and human subjects protections encountered by front-line staff and suggest organizational strategies for reducing moral stress and enhancing the responsible conduct of research. PMID:27795869

  12. A protocol for the creation of useful geometric shape metrics illustrated with a newly derived geometric measure of leaf circularity.

    PubMed

    Krieger, Jonathan D

    2014-08-01

    I present a protocol for creating geometric leaf shape metrics to facilitate widespread application of geometric morphometric methods to leaf shape measurement. • To quantify circularity, I created a novel shape metric in the form of the vector between a circle and a line, termed geometric circularity. Using leaves from 17 fern taxa, I performed a coordinate-point eigenshape analysis to empirically identify patterns of shape covariation. I then compared the geometric circularity metric to the empirically derived shape space and the standard metric, circularity shape factor. • The geometric circularity metric was consistent with empirical patterns of shape covariation and appeared more biologically meaningful than the standard approach, the circularity shape factor. The protocol described here has the potential to make geometric morphometrics more accessible to plant biologists by generalizing the approach to developing synthetic shape metrics based on classic, qualitative shape descriptors.
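For reference, the standard metric the new measure is compared against, the circularity shape factor 4πA/P², is simple to compute from an outline polygon (a generic sketch, not the author's protocol code):

```python
import math

def circularity_shape_factor(outline):
    """4*pi*area / perimeter**2 for a closed polygon; 1.0 for a perfect circle."""
    n = len(outline)
    twice_area = 0.0
    perimeter = 0.0
    for i in range(n):
        x1, y1 = outline[i]
        x2, y2 = outline[(i + 1) % n]
        twice_area += x1 * y2 - x2 * y1  # shoelace formula
        perimeter += math.hypot(x2 - x1, y2 - y1)
    return 4.0 * math.pi * (abs(twice_area) / 2.0) / perimeter ** 2
```

A square scores π/4 ≈ 0.785, and elongated or lobed leaf outlines score lower still, which is why this factor is the conventional baseline for leaf circularity.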

  13. Performance Monitoring Of A Computer Numerically Controlled (CNC) Lathe Using Pattern Recognition Techniques

    NASA Astrophysics Data System (ADS)

    Daneshmend, L. K.; Pak, H. A.

    1984-02-01

On-line monitoring of the cutting process in a CNC lathe is desirable to ensure unattended fault-free operation in an automated environment. The state of the cutting tool is one of the most important parameters characterising the cutting process. Direct monitoring of the cutting tool or workpiece is not feasible during machining. However, several variables related to the state of the tool can be measured on-line. A novel monitoring technique is presented which uses cutting torque as the variable for on-line monitoring. A classifier is designed on the basis of the empirical relationship between cutting torque and flank wear. The empirical model required by the on-line classifier is established during an automated training cycle using machine vision for off-line direct inspection of the tool.

  14. A method and data for video monitor sizing. [human CRT viewing requirements

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, M., III; Shields, N. L., Jr.; Malone, T. B.; Guerin, E. G.

    1976-01-01

The paper outlines an approach that uses analytical methods and empirical data to determine monitor size constraints based on the human operator's CRT viewing requirements, in a context where panel space and volume considerations for the Space Shuttle aft cabin constrain the size of the monitor to be used. Two cases are examined: remote scene imaging and alphanumeric character display. The central parameter used to constrain monitor size is the ratio M/L, where M is the monitor dimension and L the viewing distance. The study is restricted largely to 525-line video systems having an SNR of 32 dB and a bandwidth of 4.5 MHz. Degradation in these parameters would require changes in the empirically determined visual-angle constants presented. The data and methods described are considered to apply to cases where operators are required to view, via TV, target objects which are well differentiated from the background and where the background is relatively sparse. It is also necessary to identify the critical target dimensions and cues.

  15. Semi-Empirical Validation of the Cross-Band Relative Absorption Technique for the Measurement of Molecular Mixing Ratios

    NASA Technical Reports Server (NTRS)

    Pliutau, Denis; Prasad, Narasimha S

    2013-01-01

Studies were performed to carry out a semi-empirical validation of a new measurement approach we propose for the determination of molecular mixing ratios. The approach is based on relative measurements in bands of O2 and other molecules and as such may best be described as cross-band relative absorption (CoBRA). The current validation studies rely upon well-verified and established theoretical and experimental databases, satellite data assimilations, and modeling codes such as HITRAN, the line-by-line radiative transfer model (LBLRTM), and the modern-era retrospective analysis for research and applications (MERRA). The approach holds promise for atmospheric mixing ratio measurements of CO2 and a variety of other molecules currently under investigation for several future satellite lidar missions. One of the advantages of the method is a significant reduction of the temperature-sensitivity uncertainties, which is illustrated with application to the ASCENDS mission for the measurement of CO2 mixing ratios (XCO2). Additional advantages of the method include the possibility of closely matching cross-band weighting function combinations, which is harder to achieve using conventional differential absorption techniques, and the potential for additional corrections for water vapor and other interferences without using data from numerical weather prediction (NWP) models.

  16. ζ Oph and the weak-wind problem

    NASA Astrophysics Data System (ADS)

    Gvaramadze, V. V.; Langer, N.; Mackey, J.

    2012-11-01

Mass-loss rate, Ṁ, is one of the key parameters affecting the evolution and observational manifestations of massive stars and their impact on the ambient medium. Despite its importance, there is a factor of ˜100 discrepancy between empirical and theoretical Ṁ of late-type O dwarfs, the so-called weak-wind problem. In this Letter, we propose a simple novel method to constrain Ṁ of runaway massive stars through observation of their bow shocks and Strömgren spheres, which might be of decisive importance for resolving the weak-wind problem. Using this method, we found that Ṁ of the well-known runaway O9.5 V star ζ Oph is more than an order of magnitude higher than that derived from ultraviolet (UV) line fitting and is a factor of 6-7 lower than those based on the theoretical recipe by Vink et al. and the Hα line. The discrepancy between Ṁ derived by our method and that based on UV lines would be even more severe if the stellar wind is clumpy. At the same time, our estimate of Ṁ agrees with that predicted by the moving reversing layer theory of Lucy.
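The bow-shock side of such an estimate rests on the standard momentum-balance standoff distance, R0 = [Ṁvw/(4πρav*²)]^(1/2), which inverts directly for the mass-loss rate. The sketch below uses this textbook relation with illustrative inputs; it is not the authors' full calculation (which also uses the Strömgren sphere):

```python
import math

M_H = 1.6726e-24      # proton mass, g
SEC_PER_YR = 3.156e7
MSUN_G = 1.989e33

def mass_loss_rate(r0_cm, v_wind, v_star, n_ambient, mu=1.4):
    """Mdot (Msun/yr) from the bow-shock standoff distance r0_cm (cm),
    wind speed v_wind and stellar peculiar speed v_star (cm/s), and
    ambient number density n_ambient (cm-3) with mean mass mu * m_H."""
    rho = mu * M_H * n_ambient
    mdot_gs = 4.0 * math.pi * rho * v_star ** 2 * r0_cm ** 2 / v_wind
    return mdot_gs * SEC_PER_YR / MSUN_G
```

Because Ṁ scales as R0², the observed standoff distance is a sensitive probe: an order-of-magnitude error in Ṁ corresponds to only a factor of ˜3 in R0.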

  17. An Analysis Method for Superconducting Resonator Parameter Extraction with Complex Baseline Removal

    NASA Technical Reports Server (NTRS)

    Cataldo, Giuseppe

    2014-01-01

    A new semi-empirical model is proposed for extracting the quality (Q) factors of arrays of superconducting microwave kinetic inductance detectors (MKIDs). The determination of the total internal and coupling Q factors enables the computation of the loss in the superconducting transmission lines. The method used allows the simultaneous analysis of multiple interacting discrete resonators with the presence of a complex spectral baseline arising from reflections in the system. The baseline removal allows an unbiased estimate of the device response as measured in a cryogenic instrumentation setting.

  18. Optimum wall impedance for spinning modes: A correlation with mode cut-off ratio

    NASA Technical Reports Server (NTRS)

    Rice, E. J.

    1978-01-01

A correlating equation relating the optimum acoustic impedance for the wall lining of a circular duct to the acoustic mode cut-off ratio is presented. The optimum impedance was correlated with cut-off ratio because the cut-off ratio appears to be the fundamental parameter governing the propagation of sound in the duct. Modes with similar cut-off ratios respond in a similar way to the acoustic liner. The correlation is a semi-empirical expression developed from an empirical modification of an equation originally derived from sound propagation theory in a thin boundary layer. This correlating equation represents a part of a simplified liner design method, based upon modal cut-off ratio, for multimodal noise propagation.

  19. Evidence-based ethics? On evidence-based practice and the "empirical turn" from normative bioethics

    PubMed Central

    Goldenberg, Maya J

    2005-01-01

Background The increase in empirical methods of research in bioethics over the last two decades is typically perceived as a welcome broadening of the discipline, with increased integration of social and life scientists into the field and of ethics consultants into the clinical setting; however, it also represents a loss of confidence in the typical normative and analytic methods of bioethics. Discussion The recent incipiency of "Evidence-Based Ethics" attests to this phenomenon and should be rejected as a solution to the current ambivalence toward the normative resolution of moral problems in a pluralistic society. While "evidence-based" is typically read in medicine and the other life and social sciences as the empirically adequate standard of reasonable practice and a means for increasing certainty, I propose that the evidence-based movement in fact gains consensus by displacing normative discourse with aggregate or statistically derived empirical evidence as the "bottom line". Therefore, along with wavering on the fact/value distinction, evidence-based ethics threatens bioethics' normative mandate. The appeal of the evidence-based approach is that it offers a means of negotiating the demands of moral pluralism. Rather than appealing to explicit values that are likely not shared by all, "the evidence" is proposed to adjudicate between competing claims. Quantified measures are notably more "neutral" and democratic than liberal markers like "species-normal functioning". Yet the positivist notion that claims stand or fall in light of the evidence is untenable; furthermore, the legacy of positivism entails the quieting of empirically non-verifiable (or at least non-falsifiable) considerations like moral claims and judgments. As a result, evidence-based ethics proposes to operate unchecked, with the implicit normativity that accompanies the production and presentation of all biomedical and scientific facts.
Summary The "empirical turn" in bioethics signals a need to reconsider the methods used for moral evaluation and resolution; however, the options should not include obscuring normative content behind seemingly neutral technical measures. PMID:16277663

  20. GAME: GAlaxy Machine learning for Emission lines

    NASA Astrophysics Data System (ADS)

    Ucci, G.; Ferrara, A.; Pallottini, A.; Gallerani, S.

    2018-06-01

We present an updated, optimized version of GAME (GAlaxy Machine learning for Emission lines), a code designed to infer key interstellar medium physical properties from the emission line intensities of ultraviolet/optical/far-infrared galaxy spectra. The improvements concern (a) an enlarged spectral library including Pop III stars, (b) the inclusion of spectral noise in the training procedure, and (c) an accurate evaluation of uncertainties. We extensively validate the optimized code and compare its performance against empirical methods and other available emission line codes (PYQZ and HII-CHI-MISTRY) on a sample of 62 SDSS stacked galaxy spectra and 75 observed HII regions. Very good agreement is found for metallicity. However, ionization parameters derived by GAME tend to be higher. We show that this is due to the use of too-limited libraries in the other codes. The main advantages of GAME are the simultaneous use of all the measured spectral lines and the extremely short computational times. We finally discuss the code's potential and limitations.

  1. Using Empirical Models for Communication Prediction of Spacecraft

    NASA Technical Reports Server (NTRS)

    Quasny, Todd

    2015-01-01

A viable communication path to a spacecraft is vital for its successful operation. For human spaceflight, a reliable and predictable communication link between the spacecraft and the ground is essential not only for the safety of the vehicle and the success of the mission, but for the safety of the humans on board as well. However, analytical models of these communication links are challenged by unique characteristics of space and the vehicle itself. For example, the behavior of a radio-frequency signal passing through a spacecraft's solar array during high-energy solar events can be difficult to model, and thus to predict. This presentation covers the use of empirical methods for communication link prediction, using the International Space Station (ISS) and its associated historical data as the verification platform and test bed. These empirical methods can then be incorporated into communication prediction and automation tools for the ISS in order to better understand the quality of the communication path given a myriad of variables, including solar array positions, line of sight to satellites, position of the Sun, and other dynamic structures on the outside of the ISS. The image on the left below shows the current analytical model of one of the communication systems on the ISS. The image on the right shows a rudimentary empirical model of the same system based on historical archived data from the ISS.
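A rudimentary empirical model of this kind can be as simple as binning historical link margins against a geometry variable and predicting from bin means. The sketch below is purely illustrative; the field names and values are hypothetical, not actual ISS telemetry:

```python
def build_empirical_model(samples, n_bins=36):
    """Bin historical (azimuth_deg, margin_db) records and predict the
    mean link margin for a new azimuth from its bin.
    Both fields are hypothetical stand-ins for archived telemetry."""
    width = 360.0 / n_bins
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for azimuth, margin in samples:
        b = int((azimuth % 360.0) / width)
        sums[b] += margin
        counts[b] += 1
    means = [s / c if c else None for s, c in zip(sums, counts)]

    def predict(azimuth):
        return means[int((azimuth % 360.0) / width)]

    return predict
```

Unlike an analytical antenna-pattern model, such a lookup absorbs every real-world effect present in the archive (structure blockage, multipath, array position) without needing to model any of them explicitly.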

  2. An improved method for predicting the lightning performance of high and extra-high-voltage substation shielding

    NASA Astrophysics Data System (ADS)

    Vinh, T.

    1980-08-01

There is a need for better and more effective lightning protection for transmission and switching substations. In the past, a number of empirical methods were utilized to design systems to protect substations and transmission lines from direct lightning strokes. The need exists for convenient analytical lightning models adequate for engineering usage. In this study, analytical lightning models were developed, along with a method for improved analysis of the physical properties of lightning through their use. This method of analysis is based upon the most recent statistical field data. The result is an improved method for predicting the occurrence of shielding failure and for designing more effective protection of high- and extra-high-voltage substations from direct strokes.

  3. Accelerated reliability testing of highly aligned single-walled carbon nanotube networks subjected to DC electrical stressing.

    PubMed

    Strus, Mark C; Chiaramonti, Ann N; Kim, Young Lae; Jung, Yung Joon; Keller, Robert R

    2011-07-01

We investigate the electrical reliability of nanoscale lines of highly aligned, networked, metallic/semiconducting single-walled carbon nanotubes (SWCNTs) fabricated through a template-based fluidic assembly process. We find that these SWCNT networks can withstand DC current densities larger than 10 MA cm-2 for several hours and, in some cases, several days. We develop test methods that show that the degradation rate, failure predictability, and total device lifetime can be linked to the initial resistance. Scanning electron and transmission electron microscopy suggest that fabrication variability plays a critical role in the rate of degradation, and we offer an empirical method of quickly determining the long-term performance of a network. We find that well-fabricated lines subject to constant electrical stress show a linear accumulation of damage reminiscent of electromigration in metallic interconnects, and we explore the underlying physical mechanisms that could cause such behavior.
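Linear damage accumulation suggests a simple accelerated-test extrapolation: fit the resistance drift under stress and project when it crosses a failure threshold. The sketch below assumes a hypothetical fail-at-1.5×R0 criterion, which is not necessarily the paper's definition of failure:

```python
def estimate_lifetime(times, resistances, fail_ratio=1.5):
    """Least-squares fit R(t) = R0 + k*t and return the time at which
    the resistance reaches fail_ratio * R0 (hypothetical criterion)."""
    n = len(times)
    sx = sum(times)
    sy = sum(resistances)
    sxx = sum(t * t for t in times)
    sxy = sum(t * r for t, r in zip(times, resistances))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    r0 = (sy - slope * sx) / n
    return (fail_ratio - 1.0) * r0 / slope
```

Because the drift is linear, a short early-stress segment fixes the slope, which is what makes a quick determination of long-term performance possible.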

  4. Study of galaxies in the Lynx-Cancer void - VII. New oxygen abundances

    NASA Astrophysics Data System (ADS)

    Pustilnik, S. A.; Perepelitsyna, Y. A.; Kniazev, A. Y.

    2016-11-01

We present new or improved oxygen abundances (O/H) for the updated galaxy sample of the nearby Lynx-Cancer void. They are obtained via SAO 6-m telescope spectroscopy (25 objects), or derived from Sloan Digital Sky Survey spectra (14 galaxies, for seven of which O/H values were previously unknown). For eight galaxies with a detected [O III] λ4363 line, O/H values are derived via the direct (Te) method. For the remaining objects, O/H was estimated via semi-empirical and empirical methods. For all accumulated O/H data for 81 galaxies of this void (40 of them derived via the Te method), the relation `O/H versus MB' is compared with that for similar late-type galaxies from denser environments (the Local Volume `reference sample'). We confirm our previous conclusion derived for a subsample of 48 objects: void galaxies show systematically reduced O/H at the same luminosity with respect to the reference sample, on average by 0.2 dex, or by a factor of ˜1.6. Moreover, we confirm the fraction of ˜20 per cent of strong outliers, with O/H two to four times lower than the typical values for the `reference' sample. The new data are consistent with the conclusion of slower evolution of the main void galaxy population. We obtained the Hα velocity for the faint optical counterpart of the most gas-rich (M(H I)/LB = 25) void object J0723+3624, confirming its connection with the respective H I blob. For the similarly extremely gas-rich dwarf J0706+3020, we give a tentative O/H ˜(O/H)⊙/45. In Appendix A, we present the results of calibrating the semi-empirical method of Izotov & Thuan and the empirical calibrators of Pilyugin & Thuan and Yin et al. on a sample of ˜150 galaxies from the literature with O/H measured by the Te method.

  5. An Empirical Spectroscopic Database for Acetylene in the Region of 5850-9415 cm-1

    NASA Astrophysics Data System (ADS)

    Campargue, Alain; Lyulin, Oleg

    2017-06-01

Six studies have been recently devoted to a systematic analysis of the high-resolution near-infrared absorption spectrum of acetylene recorded by cavity ring-down spectroscopy (CRDS) in Grenoble and by Fourier-transform spectroscopy (FTS) in Brussels and Hefei. On the basis of these works, in the present contribution we construct an empirical database for acetylene in the 5850-9415 cm-1 region, excluding the 6341-7000 cm-1 interval corresponding to the very strong ν1 + ν3 manifold. The database gathers and extends information included in our CRDS and FTS studies. In particular, the intensities of about 1700 lines measured by CRDS in the 7244-7920 cm-1 region are reported for the first time, together with those of several bands of 12C13CH2 present in natural isotopic abundance in the acetylene sample. The Herman-Wallis coefficients of most of the bands are derived from a fit of the measured intensity values. A recommended line list is provided, with positions calculated using empirical spectroscopic parameters of the lower and upper vibrational energy levels and intensities calculated using the derived Herman-Wallis coefficients. This approach allows completing the experimental list by adding missing lines and improving poorly determined positions and intensities. As a result, the constructed line list includes a total of 10973 lines belonging to 146 bands of 12C2H2 and 29 bands of 12C13CH2. For comparison, the HITRAN2012 database in the same region includes 869 lines of 14 bands, all belonging to 12C2H2. Our weakest lines have an intensity on the order of 10-29 cm/molecule, about three orders of magnitude smaller than the HITRAN intensity cut-off. Line profile parameters are added to the line list, which is provided in HITRAN format. The comparison to the HITRAN2012 line list and to results obtained using the global effective operator approach is discussed in terms of completeness and accuracy.

  6. Determination of natural line widths of Kα X-ray lines for some elements in the atomic range 50≤Z≤65 at 59.5 keV

    NASA Astrophysics Data System (ADS)

    Kündeyi, Kadriye; Aylıkcı, Nuray Küp; Tıraşoǧlu, Engin; Kahoul, Abdelhalim; Aylıkcı, Volkan

    2017-02-01

    The semi-empirical determination of the natural widths of the Kα X-ray lines (Kα1 and Kα2) was performed for Sn, Sb, Te, I, Ba, La, Ce, Pr, Nd, Sm, Eu, Gd and Tb. For the semi-empirical determination of the line widths, the K shell fluorescence yields of the elements were measured. The samples were excited by 59.5 keV γ rays from a 241Am annular radioactive source in order to measure the K shell fluorescence yields. The K X-rays emitted from the samples were counted by an Ultra-LEGe detector with a resolution of 150 eV at 5.9 keV. The measured K shell fluorescence yields were used to calculate the K shell level widths. Finally, the natural widths of the K X-ray lines were determined as the sums of the widths of the levels involved in each transition. The obtained values were compared with earlier studies.
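
The "sum of level widths" step in the abstract can be written explicitly. As a sketch in standard X-ray notation (these are the textbook relations, not the authors' formulas):

```latex
% Natural width of a diagram line = sum of the natural widths of the
% two atomic levels involved in the transition:
\Gamma(K\alpha_1) = \Gamma(K) + \Gamma(L_3) \quad % K\alpha_1 : L_3 \to K
\Gamma(K\alpha_2) = \Gamma(K) + \Gamma(L_2) \quad % K\alpha_2 : L_2 \to K
```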

  7. Calculation of voltages in electric power transmission lines during historic geomagnetic storms: An investigation using realistic earth impedances

    USGS Publications Warehouse

    Lucas, Greg M.; Love, Jeffrey J.; Kelbert, Anna

    2018-01-01

    Commonly, one-dimensional (1-D) Earth impedances have been used to calculate the voltages induced across electric power transmission lines during geomagnetic storms under the assumption that much of the three-dimensional structure of the Earth gets smoothed when integrating along power transmission lines. We calculate the voltage across power transmission lines in the mid-Atlantic region with both regional 1-D impedances and 64 empirical 3-D impedances obtained from a magnetotelluric survey. The use of 3-D impedances produces substantially more spatial variance in the calculated voltages, with the voltages being more than an order of magnitude different, both higher and lower, than the voltages calculated utilizing regional 1-D impedances. During the March 1989 geomagnetic storm 62 transmission lines exceed 100 V when utilizing empirical 3-D impedances, whereas 16 transmission lines exceed 100 V when utilizing regional 1-D impedances. This demonstrates the importance of using realistic impedances to understand and quantify the impact that a geomagnetic storm has on power grids.
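
The voltage calculation described above reduces to a line integral of the horizontal geoelectric field along the transmission-line path. A minimal sketch (not the authors' code; the field function and the one-point midpoint quadrature are illustrative assumptions):

```python
# Sketch: the voltage induced across a power line is approximated by the
# line integral of the horizontal geoelectric field E = (Ex, Ey), in V/km,
# along the straight segments of the line path.

def line_voltage(segments, field):
    """Sum E . dl over straight segments.

    segments: list of ((x0, y0), (x1, y1)) endpoint coordinates in km.
    field:    function (x, y) -> (Ex, Ey) in V/km, evaluated here at each
              segment midpoint (a crude one-point quadrature).
    """
    total = 0.0
    for (x0, y0), (x1, y1) in segments:
        mx, my = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
        ex, ey = field(mx, my)
        total += ex * (x1 - x0) + ey * (y1 - y0)
    return total

# A uniform 1 V/km eastward field over a 100 km east-west line gives 100 V.
v = line_voltage([((0.0, 0.0), (100.0, 0.0))], lambda x, y: (1.0, 0.0))
```

Replacing the uniform `field` with one derived from spatially varying 3-D magnetotelluric impedances is what produces the site-to-site variance the abstract reports.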

  8. A comparison of four streamflow record extension techniques

    USGS Publications Warehouse

    Hirsch, Robert M.

    1982-01-01

    One approach to developing time series of streamflow, which may be used for simulation and optimization studies of water resources development activities, is to extend an existing gage record in time by exploiting the interstation correlation between the station of interest and some nearby (long-term) base station. Four methods of extension are described, and their properties are explored. The methods are regression (REG), regression plus noise (RPN), and two new methods, maintenance of variance extension types 1 and 2 (MOVE.1, MOVE.2). MOVE.1 is equivalent to a method which is widely used in psychology, biometrics, and geomorphology and which has been called by various names, e.g., ‘line of organic correlation,’ ‘reduced major axis,’ ‘unique solution,’ and ‘equivalence line.’ The methods are examined for bias and standard error of estimate of moments and order statistics, and an empirical examination is made of the preservation of historic low-flow characteristics using 50-year-long monthly records from seven streams. The REG and RPN methods are shown to have serious deficiencies as record extension techniques. MOVE.2 is shown to be marginally better than MOVE.1, according to the various comparisons of bias and accuracy.
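
The MOVE.1 ("line of organic correlation") estimator named in the abstract has a simple closed form. A minimal sketch under the usual definition (sample means and standard deviations of the concurrent records):

```python
# Minimal sketch of MOVE.1: the slope is sign(r) * s_y / s_x, so values
# estimated for the extended period reproduce the variance of the short
# record instead of shrinking toward the mean as ordinary regression does.
from statistics import mean, pstdev

def move1(x_concurrent, y_concurrent, x_extended):
    mx, my = mean(x_concurrent), mean(y_concurrent)
    sx, sy = pstdev(x_concurrent), pstdev(y_concurrent)
    # Only the sign of the correlation enters (assumed nonzero here).
    cov = sum((a - mx) * (b - my) for a, b in zip(x_concurrent, y_concurrent))
    slope = (1.0 if cov >= 0 else -1.0) * sy / sx
    return [my + slope * (x - mx) for x in x_extended]
```

The variance-maintaining slope is exactly what distinguishes MOVE.1 from the REG method the paper finds deficient for record extension.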

  9. A Comparison of Four Streamflow Record Extension Techniques

    NASA Astrophysics Data System (ADS)

    Hirsch, Robert M.

    1982-08-01

    One approach to developing time series of streamflow, which may be used for simulation and optimization studies of water resources development activities, is to extend an existing gage record in time by exploiting the interstation correlation between the station of interest and some nearby (long-term) base station. Four methods of extension are described, and their properties are explored. The methods are regression (REG), regression plus noise (RPN), and two new methods, maintenance of variance extension types 1 and 2 (MOVE.1, MOVE.2). MOVE.1 is equivalent to a method which is widely used in psychology, biometrics, and geomorphology and which has been called by various names, e.g., `line of organic correlation,' `reduced major axis,' `unique solution,' and `equivalence line.' The methods are examined for bias and standard error of estimate of moments and order statistics, and an empirical examination is made of the preservation of historic low-flow characteristics using 50-year-long monthly records from seven streams. The REG and RPN methods are shown to have serious deficiencies as record extension techniques. MOVE.2 is shown to be marginally better than MOVE.1, according to the various comparisons of bias and accuracy.

  10. Chromospheric activity and rotation of FGK stars in the solar vicinity. An estimation of the radial velocity jitter

    NASA Astrophysics Data System (ADS)

    Martínez-Arnáiz, R.; Maldonado, J.; Montes, D.; Eiroa, C.; Montesinos, B.

    2010-09-01

    Context. Chromospheric activity produces both photometric and spectroscopic variations that can be mistaken as planets. Large spots crossing the stellar disc can produce planet-like periodic variations in the light curve of a star. These spots clearly affect the spectral line profiles, and their perturbations alter the line centroids creating a radial velocity jitter that might “contaminate” the variations induced by a planet. Precise chromospheric activity measurements are needed to estimate the activity-induced noise that should be expected for a given star. Aims: We obtain precise chromospheric activity measurements and projected rotational velocities for nearby (d ≤ 25 pc) cool (spectral types F to K) stars, to estimate their expected activity-related jitter. As a complementary objective, we attempt to obtain relationships between fluxes in different activity indicator lines, that permit a transformation of traditional activity indicators, i.e., Ca ii H & K lines, to others that hold noteworthy advantages. Methods: We used high resolution (~50 000) echelle optical spectra. Standard data reduction was performed using the IRAF echelle package. To determine the chromospheric emission of the stars in the sample, we used the spectral subtraction technique. We measured the equivalent widths of the chromospheric emission lines in the subtracted spectrum and transformed them into fluxes by applying empirical equivalent width and flux relationships. Rotational velocities were determined using the cross-correlation technique. To infer activity-related radial velocity (RV) jitter, we used empirical relationships between this jitter and the R'_HK index. Results: We measured chromospheric activity, as given by different indicators throughout the optical spectra, and projected rotational velocities for 371 nearby cool stars. We have built empirical relationships among the most important chromospheric emission lines. 
Finally, we used the measured chromospheric activity to estimate the expected RV jitter for the active stars in the sample. Based on observations made with the 2.2 m telescope at the Centro Astronómico Hispano Alemán (CAHA) at Calar Alto (Spain) and the Telescopio Nazionale Galileo (TNG) operated on the island of La Palma by the Istituto Nazionale de Astrofisica Italiano (INAF), in the Spanish Observatorio del Roque de los Muchachos. This research has been supported by the Programa de Acceso a Infraestructuras Científicas y Tecnológicas Singulares (ICTS). Tables A1 to A4 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/520/A79

  11. POLARIZED LINE FORMATION IN NON-MONOTONIC VELOCITY FIELDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sampoorna, M.; Nagendra, K. N., E-mail: sampoorna@iiap.res.in, E-mail: knn@iiap.res.in

    2016-12-10

    For a correct interpretation of the observed spectro-polarimetric data from astrophysical objects such as the Sun, it is necessary to solve the polarized line transfer problems taking into account a realistic temperature structure, the dynamical state of the atmosphere, a realistic scattering mechanism (namely, the partial frequency redistribution—PRD), and the magnetic fields. In a recent paper, we studied the effects of monotonic vertical velocity fields on linearly polarized line profiles formed in isothermal atmospheres with and without magnetic fields. However, in general the velocity fields that prevail in dynamical atmospheres of astrophysical objects are non-monotonic. Stellar atmospheres with shocks, multi-component supernova atmospheres, and various kinds of wave motions in solar and stellar atmospheres are examples of non-monotonic velocity fields. Here we present studies on the effect of non-relativistic non-monotonic vertical velocity fields on the linearly polarized line profiles formed in semi-empirical atmospheres. We consider a two-level atom model and PRD scattering mechanism. We solve the polarized transfer equation in the comoving frame (CMF) of the fluid using a polarized accelerated lambda iteration method that has been appropriately modified for the problem at hand. We present numerical tests to validate the CMF method and also discuss the accuracy and numerical instabilities associated with it.

  12. At-line process analytical technology (PAT) for more efficient scale up of biopharmaceutical microfiltration unit operations.

    PubMed

    Watson, Douglas S; Kerchner, Kristi R; Gant, Sean S; Pedersen, Joseph W; Hamburger, James B; Ortigosa, Allison D; Potgieter, Thomas I

    2016-01-01

    Tangential flow microfiltration (MF) is a cost-effective and robust bioprocess separation technique, but successful full scale implementation is hindered by the empirical, trial-and-error nature of scale-up. We present an integrated approach leveraging at-line process analytical technology (PAT) and mass balance based modeling to de-risk MF scale-up. Chromatography-based PAT was employed to improve the consistency of an MF step that had been a bottleneck in the process used to manufacture a therapeutic protein. A 10-min reverse phase ultra high performance liquid chromatography (RP-UPLC) assay was developed to provide at-line monitoring of protein concentration. The method was successfully validated and method performance was comparable to previously validated methods. The PAT tool revealed areas of divergence from a mass balance-based model, highlighting specific opportunities for process improvement. Adjustment of appropriate process controls led to improved operability and significantly increased yield, providing a successful example of PAT deployment in the downstream purification of a therapeutic protein. The general approach presented here should be broadly applicable to reduce risk during scale-up of filtration processes and should be suitable for feed-forward and feed-back process control. © 2015 American Institute of Chemical Engineers.

  13. Good Practices for Learning to Recognize Actions Using FV and VLAD.

    PubMed

    Wu, Jianxin; Zhang, Yu; Lin, Weiyao

    2016-12-01

    High dimensional representations such as Fisher vectors (FV) and vectors of locally aggregated descriptors (VLAD) have shown state-of-the-art accuracy for action recognition in videos. The high dimensionality, on the other hand, also causes computational difficulties when scaling up to large-scale video data. This paper makes three lines of contributions to learning to recognize actions using high dimensional representations. First, we reviewed several existing techniques that improve upon FV or VLAD in image classification, and performed extensive empirical evaluations to assess their applicability for action recognition. Our analyses of these empirical results show that normality and bimodality are essential to achieve high accuracy. Second, we proposed a new pooling strategy for VLAD and three simple, efficient, and effective transformations for both FV and VLAD. Both proposed methods have shown higher accuracy than the original FV/VLAD method in extensive evaluations. Third, we proposed and evaluated new feature selection and compression methods for the FV and VLAD representations. This strategy uses only 4% of the storage of the original representation, but achieves comparable or even higher accuracy. Based on these contributions, we recommend a set of good practices for action recognition in videos for practitioners in this field.
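
One transformation family widely used in this literature to condition FV/VLAD vectors is signed power ("square-root") normalization followed by L2 normalization. The sketch below shows that standard recipe; it is not necessarily one of the specific transformations this paper proposes:

```python
# Signed power normalization reduces the "burstiness" of FV/VLAD
# components; the subsequent L2 normalization puts vectors on the unit
# sphere so dot products behave like cosine similarities.
import math

def power_l2_normalize(vec, alpha=0.5):
    powered = [math.copysign(abs(x) ** alpha, x) for x in vec]
    norm = math.sqrt(sum(x * x for x in powered)) or 1.0
    return [x / norm for x in powered]
```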

  14. Andromeda IV: A new local volume very metal-poor galaxy

    NASA Astrophysics Data System (ADS)

    Pustilnik, S. A.; Tepliakova, A. L.; Kniazev, A. Y.; Burenkov, A. N.

    2008-06-01

    And IV is a low surface brightness (LSB) dwarf galaxy at a distance of 6.1 Mpc, projected close to M 31. In this paper the results of spectroscopy of the two brightest HII regions of And IV with the SAO 6-m telescope (BTA) are presented. In the spectra of both of them the faint line [OIII] λ4363 Å was detected, which allowed us to determine their O/H by the classical Te method. Their values of 12+log(O/H) are 7.49±0.06 and 7.55±0.23, respectively. The comparison of the direct O/H determinations with the two most reliable semi-empirical and empirical methods shows good consistency between these methods. For the And IV absolute blue magnitude, MB = -12.6, our value of O/H corresponds to the ‘standard’ relation between O/H and LB for dwarf irregular galaxies (DIGs). And IV appears to be a new representative of the extremely metal-deficient gas-rich galaxies in the Local Volume. The very large range of M(HI) for LSB galaxies with similar metallicities and luminosities indicates that simple models of LSBG chemical evolution are too limited to predict such striking diversity.

  15. High-resolution gamma ray attenuation density measurements on mining exploration drill cores, including cut cores

    NASA Astrophysics Data System (ADS)

    Ross, P.-S.; Bourke, A.

    2017-01-01

    Physical property measurements are increasingly important in mining exploration. For density determinations on rocks, one method applicable on exploration drill cores relies on gamma ray attenuation. This non-destructive method is ideal because each measurement takes only 10 s, making it suitable for high-resolution logging. However calibration has been problematic. In this paper we present new empirical, site-specific correction equations for whole NQ and BQ cores. The corrections force back the gamma densities to the "true" values established by the immersion method. For the NQ core caliber, the density range extends to high values (massive pyrite, 5 g/cm3) and the correction is thought to be very robust. We also present additional empirical correction factors for cut cores which take into account the missing material. These "cut core correction factors", which are not site-specific, were established by making gamma density measurements on truncated aluminum cylinders of various residual thicknesses. Finally we show two examples of application for the Abitibi Greenstone Belt in Canada. The gamma ray attenuation measurement system is part of a multi-sensor core logger which also determines magnetic susceptibility, geochemistry and mineralogy on rock cores, and performs line-scan imaging.
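
A correction of the kind described (forcing gamma densities back to the immersion-method values) can be sketched as an ordinary least-squares calibration; the paper's actual site- and caliber-specific equations are not reproduced here, this only illustrates the procedure:

```python
# Fit a linear map from raw gamma-attenuation densities to the "true"
# immersion-method densities measured on a set of calibration samples.

def linear_correction(gamma_rho, immersion_rho):
    """Ordinary least squares for immersion = a * gamma + b."""
    n = len(gamma_rho)
    mx = sum(gamma_rho) / n
    my = sum(immersion_rho) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(gamma_rho, immersion_rho))
    sxx = sum((x - mx) ** 2 for x in gamma_rho)
    a = sxy / sxx
    return a, my - a * mx  # slope and intercept of the correction
```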

  16. The Effects of Social Context on Youth Outcomes: Studying Neighborhoods and Schools Simultaneously

    ERIC Educational Resources Information Center

    Brazil, Noli

    2016-01-01

    Background/Context: A long line of research has empirically examined the effects of social context on child and adolescent well-being. Scholars have paid particular attention to two specific levels of social context: the school and neighborhood. Although youths occupy these social contexts simultaneously, empirical research on schools and…

  17. The Empirical Link between Program Development and the Performance Needs of Professionals and Executives.

    ERIC Educational Resources Information Center

    Baehr, Melany E.

    1984-01-01

    An empirical procedure to determine areas of required development for personnel in three management hierarchies (line, professional, and sales) involves a job analysis of nine key positions in these hierarchies, determination of learning needs for each job function, and development of program curricula for each need. (SK)

  18. Continuity of states between the cholesteric → line hexatic transition and the condensation transition in DNA solutions

    DOE PAGES

    Yasar, Selcuk; Podgornik, Rudolf; Valle-Orero, Jessica; ...

    2014-11-05

    A new method of finely temperature-tuning osmotic pressure allows one to identify the cholesteric → line hexatic transition of oriented or unoriented long-fragment DNA bundles in monovalent salt solutions as first order, with a small but finite volume discontinuity. This transition is similar to the osmotic pressure-induced expanded → condensed DNA transition in polyvalent salt solutions at small enough polyvalent salt concentrations. Therefore there exists a continuity of states between the two. This finding, with the corresponding empirical equation of state, effectively relates the phase diagram of DNA solutions for monovalent salts to that for polyvalent salts and sheds some light on the complicated interactions between DNA molecules at high densities.

  19. Probabilistic power flow using improved Monte Carlo simulation method with correlated wind sources

    NASA Astrophysics Data System (ADS)

    Bie, Pei; Zhang, Buhan; Li, Hang; Deng, Weisi; Wu, Jiasi

    2017-01-01

    Probabilistic Power Flow (PPF) is a very useful tool for power system steady-state analysis. However, the correlation among different random power injections (like wind power) brings great difficulties to PPF calculation. Monte Carlo simulation (MCS) and analytical methods are two commonly used approaches to solving PPF. MCS has high accuracy but is very time consuming. An analytical method like the cumulants method (CM) has high computing efficiency, but calculating the cumulants is not convenient when the wind power output does not obey any typical distribution, especially when correlated wind sources are considered. In this paper, an improved Monte Carlo simulation method (IMCS) is proposed. The joint empirical distribution is applied to model the different wind power outputs. This method combines the advantages of both MCS and analytical methods. It not only has high computing efficiency, but also provides solutions with enough accuracy, which makes it very suitable for on-line analysis.
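
The "joint empirical distribution" idea can be sketched by resampling whole historical records, which preserves the correlation between wind farms without fitting any parametric distribution. Illustrative only; the paper's IMCS adds further machinery on top of this:

```python
# Draw whole historical records (one tuple per time step, one entry per
# wind farm) so that the cross-farm correlation present in the data is
# carried into every Monte Carlo sample automatically.
import random

def sample_joint_empirical(history, n_samples, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    return [rng.choice(history) for _ in range(n_samples)]
```

Each sampled tuple would then feed one deterministic power-flow solve in the MCS loop.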

  20. On fitting the Pareto Levy distribution to stock market index data: Selecting a suitable cutoff value

    NASA Astrophysics Data System (ADS)

    Coronel-Brizio, H. F.; Hernández-Montoya, A. R.

    2005-08-01

    The so-called Pareto-Levy or power-law distribution has been successfully used as a model to describe the probabilities associated with extreme variations of stock market indexes worldwide. The selection of the threshold parameter from empirical data, and consequently the determination of the exponent of the distribution, is often done using a simple graphical method based on a log-log scale, where a power-law probability plot shows a straight line with slope equal to the exponent of the power-law distribution. This procedure can be considered subjective, particularly with regard to the choice of the threshold or cutoff parameter. In this work, a more objective procedure based on a statistical measure of discrepancy between the empirical and the Pareto-Levy distribution is presented. The technique is illustrated for data sets from the New York Stock Exchange (DJIA) and the Mexican Stock Market (IPC).
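
An objective cutoff choice of the kind the abstract advocates can be sketched in the Clauset style: scan candidate thresholds, fit the exponent by maximum likelihood, and keep the threshold minimizing the Kolmogorov-Smirnov discrepancy between the empirical tail and the fitted Pareto tail. The paper's exact discrepancy statistic may differ; this is a generic sketch:

```python
import math

def fit_tail(data):
    """Return (cutoff, exponent) minimizing the KS tail discrepancy."""
    xs = sorted(data)
    best = None
    # Keep at least half the sample in the tail so the fit stays stable.
    for i, xmin in enumerate(xs[: len(xs) // 2]):
        tail = xs[i:]
        n = len(tail)
        # Hill / maximum-likelihood estimate of the Pareto exponent
        # for survival function P(X > x) = (xmin / x) ** (alpha - 1).
        alpha = 1.0 + n / sum(math.log(x / xmin) for x in tail)
        # KS distance between empirical tail CDF and fitted Pareto CDF.
        ks = max(abs(k / n - (1.0 - (xmin / x) ** (alpha - 1.0)))
                 for k, x in enumerate(tail))
        if best is None or ks < best[0]:
            best = (ks, xmin, alpha)
    return best[1], best[2]
```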

  1. Analysis of PH3 spectra in the Octad range 2733-3660 cm-1

    NASA Astrophysics Data System (ADS)

    Nikitin, A. V.; Ivanova, Y. A.; Rey, M.; Tashkun, S. A.; Toon, G. C.; Sung, K.; Tyuterev, Vl. G.

    2017-12-01

    Improved analysis of positions and intensities of phosphine spectral lines in the Octad region 2733-3660 cm-1 is reported. Some 5768 positions and 1752 intensities were modelled with RMS deviations of 0.00185 cm-1 and 10.9%, respectively. Based on an ab initio potential energy surface, the full Hamiltonian of phosphine nuclear motion was reduced to an effective Hamiltonian using high-order Contact Transformations method adapted to polyads of symmetric top AB3-type molecules with a subsequent empirical optimization of parameters. More than 2000 new ro-vibrational lines were assigned that include transitions for all 13 vibrational Octad sublevels. This new fitting of measured positions and intensities considerably improved the accuracy of line parameters in the calculated database. A comparison of our results with experimental spectra of PNNL showed that the new set of line parameters from this work permits better simulation of observed cross-sections than the HITRAN2012 linelist. In the 2733-3660 cm-1 range, our integrated intensities show a good consistency with recent ab initio variational calculations.

  2. Comment on "Classification of aerosol properties derived from AERONET direct sun data" by Gobbi et al. (2007)

    NASA Astrophysics Data System (ADS)

    O'Neill, N. T.

    2010-10-01

    It is pointed out that the graphical aerosol classification method of Gobbi et al. (2007) can be interpreted as a manifestation of fundamental analytical relations whose existence depends on the simple assumption that the optical effects of aerosols are essentially bimodal in nature. The families of contour lines in their "Ada" curvature space are essentially empirical and discretized illustrations of analytical parabolic forms in (α, α') space (the space formed by the continuously differentiable Angstrom exponent and its spectral derivative).

  3. New method for determining temperature and emission measure during solar flares from light curves of soft X-ray line fluxes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bornmann, P.L.

    I describe a new property of soft X-ray line fluxes observed during the decay phase of solar flares and a technique for using this property to determine the plasma temperature and emission measure as functions of time. The soft X-ray line fluxes analyzed in this paper were observed during the decay phase of the 1980 November 5 flare by the X-Ray Polychromator (XRP) instrument on board the Solar Maximum Mission (SMM). The resonance, intercombination, and forbidden lines of Ne IX, Mg XI, Si XIII, S XV, Ca XIX, and Fe XXV, as well as the Lyman-α line of O VIII and the resonance lines of Fe XIX, were observed. The rates at which the observed line fluxes decayed were not constant. For all but the highest temperature lines observed, the rate changed abruptly, causing the fluxes to fall at a more rapid rate later in the flare decay. These changes occurred at earlier times for lines formed at higher temperatures. This behavior is proposed to be due to the decreasing temperature of the flare plasma tracking the rise and subsequent fall of each line emissivity function. This explanation is used to empirically model the observed light curves and to estimate the temperature and the change in emission measure of the plasma as a function of time during the decay phase. Estimates are made of various plasma parameters based on the model results.

  4. Response functions for neutron skyshine analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gui, A.A.; Shultis, J.K.; Faw, R.E.

    1997-02-01

    Neutron and associated secondary photon line-beam response functions (LBRFs) for point monodirectional neutron sources are generated using the MCNP Monte Carlo code for use in neutron skyshine analysis employing the integral line-beam method. The LBRFs are evaluated at 14 neutron source energies ranging from 0.01 to 14 MeV and at 18 emission angles from 1 to 170 deg, as measured from the source-to-detector axis. The neutron and associated secondary photon conical-beam response functions (CBRFs) for azimuthally symmetric neutron sources are also evaluated at 13 neutron source energies in the same energy range and at 13 polar angles of source collimation from 1 to 89 deg. The response functions are approximated by an empirical three-parameter function of the source-to-detector distance. These response function approximations are available for a source-to-detector distance up to 2,500 m and, for the first time, give dose equivalent responses that are required for modern radiological assessments. For the CBRFs, ground correction factors for neutrons and secondary photons are calculated and also approximated by empirical formulas for use in air-over-ground neutron skyshine problems with azimuthal symmetry. In addition, simple procedures are proposed for humidity and atmospheric density corrections.

  5. Pre- and Post-equinox ROSINA production rates calculated using a realistic empirical coma model derived from AMPS-DSMC simulations of comet 67P/Churyumov-Gerasimenko

    NASA Astrophysics Data System (ADS)

    Hansen, Kenneth; Altwegg, Kathrin; Berthelier, Jean-Jacques; Bieler, Andre; Calmonte, Ursina; Combi, Michael; De Keyser, Johan; Fiethe, Björn; Fougere, Nicolas; Fuselier, Stephen; Gombosi, Tamas; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Lena; Rubin, Martin; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu

    2016-04-01

    We have previously used results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model to create an empirical model of the near comet coma (<400 km) of comet 67P for the pre-equinox orbit of comet 67P/Churyumov-Gerasimenko. In this work we extend the empirical model to the post-equinox, post-perihelion time period. In addition, we extend the coma model to significantly further from the comet (~100,000-1,000,000 km). The empirical model characterizes the neutral coma in a comet centered, sun fixed reference frame as a function of heliocentric distance, radial distance from the comet, local time and declination. Furthermore, we have generalized the model beyond application to 67P by replacing the heliocentric distance parameterizations and mapping them to production rates. Using this method, the model becomes significantly more general and can be applied to any comet. The model is a significant improvement over simpler empirical models, such as the Haser model. For 67P, the DSMC results are, of course, a more accurate representation of the coma at any given time, but the advantage of a mean state, empirical model is the ease and speed of use. One application of the empirical model is to de-trend the spacecraft motion from the ROSINA COPS and DFMS data (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis, Comet Pressure Sensor, Double Focusing Mass Spectrometer). The ROSINA instrument measures the neutral coma density at a single point and the measured value is influenced by the location of the spacecraft relative to the comet and the comet-sun line. Using the empirical coma model we can correct for the position of the spacecraft and compute a total production rate based on the single point measurement. In this presentation we will present the coma production rate as a function of heliocentric distance both pre- and post-equinox and perihelion.
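
In the simplest spherically symmetric limit (Haser-type outflow, no photo-destruction), the de-trending step reduces to inverting n = Q / (4 π r² v): the single-point density and the spacecraft's cometocentric distance give the total production rate. The empirical model replaces this with a direction-dependent coma shape, but the round trip is the same idea. A sketch with assumed SI units:

```python
import math

def number_density(Q, r, v):
    """Forward model: density (m^-3) at cometocentric distance r (m) for a
    spherically symmetric outflow with production rate Q (s^-1) and
    constant gas speed v (m/s)."""
    return Q / (4.0 * math.pi * r * r * v)

def production_rate(n_measured, r, v):
    """Invert the forward model for a single-point density measurement."""
    return n_measured * 4.0 * math.pi * r * r * v
```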

  6. Managing Human Resource Capabilities for Sustainable Competitive Advantage: An Empirical Analysis from Indian Global Organisations

    ERIC Educational Resources Information Center

    Khandekar, Aradhana; Sharma, Anuradha

    2005-01-01

    Purpose: The purpose of this article is to examine the role of human resource capability (HRC) in organisational performance and sustainable competitive advantage (SCA) in Indian global organisations. Design/Methodology/Approach: To carry out the present study, an empirical research on a random sample of 300 line or human resource managers from…

  7. Chromospheric and Transition region He lines during a flare

    NASA Astrophysics Data System (ADS)

    Falchi, A.; Mauas, P. J. D.; Andretta, V.; Teriaca, L.; Cauzzi, G.; Falciani, R.; Smaldone, L. A.

    An observing campaign (SOHO JOP 139), coordinated between ground-based and SOHO instruments, was planned to obtain simultaneous spectroheliograms of the same area in several spectral lines. The chromospheric lines Ca II K, Hα and Na I D, as well as the He I 10830, 5876, 584 and 304 Å lines, have been observed. These observations allow us to build semi-empirical models of the atmosphere before and during a small flare. With these models, constructed to match the observed line profiles, we can test the He abundance value.

  8. A Physically Motivated and Empirically Calibrated Method to Measure the Effective Temperature, Metallicity, and Ti Abundance of M Dwarfs

    NASA Astrophysics Data System (ADS)

    Veyette, Mark J.; Muirhead, Philip S.; Mann, Andrew W.; Brewer, John M.; Allard, France; Homeier, Derek

    2017-12-01

    The ability to perform detailed chemical analysis of Sun-like F-, G-, and K-type stars is a powerful tool with many applications, including studying the chemical evolution of the Galaxy and constraining planet formation theories. Unfortunately, complications in modeling cooler stellar atmospheres hinder similar analyses of M dwarf stars. Empirically calibrated methods to measure M dwarf metallicity from moderate-resolution spectra are currently limited to measuring overall metallicity and rely on astrophysical abundance correlations in stellar populations. We present a new, empirical calibration of synthetic M dwarf spectra that can be used to infer effective temperature, Fe abundance, and Ti abundance. We obtained high-resolution (R ˜ 25,000), Y-band (˜1 μm) spectra of 29 M dwarfs with NIRSPEC on Keck II. Using the PHOENIX stellar atmosphere modeling code (version 15.5), we generated a grid of synthetic spectra covering a range of temperatures, metallicities, and alpha-enhancements. From our observed and synthetic spectra, we measured the equivalent widths of multiple Fe I and Ti I lines and a temperature-sensitive index based on the FeH band head. We used abundances measured from widely separated solar-type companions to empirically calibrate transformations to the observed indices and equivalent widths that force agreement with the models. Our calibration achieves precisions in T eff, [Fe/H], and [Ti/Fe] of 60 K, 0.1 dex, and 0.05 dex, respectively, and is calibrated for 3200 K < T eff < 4100 K, -0.7 < [Fe/H] < +0.3, and -0.05 < [Ti/Fe] < +0.3. This work is a step toward detailed chemical analysis of M dwarfs at a precision similar to what has been achieved for FGK stars.

  9. Characterization of Type Ia Supernova Light Curves Using Principal Component Analysis of Sparse Functional Data

    NASA Astrophysics Data System (ADS)

    He, Shiyuan; Wang, Lifan; Huang, Jianhua Z.

    2018-04-01

    With growing data from ongoing and future supernova surveys, it is possible to empirically quantify the shapes of SNIa light curves in more detail, and to quantitatively relate the shape parameters with the intrinsic properties of SNIa. Building such relationships is critical in controlling systematic errors associated with supernova cosmology. Based on a collection of well-observed SNIa samples accumulated in the past years, we construct an empirical SNIa light curve model using a statistical method called the functional principal component analysis (FPCA) for sparse and irregularly sampled functional data. Using this method, the entire light curve of an SNIa is represented by a linear combination of principal component functions, and the SNIa is represented by a few numbers called “principal component scores.” These scores are used to establish relations between light curve shapes and physical quantities such as intrinsic color, interstellar dust reddening, spectral line strength, and spectral classes. These relations allow for descriptions of some critical physical quantities based purely on light curve shape parameters. Our study shows that some important spectral feature information is being encoded in the broad band light curves; for instance, we find that the light curve shapes are correlated with the velocity and velocity gradient of the Si II λ6355 line. This is important for supernova surveys (e.g., LSST and WFIRST). Moreover, the FPCA light curve model is used to construct the entire light curve shape, which in turn is used in a functional linear form to adjust intrinsic luminosity when fitting distance models.

  10. Physical Conditions of a Lensed Star-Forming Galaxy at Z=1.7

    NASA Technical Reports Server (NTRS)

    Rigby, Jane; Wuyts, E.; Gladders, M.; Sharon, K.; Becker, G. D.

    2010-01-01

    We report rest-frame optical Keck/NIRSPEC spectroscopy of the brightest lensed galaxy yet discovered, RCSGA 032727-132609 at z=1.7037. From precise measurements of the nebular lines, we infer a number of physical properties: redshift, extinction, star formation rate, ionization parameter, electron density, electron temperature, oxygen abundance, and N/O, Ne/O, and Ar/O abundance ratios. The limit on [O III] 4363 Å tightly constrains the oxygen abundance via the "direct" or Te method, for the first time in an average-metallicity galaxy at z ≈ 2. We compare this result to several standard "bright-line" O abundance diagnostics, thereby testing these empirically calibrated diagnostics in situ. Finally, we explore the positions of lensed and unlensed galaxies in standard diagnostic diagrams, and explore the diversity of ionization conditions and mass-metallicity ratios at z=2.

  11. The Physical Conditions of a Lensed Star-Forming Galaxy at Z=1.7

    NASA Technical Reports Server (NTRS)

    Rigby, Jane; Wuyts, E.; Gladders, M.; Sharon, K.; Becker, G.

    2011-01-01

    We report rest-frame optical Keck/NIRSPEC spectroscopy of the brightest lensed galaxy yet discovered, RCSGA 032727-132609 at z=1.7037. From precise measurements of the nebular lines, we infer a number of physical properties: redshift, extinction, star formation rate, ionization parameter, electron density, electron temperature, oxygen abundance, and N/O, Ne/O, and Ar/O abundance ratios. The limit on [O III] 4363 Å tightly constrains the oxygen abundance via the "direct" or Te method, for the first time in an average-metallicity galaxy at z ≈ 2. We compare this result to several standard "bright-line" O abundance diagnostics, thereby testing these empirically calibrated diagnostics in situ. Finally, we explore the positions of lensed and unlensed galaxies in standard diagnostic diagrams, and explore the diversity of ionization conditions and mass-metallicity ratios at z=2.

  12. Analysis of Ion Composition Estimation Accuracy for Incoherent Scatter Radars

    NASA Astrophysics Data System (ADS)

    Martínez Ledesma, M.; Diaz, M. A.

    2017-12-01

    The Incoherent Scatter Radar (ISR) is one of the most powerful sounding methods developed to study the ionosphere. This radar system determines the plasma parameters by sending powerful electromagnetic pulses into the ionosphere and analyzing the received backscatter. This analysis provides information about parameters such as electron and ion temperatures, electron densities, ion composition, and ion drift velocities. Nevertheless, in some cases the ISR analysis has ambiguities in the determination of the plasma characteristics. Of particular relevance is the ion composition and temperature ambiguity between the F1 and the lower F2 layers. In this case very similar signals are obtained with different mixtures of molecular ions (NO+ and O2+) and atomic oxygen ions (O+), and consequently it is not possible to completely discriminate between them. The most common solution to this problem is the use of empirical or theoretical models of the ionosphere in the fitting of ambiguous data. More recent works make use of parameters estimated from the plasma line band of the radar to reduce the number of parameters to determine. In this work we determine the estimation error of the ion composition ambiguity when using plasma line electron density measurements. The sensitivity of the ion composition estimation has also been calculated as a function of the accuracy of the ionospheric model, showing that correct estimation depends strongly on the capacity of the model to approximate the real values. Monte Carlo simulations of data fitting at different signal-to-noise ratios (SNRs) have been performed to obtain valid and invalid estimation probability curves. This analysis provides a method to determine the probability of erroneous estimation for different signal fluctuations, and can also serve as an empirical means of comparing the efficiency of different algorithms when solving the ion composition ambiguity.
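The Monte Carlo procedure described above, classifying fits as valid or invalid at several SNRs to build probability curves, can be sketched generically. The toy model below (a least-squares amplitude fit to a known sinusoidal template) stands in for the actual incoherent-scatter forward model, and the tolerance and trial count are arbitrary illustrative choices:

```python
import numpy as np

# At each SNR, fit a simple model to many noisy realizations, label an
# estimate "invalid" when it falls farther than a tolerance from the
# truth, and record the probability of a valid estimate.

def fit_amplitude(signal, template):
    # least-squares amplitude of a known template (toy "plasma parameter")
    return signal @ template / (template @ template)

def valid_probability(snr, n_trials=2000, tol=0.2):
    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 1.0, 64)
    template = np.sin(2 * np.pi * 3 * t)
    true_amp = 1.0
    noise_sigma = true_amp / snr
    valid = 0
    for _ in range(n_trials):
        noisy = true_amp * template + rng.normal(0.0, noise_sigma, t.size)
        if abs(fit_amplitude(noisy, template) - true_amp) < tol:
            valid += 1
    return valid / n_trials

# the probability of a valid estimate rises steeply with SNR
for snr in (0.5, 2.0, 10.0):
    print(snr, valid_probability(snr))
```

The resulting curve of valid-estimation probability versus SNR is the kind of diagnostic the abstract describes for comparing algorithms.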

  13. ExoMol line list - XXI. Nitric Oxide (NO)

    NASA Astrophysics Data System (ADS)

    Wong, Andy; Yurchenko, Sergei N.; Bernath, Peter; Müller, Holger S. P.; McConkey, Stephanie; Tennyson, Jonathan

    2017-09-01

    Line lists for the X 2Π electronic ground state of the parent isotopologue of nitric oxide (14N16O) and five other major isotopologues (14N17O, 14N18O, 15N16O, 15N17O and 15N18O) are presented. The line lists are constructed using empirical energy levels (and line positions) and high-level ab initio intensities. The energy levels were obtained using a combination of two approaches, from an effective Hamiltonian and from solving the rovibronic Schrödinger equation variationally. The effective Hamiltonian model was obtained through a fit to the experimental line positions of NO available in the literature for all six isotopologues using the programs SPFIT and SPCAT. The variational model was built through a least-squares fit of the ab initio potential and spin-orbit curves to the experimentally derived energies and experimental line positions of the main isotopologue only, using the DUO program. The ab initio potential energy, spin-orbit and dipole moment curves (PEC, SOC and DMC) are computed using high-level ab initio methods, and the MARVEL method is used to obtain energies of NO from experimental transition frequencies. The line lists are constructed for each isotopologue based on the use of the most accurate energy levels and the ab initio DMC. Each line list covers a wavenumber range from 0 to 40 000 cm-1 with approximately 22 000 rovibronic states and 2.3-2.6 million transitions extending to Jmax = 184.5 and vmax = 51. Partition functions are also calculated up to a temperature of 5000 K. The calculated absorption line intensities at 296 K using these line lists show excellent agreement with those included in the HITRAN and HITEMP databases. The computed NO line lists are the most comprehensive to date, covering a wider wavenumber and temperature range than both the HITRAN and HITEMP databases. These line lists are also more accurate than those used in HITEMP. The full line lists are available from the CDS http://cdsarc.u-strasbg.fr and ExoMol www.exomol.com databases; data will also be available from CDMS http://www.cdms.de.

  14. Empirical projection-based basis-component decomposition method

    NASA Astrophysics Data System (ADS)

    Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland

    2009-02-01

    Advances in the development of semiconductor-based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which, in addition to the conventional approach of Alvarez and Macovski, a third basis component is introduced, e.g., a gadolinium-based CT contrast material. After decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach in terms of image noise and image bias (artifacts) and find that only a moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.
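The projection-domain idea, calibrating a low-order polynomial map from measured bin attenuations to basis-material line integrals instead of running a per-ray maximum-likelihood fit, can be sketched as follows. Everything here is a toy assumption: the three-bin detector response, the sensitivity matrix, and the feature set are illustrative stand-ins, not the paper's spectral model.

```python
import numpy as np

# Learn a polynomial map from the three measured bin attenuations
# (q1, q2, q3) to each basis-material line integral, with coefficients
# calibrated on known phantom compositions.

def features(q):
    q1, q2, q3 = q
    # low-order polynomial feature vector in the bin attenuations
    return np.array([1.0, q1, q2, q3, q1*q2, q1*q3, q2*q3,
                     q1**2, q2**2, q3**2])

M = np.array([[0.9, 0.3, 1.2],
              [0.5, 0.6, 0.8],
              [0.2, 0.9, 0.4]])   # toy spectral sensitivity matrix

def toy_forward(a):
    # basis line integrals a -> measured bin attenuations q, with a mild
    # saturation-type nonlinearity: q solves q + 0.05*q**2 = M @ a
    m = M @ a
    return (np.sqrt(1.0 + 0.2 * m) - 1.0) / 0.1

# Calibration: known basis line integrals on a grid, "measured" bin data
grid = np.array([[x, y, z] for x in (0, 1, 2)
                           for y in (0, 1, 2)
                           for z in (0, 1, 2)], dtype=float)
F = np.array([features(toy_forward(a)) for a in grid])
coef, *_ = np.linalg.lstsq(F, grid, rcond=None)  # one column per material

def decompose(q):
    """Empirical projection-domain decomposition of one measurement."""
    return features(q) @ coef

a_true = np.array([0.7, 1.1, 0.4])
print(decompose(toy_forward(a_true)))   # should be close to a_true
```

After this calibrated decomposition, each basis-component sinogram would go through ordinary filtered backprojection, which is where the speed advantage over per-ray ML fitting comes from.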

  15. An empirical spectroscopic database for acetylene in the regions of 5850-6341 cm-1 and 7000-9415 cm-1

    NASA Astrophysics Data System (ADS)

    Lyulin, O. M.; Campargue, A.

    2017-12-01

    Six studies have been recently devoted to a systematic analysis of the high-resolution near-infrared absorption spectrum of acetylene recorded by cavity ring-down spectroscopy (CRDS) in Grenoble and by Fourier-transform spectroscopy (FTS) in Brussels and Hefei. On the basis of these works, in the present contribution we construct an empirical database for acetylene in the 5850-9415 cm-1 region, excluding the 6341-7000 cm-1 interval corresponding to the very strong ν1+ν3 manifold. Our database gathers and extends information included in our CRDS and FTS studies. In particular, the intensities of about 1700 lines measured by CRDS in the 7244-7920 cm-1 region are reported for the first time, together with those of several bands of 12C13CH2 present in natural isotopic abundance in the acetylene sample. The Herman-Wallis coefficients of most of the bands are derived from a fit of the measured intensity values. A recommended line list is provided, with positions calculated using empirical spectroscopic parameters of the lower and upper vibrational energy levels and intensities calculated using the derived Herman-Wallis coefficients. This approach makes it possible to complete the experimental list by adding missing lines and to improve poorly determined positions and intensities. As a result, the constructed line list includes a total of 11113 transitions belonging to 150 bands of 12C2H2 and 29 bands of 12C13CH2. For comparison, the HITRAN database in the same region includes 869 transitions of 14 bands, all belonging to 12C2H2. Our weakest lines have an intensity on the order of 10-29 cm/molecule, about three orders of magnitude smaller than the HITRAN intensity cut-off. Line profile parameters are added to the line list, which is provided in HITRAN format. The comparison of the acetylene database with the HITRAN2012 line list and with results obtained using the global effective operator approach is discussed in terms of completeness and accuracy.
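The Herman-Wallis fit mentioned above, deriving band coefficients from measured line intensities and then using them to compute a recommended list, can be sketched in simplified form. The rotational envelope below is a made-up stand-in for the full line-strength expression, and the factor F(m) = (1 + A1·m + A2·m²)² is one common Herman-Wallis parameterization, not necessarily the exact form used in the paper.

```python
import numpy as np

# Measured line intensities in a band are modeled as a rotational
# envelope multiplied by a Herman-Wallis factor F(m), where m indexes
# position in the P branch (m < 0) and R branch (m > 0).

def envelope(m, s0=1.0, temp_param=0.02):
    # simplified rotational envelope: |m| times a Boltzmann-like falloff
    return s0 * np.abs(m) * np.exp(-temp_param * m**2)

def fit_herman_wallis(m, intensities):
    """Least-squares estimate of (A1, A2) from measured intensities."""
    # sqrt(I / envelope) - 1 = A1*m + A2*m**2 (valid while the factor > 0)
    ratio = np.sqrt(intensities / envelope(m)) - 1.0
    design = np.vstack([m, m**2]).T
    coef, *_ = np.linalg.lstsq(design, ratio, rcond=None)
    return coef

m = np.array([-5, -4, -3, -2, -1, 1, 2, 3, 4, 5], dtype=float)
a1_true, a2_true = 0.01, -0.002
obs = envelope(m) * (1 + a1_true * m + a2_true * m**2) ** 2
print(fit_herman_wallis(m, obs))
```

Once (A1, A2) are fitted per band, intensities of unmeasured or poorly measured lines follow from the same expression, which is how the recommended list fills gaps in the experimental one.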

  16. Infrared Abundances and the Chemical Enrichment of the Universe

    NASA Astrophysics Data System (ADS)

    Smith, J. D.

    Elements heavier than helium make up only a small fraction of the mass of the present day Universe, yet they heavily impact how galaxies and stars form and evolve. The chemical enrichment history of the Universe therefore forms an essential part of any complete understanding of galaxy evolution, and with the advent of incredibly sensitive IR/sub-mm/radio facilities, we are poised to begin unraveling it. Nonetheless, significant, decades-old problems plague even the most data-rich local methods of measuring gas phase metal abundance, with large (up to 10x) disagreements stemming principally from unknown and unseen temperature structure in ionized gas. The far-infrared fine structure lines of oxygen offer a path out of this deadlock. Oxygen is the most important coolant of ionized gas, and the dominant metal abundance indicator. Its ground state fine structure lines, in particular [OIII] 88 μm, arise from such low-lying energy levels that they are insensitive to temperature. And unlike the faint "auroral" lines used by the gold-standard direct abundance method, they are bright, and readily observable at all metallicities. Indeed this crucial line has already been observed with ALMA in a number of galaxies directly in the era of reionization at z=7-9. Herschel has mapped and archived more than 150 nearby (d<25 Mpc) galaxies on scales of 1 kiloparsec and below in the important [OIII] 88 μm line. We propose a comprehensive program to develop the far-infrared fine structure lines of oxygen into direct, empirical gas phase metal abundance measures. We will validate directly against the largest, deepest survey of direct spectroscopic optical metal abundances ever undertaken - the LBT/MODS program CHAOS. We will leverage spatially matched nebular emission lines ([NeII], [NeIII], [SIII], [SIV]) from Spitzer/IRS for ionization balance. 
We will employ our extensive optical IFU data (PPAK, MUSE, and VENGA) for strong line abundance comparisons, and to bridge the physical scales between Herschel/Spitzer and CHAOS. In addition, we will combine and validate decomposed radio free-free continuum as an extinction-free substitute for recombination emission for hydrogen normalization. This is the first time this unique combination of ionized gas tracers from optical through radio - spanning a factor of 200,000 in wavelength - will have been brought together on the same physical scales in a large and widely varied sample of nearby galaxies. The suite of hybrid abundance indicators we produce will enable empirical, intercomparable, temperature-insensitive, extinction-free measurements of gas phase metal abundance both locally and at high-redshift, even in dusty systems like ULIRGs where inferring abundance has traditionally been impossible. The results will impact, if not resolve, a decades long debate on the true oxygen abundance scale for galaxies. As natural byproducts of this study, we will also (1) construct a system of best practices for determining abundances of high redshift galaxies from fine structure emission lines and related measurements, and (2) produce and deliver a large line atlas of many thousands of independent spatially resolved regions within galaxies in their principal optical, mid- and far-infrared emission lines, enabling many additional studies.

  17. Empirical mass-loss rates for 25 O and early B stars, derived from Copernicus observations

    NASA Technical Reports Server (NTRS)

    Gathier, R.; Lamers, H. J. G. L. M.; Snow, T. P.

    1981-01-01

    Ultraviolet line profiles are fitted with theoretical line profiles in the cases of 25 stars covering a spectral type range from O4 to B1, including all luminosity classes. Ion column densities are compared for the determination of wind ionization, and it is found that the O VI/N V ratio is dependent on the mean density of the wind and not on effective temperature value, while the Si IV/N V ratio is temperature-dependent. The column densities are used to derive a mass-loss rate parameter that is empirically correlated against the mass-loss rate by means of standard stars with well-determined rates from IR or radio data. The empirical mass-loss rates obtained are compared with those derived by others and found to vary by as much as a factor of 10, which is shown to be due to uncertainties or errors in the ionization fractions of models used for wind ionization balance prediction.

  18. Artifact removal from EEG data with empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Grubov, Vadim V.; Runnova, Anastasiya E.; Efremova, Tatyana Yu.; Hramov, Alexander E.

    2017-03-01

    In this paper we propose a novel method for removing physiological artifacts, caused by intense activity of facial and neck muscles and by other movements, from experimental human EEG recordings. The method is based on analysis of EEG signals with empirical mode decomposition (the Hilbert-Huang transform). We introduce the mathematical algorithm of the method with the following steps: empirical mode decomposition of the EEG signal, identification of the empirical modes containing artifacts, removal of those modes, and reconstruction of the initial EEG signal. We test the method by filtering movement artifacts from experimental human EEG signals and demonstrate its high efficiency.
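The pipeline above (decompose, drop artifact modes, reconstruct) can be illustrated with a minimal empirical mode decomposition. This is a sketch, not the authors' implementation: real EMD uses cubic-spline envelopes and stricter sifting stopping criteria, while this version uses linear-interpolation envelopes to stay short, and the choice of which modes are "artifacts" is left to the caller.

```python
import numpy as np

def local_extrema(x):
    i = np.arange(1, len(x) - 1)
    maxima = i[(x[i] > x[i - 1]) & (x[i] > x[i + 1])]
    minima = i[(x[i] < x[i - 1]) & (x[i] < x[i + 1])]
    return maxima, minima

def sift(x, n_iter=10):
    # repeatedly subtract the mean of the upper and lower envelopes
    h = x.copy()
    t = np.arange(len(x))
    for _ in range(n_iter):
        maxima, minima = local_extrema(h)
        if len(maxima) < 2 or len(minima) < 2:
            break
        upper = np.interp(t, maxima, h[maxima])
        lower = np.interp(t, minima, h[minima])
        h = h - (upper + lower) / 2.0
    return h

def emd(x, max_modes=5):
    # peel off intrinsic mode functions until the residue is monotone-ish
    modes, residue = [], x.copy()
    for _ in range(max_modes):
        maxima, minima = local_extrema(residue)
        if len(maxima) < 2 or len(minima) < 2:
            break
        imf = sift(residue)
        modes.append(imf)
        residue = residue - imf
    return modes, residue

def remove_artifact_modes(x, artifact_indices):
    # reconstruct the signal from all modes not flagged as artifacts
    modes, residue = emd(x)
    kept = [m for k, m in enumerate(modes) if k not in artifact_indices]
    return sum(kept) + residue

t = np.linspace(0, 1, 512)
eeg_like = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
cleaned = remove_artifact_modes(eeg_like, artifact_indices={0})
```

By construction the modes plus residue sum back to the input, so dropping a mode removes exactly that component's contribution from the reconstruction.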

  19. Biases in Metallicity Measurements from Global Galaxy Spectra: The Effects of Flux Weighting and Diffuse Ionized Gas Contamination

    NASA Astrophysics Data System (ADS)

    Sanders, Ryan L.; Shapley, Alice E.; Zhang, Kai; Yan, Renbin

    2017-12-01

    Galaxy metallicity scaling relations provide a powerful tool for understanding galaxy evolution, but obtaining unbiased global galaxy gas-phase oxygen abundances requires proper treatment of the various line-emitting sources within spectroscopic apertures. We present a model framework that treats galaxies as ensembles of H II and diffuse ionized gas (DIG) regions of varying metallicities. These models are based upon empirical relations between line ratios and electron temperature for H II regions, and DIG strong-line ratio relations from SDSS-IV MaNGA IFU data. Flux-weighting effects and DIG contamination can significantly affect properties inferred from global galaxy spectra, biasing metallicity estimates by more than 0.3 dex in some cases. We use observationally motivated inputs to construct a model matched to typical local star-forming galaxies, and quantify the biases in strong-line ratios, electron temperatures, and direct-method metallicities as inferred from global galaxy spectra relative to the median values of the H II region distributions in each galaxy. We also provide a generalized set of models that can be applied to individual galaxies or galaxy samples in atypical regions of parameter space. We use these models to correct for the effects of flux-weighting and DIG contamination in the local direct-method mass-metallicity and fundamental metallicity relations, and in the mass-metallicity relation based on strong-line metallicities. Future photoionization models of galaxy line emission need to include DIG emission and represent galaxies as ensembles of emitting regions with varying metallicity, instead of as single H II regions with effective properties, in order to obtain unbiased estimates of key underlying physical properties.

  20. Computer simulation of a geomagnetic substorm

    NASA Technical Reports Server (NTRS)

    Lyon, J. G.; Brecht, S. H.; Huba, J. D.; Fedder, J. A.; Palmadesso, P. J.

    1981-01-01

    A global two-dimensional simulation of a substormlike process occurring in earth's magnetosphere is presented. The results are consistent with an empirical substorm model - the neutral-line model. Specifically, the introduction of a southward interplanetary magnetic field forms an open magnetosphere. Subsequently, a substorm neutral line forms at about 15 earth radii or closer in the magnetotail, and plasma sheet thinning and plasma acceleration occur. Eventually the substorm neutral line moves tailward toward its presubstorm position.

  1. EMPIRICAL DETERMINATION OF EINSTEIN A-COEFFICIENT RATIOS OF BRIGHT [Fe II] LINES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giannini, T.; Antoniucci, S.; Nisini, B.

    The Einstein spontaneous rates (A-coefficients) of Fe+ lines have been computed by several authors, with results that differ from each other by up to 40%. Consequently, models for line emissivities suffer from uncertainties that in turn affect the determination of the physical conditions at the base of line excitation. We provide an empirical determination of the A-coefficient ratios of bright [Fe II] lines, which represents both a valid benchmark for theoretical computations and a reference for the physical interpretation of the observed lines. With the ESO Very Large Telescope X-shooter instrument, we obtained a spectrum of the bright Herbig-Haro object HH 1 between 3000 Å and 24700 Å. We detect around 100 [Fe II] lines, some of which have signal-to-noise ratios ≥ 100. Among these latter lines, we selected those emitted by the same level, whose dereddened intensity ratios are direct functions of the Einstein A-coefficient ratios. From the same X-shooter spectrum, we obtained an accurate estimate of the extinction toward HH 1 through intensity ratios of atomic species, H I recombination lines, and H2 ro-vibrational transitions. We provide seven reliable A-coefficient ratios between bright [Fe II] lines, which are compared with literature determinations. In particular, the A-coefficient ratios involving the brightest near-infrared lines (λ12570/λ16440 and λ13209/λ16440) are in better agreement with the predictions of the relativistic Hartree-Fock model of Quinet et al. However, none of the theoretical models predict A-coefficient ratios in agreement with all of our determinations. We also show that literature data for near-infrared intensity ratios agree better with our determinations than with theoretical expectations.
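The key relation behind this measurement is simple: for two lines emitted from the same upper level, the level population cancels, so the dereddened intensity ratio (in energy units) is I1/I2 = (A1/A2)(λ2/λ1), and the A-coefficient ratio follows directly from the fluxes. The flux ratio below is purely illustrative, not the HH 1 measurement.

```python
# For two lines from one upper level: I ∝ N_u * A * hc/λ, so
#   A1/A2 = (I1/I2) * (λ1/λ2)
# after dereddening the observed fluxes.

lam1, lam2 = 12570.0, 16440.0   # Angstrom: the bright [Fe II] NIR pair
flux_ratio = 1.11               # hypothetical dereddened I(12570)/I(16440)

a_ratio = flux_ratio * lam1 / lam2   # empirical A(12570)/A(16440)
print(round(a_ratio, 3))
```

This is why the extinction estimate matters so much: any error in dereddening the widely separated line pairs propagates directly into the inferred A-coefficient ratio.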

  2. Semi-empirical calculations of line-shape parameters and their temperature dependences for the ν6 band of CH3D perturbed by N2

    NASA Astrophysics Data System (ADS)

    Dudaryonok, A. S.; Lavrentieva, N. N.; Buldyreva, J.

    2018-06-01

    (J, K)-line broadening and shift coefficients, together with their temperature-dependence characteristics, are computed for the perpendicular (ΔK = ±1) ν6 band of the 12CH3D-N2 system. The computations are based on a semi-empirical approach consisting of analytical Anderson-type expressions multiplied by a few-parameter correction factor that accounts for various deviations from the approximations of Anderson's theory. A mathematically convenient form of the correction factor is chosen on the basis of experimental rotational dependences of line widths, and its parameters are fitted to selected experimental line widths at 296 K. To obtain the unknown CH3D polarizability in the excited vibrational state v6 for line-shift calculations, a parametric vibration-state-dependent expression is suggested, with two parameters adjusted to room-temperature experimental values of line shifts. Having been validated by comparison with experimental values available in the literature for various sub-branches of the band, this approach is used to generate massive datasets of line-shape parameters for the extended ranges of rotational quantum numbers (J up to 70 and K up to 20) typically requested for spectroscopic databases. To obtain the temperature-dependence characteristics of line widths and line shifts, computations are performed for various temperatures in the range 200-400 K recommended for HITRAN, and least-squares fit procedures are applied. For the line widths, a strong sub-branch dependence with increasing K is observed in the R- and P-branches; for the line shifts, such dependence is found for the Q-branch.

  3. An open-terrain line source model coupled with street-canyon effects to forecast carbon monoxide at traffic roundabout.

    PubMed

    Pandian, Suresh; Gokhale, Sharad; Ghoshal, Aloke Kumar

    2011-02-15

    A double-lane four-arm roundabout, where traffic movement is continuous in opposite directions and at different speeds, produces a zone within the road section in which emissions recirculate, creating a canyon-type effect. In this zone, thermally induced turbulence together with vehicle wake dominates over wind-driven turbulence, keeping pollutant emissions within the zone and resulting in roughly equal amounts of pollutants upwind and downwind, particularly during low winds. Beyond this region, however, the effect of winds becomes stronger, causing downwind movement of pollutants. Dispersion driven by such phenomena cannot be described accurately by an open-terrain line source model alone. This is demonstrated by estimating one-minute average carbon monoxide concentrations with an open-terrain line source model coupled to a street canyon model that captures the combined effect, describing dispersion at a non-signalized roundabout. The coupled model matched the measurements better than the line source model alone, reducing the prediction error by about 50%. The study further demonstrated this with traffic emissions calculated by field and semi-empirical methods. Copyright © 2010 Elsevier B.V. All rights reserved.
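The coupling idea, adding a street-canyon recirculation contribution to an open-terrain line-source estimate at the receptor, can be sketched with generic textbook forms. Both expressions below are illustrative stand-ins (an infinite crosswind line source and a simple box-model recirculation term), not the specific models used in the study, and the numbers are hypothetical.

```python
import math

# Receptor concentration modeled as an open-terrain line-source
# contribution plus a canyon-type recirculation contribution.

def line_source_term(q, sigma_z, u):
    """Ground-level infinite crosswind line source, g/(m*s) -> g/m^3."""
    return math.sqrt(2.0 / math.pi) * q / (sigma_z * u)

def canyon_recirculation_term(q, u_roof, width):
    """Simple box-model recirculation inside the canyon-like zone."""
    # floor on wind speed avoids blow-up during calm conditions
    return q / (max(u_roof, 0.1) * width)

def coupled_concentration(q, sigma_z, u, u_roof, width):
    return (line_source_term(q, sigma_z, u)
            + canyon_recirculation_term(q, u_roof, width))

# Example: CO emission rate 0.005 g/(m*s) under low-wind conditions
print(coupled_concentration(q=0.005, sigma_z=2.0, u=1.0,
                            u_roof=0.5, width=20.0))
```

At low wind speeds the recirculation term grows relative to the open-terrain term, which is the regime where the study found the line-source model alone to under-perform.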

  4. An Interpolation Method for Obtaining Thermodynamic Properties Near Saturated Liquid and Saturated Vapor Lines

    NASA Technical Reports Server (NTRS)

    Nguyen, Huy H.; Martin, Michael A.

    2003-01-01

    The availability and proper utilization of fluid properties is of fundamental importance in the process of mathematical modeling of propulsion systems. Real fluid properties provide the bridge between the realm of pure analysis and empirical reality. The two most common approaches used to formulate thermodynamic properties of pure substances are fundamental (or characteristic) equations of state (Helmholtz and Gibbs functions) and a piecemeal approach that is described, for example, in Adebiyi and Russell (1992). This paper neither presents a different method to formulate thermodynamic properties of pure substances nor validates the aforementioned approaches. Rather its purpose is to present a method to be used to facilitate the accurate interpretation of fluid thermodynamic property data generated by existing property packages. There are two parts to this paper. The first part of the paper shows how efficient and usable property tables were generated, with the minimum number of data points, using an aerospace industry standard property package (based on the fundamental equations of state approach). The second part describes an innovative interpolation technique that has been developed to properly obtain thermodynamic properties near the saturated liquid and saturated vapor lines.

  5. Application Of Empirical Phase Diagrams For Multidimensional Data Visualization Of High Throughput Microbatch Crystallization Experiments.

    PubMed

    Klijn, Marieke E; Hubbuch, Jürgen

    2018-04-27

    Protein phase diagrams are a tool to investigate cause and consequence of solution conditions on protein phase behavior. The effects are scored according to aggregation morphologies such as crystals or amorphous precipitates. Solution conditions affect morphological features, such as crystal size, as well as kinetic features, such as crystal growth time. Commonly used data visualization techniques include individual line graphs or symbol-based phase diagrams. These techniques have limitations in terms of handling large datasets, comprehensiveness, or completeness. To eliminate these limitations, morphological and kinetic features obtained from crystallization images generated in high-throughput microbatch experiments have been visualized with radar charts in combination with the empirical phase diagram (EPD) method. Morphological features (crystal size, shape, and number, as well as precipitate size) and kinetic features (crystal and precipitate onset and growth times) were extracted for 768 solutions with varying chicken egg white lysozyme concentration, salt type, ionic strength, and pH. Image-based aggregation morphology and kinetic features were compiled into a single and easily interpretable figure, thereby showing that the EPD method can support high-throughput crystallization experiments in terms of both data volume and data complexity. Copyright © 2018. Published by Elsevier Inc.

  6. Use of CFD Analyses to Predict Disk Friction Loss of Centrifugal Compressor Impellers

    NASA Astrophysics Data System (ADS)

    Cho, Leesang; Lee, Seawook; Cho, Jinsoo

    To improve the total efficiency of centrifugal compressors, it is necessary to reduce disk friction loss, which is expressed as a power loss. In this study, the disk friction loss due to the effects of axial clearance and surface roughness is analyzed, and methods to reduce it are proposed. The rotating reference frame technique in a commercial CFD tool (FLUENT) is used for steady-state analysis of the centrifugal compressor. Numerical results of the CFD analysis are compared with theoretical results from established empirical equations. The disk friction loss of the impeller decreases with increasing axial clearance as long as the clearance between the impeller disk and the casing is smaller than the boundary layer thickness. In addition, the disk friction loss of the impeller increases with surface roughness, following a pattern similar to that of existing empirical formulas. The disk friction loss is affected more strongly by surface roughness than by changes in axial clearance. To minimize disk friction loss on the centrifugal compressor impeller, the axial clearance should be designed to equal the theoretical boundary layer thickness. The design of the impeller therefore requires careful consideration to optimize axial clearance and minimize surface roughness.

  7. The Problem of Empirical Redundancy of Constructs in Organizational Research: An Empirical Investigation

    ERIC Educational Resources Information Center

    Le, Huy; Schmidt, Frank L.; Harter, James K.; Lauver, Kristy J.

    2010-01-01

    Construct empirical redundancy may be a major problem in organizational research today. In this paper, we explain and empirically illustrate a method for investigating this potential problem. We applied the method to examine the empirical redundancy of job satisfaction (JS) and organizational commitment (OC), two well-established organizational…

  8. The IACOB project . III. New observational clues to understand macroturbulent broadening in massive O- and B-type stars

    NASA Astrophysics Data System (ADS)

    Simón-Díaz, S.; Godart, M.; Castro, N.; Herrero, A.; Aerts, C.; Puls, J.; Telting, J.; Grassitelli, L.

    2017-01-01

    Context. The term macroturbulent broadening is commonly used to refer to a certain type of non-rotational broadening affecting the spectral line profiles of O- and B-type stars. It has been proposed to be a spectroscopic signature of the presence of stellar oscillations; however, we still lack a definitive confirmation of this hypothesis. Aims: We aim to provide new empirical clues about macroturbulent spectral line broadening in O- and B-type stars to evaluate its physical origin. Methods: We used high-resolution spectra of 430 stars with spectral types in the range O4 - B9 (all luminosity classes) compiled in the framework of the IACOB project. We characterized the line broadening of adequate diagnostic metal lines using a combined Fourier transform and goodness-of-fit technique. We performed a quantitative spectroscopic analysis of the whole sample using automatic tools coupled with a huge grid of FASTWIND models to determine their effective temperatures and gravities. We also incorporated quantitative information about line asymmetries into our observational description of the characteristics of the line profiles, and performed a comparison of the shape and type of line-profile variability found in a small sample of O stars and B supergiants with still undefined pulsational properties and B main-sequence stars with variable line profiles owing to a well-identified type of stellar oscillations or to the presence of spots on the stellar surface. Results: We present a homogeneous and statistically significant overview of the (single snapshot) line-broadening properties of stars in the whole O and B star domain. We find empirical evidence of the existence of various types of non-rotational broadening agents acting in the realm of massive stars. Even though all these additional sources of line-broadening could be quoted and quantified as a macroturbulent broadening from a practical point of view, their physical origin can be different. 
Contrary to the early- to late-B dwarfs and giants, which present a mixture of cases in terms of line-profile shape and variability, the whole O-type and B supergiant domain (or, roughly speaking, stars with MZAMS ≳ 15 M⊙) is fully dominated by stars with a remarkable non-rotational broadening component and very similar profiles (including type of variability). We provide some examples illustrating how this observational dataset can be used to evaluate scenarios aimed at explaining the existence of sources of non-rotational broadening in massive stars. Full Table 1 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/597/A22

  9. On the Mixing of Single and Opposed Rows of Jets With a Confined Crossflow

    NASA Technical Reports Server (NTRS)

    Holdeman, James D.; Clisset, James R.; Moder, Jeffrey P.; Lear, William E.

    2006-01-01

    The primary objectives of this study were 1) to demonstrate that contour plots could be made using the data interface in the NASA GRC jet-in-crossflow (JIC) spreadsheet, and 2) to investigate the suitability of using superposition for the case of opposed rows of jets with their centerlines in-line. The current report is similar to NASA/TM-2005-213137 but the "basic" effects of a confined JIC that are shown in profile plots there are shown as contour plots in this report, and profile plots for opposed rows of aligned jets are presented here using both symmetry and superposition models. Although superposition was found to be suitable for most cases of opposed rows of jets with jet centerlines in-line, the calculation procedure in the JIC spreadsheet was not changed and it still uses the symmetry method for this case, as did all previous publications of the NASA empirical model.
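The superposition idea for opposed in-line jets can be illustrated with a toy model. Everything below is an illustrative assumption, not the NASA empirical model itself: the Gaussian profile shape, the penetration depth, and all parameter values are made up.

```python
import numpy as np

def jet_theta(y, y_pen=0.3, width=0.15, theta_max=0.6):
    # Toy dimensionless temperature-difference profile of a single jet
    # across the duct height y in [0, 1]; a Gaussian centered at the jet
    # penetration depth stands in for the real empirical profile.
    return theta_max * np.exp(-0.5 * ((y - y_pen) / width) ** 2)

y = np.linspace(0.0, 1.0, 101)

# Superposition model for opposed rows of jets with centerlines in-line:
# add the top-wall jet profile to the bottom-wall jet profile, the latter
# evaluated in the mirrored coordinate (1 - y).
theta_top = jet_theta(y)
theta_bottom = jet_theta(1.0 - y)
theta_superposition = theta_top + theta_bottom
```

For identical opposed in-line jets the superposed profile is symmetric about the duct midplane, which is the property the symmetry (mirror-plane) model exploits directly; the two approaches diverge when the opposed jets are not identical.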

  10. Practical implications of empirically studying moral decision-making.

    PubMed

    Heinzelmann, Nora; Ugazio, Giuseppe; Tobler, Philippe N

    2012-01-01

    This paper considers the practical question of why people do not behave in the way they ought to behave. This question is a practical one, reaching both into the normative and descriptive domains of morality. That is, it concerns moral norms as well as empirical facts. We argue that two main problems usually keep us from acting and judging in a morally decent way: firstly, we make mistakes in moral reasoning. Secondly, even when we know how to act and judge, we still fail to meet the requirements due to personal weaknesses. This discussion naturally leads us to another question: can we narrow the gap between what people are morally required to do and what they actually do? We discuss findings from neuroscience, economics, and psychology, considering how we might bring our moral behavior better in line with moral theory. Potentially fruitful means include nudging, training, pharmacological enhancement, and brain stimulation. We conclude by raising the question of whether such methods could and should be implemented.

  11. The Empirical Foundations of Teleradiology and Related Applications: A Review of the Evidence

    PubMed Central

    Krupinski, Elizabeth A.; Thrall, James H.; Bashshur, Noura

    2016-01-01

    Introduction: Radiology was founded on a technological discovery by Wilhelm Roentgen in 1895. Teleradiology also had its roots in technology dating back to 1947 with the successful transmission of radiographic images through telephone lines. Diagnostic radiology has become the eye of medicine in terms of diagnosing and treating injury and disease. This article documents the empirical foundations of teleradiology. Methods: A selective review of the credible literature during the past decade (2005–2015) was conducted, using robust research design and adequate sample size as criteria for inclusion. Findings: The evidence regarding feasibility of teleradiology and related information technology applications has been well documented for several decades. The majority of studies focused on intermediate outcomes, as indicated by comparability between teleradiology and conventional radiology. A consistent trend of concordance between the two modalities was observed in terms of diagnostic accuracy and reliability. Additional benefits include reductions in patient transfer, rehospitalization, and length of stay. PMID:27585301

  12. Practical Implications of Empirically Studying Moral Decision-Making

    PubMed Central

    Heinzelmann, Nora; Ugazio, Giuseppe; Tobler, Philippe N.

    2012-01-01

    This paper considers the practical question of why people do not behave in the way they ought to behave. This question is a practical one, reaching both into the normative and descriptive domains of morality. That is, it concerns moral norms as well as empirical facts. We argue that two main problems usually keep us from acting and judging in a morally decent way: firstly, we make mistakes in moral reasoning. Secondly, even when we know how to act and judge, we still fail to meet the requirements due to personal weaknesses. This discussion naturally leads us to another question: can we narrow the gap between what people are morally required to do and what they actually do? We discuss findings from neuroscience, economics, and psychology, considering how we might bring our moral behavior better in line with moral theory. Potentially fruitful means include nudging, training, pharmacological enhancement, and brain stimulation. We conclude by raising the question of whether such methods could and should be implemented. PMID:22783157

  13. Method for evaluation of human induced pluripotent stem cell quality using image analysis based on the biological morphology of cells.

    PubMed

    Wakui, Takashi; Matsumoto, Tsuyoshi; Matsubara, Kenta; Kawasaki, Tomoyuki; Yamaguchi, Hiroshi; Akutsu, Hidenori

    2017-10-01

    We propose an image analysis method for quality evaluation of human pluripotent stem cells based on biologically interpretable features. It is important to maintain the undifferentiated state of induced pluripotent stem cells (iPSCs) while culturing the cells during propagation. Cell culture experts visually select good quality cells exhibiting the morphological features characteristic of undifferentiated cells. Experts have empirically determined that these features comprise prominent and abundant nucleoli, less intercellular spacing, and fewer differentiating cellular nuclei. We quantified these features based on experts' visual inspection of phase contrast images of iPSCs and found that these features are effective for evaluating iPSC quality. We then developed an iPSC quality evaluation method using an image analysis technique. The method allowed accurate classification, equivalent to visual inspection by experts, of three iPSC cell lines.

  14. Metallicities of Galaxies in the Local Universe

    NASA Astrophysics Data System (ADS)

    Hirschauer, Alec Seth

    2018-01-01

    The degree of heavy-element enrichment for star-forming galaxies in the universe is a fundamental astrophysical characteristic which traces the amount of stellar nucleosynthesis undertaken by the constituent population of stars. Estimating this quantity via the so-called "direct method" is observationally challenging and requires measurement of intrinsically weak temperature-sensitive nebular emission lines; however, these are typically not detected unless a galaxy's emission lines are exceptionally bright. Metal abundances ("metallicities") must therefore be estimated by empirical means, utilizing ratios of strong emission lines, calibrated to sources of known abundance and/or theoretical models, which are measurable in essentially any nebular spectrum of a star-forming system. Relationships concerning metallicities in galaxies, such as the luminosity-metallicity and mass-metallicity relations, are critically dependent upon reliable estimations of abundances. Therefore, having a reliable observational constraint is paramount to developing models which accurately reflect the universe. This dissertation presentation explores metallicities for galaxies in the local universe through a variety of means. First, an attempt is made to improve calibrations of empirical relationships for estimating abundances of star-forming galaxies at high metallicities, finding some intrinsic shortcomings but also revealing some interesting new findings regarding the computation of the electron gas properties of star-forming systems, as well as detecting some anomalously under-abundant, overly luminous galaxies. Second, the development of a self-consistent scale for estimating metallicities allows for the creation of luminosity-metallicity and mass-metallicity relations for a statistically representative sample of star-forming galaxies in the local universe.
Finally, a discovery is made of an extremely metal-poor star-forming galaxy, which opens the possibility to find more similar systems and to better understand star-formation in exceptionally low-abundance environments.
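As an illustration of the strong-line approach described above, one widely used empirical calibration (the N2 index of Pettini & Pagel 2004) estimates the oxygen abundance from the [N II] λ6584 / Hα flux ratio. The coefficients below are from that published calibration; the example fluxes are hypothetical.

```python
import math

def oxygen_abundance_n2(f_nii_6584, f_halpha):
    """Empirical strong-line abundance from the N2 index
    (Pettini & Pagel 2004): 12 + log(O/H) = 8.90 + 0.57 * N2,
    where N2 = log10(F([N II] 6584) / F(Halpha)).
    The calibration is quoted as valid for roughly -2.5 < N2 < -0.3."""
    n2 = math.log10(f_nii_6584 / f_halpha)
    if not -2.5 < n2 < -0.3:
        raise ValueError("N2 index outside calibrated range")
    return 8.90 + 0.57 * n2

# Hypothetical line fluxes (arbitrary units) for a star-forming galaxy:
print(round(oxygen_abundance_n2(0.1, 1.0), 2))  # N2 = -1 -> 8.33
```

Because only two strong lines at nearly the same wavelength are needed, such calibrations are insensitive to reddening and applicable to essentially any nebular spectrum, which is the practical advantage the abstract alludes to.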

  15. The lure of rationality: Why does the deficit model persist in science communication?

    PubMed

    Simis, Molly J; Madden, Haley; Cacciatore, Michael A; Yeo, Sara K

    2016-05-01

    Science communication has been historically predicated on the knowledge deficit model. Yet, empirical research has shown that public communication of science is more complex than what the knowledge deficit model suggests. In this essay, we pose four lines of reasoning and present empirical data for why we believe the deficit model still persists in public communication of science. First, we posit that scientists' training results in the belief that public audiences can and do process information in a rational manner. Second, the persistence of this model may be a product of current institutional structures. Many graduate education programs in science, technology, engineering, and math (STEM) fields generally lack formal training in public communication. We offer empirical evidence that demonstrates that scientists who have less positive attitudes toward the social sciences are more likely to adhere to the knowledge deficit model of science communication. Third, we present empirical evidence of how scientists conceptualize "the public" and link this to attitudes toward the deficit model. We find that perceiving a knowledge deficit in the public is closely tied to scientists' perceptions of the individuals who comprise the public. Finally, we argue that the knowledge deficit model is perpetuated because it can easily influence public policy for science issues. We propose some ways to uproot the deficit model and move toward more effective science communication efforts, which include training scientists in communication methods grounded in social science research and using approaches that engage community members around scientific issues. © The Author(s) 2016.

  16. Microorganisms isolated from cultures and infection focus and antibiotic treatments in febrile neutropenic children from Şanlıurfa, Turkey.

    PubMed

    Özdemir, Z Canan; Koç, Ahmet; Ayçiçek, Ali

    2016-01-01

    Chemotherapy-induced febrile neutropenia predisposes patients to life-threatening infections. We aimed to determine the causative microorganisms, infection focus, and antibiotic treatment success in febrile neutropenic children with leukemia. A total of 136 febrile neutropenic episodes in 48 leukemic children were reviewed retrospectively from records. Among the 136 febrile neutropenic episodes, 68 (50%) were microbiologically documented. Methicillin-sensitive coagulase (-) Staphylococcus aureus were the most common isolates from hemoculture (20.5%). The most frequently documented infection focus was mucositis (31.9%). Ceftazidime plus amikacin was the most commonly used empirical antimicrobial treatment (52.9%). The overall response rates were 70.5%, 86.9%, and 66.6% for first-line, second-line, and third-line therapies, respectively. Gram-positive pathogens were the most common agents isolated from febrile neutropenic children in our hematology clinic. Therefore, documentation of the flora in each unit could help in choosing appropriate, potentially life-saving empirical therapy.

  17. Capturing the Central Line Bundle Infection Prevention Interventions: Comparison of Reflective and Composite Modeling Methods

    PubMed Central

    Gilmartin, Heather M.; Sousa, Karen H.; Battaglia, Catherine

    2016-01-01

    Background The central line (CL) bundle interventions are important for preventing central line-associated bloodstream infections (CLABSIs), but a modeling method for testing the CL bundle interventions within a health systems framework is lacking. Objectives Guided by the Quality Health Outcomes Model (QHOM), this study tested the CL bundle interventions in reflective and composite latent-variable measurement models to assess the impact of the modeling approaches on an investigation of the relationships between adherence to the CL bundle interventions, organizational context, and CLABSIs. Methods A secondary data analysis study was conducted using data from 614 U.S. hospitals that participated in the Prevention of Nosocomial Infection and Cost-Effectiveness-Refined study. The sample was randomly split into exploration and validation subsets. Results The two CL bundle modeling approaches resulted in adequately fitting structural models (RMSEA = .04; CFI = .94) and supported similar relationships within the QHOM. Adherence to the CL bundle had a direct effect on organizational context (reflective = .23; composite = .20; p = .01) and on CLABSIs (reflective = −.28; composite = −.25; p = .01). The relationship between context and CLABSIs was not significant. Both modeling methods resulted in partial support of the QHOM. Discussion There were small statistical differences but large conceptual differences between the reflective and composite modeling approaches. The empirical impact of the modeling approaches was inconclusive, as both models fit the data well. Lessons learned are presented. Comparing modeling approaches is recommended when modeling variables that have never been modeled before, or whose directionality is ambiguous, to increase transparency and bring confidence to study findings. PMID:27579507

  18. Extensions to the integral line-beam method for gamma-ray skyshine analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J.K.; Faw, R.E.

    1995-08-01

    A computationally simple method for estimating gamma-ray skyshine dose rates has been developed on the basis of the line-beam response function. Both Monte Carlo and point-kernel calculations that account for both annihilation and bremsstrahlung were used in the generation of line-beam response functions (LBRF) for gamma-ray energies between 10 and 100 MeV. The LBRF is approximated by a three-parameter formula. By combining results with those obtained in an earlier study for gamma energies below 10 MeV, LBRF values are readily and accurately evaluated for source energies between 0.02 and 100 MeV, for source-to-detector distances between 1 and 3000 m, and for beam angles as great as 180 degrees. Tables of the parameters for the approximate LBRF are presented. The new response functions are then applied to three simple skyshine geometries: an open silo geometry, an infinite wall, and a rectangular four-wall building. Results are compared to those of previous calculations and to benchmark measurements. A new approach is introduced to account for overhead shielding of the skyshine source and compared to the simplistic exponential-attenuation method used in earlier studies. The effect of the air-ground interface, usually neglected in gamma skyshine studies, is also examined and an empirical correction factor is introduced. Finally, a revised code based on the improved LBRF approximations and the treatment of the overhead shielding is presented, and results are shown for several benchmark problems.
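The abstracts above approximate the LBRF by a three-parameter formula whose exact parameterization is given in the source papers. A common functional form for such fits, R(x) = κ·x^a·e^(bx) in the source-to-detector distance x, becomes linear in its parameters after taking logarithms and can be fitted by ordinary least squares. The form and all numbers below are illustrative assumptions, not the published coefficients:

```python
import numpy as np

# Synthetic "response function" data following an assumed three-parameter
# form R(x) = kappa * x**a * exp(b*x), sampled at several distances (m).
kappa_true, a_true, b_true = 2.0e-15, -1.3, -4.0e-3
x = np.linspace(10.0, 3000.0, 40)
R = kappa_true * x**a_true * np.exp(b_true * x)

# Taking logs makes the model linear in (ln kappa, a, b):
#   ln R = ln kappa + a * ln x + b * x
A = np.column_stack([np.ones_like(x), np.log(x), x])
coeffs, *_ = np.linalg.lstsq(A, np.log(R), rcond=None)
kappa_fit, a_fit, b_fit = np.exp(coeffs[0]), coeffs[1], coeffs[2]
```

Fitting one such parameter triple per source energy and emission angle, and then interpolating the parameters, is the general pattern these skyshine papers describe.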

  19. [Ultra-Fine Pressed Powder Pellet Sample Preparation XRF Determination of Multi-Elements and Carbon Dioxide in Carbonate].

    PubMed

    Li, Xiao-li; An, Shu-qing; Xu, Tie-min; Liu, Yi-bo; Zhang, Li-juan; Zeng, Jiang-ping; Wang, Na

    2015-06-01

    The main analysis errors of pressed powder pellets of carbonate come from the particle-size effect and the mineral effect. In this article, in order to eliminate the particle-size effect, ultra-fine pressed powder pellet sample preparation is used for the determination of multi-elements and carbon dioxide in carbonate. To prepare the ultra-fine powder, a FRITSCH planetary Micro Mill and tungsten carbide media are utilized. To overcome agglomeration during grinding, wet grinding is preferred. The surface morphology of the pellet becomes smoother and neater, and the Compton scatter effect is reduced, as the particle size decreases. The intensity of the spectral line varies with particle size; generally, the intensity increases as the particle size decreases. But when the particle size of more than one component of the material is decreased, the intensity of the spectral line may increase (for S, Si, Mg) or decrease (for Ca, Al, Ti, K), depending on the respective mass absorption coefficients. The change of the phase composition with milling is also studied. The incident depth for each element is given from theoretical calculation. When the sample is ground to a particle size smaller than the penetration depth of all the analytes, the effect of particle size on the intensity of the spectral line is much reduced. In the experiment, when the sample was ground to less than 8 μm (d95), the particle-size effect was largely eliminated; with correction by theoretical α coefficients and empirical coefficients, 14 major, minor, and trace elements in the carbonate can be determined accurately. The precision of the method is much improved, with RSD < 2% except for Na2O. Carbon is an ultra-light element: its fluorescence yield is low and spectral interference is serious.
With a multilayer crystal (PX4), a coarse collimator, and empirical correction, the X-ray spectrometer can be used to determine the carbon dioxide in the carbonate quantitatively. The intensity of the carbon line increases with repeated measurements and with time delay, even when the pellet is stored in a desiccator, so using a freshly pressed powder pellet is suggested.

  20. Response Functions for Neutron Skyshine Analyses

    NASA Astrophysics Data System (ADS)

    Gui, Ah Auu

    Neutron and associated secondary photon line-beam response functions (LBRFs) for point monodirectional neutron sources and related conical line-beam response functions (CBRFs) for azimuthally symmetric neutron sources are generated using the MCNP Monte Carlo code for use in neutron skyshine analyses employing the integral line-beam and integral conical-beam methods. The LBRFs are evaluated at 14 neutron source energies ranging from 0.01 to 14 MeV and at 18 emission angles from 1 to 170 degrees. The CBRFs are evaluated at 13 neutron source energies in the same energy range and at 13 source polar angles (1 to 89 degrees). The response functions are approximated by a three-parameter formula that is made continuous in source energy and angle using a double linear interpolation scheme. These response function approximations are available for source-to-detector ranges up to 2450 m and, for the first time, give dose equivalent responses, which are required for modern radiological assessments. For the CBRF, ground correction factors for neutrons and photons are calculated and approximated by empirical formulas for use in air-over-ground neutron skyshine problems with azimuthal symmetry. In addition, a simple correction procedure for humidity effects on the neutron skyshine dose is also proposed. The approximate LBRFs are used with the integral line-beam method to analyze four neutron skyshine problems with simple geometries: (1) an open silo, (2) an infinite wall, (3) a roofless rectangular building, and (4) an infinite air medium. In addition, two simple neutron skyshine problems involving an open source silo are analyzed using the integral conical-beam method. The results obtained using the LBRFs and the CBRFs are then compared with MCNP results and results of previous studies.
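The response-function approximation is made continuous in source energy and emission angle by double linear interpolation of the fitted parameters over the (energy, angle) grid. A generic bilinear interpolation of one such parameter table might look like the sketch below; the grid and table values are invented for illustration, not taken from the dissertation:

```python
import numpy as np

energies = np.array([0.01, 0.1, 1.0, 14.0])   # MeV (illustrative grid)
angles = np.array([1.0, 45.0, 90.0, 170.0])   # degrees (illustrative grid)
# Invented table of one fitted response-function parameter per grid node:
param = np.array([[0.5, 0.7, 0.9, 1.1],
                  [0.6, 0.8, 1.0, 1.2],
                  [0.8, 1.0, 1.2, 1.4],
                  [1.1, 1.3, 1.5, 1.7]])

def bilinear(E, phi):
    # Locate the grid cell containing (E, phi), clamping at the edges.
    i = int(np.searchsorted(energies, E)) - 1
    i = min(max(i, 0), len(energies) - 2)
    j = int(np.searchsorted(angles, phi)) - 1
    j = min(max(j, 0), len(angles) - 2)
    tE = (E - energies[i]) / (energies[i + 1] - energies[i])
    tP = (phi - angles[j]) / (angles[j + 1] - angles[j])
    # Weighted average of the four surrounding nodes.
    return ((1 - tE) * (1 - tP) * param[i, j]
            + tE * (1 - tP) * param[i + 1, j]
            + (1 - tE) * tP * param[i, j + 1]
            + tE * tP * param[i + 1, j + 1])
```

At grid nodes the interpolant reproduces the tabulated values exactly, and at a cell center it returns the average of the four corner values.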

  1. Semi-empirical and empirical L X-ray production cross sections for elements with 50 ⩽ Z ⩽ 92 for protons of 0.5-3.0 MeV

    NASA Astrophysics Data System (ADS)

    Nekab, M.; Kahoul, A.

    2006-04-01

    In this contribution we present semi-empirical production cross sections of the main X-ray lines Lα, Lβ and Lγ for elements from Sn to U and for protons with energies varying from 0.5 to 3.0 MeV. The theoretical X-ray production cross sections are first calculated from the theoretical ionization cross sections of the Li (i = 1, 2, 3) subshells within the ECPSSR theory. The semi-empirical Lα, Lβ and Lγ cross sections are then deduced by fitting the available experimental data normalized to their corresponding theoretical values, giving a better representation of the experimental data in some cases. On the other hand, the experimental data are directly fitted to deduce the empirical L X-ray production cross sections. A comparison is made between the semi-empirical cross sections, the empirical cross sections reported in this work, and the empirical ones reported by Reis and Jesus [M.A. Reis, A.P. Jesus, Atom. Data Nucl. Data Tables 63 (1996) 1] and those of Strivay and Weber [Strivay, G. Weber, Nucl. Instr. and Meth. B 190 (2002) 112].

  2. Modeling of the phase equilibria of polystyrene in methylcyclohexane with semi-empirical quantum mechanical methods I.

    PubMed

    Wilczura-Wachnik, Hanna; Jónsdóttir, Svava Osk

    2003-04-01

    A method for calculating interaction parameters traditionally used in phase-equilibrium computations for low-molecular-weight systems has been extended to the prediction of solvent activities of aromatic polymer solutions (polystyrene+methylcyclohexane). Using ethylbenzene as a model compound for the repeating unit of the polymer, the intermolecular interaction energies between the solvent molecule and the polymer were simulated. The semiempirical quantum chemical method AM1, and a method developed previously for sampling relevant internal orientations of a pair of molecules, were used. Interaction energies were determined for three molecular pairs (the solvent and the model molecule, two solvent molecules, and two model molecules) and used to calculate UNIQUAC interaction parameters, a(ij) and a(ji). Using these parameters, the solvent activities of the polystyrene (90,000 amu)+methylcyclohexane system and the total vapor pressures of the methylcyclohexane+ethylbenzene system were calculated. The latter system was compared to experimental data, giving qualitative agreement. [Figure: Solvent activities for the methylcyclohexane(1)+polystyrene(2) system at 316 K. Parameters a(ij) (blue line) obtained with the AM1 method; parameters a(ij) (pink line) from VLE data for the ethylbenzene+methylcyclohexane system. The abscissa is the polymer weight fraction, defined as y2(x1) = (1 - x1)M2/[x1M1 + (1 - x1)M2], where x1 is the solvent mole fraction and Mi are the molecular weights of the components.]
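The weight-fraction definition quoted in the figure caption can be checked numerically. The polystyrene molar mass (90,000 amu) is from the abstract; the methylcyclohexane molar mass (≈ 98.19 g/mol) is a standard value supplied here for illustration:

```python
def polymer_weight_fraction(x1, M1=98.19, M2=90000.0):
    """y2(x1) = (1 - x1) * M2 / (x1 * M1 + (1 - x1) * M2):
    weight fraction of polymer (2) given solvent (1) mole fraction x1."""
    return (1.0 - x1) * M2 / (x1 * M1 + (1.0 - x1) * M2)

# Because M2 >> M1, the polymer dominates by weight except at solvent
# mole fractions extremely close to 1.
print(round(polymer_weight_fraction(0.5), 4))  # -> 0.9989
```

This is why the abscissa of such plots is nearly 1 over most of the mole-fraction range: an equimolar mixture is still almost pure polymer by mass.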

  3. Mustafa Kemal at Gallipoli: A Leadership Analysis and Terrain Walk

    DTIC Science & Technology

    2016-03-01

    AU/ACSC/PICCIRILLI, S/AY16. Air Command and Staff College, Air University. Mustafa Kemal at Gallipoli: A Leadership Analysis and ... Requirements for the Degree of Master of Operational Arts and Sciences. Advisor: Mr. Patrick D. Ellis. Maxwell Air Force Base, Alabama, March 2016. ... bloody stalemate on the Western Front, knock the Ottoman Empire out of the war, and open a sea line of communication to the Russian Empire. ...

  4. Filtration of human EEG recordings from physiological artifacts with empirical mode method

    NASA Astrophysics Data System (ADS)

    Grubov, Vadim V.; Runnova, Anastasiya E.; Khramova, Marina V.

    2017-03-01

    In this paper we propose a new method for dealing with noise and physiological artifacts in experimental human EEG recordings. The method is based on analysis of EEG signals with empirical mode decomposition (the Hilbert-Huang transform). We consider noises and physiological artifacts in EEG as specific oscillatory patterns that cause problems during EEG analysis and that can be detected with the help of additional signals recorded simultaneously with the EEG (ECG, EMG, EOG, etc.). We introduce the algorithm of the method with the following steps: empirical mode decomposition of the EEG signal, selection of the empirical modes containing artifacts, removal of those modes, and reconstruction of the initial EEG signal. We test the method by filtering experimental human EEG signals from eye-movement artifacts and show the high efficiency of the method.
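The four-step pipeline can be sketched end to end. A real implementation would obtain the modes by empirical mode decomposition (iterative sifting); to keep the sketch self-contained the "modes" here are synthetic components, and the artifact-selection rule (correlation with an auxiliary EOG-like channel) is an illustrative assumption rather than the authors' criterion:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 1000)

# Synthetic "EEG": a 10 Hz alpha-like rhythm plus a slow, large eye-blink
# artifact. In a real pipeline these modes would come from EMD sifting.
alpha = np.sin(2 * np.pi * 10 * t)
blink = 5.0 * np.exp(-0.5 * ((t - 2.0) / 0.2) ** 2)
modes = [alpha, blink]                  # step 1: decomposition (stand-in)
eeg = alpha + blink
eog = blink + 0.1 * rng.standard_normal(t.size)  # simultaneous EOG channel

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Step 2: flag modes that correlate strongly with the auxiliary channel.
artifact = [abs(corr(m, eog)) > 0.8 for m in modes]

# Steps 3-4: drop the flagged modes and reconstruct the signal.
cleaned = sum(m for m, bad in zip(modes, artifact) if not bad)
```

Because EMD is data-adaptive, artifacts that occupy their own modes (as the slow blink does here) can be removed without the fixed cutoff frequencies a band-pass filter would impose.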

  5. Empirical ionization fractions in the winds and the determination of mass-loss rates for early-type stars

    NASA Technical Reports Server (NTRS)

    Lamers, H. J. G. L. M.; Gathier, R.; Snow, T. P.

    1980-01-01

    From a study of the UV lines in the spectra of 25 stars from O4 to B1, the empirical relations between the mean density in the wind and the ionization fractions of O VI, N V, Si IV, and the excited C III (2p 3P0) level were derived. Using these empirical relations, a simple relation was derived between the mass-loss rate and the column density of any of these four ions. This relation can be used for a simple determination of the mass-loss rate of O4 to B1 stars.

  6. SU-F-T-158: Experimental Characterization of Field Size Dependence of Dose and Lateral Beam Profiles of Scanning Proton and Carbon Ion Beams for Empirical Model in Air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Hsi, W; Zhao, J

    2016-06-15

    Purpose: A Gaussian model for the lateral profiles in air is crucial for an accurate treatment planning system. The field size dependence of dose and the lateral beam profiles of scanning proton and carbon ion beams are due mainly to particles undergoing multiple Coulomb scattering in the beam line components and secondary particles produced by nuclear interactions in the target, both of which depend upon the energy and species of the beam. In this work, lateral profile shape parameters were fitted to measurements of the field-size dependence of dose at the field center in air. Methods: Previous studies have employed empirical fits to measured profile data to significantly reduce the QA time required for measurements. Following this approach to derive the weights and sigmas of the lateral profiles in air, empirical model formulations were simulated for three selected energies for both proton and carbon beams. Results: The 20%-80% lateral penumbras predicted by the double Gaussian model for proton and the single Gaussian model for carbon, with the error functions, agreed with the measurements within 1 mm. The standard deviation between the measured and fitted field-size dependence of dose for the empirical model in air was at most 0.74% for proton with the double Gaussian and 0.57% for carbon with the single Gaussian. Conclusion: We have demonstrated that the double Gaussian model of lateral beam profiles is significantly better than the single Gaussian model for proton, while a single Gaussian model is sufficient for carbon. The empirical equation may be used to double-check the separately obtained model that is currently used by the planning system. The empirical model in air for the dose of spot-scanning proton and carbon ion beams cannot be directly used for irregularly shaped patient fields, but can provide reference values for clinical use and quality assurance.
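The single- vs. double-Gaussian comparison can be reproduced with synthetic data. The spot parameters below are invented; the point is only that a profile with a low-amplitude wide "halo" (as multiple Coulomb scattering and nuclear secondaries produce for protons) is poorly captured by any single Gaussian:

```python
import numpy as np

def gauss(x, amp, sigma):
    return amp * np.exp(-0.5 * (x / sigma) ** 2)

x = np.linspace(-30.0, 30.0, 601)  # lateral position (mm), illustrative

# "Measured" proton-like spot: a narrow core plus a wide, low halo.
profile = gauss(x, 1.0, 4.0) + gauss(x, 0.05, 12.0)

def best_single_gauss_rms(x, y):
    # Coarse grid search over sigma; for each sigma the optimal amplitude
    # is the least-squares projection of y onto the unit Gaussian shape.
    best = np.inf
    for sigma in np.linspace(2.0, 15.0, 300):
        g = gauss(x, 1.0, sigma)
        amp = np.dot(y, g) / np.dot(g, g)
        best = min(best, np.sqrt(np.mean((y - amp * g) ** 2)))
    return best

rms_single = best_single_gauss_rms(x, profile)
rms_double = 0.0  # a double Gaussian reproduces this profile exactly
```

The residual of the best single Gaussian stays well above zero because the halo and the core cannot both be matched by one width, which is the behavior that motivates the double Gaussian model for protons.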

  7. Airline Maintenance Manpower Optimization from the De Novo Perspective

    NASA Astrophysics Data System (ADS)

    Liou, James J. H.; Tzeng, Gwo-Hshiung

    Human resource management (HRM) is an important issue in today's competitive airline market. In this paper, we discuss a multi-objective model designed from the De Novo perspective to help airlines optimize their maintenance manpower portfolio. The effectiveness of the model and solution algorithm is demonstrated in an empirical study of the optimization of the human resources needed for airline line maintenance. Both De Novo and traditional multiple objective programming (MOP) methods are analyzed. A comparison of the results with those of traditional MOP indicates that the proposed model and solution algorithm provide better performance and an improved human resource portfolio.

  8. Airborne electromagnetic bathymetry investigations in Port Lincoln, South Australia - comparison with an equivalent floating transient electromagnetic system

    NASA Astrophysics Data System (ADS)

    Vrbancich, Julian

    2011-09-01

    Helicopter time-domain airborne electromagnetic (AEM) methodology is being investigated as a reconnaissance technique for bathymetric mapping in shallow coastal waters, especially in areas affected by water turbidity where light detection and ranging (LIDAR) and hyperspectral techniques may be limited. Previous studies in Port Lincoln, South Australia, used a floating AEM time-domain system to provide an upper limit to the expected bathymetric accuracy based on current technology for AEM systems. The survey lines traced by the towed floating system were also flown with an airborne system using the same transmitter and receiver electronic instrumentation, on two separate occasions. On the second occasion, significant improvements had been made to the instrumentation to reduce the system self-response at early times. A comparison of the interpreted water depths obtained from the airborne and floating systems is presented, showing the degradation in bathymetric accuracy obtained from the airborne data. An empirical data correction method based on modelled and observed EM responses over deep seawater (i.e. a quasi half-space response) at varying survey altitudes, combined with known seawater conductivity measured during the survey, can lead to significant improvements in interpreted water depths and serves as a useful method for checking system calibration. Another empirical data correction method based on observed and modelled EM responses in shallow water was shown to lead to similar improvements in interpreted water depths; however, this procedure is notably inferior to the quasi half-space response because more parameters need to be assumed in order to compute the modelled EM response. A comparison between the results of the two airborne surveys in Port Lincoln shows that uncorrected data obtained from the second airborne survey gives good agreement with known water depths without the need to apply any empirical corrections to the data. 
This result significantly decreases the data-processing time thereby enabling the AEM method to serve as a rapid reconnaissance technique for bathymetric mapping.

  9. Metallicity determination of M dwarfs. Expanded parameter range in metallicity and effective temperature

    NASA Astrophysics Data System (ADS)

    Lindgren, Sara; Heiter, Ulrike

    2017-08-01

    Context. Reliable metallicity values for late K and M dwarfs are important for studies of the chemical evolution of the Galaxy and advancement of planet formation theory in low-mass environments. Historically it has been challenging to determine the stellar parameters of low-mass stars because of their low surface temperature, which causes several molecules to form in the photospheric layers. In our work we use the fact that infrared high-resolution spectrographs have opened up a new window for investigating M dwarfs. This enables us to use similar methods as for warmer solar-like stars. Aims: Metallicity determination with high-resolution spectra is more accurate than with low-resolution spectra, but it is rather time-consuming. In this paper we expand our sample analyzed with this precise method both in metallicity and effective temperature to build a calibration sample for a future revised empirical calibration. Methods: Because of the relatively few molecular lines in the J band, continuum rectification is possible for high-resolution spectra, allowing the stellar parameters to be determined with greater accuracy than with optical spectra. We obtained high-resolution spectra with the CRIRES spectrograph at the Very Large Telescope (VLT). The metallicity was determined using synthetic spectral fitting of several atomic species. For M dwarfs cooler than 3575 K, the line strengths of FeH lines were used to determine the effective temperatures, while for warmer stars a photometric calibration was used. Results: We analyzed 16 targets with a range of effective temperature from 3350-4550 K. The resulting metallicities lie between -0.5 < [M/H] < +0.4. A few targets have previously been analyzed using low-resolution spectra and we find rather good agreement with our values. A comparison with available photometric calibrations shows varying agreement, and the spread within all empirical calibrations is large.
Conclusions: Including the targets from our previous paper, we have analyzed 28 M dwarfs with high-resolution infrared spectra. The targets span approximately one dex in metallicity and 1400 K in effective temperature. For individual M dwarfs we achieve uncertainties of 0.05 dex and 100 K on average. Based on data obtained at ESO-VLT, Paranal Observatory, Chile, Program ID 090.D-0796(A).

  10. Fast and Accurate Radiative Transfer Calculations Using Principal Component Analysis for (Exo-)Planetary Retrieval Models

    NASA Astrophysics Data System (ADS)

    Kopparla, P.; Natraj, V.; Shia, R. L.; Spurr, R. J. D.; Crisp, D.; Yung, Y. L.

    2015-12-01

    Radiative transfer (RT) computations form the engine of atmospheric retrieval codes. However, full treatment of RT processes is computationally expensive, prompting the use of two-stream approximations in current exoplanetary atmospheric retrieval codes [Line et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT computations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are done only for the few optical states corresponding to the most important principal components, and correction factors are applied to the approximate radiation fields. Kopparla et al. [2015, in preparation] extended the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for the major gas absorptions in this region. Here, we apply the PCA method to some typical (exo-)planetary retrieval problems. Comparisons between the new model, called the Universal Principal Component Analysis Radiative Transfer (UPCART) model, two-stream models, and line-by-line RT models are performed for spectral radiances, spectral fluxes, and broadband fluxes. Each of these is calculated at the top of the atmosphere for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and stellar and viewing geometries. We demonstrate that very accurate radiance and flux estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to a numerically exact line-by-line RT model. The accuracy is enhanced when the results are convolved to typical instrument resolutions. 
The operational speed and accuracy of UPCART can be further improved by optimizing binning schemes and parallelizing the codes, work on which is under way.

  11. REVERBERATION AND PHOTOIONIZATION ESTIMATES OF THE BROAD-LINE REGION RADIUS IN LOW-z QUASARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Negrete, C. Alenka; Dultzin, Deborah; Marziani, Paola

    2013-07-01

    Black hole mass estimation in quasars, especially at high redshift, involves the use of single-epoch spectra with signal-to-noise ratio and resolution that permit accurate measurement of the width of a broad line assumed to be a reliable virial estimator. Coupled with an estimate of the radius of the broad-line region (BLR), this yields the black hole mass M_BH. The radius of the BLR may be inferred from an extrapolation of the correlation between source luminosity and reverberation-derived r_BLR measures (the so-called Kaspi relation, involving about 60 low-z sources). We are exploring a different method for estimating r_BLR directly from inferred physical conditions in the BLR of each source. We report here on a comparison of r_BLR estimates that come from our method and from reverberation mapping. Our "photoionization" method employs diagnostic line intensity ratios in the rest-frame range 1400-2000 A (Al III λ1860/Si III] λ1892, C IV λ1549/Al III λ1860) that enable derivation of the product of density and ionization parameter, with the BLR distance derived from the definition of the ionization parameter. We find good agreement between our estimates of the density, ionization parameter, and r_BLR and those from reverberation mapping. We suggest empirical corrections to improve the agreement between individual photoionization-derived r_BLR values and those obtained from reverberation mapping. The results in this paper can be exploited to estimate M_BH for large samples of high-z quasars using an appropriate virial broadening estimator. We show that the widths of the UV intermediate emission lines are consistent with the width of Hβ, thereby providing a reliable virial broadening estimator that can be measured in large samples of high-z quasars.
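
    The step "BLR distance derived from the definition of the ionization parameter" can be made explicit. Writing Q(H) for the rate of hydrogen-ionizing photons from the source, n_H for the hydrogen density, and c for the speed of light, the standard definition of the ionization parameter U gives

```latex
U = \frac{Q(\mathrm{H})}{4\pi\, r_{\mathrm{BLR}}^{2}\, n_{\mathrm{H}}\, c}
\quad\Longrightarrow\quad
r_{\mathrm{BLR}} = \left[\frac{Q(\mathrm{H})}{4\pi\, c\,\left(n_{\mathrm{H}} U\right)}\right]^{1/2}
```

    so the diagnostic line ratios, which constrain the product n_H U, combined with Q(H) from the source luminosity, fix r_BLR directly.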

  12. Improved Design of Tunnel Supports : Volume 5 : Empirical Methods in Rock Tunneling -- Review and Recommendations

    DOT National Transportation Integrated Search

    1980-06-01

    Volume 5 evaluates empirical methods in tunneling. Empirical methods that avoid the use of an explicit model by relating ground conditions to observed prototype behavior have played a major role in tunnel design. The main objective of this volume is ...

  13. Fourier Transform Spectroscopy of two trace gases namely Methane and Carbon monoxide for planetary and atmospheric research application

    NASA Astrophysics Data System (ADS)

    Hashemi, R.; Dudaryonok, A. S.; Lavrentieva, N. N.; Vandaele, A. C.; Vander Auwera, J.; Nikitin, A. V.; Tyuterev, Vl. G.; Sung, K.; Smith, M. A. H.; Devi, V. M.; Predoi-Cross, A.

    2017-02-01

    Two atmospheric trace gases, namely methane and carbon monoxide, have been considered in this study. Fourier transform absorption spectra of the 2-0 band of 12C16O mixed with CO2 have been recorded at total pressures from 156 to 1212 hPa and at four different temperatures between 240 K and 283 K. CO2 pressure-induced line broadening and line shift coefficients, and their associated temperature dependences, have been measured in a multi-spectrum non-linear least squares analysis using Voigt profiles with an asymmetric component due to line mixing. The measured CO2-broadening and CO2-shift parameters were compared with theoretical values calculated by collaborators. In addition, the CO2-broadening and shift coefficients have been calculated for individual temperatures using the Exponential Power Gap (EPG) semi-empirical method. We also discuss the retrieved line shape parameters for methane transitions in the spectral range known as the Methane Octad. We used high-resolution spectra of pure methane and of dilute mixtures of methane in dry air, recorded with high signal-to-noise ratio at temperatures between 148 K and room temperature using the Bruker IFS 125 HR Fourier transform spectrometer (FTS) at the Jet Propulsion Laboratory, Pasadena, California. Theoretical calculations of line parameters have been performed and the results are compared with previously published values and with the line parameters available in the GEISA2015 [1] and HITRAN2012 [2] databases.
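
    The temperature dependence of broadening coefficients measured in studies like this one is conventionally parameterized by a power law, γ(T) = γ(T_ref)·(T_ref/T)^n, with T_ref = 296 K in the HITRAN/GEISA convention. A minimal sketch, with illustrative numbers (not values from this study):

```python
def broadening_at_T(gamma_ref: float, n_exp: float, T: float,
                    T_ref: float = 296.0) -> float:
    """Pressure-broadening half-width at temperature T, using the standard
    power-law parameterization gamma(T) = gamma(T_ref) * (T_ref / T)**n."""
    return gamma_ref * (T_ref / T) ** n_exp

# Hypothetical CO2-broadening coefficient for one CO line (illustrative,
# not a measured value): 0.080 cm^-1 atm^-1 at 296 K, exponent n = 0.70.
# Cooling to 240 K increases the half-width, as (T_ref/T)^n > 1 for T < T_ref.
print(broadening_at_T(0.080, 0.70, 240.0))
```

    Fitting γ at the four measured temperatures to this form is what yields the temperature-dependence exponent n reported for each line.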

  14. Integrating Theory and Practice: Applying the Quality Improvement Paradigm to Product Line Engineering

    NASA Technical Reports Server (NTRS)

    Stark, Michael; Hennessy, Joseph F. (Technical Monitor)

    2002-01-01

    My assertion is that not only are product lines a relevant research topic, but that the tools used by empirical software engineering researchers can address observed practical problems. Our experience at NASA has been that there are often externally proposed solutions available, but that we have had difficulties applying them in our particular context. We have also focused on return-on-investment issues when evaluating product lines, and while these are important, one cannot obtain objective data on success or failure until several applications from a product family have been deployed. The use of the Quality Improvement Paradigm (QIP) can address these issues: (1) planning an adoption path from an organization's current state to a product line approach; (2) constructing a development process to fit the organization's adoption path; (3) evaluating product line development processes as the project is being developed. The QIP consists of the following six steps: (1) Characterize the project and its environment; (2) Set quantifiable goals for successful project performance; (3) Choose the appropriate process models, supporting methods, and tools for the project; (4) Execute the process, analyze interim results, and provide real-time feedback for corrective action; (5) Analyze the results of completed projects and recommend improvements; and (6) Package the lessons learned as updated and refined process models. A figure shows the QIP in detail. The iterative nature of the QIP supports an incremental development approach to product lines, and the project learning and feedback provide the necessary early evaluations.

  15. Dealing with noise and physiological artifacts in human EEG recordings: empirical mode methods

    NASA Astrophysics Data System (ADS)

    Runnova, Anastasiya E.; Grubov, Vadim V.; Khramova, Marina V.; Hramov, Alexander E.

    2017-04-01

    In this paper we propose a new method for removing noise and physiological artifacts from human EEG recordings, based on empirical mode decomposition (the Hilbert-Huang transform). As physiological artifacts we consider specific oscillatory patterns that cause problems during EEG analysis and can be detected with additional signals recorded simultaneously with the EEG (ECG, EMG, EOG, etc.). We introduce the algorithm of the proposed method, whose steps include empirical mode decomposition of the EEG signal, selection of the empirical modes containing artifacts, removal of these modes, and reconstruction of the initial EEG signal. We demonstrate the efficiency of the method on the example of filtering eye-movement artifacts from a human EEG signal.
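
    The remove-and-reconstruct steps of such an algorithm can be sketched as follows. The toy "modes" below merely stand in for the output of an actual empirical mode decomposition (e.g. from the PyEMD package), and all signal shapes are illustrative assumptions, not EEG data:

```python
import numpy as np

def remove_artifact_modes(imfs: np.ndarray, artifact_idx) -> np.ndarray:
    """Reconstruct a signal from its empirical modes, dropping the modes
    identified as artifacts. imfs has shape (n_modes, n_samples) and is
    assumed to sum to the original signal (residual included as a row)."""
    drop = set(artifact_idx)
    keep = [i for i in range(imfs.shape[0]) if i not in drop]
    return imfs[keep].sum(axis=0)

# Toy stand-ins for the decomposition step:
t = np.linspace(0.0, 1.0, 500)
eeg_like = np.sin(2 * np.pi * 10 * t)           # 10 Hz "brain" rhythm
blink_like = np.exp(-((t - 0.5) / 0.02) ** 2)   # slow eye-blink-like transient
imfs = np.stack([eeg_like, blink_like])         # pretend these are the IMFs

# Mode 1 correlates with the simultaneously recorded EOG, so drop it:
clean = remove_artifact_modes(imfs, artifact_idx=[1])
print(np.allclose(clean, eeg_like))  # True: artifact mode removed
```

    In the real method, the choice of which modes to drop is made by comparing each mode against the auxiliary ECG/EMG/EOG channels rather than by construction, as here.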

  16. NDSD-1000: High-resolution, high-temperature Nitrogen Dioxide Spectroscopic Databank

    NASA Astrophysics Data System (ADS)

    Lukashevskaya, A. A.; Lavrentieva, N. N.; Dudaryonok, A. C.; Perevalov, V. I.

    2016-11-01

    We present a high-resolution, high-temperature version of the Nitrogen Dioxide Spectroscopic Databank called NDSD-1000. The databank contains the line parameters (positions, intensities, self- and air-broadening coefficients, exponents of the temperature dependence of self- and air-broadening coefficients) of the principal isotopologue of NO2. The reference temperature for line intensity is 296 K and the intensity cutoff is 10^-25 cm^-1/(molecule cm^-2) at 1000 K. The broadening parameters are presented for two reference temperatures, 296 K and 1000 K. The databank has 1,046,808 entries, covers five spectral regions in the 466-4776 cm^-1 spectral range and is designed for temperatures up to 1000 K. The databank is based on the global modeling of the line positions and intensities performed within the framework of the method of effective operators. The parameters of the effective Hamiltonian and the effective dipole moment operator have been fitted to the observed values of the line positions and intensities collected from the literature. The broadening coefficients as well as the temperature exponents are calculated using the semi-empirical approach. The databank is useful for studying high-temperature radiative properties of NO2. NDSD-1000 is freely accessible via the internet site of V.E. Zuev Institute of Atmospheric Optics SB RAS ftp://ftp.iao.ru/pub/NDSD/.

  17. ASD-1000: High-resolution, high-temperature acetylene spectroscopic databank

    NASA Astrophysics Data System (ADS)

    Lyulin, O. M.; Perevalov, V. I.

    2017-11-01

    We present a high-resolution, high-temperature version of the Acetylene Spectroscopic Databank called ASD-1000. The databank contains the line parameters (position, intensity, Einstein coefficient for spontaneous emission, term value of the lower state, self- and air-broadening coefficients, temperature dependence exponents of the self- and air-broadening coefficients) of the principal isotopologue of C2H2. The reference temperature for line intensity is 296 K and the intensity cutoff is 10^-27 cm^-1/(molecule cm^-2) at 1000 K. The databank has 33,890,981 entries and covers the 3-10,000 cm^-1 spectral range. The databank is based on the global modeling of the line positions and intensities performed within the framework of the method of effective operators. The parameters of the effective Hamiltonian and the effective dipole moment operator have been fitted to the observed values of the line positions and intensities collected from the literature. The broadening coefficients as well as their temperature dependence exponents were calculated using the empirical equations. The databank is useful for studying high-temperature radiative properties of C2H2. ASD-1000 is freely accessible via the Internet site of V.E. Zuev Institute of Atmospheric Optics SB RAS ftp://ftp.iao.ru/pub/ASD1000/.

  18. Going from "paper and pen" to ICT systems: Perspectives on managing the change process.

    PubMed

    Andersson Marchesoni, Maria; Axelsson, Karin; Fältholm, Ylva; Lindberg, Inger

    2017-03-01

    Lack of participation from staff when developing information and communication technologies (ICT) has been shown to lead to negative consequences and might be one explanation for failure. Management during change processes has rarely been studied empirically, especially when introducing ICT systems in a municipality context. The aim was to describe and interpret experiences of management during change processes in which ICT was introduced among staff and managers in elderly care. A qualitative interpretive method was chosen for this study, and content analysis was used to analyze the interviews. "Clear focus-unclear process" demonstrated that the focus on ICT solutions was clear but the process of introducing the ICT was not. "First-line managers receiving a system of support" gave a picture of the first-line manager as not playing an active part in the projects. First-line managers and staff described "Low power to influence" when realizing that, for some reason, they had not contributed to the change projects. "Low confirmation" represented the previous and present feelings of staff not being listened to. Lastly, "Reciprocal understanding" captures how first-line managers and staff, although having some expectations of each other, understood each other's positions. Empowerment could be useful in creating an organization where critical awareness and reflection on daily practice become routine.

  19. Philosophy and the front line of science.

    PubMed

    Pernu, Tuomas K

    2008-03-01

    According to one traditional view, empirical science is necessarily preceded by philosophical analysis. Yet the relevance of philosophy is often doubted by those engaged in empirical sciences. I argue that these doubts can be substantiated by two theoretical problems that the traditional conception of philosophy is bound to face. First, there is a strong normative etiology to philosophical problems, theories, and notions that is difficult to reconcile with descriptive empirical study. Second, conceptual analysis (a role that is typically assigned to philosophy) seems to lose its object of study if it is granted that terms do not have purely conceptual meanings detached from their actual use in empirical sciences. These problems are particularly acute for the current naturalistic philosophy of science. I suggest a more concrete integration of philosophy and the sciences as a possible way of giving philosophy of science more impact.

  20. Identifying core competencies for public health epidemiologists.

    PubMed

    Bondy, Susan J; Johnson, Ian; Cole, Donald C; Bercovitz, Kim

    2008-01-01

    Public health authorities have prioritized the identification of competencies, yet little empirical data exist to support decisions on competency selection among particular disciplines. We sought perspectives on important competencies among epidemiologists familiar with or practicing in public health settings (local to national). Using a sequential, qualitative-quantitative mixed method design, we conducted key informant interviews with 12 public health practitioners familiar with front-line epidemiologists' practice, followed by a web-based survey of members of a provincial association of public health epidemiologists (90 respondents of 155 eligible) and a consensus workshop. Competency statements were drawn from existing core competency lists and those identified by key informants, and ranked by extent of agreement in importance for entry-level practitioners. Competencies in quantitative methods and analysis, critical appraisal of scientific evidence and knowledge transfer of scientific data to other members of the public health team were all regarded as very important for public health epidemiologists. Epidemiologist competencies focused on the provision, interpretation and 'translation' of evidence to inform decision-making by other public health professionals. Considerable tension existed around some potential competency items, particularly in the areas of more advanced database and data-analytic skills. Empirical data can inform discussions of discipline-specific competencies as one input to decisions about competencies appropriate for epidemiologists in the public health workforce.

  1. The management of acute uncomplicated cystitis in adult women by family physicians in Canada

    PubMed Central

    McIsaac, Warren J; Prakash, Preeti; Ross, Susan

    2008-01-01

    INTRODUCTION There are few Canadian studies that have assessed the prescribing patterns and antibiotic preferences of physicians for acute uncomplicated cystitis. A cross-Canada study of adult women with symptoms of acute cystitis seen by primary care physicians was conducted to determine current management practices and first-line antibiotic choices. METHODS A random sample of 2000 members of The College of Family Physicians of Canada were contacted in April 2002 and asked to assess two women presenting with new urinary tract symptoms. Physicians completed a standardized checklist of symptoms and signs, and indicated their diagnosis and any antibiotics prescribed. A urine sample for culture was obtained. RESULTS Of the 418 responding physicians, 246 (58.6%) completed the study and assessed 446 women between April 2002 and March 2003. Most women (412 of the 420 for whom clinical information about antibiotic prescriptions was available) reported frequency, urgency or painful urination. Physicians would usually have ordered a urine culture for 77.0% of the women (95% CI 72.7 to 80.8) and prescribed an antibiotic for 86.9% of the women (95% CI 83.3 to 90.0). The urine culture was negative for 32.8% of these prescriptions. The most commonly prescribed antibiotic was trimethoprim/sulfamethoxazole (40.8%; 95% CI 35.7 to 46.1), followed by fluoroquinolones (27.4%; 95% CI 22.9 to 32.3) and nitrofurantoin (26.6%; 95% CI 22.1 to 31.4). CONCLUSION Empirical antibiotic prescribing is standard practice in the community, but is associated with high levels of unnecessary antibiotic use. While trimethoprim/sulfamethoxazole is the first-line empirical antibiotic choice, fluoroquinolone antibiotics have become the second most commonly prescribed empirical antibiotic for acute cystitis. The effect of current prescribing patterns on community levels of quinolone-resistant Escherichia coli may need to be monitored. PMID:19436509

  2. How rational should bioethics be? The value of empirical approaches.

    PubMed

    Alvarez, A A

    2001-10-01

    Rational justification of claims with empirical content calls for empirical and not only normative philosophical investigation. Empirical approaches to bioethics are epistemically valuable, i.e., such methods may be necessary in providing and verifying basic knowledge about cultural values and norms. Our assumptions in moral reasoning can be verified or corrected using these methods. Moral arguments can be initiated or adjudicated by data drawn from empirical investigation. One may argue that individualistic informed consent, for example, is not compatible with the Asian communitarian orientation. But this normative claim rests on an empirical assumption that may be contrary to the fact that some Asians do value and argue for informed consent. Is it necessary, or even factually accurate, to neatly characterize some cultures as individualistic and others as communitarian? Empirical investigation can provide a reasonable way to inform such generalizations. In a multi-cultural context, such as in the Philippines, there is a need to investigate the nature of the local ethos before making any appeal to authenticity. Otherwise we may succumb to the same ethical imperialism we are trying hard to resist. Normative claims that involve empirical premises cannot be reasonably verified or evaluated without utilizing empirical methods along with philosophical reflection. The integration of empirical methods into the standard normative approach to moral reasoning should be guided by the epistemic demands of claims arising from cross-cultural discourse in bioethics.

  3. Frequency and Antibiotic Resistance of Bacteria Implicated in Community Urinary Tract Infections in North Aveiro Between 2011 and 2014.

    PubMed

    Costa, Tânia; Linhares, Inês; Ferreira, Ricardo; Neves, Jasmin; Almeida, Adelaide

    2018-05-01

    The present study aims to evaluate the predominance of uropathogens responsible for urinary tract infection (UTI) and to determine their resistance patterns, in order to assess whether the recommended empirical treatment is appropriate for the studied population. Samples were collected in Aveiro (Portugal) from an ambulatory service between June 2011 and June 2014. Of the 4,270 urine samples positive for UTI, 3,561 (83%) were from women and only 709 (17%) were from men. The bacterium Escherichia coli was the most frequent uropathogen, followed by Klebsiella sp., Enterococcus sp., and Proteus mirabilis. E. coli was also the uropathogen presenting the least resistance to antibiotics, including those recommended as first- and second-line UTI treatment. In general, bacteria isolated from men were more resistant to antimicrobials than bacteria isolated from women. The results of this study emphasize the importance of considering sex as a differentiating factor in the choice of empirical UTI treatment, mainly because of differences in antimicrobial resistance. Of the first-line drugs recommended by the European Association of Urology (EAU) for empirical treatment of uncomplicated UTI, nitrofurantoin is the most appropriate drug for both sexes. Ciprofloxacin, although appropriate for treatment in women, is not appropriate for treating UTIs in men. Of the second-line drugs, both trimethoprim-sulfamethoxazole (TMP-SMX) and amoxicillin-clavulanic acid (AMX-CA) are appropriate for treatment of uncomplicated UTI in women, but not as effective for men.

  4. The velocity field of growing ear cartilage.

    PubMed Central

    Cox, R W; Peacock, M A

    1978-01-01

    The velocity vector field of the growing rabbit ear cartilage has been investigated between 12 and 299 days. Empirical curves have been computed for path lines and for velocities between 12 and 87 days. The tissue movement has been found to behave as an irrotational flow of material. Stream lines and velocity equipotential lines have been calculated and provide a kinematic description of the changes during growth. The importance of a knowledge of the velocity vector in physical descriptions of growth and morphological differentiation at the tissue and cellular levels is emphasized. PMID:689993

  5. Political efficacy and familiarity as predictors of attitudes towards electric transmission lines in the United States

    DOE PAGES

    Joe, Jeffrey C.; Hendrickson, Kelsie; Wong, Maria; ...

    2016-05-18

    Public opposition to the construction (i.e., siting) of new high voltage overhead transmission lines is not a new or isolated phenomenon. Past research has posited a variety of reasons, applied general theories, and has provided empirical evidence to explain public opposition. The existing literature, while clarifying many elements of the issue, does not yet fully explain the complexities underlying this public opposition phenomenon. As a result, the current study demonstrated how two overlooked factors, people’s sense of political efficacy and their familiarity (i.e., prior exposure) with transmission lines, explained attitudes of support and opposition to siting new power lines.

  7. Supercomputer modelling of an electronic structure for KCl nanocrystal with edge dislocation with the use of semiempirical and nonempirical models

    NASA Astrophysics Data System (ADS)

    Timoshenko, Yu K.; Shunina, V. A.; Shashkin, A. I.

    2018-03-01

    In the present work we used semiempirical and non-empirical models of the electronic states of a KCl nanocrystal containing an edge dislocation and compared the results obtained. Electronic levels and local densities of states were calculated. We found reasonable qualitative agreement between the semiempirical and non-empirical results. Using the results of the computer modelling, we discuss the problem of localization of electronic states near the line of the edge dislocation.

  8. Soot and Spectral Radiation Modeling for a High-Pressure Turbulent Spray Flame

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferreyro-Fernandez, Sebastian; Paul, Chandan; Sircar, Arpan

    Simulations are performed of a transient high-pressure turbulent n-dodecane spray flame under engine-relevant conditions. An unsteady RANS formulation is used, with detailed chemistry, a semi-empirical two-equation soot model, and a particle-based transported composition probability density function (PDF) method to account for unresolved turbulent fluctuations in composition and temperature. Results from the PDF model are compared with those from a locally well-stirred reactor (WSR) model to quantify the effects of turbulence-chemistry-soot interactions. Computed liquid and vapor penetration versus time, ignition delay, and flame lift-off height are in good agreement with experiment, and relatively small differences are seen between the WSR and PDF models for these global quantities. Computed soot levels and spatial soot distributions from the WSR and PDF models show large differences, with the PDF results being in better agreement with experimental measurements. An uncoupled photon Monte Carlo method with line-by-line spectral resolution is used to compute the spectral intensity distribution of the radiation leaving the flame. This provides new insight into the relative importance of molecular gas radiation versus soot radiation, and the importance of turbulent fluctuations on radiative heat transfer.

  9. Cutting the Composite Gordian Knot: Untangling the AGN-Starburst Threads in Single Aperture Spectra

    NASA Astrophysics Data System (ADS)

    Flury, Sophia; Moran, Edward C.

    2018-01-01

    Standard emission line diagnostics are able to segregate star-forming galaxies and Seyfert nuclei, and it is often assumed that ambiguous emission-line galaxies falling between these two populations are "composite" objects exhibiting both types of photoionization. We have developed a method that predicts the most probable H II and AGN components that could plausibly explain the "composite"-classed objects solely on the basis of their SDSS spectra. The majority of our analysis is driven by empirical relationships revealed by the SDSS data rather than by theoretical models founded on assumptions. To verify our method, we have compared the predictions of our model with publicly released IFU data from the S7 survey and find that composite objects are not in fact a simple linear combination of the two types of emission. The data reveal a key component in the mixing sequence: geometric dilution of the ionizing radiation that powers the NLR of the active nucleus. When this effect is accounted for, our model is successful when applied to several composite-class galaxies. Some objects, however, appear to be at variance with the predicted results, suggesting they may not be powered by black hole accretion.

  10. Capturing the Central Line Bundle Infection Prevention Interventions: Comparison of Reflective and Composite Modeling Methods.

    PubMed

    Gilmartin, Heather M; Sousa, Karen H; Battaglia, Catherine

    2016-01-01

    The central line (CL) bundle interventions are important for preventing central line-associated bloodstream infections (CLABSIs), but a modeling method for testing the CL bundle interventions within a health systems framework is lacking. Guided by the Quality Health Outcomes Model (QHOM), this study tested the CL bundle interventions in reflective and composite latent-variable measurement models to assess the impact of the modeling approaches on an investigation of the relationships between adherence to the CL bundle interventions, organizational context, and CLABSIs. A secondary data analysis was conducted using data from 614 U.S. hospitals that participated in the Prevention of Nosocomial Infection and Cost-Effectiveness Refined study. The sample was randomly split into exploration and validation subsets. The two CL bundle modeling approaches resulted in adequately fitting structural models (RMSEA = .04; CFI = .94) and supported similar relationships within the QHOM. Adherence to the CL bundle had a direct effect on organizational context (reflective = .23; composite = .20; p = .01) and on CLABSIs (reflective = -.28; composite = -.25; p = .01). The relationship between context and CLABSIs was not significant. Both modeling methods resulted in partial support of the QHOM. There were small statistical but large conceptual differences between the reflective and composite modeling approaches. The empirical impact of the modeling approaches was inconclusive, as both models fit the data well. Lessons learned are presented. Comparison of modeling approaches is recommended when first modeling variables that have never been modeled, or variables with directional ambiguity, to increase transparency and bring confidence to study findings.

  11. Response: Reading between the lines of cancer screening trials: using modeling to understand the evidence.

    PubMed

    Etzioni, Ruth; Gulati, Roman

    2013-04-01

    In our article about limitations of basing screening policy on screening trials, we offered several examples of ways in which modeling, using data from large screening trials and population trends, provided insights that differed somewhat from those based only on empirical trial results. In this editorial, we take a step back and consider the general question of whether randomized screening trials provide the strongest evidence for clinical guidelines concerning population screening programs. We argue that randomized trials provide a process that is designed to protect against certain biases but that this process does not guarantee that inferences based on empirical results from screening trials will be unbiased. Appropriate quantitative methods are key to obtaining unbiased inferences from screening trials. We highlight several studies in the statistical literature demonstrating that conventional survival analyses of screening trials can be misleading and list a number of key questions concerning screening harms and benefits that cannot be answered without modeling. Although we acknowledge the centrality of screening trials in the policy process, we maintain that modeling constitutes a powerful tool for screening trial interpretation and screening policy development.

  12. Cospatial Longslit UV-Optical Spectra of Ten Galactic Planetary Nebulae with HST STIS: Description of observations, global emission-line measurements, and empirical CNO abundances

    NASA Astrophysics Data System (ADS)

    Dufour, R. J.; Kwitter, K. B.; Shaw, R. A.; Balick, B.; Henry, R. B. C.; Miller, T. R.; Corradi, R. L. M.

    2015-01-01

    This poster describes details of HST Cycle 19 (program GO 12600), which was awarded 32 orbits of observing time with STIS to obtain the first cospatial UV-optical spectra of 10 Galactic planetary nebulae (PNe). The observational goal was to measure the UV emission lines of carbon and nitrogen with unprecedented S/N and wavelength and spatial resolution along the disk of each object over a wavelength range of 1150-10270 Å. The PNe were chosen such that each possessed a near-solar metallicity but the group together spanned a broad range in N/O. This poster concentrates on describing the observations, emission-line measurements integrated along the entire slit lengths, ionic abundances, and estimated total elemental abundances using empirical ionization correction factors and the ELSA code. Related posters by co-authors in this session concentrate on analyzing CNO abundances, progenitor masses, and nebular properties of the best-observed targets using photoionization modeling of the global emission-line measurements [Henry et al.] or detailed analyses of spatial variations in electron temperatures, densities, and abundances along the sub-arcsecond resolution slits [Miller et al. & Shaw et al.]. We gratefully acknowledge AURA/STScI for the GO 12600 program support, both observational and financial.

  13. Positive Psychology in Cancer Care: A Story Line Resistant to Evidence

    PubMed Central

    Tennen, Howard; Ranchor, Adelita V.

    2010-01-01

    Background Aspinwall and Tedeschi (Ann Behav Med, 2010) summarize evidence they view as supporting links between positive psychological states, including sense of coherence (SOC) and optimism and health outcomes, and they refer to persistent assumptions that interfere with understanding how positive states predict health. Purpose We critically evaluate Aspinwall and Tedeschi’s assertions. Methods We examine evidence related to SOC and optimism in relation to physical health, and revisit proposed processes linking positive psychological states to health outcomes, particularly via the immune system in cancer. Results Aspinwall and Tedeschi’s assumptions regarding SOC and optimism are at odds with available evidence. Proposed pathways between positive psychological states and cancer outcomes are not supported by existing data. Aspinwall and Tedeschi’s portrayal of persistent interfering assumptions echoes a disregard of precedent in the broader positive psychology literature. Conclusion Positive psychology’s interpretations of the literature regarding positive psychological states and cancer outcomes represent a self-perpetuating story line without empirical support. PMID:20186581

  14. Moral Stress, Moral Practice, and Ethical Climate in Community-Based Drug-Use Research: Views From the Front Line.

    PubMed

    Fisher, Celia B; True, Gala; Alexander, Leslie; Fried, Adam L

    2013-01-01

    The role of front-line researchers, those whose responsibilities include face-to-face contact with participants, is critical to ensuring the responsible conduct of community-based drug use research. To date, there has been little empirical examination of how front-line researchers perceive the effectiveness of ethical procedures in their real-world application and the moral stress they may experience when adherence to scientific procedures appears to conflict with participant protections. This study represents a first step in applying psychological science to examine the work-related attitudes, ethics climate, and moral dilemmas experienced by a national sample of 275 front-line staff members whose responsibilities include face-to-face interaction with participants in community-based drug-use research. Using an anonymous Web-based survey we psychometrically evaluated and examined relationships among six new scales tapping moral stress (frustration in response to perceived barriers to conducting research in a morally appropriate manner); organizational ethics climate; staff support; moral practice dilemmas (perceived conflicts between scientific integrity and participant welfare); research commitment; and research mistrust. As predicted, front-line researchers who evidence a strong commitment to their role in the research process and who perceive their organizations as committed to research ethics and staff support experienced lower levels of moral stress. Front-line researchers who were distrustful of the research enterprise and frequently grappled with moral practice dilemmas reported higher levels of moral stress. Applying psychometrically reliable scales to empirically examine research ethics challenges can illuminate specific threats to scientific integrity and human subjects protections encountered by front-line staff and suggest organizational strategies for reducing moral stress and enhancing the responsible conduct of research.

  15. Dual therapy for third-line Helicobacter pylori eradication and urea breath test prediction

    PubMed Central

    Nishizawa, Toshihiro; Suzuki, Hidekazu; Maekawa, Takama; Harada, Naohiko; Toyokawa, Tatsuya; Kuwai, Toshio; Ohara, Masanori; Suzuki, Takahiro; Kawanishi, Masahiro; Noguchi, Kenji; Yoshio, Toshiyuki; Katsushima, Shinji; Tsuruta, Hideo; Masuda, Eiji; Tanaka, Munehiro; Katayama, Shunsuke; Kawamura, Norio; Nishizawa, Yuko; Hibi, Toshifumi; Takahashi, Masahiko

    2012-01-01

    We evaluated the efficacy and tolerability of a dual therapy with rabeprazole and amoxicillin (AMX) as an empiric third-line rescue therapy. In patients with failure of first-line treatment with a proton pump inhibitor (PPI)-AMX-clarithromycin regimen and second-line treatment with the PPI-AMX-metronidazole regimen, a third-line eradication regimen with rabeprazole (10 mg q.i.d.) and AMX (500 mg q.i.d.) was prescribed for 2 wk. Eradication was confirmed by the results of the 13C-urea breath test (UBT) at 12 wk after the therapy. A total of 46 patients were included; however, two were lost to follow-up. The eradication rates as determined by per-protocol and intention-to-treat analyses were 65.9% and 63.0%, respectively. The pretreatment UBT results in the subjects showing eradication failure and in those showing successful eradication were 32.9 ± 28.8 permil and 14.8 ± 12.8 permil, respectively; the results were significantly higher in the patients with eradication failure than in those with successful eradication (P = 0.019). A low pretreatment UBT result (≤ 28.5 permil) predicted the success of the eradication therapy with a positive predictive value of 81.3% and a sensitivity of 89.7%. Adverse effects were reported in 18.2% of the patients, mainly diarrhea and stomatitis. Dual therapy with rabeprazole and AMX appears to serve as a potential empirical third-line strategy for patients with low values on pretreatment UBT. PMID:22690086

  16. Translating Mendelian and complex inheritance of Alzheimer's disease genes for predicting unique personal genome variants

    PubMed Central

    Regan, Kelly; Wang, Kanix; Doughty, Emily; Li, Haiquan; Li, Jianrong; Lee, Younghee; Kann, Maricel G

    2012-01-01

    Objective Although trait-associated genes identified as complex versus single-gene inheritance differ substantially in odds ratio, the authors nonetheless posit that their mechanistic concordance can reveal fundamental properties of the genetic architecture, allowing the automated interpretation of unique polymorphisms within a personal genome. Materials and methods An analytical method, SPADE-gen, spanning three biological scales was developed to demonstrate the mechanistic concordance between Mendelian and complex inheritance of Alzheimer's disease (AD) genes: biological functions (BP), protein interaction modeling, and protein domain implicated in the disease-associated polymorphism. Results Among Gene Ontology (GO) biological processes (BP) enriched at a false detection rate <5% in 15 AD genes of Mendelian inheritance (Online Mendelian Inheritance in Man) and independently in those of complex inheritance (25 host genes of intragenic AD single-nucleotide polymorphisms confirmed in genome-wide association studies), 16 overlapped (empirical p=0.007) and 45 were similar (empirical p<0.009; information theory). SPAN network modeling extended the canonical pathway of AD (KEGG) with 26 new protein interactions (empirical p<0.0001). Discussion The study prioritized new AD-associated biological mechanisms and focused the analysis on previously unreported interactions associated with the biological processes of polymorphisms that affect specific protein domains within characterized AD genes and their direct interactors using (1) concordant GO-BP and (2) domain interactions within STRING protein–protein interactions corresponding to the genomic location of the AD polymorphism (eg, EPHA1, APOE, and CD2AP). Conclusion These results are in line with unique-event polymorphism theory, indicating how disease-associated polymorphisms of Mendelian or complex inheritance relate genetically to those observed as ‘unique personal variants’. 
They also provide insight for identifying novel targets, for repositioning drugs, and for personal therapeutics. PMID:22319180

  17. Adaptive Filtration of Physiological Artifacts in EEG Signals in Humans Using Empirical Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Grubov, V. V.; Runnova, A. E.; Hramov, A. E.

    2018-05-01

    A new method for adaptive filtration of experimental EEG signals in humans and for removal of different physiological artifacts has been proposed. The algorithm of the method includes empirical mode decomposition of the EEG, determination of the number of empirical modes to be considered, analysis of the empirical modes and search for modes that contain artifacts, removal of these modes, and reconstruction of the EEG signal. The method was tested on experimental human EEG signals and demonstrated high efficiency in the removal of different types of physiological EEG artifacts.
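
    The decompose, inspect, drop, reconstruct pipeline described above can be sketched in a few lines. A faithful implementation would use true EMD sifting (e.g. via the third-party PyEMD package); as a self-contained stand-in, this sketch partitions the spectrum into fixed bands, and it flags a "mode" as artifactual when its crest factor (peak over RMS) is high. Both the band split and the crest-factor criterion are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def decompose(x, n_modes=4):
    # Stand-in for empirical mode decomposition: partition the FFT into
    # equal-width bands and invert each band separately. Real EMD would
    # derive the modes adaptively from the data (e.g. PyEMD's EMD class).
    X = np.fft.rfft(x)
    edges = np.linspace(0, len(X), n_modes + 1).astype(int)
    modes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        Y = np.zeros_like(X)
        Y[lo:hi] = X[lo:hi]
        modes.append(np.fft.irfft(Y, n=len(x)))
    return modes  # the modes sum exactly back to x

def remove_artifact_modes(x, n_modes=4, crest_thresh=3.0):
    # Flag modes whose peak amplitude is far above their own RMS level
    # (a crude proxy for transient artifacts such as eye blinks), drop
    # them, and reconstruct the cleaned signal from the remaining modes.
    modes = decompose(x, n_modes)
    keep = [m for m in modes
            if np.max(np.abs(m)) < crest_thresh * np.sqrt(np.mean(m ** 2))]
    if not keep:
        return np.zeros_like(x)
    return sum(keep)
```

    On a test signal made of a slow sine plus a large localized high-frequency burst, the burst-carrying mode fails the crest-factor test and the reconstruction recovers the sine.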

  18. A Procedure for Structural Weight Estimation of Single Stage to Orbit Launch Vehicles (Interim User's Manual)

    NASA Technical Reports Server (NTRS)

    Martinovic, Zoran N.; Cerro, Jeffrey A.

    2002-01-01

    This is an interim user's manual for current procedures used in the Vehicle Analysis Branch at NASA Langley Research Center, Hampton, Virginia, for launch vehicle structural subsystem weight estimation based on finite element modeling and structural analysis. The process is intended to complement traditional methods of conceptual and early preliminary structural design, such as the application of empirical weight estimation or the application of classical engineering design equations and criteria to one-dimensional "line" models. Functions of two commercially available software codes are coupled together. Vehicle modeling and analysis are done using SDRC/I-DEAS, and structural sizing is performed with the Collier Research Corp. HyperSizer program.

  19. Massive Stars in the SDSS-IV/APOGEE SURVEY. I. OB Stars

    NASA Astrophysics Data System (ADS)

    Roman-Lopes, A.; Román-Zúñiga, C.; Tapia, Mauricio; Chojnowski, Drew; Gómez Maqueo Chew, Y.; García-Hernández, D. A.; Borissova, Jura; Minniti, Dante; Covey, Kevin R.; Longa-Peña, Penélope; Fernandez-Trincado, J. G.; Zamora, Olga; Nitschelm, Christian

    2018-03-01

    In this work, we make use of DR14 APOGEE spectroscopic data to study a sample of 92 known OB stars. We developed a near-infrared semi-empirical spectral classification method that was successfully applied to four new exemplars previously classified as later B-type stars. Our results agree well with those determined independently from ECHELLE optical spectra and are in line with the spectral types derived from the "canonical" MK blue optical system. This confirms that the APOGEE spectrograph can also be used as a powerful tool in surveys aiming to unveil and study a large number of moderately and highly obscured OB stars still hidden in the Galaxy.

  20. A Comparison of Full and Empirical Bayes Techniques for Inferring Sea Level Changes from Tide Gauge Records

    NASA Astrophysics Data System (ADS)

    Piecuch, C. G.; Huybers, P. J.; Tingley, M.

    2016-12-01

    Sea level observations from coastal tide gauges are some of the longest instrumental records of the ocean. However, these data can be noisy, biased, and gappy, and can reflect land motion and local effects. Coping with these issues in a formal manner is a challenging task. Some studies use Bayesian approaches to estimate sea level from tide gauge records, making inference probabilistically. Such methods are typically empirically Bayesian in nature: model parameters are treated as known and assigned point values. But, in reality, parameters are not perfectly known. Empirical Bayes methods thus neglect a potentially important source of uncertainty, and so may overestimate the precision (i.e., underestimate the uncertainty) of sea level estimates. We consider whether empirical Bayes methods underestimate uncertainty in sea level from tide gauge data by comparing to a full Bayes method that treats parameters as unknowns to be solved for along with the sea level field. We develop a hierarchical algorithm that we apply to tide gauge data on the North American northeast coast over 1893-2015. The algorithm is run in full Bayes mode, solving for the sea level process and parameters, and in empirical mode, solving only for the process using fixed parameter values. Error bars on sea level from the empirical method are smaller than from the full Bayes method, and the relative discrepancies increase with time; the 95% credible intervals on sea level values from the empirical Bayes method in 1910 and 2010 are 23% and 56% narrower, respectively, than from the full Bayes approach. To evaluate the representativeness of the credible intervals, the empirical Bayes and full Bayes methods are applied to corrupted data of a known surrogate field.
Using rank histograms to evaluate the solutions, we find that the full Bayes method produces generally reliable error bars, whereas the empirical Bayes method gives too-narrow error bars, such that the 90% credible interval only encompasses 70% of true process values. Results demonstrate that parameter uncertainty is an important source of process uncertainty, and advocate for the fully Bayesian treatment of tide gauge records in ocean circulation and climate studies.
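
    The under-coverage mechanism generalizes beyond tide gauges and can be illustrated with a deliberately minimal model: normal data with unknown mean and variance. The "empirical" interval plugs in the variance point estimate; the full Bayes interval samples the variance from its posterior and comes out wider. The normal toy model, the flat prior, and the 90% level are illustrative assumptions, not the paper's spatial hierarchy.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_interval(y):
    # Empirical-Bayes style 90% interval for the mean: treat the
    # estimated sigma as if it were the known true value.
    n, m, s = len(y), y.mean(), y.std(ddof=1)
    half = 1.645 * s / np.sqrt(n)  # 1.645 = two-sided 90% normal quantile
    return m - half, m + half

def full_bayes_interval(y, draws=2000):
    # Full Bayes with a flat prior on (mu, log sigma): draw sigma^2 from
    # its marginal posterior (a scaled inverse chi-square), then mu given
    # sigma, and read off the 5th/95th posterior percentiles. Propagating
    # sigma's uncertainty widens the interval (this is a Monte Carlo
    # version of the Student-t interval).
    n, m, s2 = len(y), y.mean(), y.var(ddof=1)
    sigma2 = (n - 1) * s2 / rng.chisquare(n - 1, size=draws)
    mu = rng.normal(m, np.sqrt(sigma2 / n))
    return np.percentile(mu, 5), np.percentile(mu, 95)
```

    Repeating this over many simulated datasets of size 5 shows the plug-in interval covering the true mean roughly 82-83% of the time versus roughly 90% for the full Bayes interval: the same qualitative gap the rank histograms reveal.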

  1. Stellar Absorption Line Analysis of Local Star-forming Galaxies: The Relation between Stellar Mass, Metallicity, Dust Attenuation, and Star Formation Rate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jabran Zahid, H.; Kudritzki, Rolf-Peter; Ho, I-Ting

    We analyze the optical continuum of star-forming galaxies in the Sloan Digital Sky Survey by fitting stacked spectra with stellar population synthesis models to investigate the relation between stellar mass, stellar metallicity, dust attenuation, and star formation rate. We fit models calculated with star formation and chemical evolution histories that are derived empirically from multi-epoch observations of the stellar mass–star formation rate and the stellar mass–gas-phase metallicity relations, respectively. We also fit linear combinations of single-burst models with a range of metallicities and ages. Star formation and chemical evolution histories are unconstrained for these models. The stellar mass–stellar metallicity relations obtained from the two methods agree with the relation measured from individual supergiant stars in nearby galaxies. These relations are also consistent with the relation obtained from emission-line analysis of gas-phase metallicity after accounting for systematic offsets in the gas-phase metallicity. We measure dust attenuation of the stellar continuum and show that its dependence on stellar mass and star formation rate is consistent with previously reported results derived from nebular emission lines. However, stellar continuum attenuation is smaller than nebular emission line attenuation. The continuum-to-nebular attenuation ratio depends on stellar mass and is smaller in more massive galaxies. Our consistent analysis of stellar continuum and nebular emission lines paves the way for a comprehensive investigation of stellar metallicities of star-forming and quiescent galaxies.

  2. Empirical Temperature Measurement in Protoplanetary Disks

    NASA Astrophysics Data System (ADS)

    Weaver, Erik; Isella, Andrea; Boehler, Yann

    2018-02-01

    The accurate measurement of temperature in protoplanetary disks is critical to understanding many key features of disk evolution and planet formation, from disk chemistry and dynamics, to planetesimal formation. This paper explores the techniques available to determine temperatures from observations of single, optically thick molecular emission lines. Specific attention is given to issues such as the inclusion of optically thin emission, problems resulting from continuum subtraction, and complications of real observations. Effort is also made to detail the exact nature and morphology of the region emitting a given line. To properly study and quantify these effects, this paper considers a range of disk models, from simple pedagogical models to very detailed models including full radiative transfer. Finally, we show how the use of the wrong methods can lead to potentially severe misinterpretations of data, leading to incorrect measurements of disk temperature profiles. We show that the best way to estimate the temperature of emitting gas is to analyze the line peak emission map without subtracting continuum emission. Continuum subtraction, which is commonly applied to observations of line emission, systematically leads to underestimation of the gas temperature. We further show that once observational effects such as beam dilution and noise are accounted for, the line brightness temperature derived from the peak emission is reliably within 10%–15% of the physical temperature of the emitting region, assuming optically thick emission. The methodology described in this paper will be applied in future works to constrain the temperature, and related physical quantities, in protoplanetary disks observed with ALMA.
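
    The continuum-subtraction bias has a compact back-of-the-envelope illustration. Assume a perfectly optically thick line whose peak specific intensity equals the Planck function at the gas temperature, sitting on a continuum characterized by a lower brightness temperature; the frequency (CO J=2-1) and the 40 K / 25 K temperatures below are illustrative values, not the paper's disk models.

```python
import numpy as np

H, K, C = 6.626e-34, 1.381e-23, 2.998e8  # SI: Planck, Boltzmann, light speed

def planck(nu, T):
    # Planck specific intensity B_nu(T) in W m^-2 Hz^-1 sr^-1.
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (K * T))

def brightness_temp(nu, I):
    # Exact inversion of the Planck function: the temperature of a
    # blackbody emitting specific intensity I at frequency nu.
    return H * nu / (K * np.log1p(2 * H * nu**3 / (C**2 * I)))

nu = 230.538e9              # CO J=2-1 rest frequency, Hz
T_gas, T_cont = 40.0, 25.0  # illustrative gas and continuum temperatures

I_peak = planck(nu, T_gas)           # optically thick line peak, continuum kept
I_sub = I_peak - planck(nu, T_cont)  # the same peak after continuum subtraction
```

    Inverting I_peak returns the gas temperature exactly, while inverting the continuum-subtracted intensity returns roughly 20 K for these numbers: subtraction systematically biases the inferred temperature low, as the paper warns.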

  3. Sparsity guided empirical wavelet transform for fault diagnosis of rolling element bearings

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Zhao, Yang; Yi, Cai; Tsui, Kwok-Leung; Lin, Jianhui

    2018-02-01

    Rolling element bearings are widely used in various industrial machines, such as electric motors, generators, pumps, gearboxes, railway axles, turbines, and helicopter transmissions. Fault diagnosis of rolling element bearings is beneficial to preventing unexpected accidents and reducing economic loss. In past years, many bearing fault detection methods have been developed. Recently, a new adaptive signal processing method called the empirical wavelet transform has attracted much attention from researchers and engineers, and its applications to bearing fault diagnosis have been reported. The main problem of the empirical wavelet transform is that the Fourier segments it requires are strongly dependent on the local maxima of the amplitudes of the Fourier spectrum of a signal, which means that the Fourier segments are not always reliable and effective if the Fourier spectrum of the signal is complicated and overwhelmed by heavy noise and other strong vibration components. In this paper, a sparsity guided empirical wavelet transform is proposed to automatically establish the Fourier segments required in the empirical wavelet transform for fault diagnosis of rolling element bearings. Industrial bearing fault signals caused by single and multiple railway axle bearing defects are used to verify the effectiveness of the proposed sparsity guided empirical wavelet transform. Results show that the proposed method can automatically discover the Fourier segments required in the empirical wavelet transform and reveal single and multiple railway axle bearing defects. Besides, comparisons with three popular signal processing methods, including ensemble empirical mode decomposition, the fast kurtogram, and the fast spectral correlation, are conducted to highlight the superiority of the proposed method.
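
    The spectrum-segmentation step that the abstract identifies as fragile can be sketched directly. The classic rule (from Gilles' original empirical wavelet transform) detects the largest local maxima of the magnitude spectrum and places a boundary at the midpoint between consecutive maxima; the sketch below implements only that rule, not the authors' sparsity-guided replacement.

```python
import numpy as np

def ewt_boundaries(x, n_peaks=3):
    # Locate the n_peaks largest local maxima of the magnitude spectrum
    # and put each Fourier segment boundary midway between consecutive
    # maxima. With heavy noise the detected maxima (and hence the
    # segments) become unreliable, which is the fragility the sparsity
    # guidance is meant to fix.
    mag = np.abs(np.fft.rfft(x))
    interior = np.arange(1, len(mag) - 1)
    is_max = (mag[interior] > mag[interior - 1]) & (mag[interior] > mag[interior + 1])
    peaks = interior[is_max]
    top = np.sort(peaks[np.argsort(mag[peaks])[-n_peaks:]])
    return [int((a + b) // 2) for a, b in zip(top[:-1], top[1:])]
```

    For a clean multi-tone signal the boundaries fall midway between the tone bins; adding strong broadband noise perturbs or relocates them, illustrating the dependence on spectral peaks.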

  4. [Near infrared analysis of blending homogeneity of Chinese medicine formula particles based on moving window F test method].

    PubMed

    Yang, Chan; Xu, Bing; Zhang, Zhi-Qiang; Wang, Xin; Shi, Xin-Yuan; Fu, Jing; Qiao, Yan-Jiang

    2016-10-01

    Blending uniformity is essential to ensure the homogeneity of Chinese medicine formula particles within each batch. This study was based on the blending process of ebony spray-dried powder and dextrin (the proportion of dextrin was 10%), in which near infrared (NIR) diffuse reflectance spectra collected from six different sampling points were analyzed in combination with the moving window F test method in order to assess the uniformity of the blending process. The method was validated against the changes in citric acid content determined by HPLC. The results of the moving window F test method showed that the blend of ebony spray-dried powder and dextrin was homogeneous during 200-300 r and segregated during 300-400 r. An advantage of this method is that the threshold value is defined statistically, not empirically, and thus does not suffer from the threshold ambiguities common to the moving block standard deviation (MBSD) approach. This method could also be employed for on-line monitoring of other blending processes of Chinese medicine powders. Copyright© by the Chinese Pharmaceutical Association.
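
    A minimal version of a moving window F test can be written down directly: slide a window over a single-wavelength mixing profile and compare each window's variance against the variance of a reference window from the homogeneous end of the run. The window size of 10 and the tabulated critical value F(0.95; 9, 9) ≈ 3.18 are illustrative choices, not the values used in the study.

```python
import numpy as np

F_CRIT = 3.18  # tabulated upper 5% point of F(9, 9), for window size 10

def moving_window_f(profile, window=10):
    # profile: e.g. NIR absorbance at one wavelength vs. revolution count.
    # The last window is taken as the homogeneous reference; an F ratio
    # below the critical value means the window's variance is statistically
    # indistinguishable from the reference, i.e. the blend looks uniform.
    ref_var = np.var(profile[-window:], ddof=1)
    return np.array([np.var(profile[i:i + window], ddof=1) / ref_var
                     for i in range(len(profile) - window + 1)])
```

    The statistically defined threshold is the advantage the abstract highlights: unlike an MBSD cutoff chosen by eye, F_CRIT comes from the F distribution for the chosen window sizes.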

  5. What Quasars Really Look Like: Unification of the Emission and Absorption Line Regions

    NASA Technical Reports Server (NTRS)

    Elvis, Martin

    2000-01-01

    We propose a simple unifying structure for the inner regions of quasars and AGN. This empirically derived model links together the broad absorption lines (BALs), the narrow UV/X-ray ionized absorbers, the BELR, and the Compton scattering/fluorescing regions into a single structure. The model also suggests an alternative origin for the large-scale bi-conical outflows. Some other potential implications of this structure are discussed.

  6. Comparison of the Various Methodologies Used in Studying Runoff and Sediment Load in the Yellow River Basin

    NASA Astrophysics Data System (ADS)

    Xu, M., III; Liu, X.

    2017-12-01

    In the past 60 years, both the runoff and the sediment load in the Yellow River Basin have shown significant decreasing trends owing to the influences of human activities and climate change. Quantifying the impact of each factor (e.g. precipitation, sediment trapping dams, pasture, terracing, etc.) on the runoff and sediment load is among the key issues in guiding the implementation of water and soil conservation measures and in predicting future trends. Hundreds of methods have been developed for studying the runoff and sediment load in the Yellow River Basin. Generally, these methods can be classified into empirical methods and physically based models. The empirical methods, including the hydrological method, the soil and water conservation method, etc., are widely used in Yellow River management engineering. These methods generally apply statistical analyses, such as regression analysis, to build empirical relationships between the main characteristic variables in a river basin. The elasticity method extensively used in hydrological research can also be classified as an empirical method, as it is mathematically equivalent to the hydrological method. Physically based models mainly include conceptual models and distributed models. The conceptual models are usually lumped models (e.g. the SYMHD model) and can be regarded as a transition between empirical models and distributed models. The literature shows that fewer studies have applied distributed models than empirical models, as the simulated runoff and sediment load from distributed models (e.g. the Digital Yellow Integrated Model, the Geomorphology-Based Hydrological Model, etc.) were usually unsatisfactory owing to the intensive human activities in the Yellow River Basin.
Therefore, this study primarily summarizes the empirical models applied in the Yellow River Basin and theoretically analyzes the main causes of the significantly different results obtained with different empirical research methods. In addition, we put forward an assessment framework for methods of studying runoff and sediment load variations in the Yellow River Basin, covering input data, model structure, and output. The framework was then applied to the Huangfuchuan River.

  7. Moisture and drug solid-state monitoring during a continuous drying process using empirical and mass balance models.

    PubMed

    Fonteyne, Margot; Gildemyn, Delphine; Peeters, Elisabeth; Mortier, Séverine Thérèse F C; Vercruysse, Jurgen; Gernaey, Krist V; Vervaet, Chris; Remon, Jean Paul; Nopens, Ingmar; De Beer, Thomas

    2014-08-01

    Classically, the end point detection during fluid bed drying has been performed using indirect parameters, such as the product temperature or the humidity of the outlet drying air. This paper aims at comparing those classic methods to both in-line moisture and solid-state determination by means of Process Analytical Technology (PAT) tools (Raman and NIR spectroscopy) and a mass balance approach. The six-segmented fluid bed drying system being part of a fully continuous from-powder-to-tablet production line (ConsiGma™-25) was used for this study. A theophylline:lactose:PVP (30:67.5:2.5) blend was chosen as model formulation. For the development of the NIR-based moisture determination model, 15 calibration experiments in the fluid bed dryer were performed. Six test experiments were conducted afterwards, and the product was monitored in-line with NIR and Raman spectroscopy during drying. The results (drying endpoint and residual moisture) obtained via the NIR-based moisture determination model, the classical approach by means of indirect parameters and the mass balance model were then compared. Our conclusion is that the PAT-based method is most suited for use in a production set-up. Secondly, the different size fractions of the dried granules obtained during different experiments (fines, yield and oversized granules) were compared separately, revealing differences in both solid state of theophylline and moisture content between the different granule size fractions. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. An experimental water line list at 1950 K in the 6250-6670 cm-1 region

    NASA Astrophysics Data System (ADS)

    Rutkowski, Lucile; Foltynowicz, Aleksandra; Schmidt, Florian M.; Johansson, Alexandra C.; Khodabakhsh, Amir; Kyuberis, Aleksandra A.; Zobov, Nikolai F.; Polyansky, Oleg L.; Yurchenko, Sergei N.; Tennyson, Jonathan

    2018-01-01

    An absorption spectrum of H216O at 1950 K is recorded in a premixed methane/air flat flame using a cavity-enhanced optical frequency comb-based Fourier transform spectrometer. 2417 absorption lines are identified in the 6250-6670 cm-1 region with an accuracy of about 0.01 cm-1. Absolute line intensities are retrieved using temperature and concentration values obtained by tunable diode laser absorption spectroscopy. Line assignments are made using a combination of empirically known energy levels and predictions from the new POKAZATEL variational line list. 2030 of the observed lines are assigned to 2937 transitions, once blends are taken into account. 126 new energy levels of H216O are identified. The assigned transitions belong to 136 bands and span rotational states up to J = 27 .

  9. Assessing differential expression in two-color microarrays: a resampling-based empirical Bayes approach.

    PubMed

    Li, Dongmei; Le Pape, Marc A; Parikh, Nisha I; Chen, Will X; Dye, Timothy D

    2013-01-01

    Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, the fold change criteria of the Significance Analysis of Microarrays method are problematic and can critically alter the conclusion of a study as a result of compositional changes of the control data set in the analysis. We propose a novel approach combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but is also impervious to the fold change threshold, since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rate control between the approaches are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates compared with Smyth's parametric method when data are not normally distributed. The Resampling-based empirical Bayes Methods also offer higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially expressed genes is large, for both normally and non-normally distributed data. Finally, the Resampling-based empirical Bayes Methods are generalizable to next generation sequencing RNA-seq data analysis.
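
    The combination the authors describe, variance shrinkage plus a resampled null, can be sketched as follows. The shrinkage rule here (averaging each gene's pooled SD with the median SD across genes) is a deliberately crude stand-in for a moderated statistic, and sample labels are permuted to build the null; neither detail is claimed to match the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def moderated_t(a, b, s0=None):
    # a, b: genes x replicates arrays for the two conditions. Shrink each
    # gene's pooled SD toward a common value s0 (default: the median SD
    # across genes) before forming the t-like statistic, which stabilizes
    # genes whose variance is poorly estimated.
    na, nb = a.shape[1], b.shape[1]
    sp = np.sqrt(((na - 1) * a.var(axis=1, ddof=1)
                  + (nb - 1) * b.var(axis=1, ddof=1)) / (na + nb - 2))
    if s0 is None:
        s0 = np.median(sp)
    se = 0.5 * (sp + s0) * np.sqrt(1.0 / na + 1.0 / nb)
    return (a.mean(axis=1) - b.mean(axis=1)) / se

def resampling_pvalues(a, b, n_perm=200):
    # Build the null by permuting sample labels and pooling the permuted
    # statistics across genes; two-sided p-value per gene. No control
    # data set has to be singled out, which is the robustness the
    # abstract emphasizes.
    obs = np.abs(moderated_t(a, b))
    data, na = np.hstack([a, b]), a.shape[1]
    null = []
    for _ in range(n_perm):
        idx = rng.permutation(data.shape[1])
        null.append(moderated_t(data[:, idx[:na]], data[:, idx[na:]]))
    null = np.abs(np.concatenate(null))
    return (1 + (null[None, :] >= obs[:, None]).sum(axis=1)) / (1 + null.size)
```

    On simulated data with a handful of strongly shifted genes, the shifted genes receive small p-values while the null genes remain near-uniform.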

  10. Empirical first-line treatment with tigecycline for febrile episodes following abdominal surgery in cancer patients.

    PubMed

    Secondo, Giovanni; Vassallo, Francesca; Solari, Nicola; Moresco, Luciano; Percivale, Pierluigi; Zappi, Lucia; Cafiero, Ferdinando; De Maria, Andrea

    2010-11-01

    Cancer patients with complicated infections following abdominal surgery represent one of the most challenging clinical scenarios for testing the efficacy of empirical antimicrobial therapy. No study so far has evaluated the performance of tigecycline (TIG) administered as empirical first-line treatment in a homogeneous population of surgical cancer patients with a febrile episode. An observational review of the data records of 24 sequential patients receiving TIG for a febrile episode following a major abdominal procedure in a single cancer institute was performed. Large bowel surgery represented 68% of all procedures, followed by gastric surgery (16%) and urinary-gynaecologic-biliary surgery (16%). Complications following surgery were observed in 68% of febrile episodes, with peritonitis and sepsis accounting for 59% and 24% of complications, respectively. Eight patients needed repeat surgery for source control. The mean duration of TIG treatment was 8 days. Causative pathogens were detected in 16 episodes (64%), and a total of 44 microorganisms were recovered (29% Escherichia coli, 9% Enterococcus faecalis and 9% coagulase-negative staphylococci). TIG was effective in 12 episodes (48%). The success rate was 67% when infectious episodes sustained by intrinsically resistant bacteria and fungi were excluded. Treatment failure was associated with the presence of complications and with microbiologically documented infection. TIG may be useful as a first-line treatment option in cancer patients requiring antibiotic treatment following surgery when complications are not present or suspected on clinical grounds and when local microbial epidemiology shows a low incidence of primary resistant bacteria. Copyright © 2010 Elsevier B.V. and the International Society of Chemotherapy. All rights reserved.

  11. Mapping Diffuse Seismicity Using Empirical Matched Field Processing Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, J; Templeton, D C; Harris, D B

    The objective of this project is to detect and locate more microearthquakes using the empirical matched field processing (MFP) method than can be detected using only conventional earthquake detection techniques. We propose that empirical MFP can complement existing catalogs and techniques. We test our method on continuous seismic data collected at the Salton Sea Geothermal Field during November 2009 and January 2010. In the Southern California Earthquake Data Center (SCEDC) earthquake catalog, 619 events were identified in our study area during this time frame, and our MFP technique identified 1094 events. Therefore, we believe that the empirical MFP method combined with conventional methods significantly improves the network detection ability in an efficient manner.
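The abstract does not give the authors' implementation details; as a rough illustration of the matched-field idea, the sketch below scores a data snapshot against an empirical steering vector (the normalized array spectrum of a calibration event) with a Bartlett processor averaged over frequency bins. The array sizes and detection logic are illustrative assumptions, not the study's code.

```python
import numpy as np

def bartlett_mfp(data_fft, steering_fft):
    """Bartlett matched-field statistic, averaged over frequency bins.

    data_fft, steering_fft: complex arrays of shape (n_freq, n_sensors),
    the per-frequency Fourier coefficients across the array.
    """
    power = 0.0
    for d, w in zip(data_fft, steering_fft):
        w = w / np.linalg.norm(w)           # empirical steering vector
        d = d / np.linalg.norm(d)           # normalized data snapshot
        power += np.abs(np.vdot(w, d))**2   # Bartlett power at this bin
    return power / len(data_fft)

# A snapshot identical to the calibration event scores 1.0;
# an uncorrelated noise snapshot scores much lower.
rng = np.random.default_rng(0)
calib = rng.standard_normal((8, 5)) + 1j * rng.standard_normal((8, 5))
match = bartlett_mfp(calib, calib)
noise = bartlett_mfp(rng.standard_normal((8, 5)) + 1j * rng.standard_normal((8, 5)),
                     calib)
print(round(match, 6), match > noise)  # → 1.0 True
```

In practice a detector slides such snapshots along the continuous record and declares an event when the statistic exceeds a threshold calibrated from the noise background.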

  12. The effect of fiscal policy on diet, obesity and chronic disease: a systematic review

    PubMed Central

    Jan, Stephen; Leeder, Stephen; Swinburn, Boyd

    2010-01-01

    Abstract Objective To assess the effect of food taxes and subsidies on diet, body weight and health through a systematic review of the literature. Methods We searched the English-language published and grey literature for empirical and modelling studies on the effects of monetary subsidies or taxes levied on specific food products on consumption habits, body weight and chronic conditions. Empirical studies dealt with an actual tax, while modelling studies predicted outcomes based on a hypothetical tax or subsidy. Findings Twenty-four studies met the inclusion criteria: 13 were from the peer-reviewed literature and 11 were published online. There were 8 empirical and 16 modelling studies. Nine studies assessed the impact of taxes on food consumption only, 5 on consumption and body weight, 4 on consumption and disease and 6 on body weight only. In general, taxes and subsidies influenced consumption in the desired direction, with larger taxes being associated with more significant changes in consumption, body weight and disease incidence. However, studies that focused on a single target food or nutrient may have overestimated the impact of taxes by failing to take into account shifts in consumption to other foods. The quality of the evidence was generally low. Almost all studies were conducted in high-income countries. Conclusion Food taxes and subsidies have the potential to contribute to healthy consumption patterns at the population level. However, current evidence is generally of low quality and the empirical evaluation of existing taxes is a research priority, along with research into the effectiveness and differential impact of food taxes in developing countries. PMID:20680126

  13. THE COUPLED EVOLUTION OF ELECTRONS AND IONS IN CORONAL MASS EJECTION-DRIVEN SHOCKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manchester IV, W. B.; Van der Holst, B.; Toth, G.

    2012-09-01

    We present simulations of coronal mass ejections (CMEs) performed with a new two-temperature coronal model developed at the University of Michigan, which is able to address the coupled thermodynamics of the electron and proton populations in the context of a single fluid. This model employs heat conduction for electrons, a constant adiabatic index ({gamma} = 5/3), and includes Alfven wave pressure to accelerate the solar wind. The Wang-Sheeley-Arge empirical model is used to determine the Alfven wave pressure necessary to produce the observed bimodal solar wind speed. The Alfven waves are dissipated as they propagate from the Sun and heat protons on open magnetic field lines to temperatures above 2 MK. The model is driven by empirical boundary conditions that include GONG magnetogram data to calculate the coronal field, and STEREO/EUVI observations to specify the density and temperature at the coronal boundary by the Differential Emission Measure Tomography method. With this model, we simulate the propagation of fast CMEs and study the thermodynamics of CME-driven shocks. Since the thermal speed of the electrons greatly exceeds the speed of the CME, only protons are directly heated by the shock. Coulomb collisions low in the corona couple the protons and electrons, allowing heat exchange between the two species. However, the coupling is so brief that the electrons never achieve more than 10% of the maximum temperature of the protons. We find that heat is able to conduct on open magnetic field lines and rapidly propagates ahead of the CME to form a shock precursor of hot electrons.

  14. IUE observations of Si and C lines and comparison with non-LTE models

    NASA Technical Reports Server (NTRS)

    Kamp, L. W.

    1982-01-01

    Classical model atmosphere techniques are applied to analyze IUE spectra and to determine abundances, effective temperatures and gravities. Measurements of the equivalent widths and other properties of the line profiles of 24 photospheric lines of Si II, Si III, Si IV, C II, C III and C IV are presented in the range of 1175-1725 A for seven B and two O stars. Observed line profiles are compared with theoretical profiles computed using non-LTE theory and models, and using line-blanketed model atmospheres. Agreement is reasonably good, although strong lines are calculated to be systematically stronger than those observed, while the reverse occurs for weak lines, and empirical profiles have smaller wings than theoretical profiles. It is concluded that the present theory of line formation, when used with solar abundances, represents fairly well the observed UV photospheric lines of silicon and carbon ions in the atmospheres of main-sequence stars of types B5-O9.

  15. [Mediaeval anatomic iconography (Part II)].

    PubMed

    Barg, L

    1996-01-01

    In the second part of his paper, the author presents mediaeval anatomical drawings based on empirical studies, from the first drawings of the XVth century, which showed the sites of blood-letting and were connected with astrological prognostics, to the systematic drawings of Guido de Vigevano. He stresses the parallel existence of two lines of teaching anatomy: one based on philosophical concepts (discussed in the first part of the paper), the other based on empirical concepts. The latter trend formed the grounds for the final transformation of anatomical science in the age of the Renaissance.

  16. Discovery of C5-C17 poly- and perfluoroalkyl substances in water by in-line SPE-HPLC-Orbitrap with in-source fragmentation flagging.

    PubMed

    Liu, Yanna; Pereira, Alberto Dos Santos; Martin, Jonathan W

    2015-04-21

    The presence of unknown organofluorine compounds in environmental samples has prompted the development of nontargeted analytical methods capable of detecting new perfluoroalkyl and polyfluoroalkyl substances (PFASs). By combining high volume injection with high performance liquid chromatography (HPLC) and ultrahigh resolution Orbitrap mass spectrometry, a sensitive (0.003-0.2 ng F/mL for model mass-labeled PFASs) untargeted workflow was developed for discovery and characterization of novel PFASs in water. In the first step, up to 5 mL of water is injected to in-line solid phase extraction, chromatographed by HPLC, and detected by electrospray ionization with mass spectral acquisition in parallel modes cycling back and forth: (i) full scan with ultrahigh resolving power (RP = 120,000, mass accuracy ≤3 ppm), and (ii) in-source fragmentation flagging scans designed to yield marker fragment ions including [C2F5](-) (m/z 118.992), [C3F7](-) (m/z 168.988), [SO4H](-) (m/z 96.959), and [Cl](-) (m/z 34.9). For flagged PFASs, plausible empirical formulas were generated from accurate masses, isotopic patterns, and fragment ions. In the second step, another injection is made to collect high resolution MS/MS spectra of suspect PFAS ions, allowing further confirmation of empirical formulas while also enabling preliminary structural characterization. The method was validated by applying it to an industrial wastewater, and 36 new PFASs were discovered. Of these, 26 were confidently assigned to 3 new PFAS classes that have not previously been reported in the environment: polyfluorinated sulfates (CnFn+3Hn-2SO4(-); n = 5, 7, 9, 11, 13, and 15), chlorine substituted perfluorocarboxylates (ClCnF2nCO2(-); n = 4-11), and hydro substituted perfluorocarboxylates (HCnF2nCO2(-); n = 5-16). Application of the technique to environmental water samples is now warranted.
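The step of generating plausible empirical formulas from accurate masses can be sketched as a brute-force composition search within a ppm tolerance. The element ranges, the 3 ppm window, and the example ion (the perfluorooctanoate anion) are illustrative assumptions, not details taken from the paper.

```python
import itertools

# Monoisotopic masses (u); one electron mass is added for a singly charged anion.
MASS = {'C': 12.0, 'H': 1.00782503, 'F': 18.99840322,
        'O': 15.99491462, 'S': 31.97207117, 'Cl': 34.96885268}
E_MASS = 0.00054858

def candidate_formulas(target_mz, ppm=3, max_counts=(20, 10, 30, 6, 2, 2)):
    """Brute-force C/H/F/O/S/Cl compositions whose anion mass matches target_mz."""
    hits = []
    ranges = [range(n + 1) for n in max_counts]
    for c, h, f, o, s, cl in itertools.product(*ranges):
        mass = (c * MASS['C'] + h * MASS['H'] + f * MASS['F']
                + o * MASS['O'] + s * MASS['S'] + cl * MASS['Cl'] + E_MASS)
        if abs(mass - target_mz) / target_mz * 1e6 <= ppm:
            hits.append(f"C{c}H{h}F{f}O{o}S{s}Cl{cl}")
    return hits

# e.g. the perfluorooctanoate anion [C8F15O2]- at m/z 412.9664
print(candidate_formulas(412.9664))
```

Real workflows then rank the candidates by isotopic pattern and fragment-ion consistency, as the second (MS/MS) step of the published method does.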

  17. XCO2 Retrieval Errors from a PCA-based Approach to Fast Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Somkuti, Peter; Boesch, Hartmut; Natraj, Vijay; Kopparla, Pushkar

    2017-04-01

    Multiple-scattering radiative transfer (RT) calculations are an integral part of forward models used to infer greenhouse gas concentrations in the shortwave-infrared spectral range from satellite missions such as GOSAT or OCO-2. Such calculations are, however, computationally expensive and, combined with the recent growth in data volume, necessitate the use of acceleration methods in order to make retrievals feasible on an operational level. The principal component analysis (PCA)-based approach to fast radiative transfer introduced by Natraj et al. (2005) is a spectral binning method, in which the many line-by-line monochromatic calculations are replaced by a small set of representative ones. From the PCA performed on the optical layer properties for a scene-dependent atmosphere, the results of the representative calculations are mapped onto all spectral points in the given band. Since this RT scheme is an approximation, the computed top-of-atmosphere radiances exhibit errors compared to the "full" line-by-line calculation. These errors ultimately propagate into the final retrieved greenhouse gas concentrations, and their magnitude depends on scene-dependent parameters such as aerosol loadings or viewing geometry. An advantage of this method is the ability to choose the degree of accuracy by increasing or decreasing the number of empirical orthogonal functions used for the reconstruction of the radiances. We have performed a large set of global simulations based on real GOSAT scenes and assess the retrieval errors induced by the fast RT approximation through linear error analysis. We find that across a wide range of geophysical parameters, the errors are for the most part smaller than ± 0.2 ppm and ± 0.06 ppm (out of roughly 400 ppm) for ocean and land scenes, respectively. A fast RT scheme that produces low errors is important, since regional biases in XCO2 even in the low sub-ppm range can cause significant changes in carbon fluxes obtained from inversions (Chevallier et al. 2007).
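A toy version of the PCA spectral-binning idea can be sketched as follows: perform a PCA on the (log) layer optical depths, bin spectral points by their leading principal-component score, run the expensive RT calculation only once per bin, and map that result to every member of the bin. The Beer-Lambert "solver", the binning rule, and all sizes are simplifying assumptions; the published scheme reconstructs radiance corrections from several empirical orthogonal functions rather than reusing a single representative result.

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_rt(tau_profile):
    # Stand-in for a full multiple-scattering solver: here just
    # Beer-Lambert transmittance through all layers.
    return np.exp(-tau_profile.sum())

# Toy optical-depth profiles: 2000 spectral points x 20 layers,
# spectrally correlated by construction.
n_spec, n_lay = 2000, 20
base = np.abs(rng.standard_normal(n_lay)) * 0.05
tau = base * np.exp(rng.standard_normal((n_spec, 1)))

# PCA on log optical properties via SVD.
X = np.log(tau)
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
score = Xc @ Vt[0]                 # leading principal-component score

# Bin spectral points by PC score; one representative RT call per bin.
n_bins = 20
edges = np.quantile(score, np.linspace(0, 1, n_bins + 1))
idx = np.clip(np.searchsorted(edges, score, side='right') - 1, 0, n_bins - 1)
approx = np.empty(n_spec)
for b in range(n_bins):
    members = np.where(idx == b)[0]
    if members.size == 0:
        continue
    rep = members[np.argmin(np.abs(score[members] - score[members].mean()))]
    approx[members] = expensive_rt(tau[rep])

exact = np.array([expensive_rt(t) for t in tau])   # "line-by-line" reference
print("max abs error:", float(np.abs(approx - exact).max()))
```

Only 20 "expensive" calls replace 2000, at the cost of a small, controllable reconstruction error, which is the trade-off the abstract's error analysis quantifies for real retrievals.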

  18. Spectroscopic Observation and Analysis of H II Regions in M33 with MMT: Temperatures and Oxygen Abundances

    NASA Astrophysics Data System (ADS)

    Lin, Zesen; Hu, Ning; Kong, Xu; Gao, Yulong; Zou, Hu; Wang, Enci; Cheng, Fuzhen; Fang, Guanwen; Lin, Lin; Wang, Jing

    2017-06-01

    The spectra of 413 star-forming (or H II) regions in M33 (NGC 598) were observed using the multifiber spectrograph of Hectospec at the 6.5 m Multiple Mirror Telescope. Using this homogeneous sample of spectra, we measured the intensities of emission lines and some physical parameters, such as electron temperatures, electron densities, and metallicities. Oxygen abundances were derived via the direct method (when available) and two empirical strong-line methods, namely, O3N2 and N2. At the high-metallicity end, oxygen abundances derived from the O3N2 calibration were higher than those derived from the N2 index, indicating an inconsistency between O3N2 and N2 calibrations. We present a detailed analysis of the spatial distribution of gas-phase oxygen abundances in M33 and confirm the existence of the axisymmetric global metallicity distribution that is widely assumed in the literature. Local variations were also observed and subsequently associated with spiral structures to provide evidence of radial migration driven by arms. Our O/H gradient fitted out to 1.1 R25 resulted in slopes of -0.17 ± 0.03, -0.19 ± 0.01, and -0.16 ± 0.17 dex R25^-1, utilizing abundances from the O3N2 diagnostic, the N2 diagnostic, and the direct method, respectively.
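For reference, the O3N2 and N2 strong-line diagnostics named above are commonly evaluated with the Pettini & Pagel (2004) linear calibrations; whether this paper adopts exactly those coefficients is an assumption here, and the example line ratios are invented for illustration.

```python
import math

def oh_o3n2(oiii_hbeta, nii_halpha):
    """12+log(O/H) from the O3N2 index (Pettini & Pagel 2004 calibration)."""
    o3n2 = math.log10(oiii_hbeta / nii_halpha)   # log([OIII]/Hb / [NII]/Ha)
    return 8.73 - 0.32 * o3n2

def oh_n2(nii_halpha):
    """12+log(O/H) from the N2 index (Pettini & Pagel 2004 calibration)."""
    return 8.90 + 0.57 * math.log10(nii_halpha)

# Example line ratios for a fairly metal-poor H II region
print(round(oh_o3n2(3.0, 0.1), 2), round(oh_n2(0.1), 2))  # → 8.26 8.33
```

The small offset between the two estimates for the same ratios illustrates the kind of O3N2 vs N2 inconsistency the abstract reports at the high-metallicity end.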

  19. A Comparison of Two Methods for Estimating Black Hole Spin in Active Galactic Nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Capellupo, Daniel M.; Haggard, Daryl; Wafflard-Fernandez, Gaylor, E-mail: danielc@physics.mcgill.ca

    Angular momentum, or spin, is a fundamental property of black holes (BHs), yet it is much more difficult to estimate than mass or accretion rate (for actively accreting systems). In recent years, high-quality X-ray observations have allowed for detailed measurements of the Fe K α emission line, where relativistic line broadening allows constraints on the spin parameter (the X-ray reflection method). Another technique uses accretion disk models to fit the AGN continuum emission (the continuum-fitting, or CF, method). Although each technique has model-dependent uncertainties, these are the best empirical tools currently available and should be vetted in systems where both techniques can be applied. A detailed comparison of the two methods is also useful because neither method can be applied to all AGN. The X-ray reflection technique targets mostly local ( z ≲ 0.1) systems, while the CF method can be applied at higher redshift, up to and beyond the peak of AGN activity and growth. Here, we apply the CF method to two AGN with X-ray reflection measurements. For both the high-mass AGN, H1821+643, and the Seyfert 1, NGC 3783, we find a range in spin parameter consistent with the X-ray reflection measurements. However, the near-maximal spin favored by the reflection method for NGC 3783 is more probable if we add a disk wind to the model. Refinement of these techniques, together with improved X-ray measurements and tighter BH mass constraints, will permit this comparison in a larger sample of AGN and increase our confidence in these spin estimation techniques.

  20. A combinatorial filtering method for magnetotelluric time-series based on Hilbert-Huang transform

    NASA Astrophysics Data System (ADS)

    Cai, Jianhua

    2014-11-01

    Magnetotelluric (MT) time-series are often contaminated with noise from natural or man-made processes. A substantial improvement is possible when the time-series supplied for further processing are as clean as possible. A combinatorial method is described for filtering of MT time-series based on the Hilbert-Huang transform that requires a minimum of human intervention and leaves good data sections unchanged. Good data sections are preserved because, after empirical mode decomposition, the data are analysed through hierarchies, morphological filtering, adaptive threshold and multi-point smoothing, allowing separation of noise from signals. The combinatorial method can be carried out without any assumption about the data distribution. Simulated data and real measured MT time-series from three different regions, with noise caused by baseline drift, high frequency noise and power-line contributions, are processed to demonstrate the application of the proposed method. Results highlight the ability of the combinatorial method to pick out useful signals; the noise is suppressed greatly, so that its deleterious influence on the MT transfer function estimation is eliminated.
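EMD-based denoising of the kind described can be sketched, in minimal form, as soft-thresholding each intrinsic mode function (IMF) and summing the retained parts with the residue. The universal threshold and the MAD noise estimate are standard textbook choices assumed here; the paper's combinatorial pipeline (hierarchies, morphological filtering, multi-point smoothing) is considerably richer, and the sifting itself is assumed to have been done by an external EMD routine.

```python
import numpy as np

def denoise_from_imfs(imfs, residue):
    """Reconstruct a signal from its EMD modes, soft-thresholding each IMF.

    imfs: list of 1-D arrays (intrinsic mode functions, noisiest first);
    residue: the slow trend left after sifting.
    """
    n = len(residue)
    out = residue.astype(float).copy()
    for imf in imfs:
        sigma = np.median(np.abs(imf)) / 0.6745   # robust (MAD) noise scale
        t = sigma * np.sqrt(2.0 * np.log(n))      # universal threshold
        out += np.sign(imf) * np.maximum(np.abs(imf) - t, 0.0)
    return out

# Synthetic demo: a slow sine (the residue, kept intact) plus
# white noise standing in for a high-frequency "IMF".
x = np.linspace(0, 1, 1000)
trend = np.sin(2 * np.pi * 2 * x)
noise_imf = 0.1 * np.random.default_rng(2).standard_normal(1000)
clean = denoise_from_imfs([noise_imf], residue=trend)
print(np.abs(clean - trend).max() <= np.abs(noise_imf).max())  # → True
```

Soft thresholding never increases any sample's magnitude, so the reconstruction is guaranteed to sit at least as close to the trend as the raw noisy modes do.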

  1. Sources and levels of background noise in the NASA Ames 40- by 80-foot wind tunnel

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.

    1988-01-01

    Background noise levels are measured in the NASA Ames Research Center 40- by 80-Foot Wind Tunnel following installation of a sound-absorbent lining on the test-section walls. Results show that the fan-drive noise dominated the empty test-section background noise at airspeeds below 120 knots. Above 120 knots, the test-section broadband background noise was dominated by wind-induced dipole noise (except at lower harmonics of fan blade-passage tones) most likely generated at the microphone or microphone support strut. Third-octave band and narrow-band spectra are presented for several fan operating conditions and test-section airspeeds. The background noise levels can be reduced by making improvements to the microphone wind screen or support strut. Empirical equations are presented relating variations of fan noise with fan speed or blade-pitch angle. An empirical expression for typical fan noise spectra is also presented. Fan motor electric power consumption is related to the noise generation. Preliminary measurements of sound absorption by the test-section lining indicate that the 152 mm thick lining will adequately absorb test-section model noise at frequencies above 300 Hz.

  2. Empirical determination of low J values of 13CH4 transitions from jet cooled and 80 K cell spectra in the icosad region (7170-7367 cm-1)

    NASA Astrophysics Data System (ADS)

    Votava, O.; Mašát, M.; Pracna, P.; Mondelain, D.; Kassi, S.; Liu, A. W.; Hu, S. M.; Campargue, A.

    2014-12-01

    The absorption spectrum of 13CH4 was recorded at two low temperatures in the icosad region near 1.38 μm, using direct absorption tunable diode lasers. Spectra were obtained using a cryogenic cell cooled at liquid nitrogen temperature (80 K) and a supersonic jet providing a 32 K rotational temperature in the 7173-7367 cm-1 and 7200-7354 cm-1 spectral intervals, respectively. Two lists of 4498 and 339 lines, including absolute line intensities, were constructed from the 80 K and jet spectra, respectively. All the transitions observed in jet conditions were observed at 80 K. From the temperature variation of their line intensities, the corresponding lower state energy values were determined. The 339 derived empirical values of the J rotational quantum number are found close to integer values and are all smaller than 4, as a consequence of the efficient rotational cooling. Six R(0) transitions have been identified providing key information on the origins of the vibrational bands which contribute to the very congested and not yet assigned 13CH4 spectrum in the considered region of the icosad.
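The two-temperature intensity-ratio method used above to assign lower-state energies follows from the Boltzmann factor: for the same line measured at two temperatures, ln(S1/S2) ≈ c2·E″·(1/T2 − 1/T1), where c2 = hc/k is the second radiation constant and E″ is in cm⁻¹. A round-trip sketch follows; the example E″ value (roughly a J = 2 methane level) and the neglect of the partition-function ratio and stimulated emission are simplifications for illustration.

```python
import math

C2 = 1.4387769  # second radiation constant hc/k, in cm*K

def lower_state_energy(s1, t1, s2, t2, q_ratio=1.0):
    """Lower-state energy E'' (cm^-1) from intensities of the same line
    at two temperatures: s1 at t1 and s2 at t2 (Boltzmann ratio method).

    q_ratio = Q(t1)/Q(t2); taking it as 1 neglects the partition-function
    variation, which is a rough approximation adequate for this sketch.
    """
    return math.log(s1 * q_ratio / s2) / (C2 * (1.0 / t2 - 1.0 / t1))

# Round trip: synthesize intensities for E'' = 31.4 cm^-1 at 80 K and 32 K
# (the two temperatures of the study), then recover the input value.
e_true = 31.4
s80 = math.exp(-C2 * e_true / 80.0)
s32 = math.exp(-C2 * e_true / 32.0)
print(round(lower_state_energy(s80, 80.0, s32, 32.0), 3))  # → 31.4
```

With E″ in hand, comparing it to B·J(J+1) gives the near-integer empirical J values the abstract describes.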

  3. Semi-empirical spectrophotometric (SESp) method for the indirect determination of the ratio of cationic micellar binding constants of counterions X⁻ and Br⁻(K(X)/K(Br)).

    PubMed

    Khan, Mohammad Niyaz; Yusof, Nor Saadah Mohd; Razak, Norazizah Abdul

    2013-01-01

    The semi-empirical spectrophotometric (SESp) method, for the indirect determination of ion exchange constants (K(X)(Br)) of ion exchange processes occurring between counterions (X⁻ and Br⁻) at the cationic micellar surface, is described in this article. The method uses an anionic spectrophotometric probe molecule, N-(2-methoxyphenyl)phthalamate ion (1⁻), which measures the effects of varying concentrations of inert inorganic or organic salt (Na(v)X, v = 1, 2) on absorbance (A(ob)) at 310 nm of samples containing constant concentrations of 1⁻, NaOH and cationic micelles. The observed data fit satisfactorily to an empirical equation which gives the values of two empirical constants. These empirical constants lead to the determination of K(X)(Br) (= K(X)/K(Br), with K(X) and K(Br) representing cationic micellar binding constants of counterions X⁻ and Br⁻). This method gives values of K(X)(Br) for both moderately hydrophobic and hydrophilic X⁻. The values of K(X)(Br) obtained by using this method are comparable with the corresponding values obtained by the use of the semi-empirical kinetic (SEK) method for different moderately hydrophobic X⁻. The values of K(X)(Br) for X = Cl⁻ and 2,6-Cl₂C6H₃CO₂⁻, obtained by the use of the SESp and SEK methods, are similar to those obtained by the use of other conventional methods.

  4. Mimic expert judgement through automated procedure for selecting rainfall events responsible for shallow landslide: A statistical approach to validation

    NASA Astrophysics Data System (ADS)

    Giovanna, Vessia; Luca, Pisano; Carmela, Vennari; Mauro, Rossi; Mario, Parise

    2016-01-01

    This paper proposes an automated method for the selection of the rainfall duration (D) and cumulated rainfall (E) responsible for shallow landslide initiation. The method mimics an expert identifying D and E from rainfall records through a manual procedure whose rules are applied according to her/his judgement. The comparison between the two methods is based on 300 D-E pairs drawn from temporal rainfall data series recorded in a 30-day time-lag before the landslide occurrence. Statistical tests, employed on D and E samples considered both as paired and as independent values to verify whether they belong to the same population, show that the automated procedure is able to replicate the pairs drawn by expert judgement. Furthermore, a criterion based on cumulative distribution functions (CDFs) is proposed to select, among the six pairs drawn by the coded procedure, the one most closely related to the expert pair for tracing the empirical rainfall threshold line.
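The event-extraction rule such a coded procedure applies can be sketched as: walk backwards from the landslide time, skip the trailing dry hours, and close the event at the first sufficiently long rain-free gap. The 6 h dry-gap rule and the sample record below are assumptions for illustration; the paper's actual selection rules are not given in the abstract.

```python
def rainfall_de(hourly_mm, dry_gap=6):
    """Duration D (h) and cumulated rainfall E (mm) of the event that
    ends the series (landslide at the last sample), closing the event
    at a gap of `dry_gap` consecutive rain-free hours.
    """
    i = len(hourly_mm) - 1
    while i >= 0 and hourly_mm[i] == 0:    # skip dry hours before failure
        i -= 1
    end = i
    dry = 0
    while i >= 0 and dry < dry_gap:        # walk back until the dry gap closes
        dry = dry + 1 if hourly_mm[i] == 0 else 0
        i -= 1
    start = i + 1 + dry                    # trim the terminating dry run
    event = hourly_mm[start:end + 1]
    return len(event), sum(event)

record = [0] * 10 + [2, 5, 0, 0, 3, 1] + [0, 0]   # mm per hour
print(rainfall_de(record))  # → (6, 11)
```

Note that the two-hour dry spell inside the event is shorter than the gap threshold, so it stays within the event, exactly the kind of judgement call the automated rules encode.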

  5. Trend extraction using empirical mode decomposition and statistical empirical mode decomposition: Case study: Kuala Lumpur stock market

    NASA Astrophysics Data System (ADS)

    Jaber, Abobaker M.

    2014-12-01

    Two nonparametric methods for prediction and modeling of financial time series signals are proposed. The proposed techniques are designed to handle non-stationary and non-linear behaviour and to extract meaningful signals for reliable prediction. Using the Fourier transform (FT), the methods select the significant decomposed signals to be employed for prediction. The proposed techniques were developed by coupling the Holt-Winters method with empirical mode decomposition (EMD) and with statistical empirical mode decomposition (SEMD), which extends the scope of EMD by smoothing. To show the performance of the proposed techniques, we analyze the daily closing price of the Kuala Lumpur stock market index.

  6. Flow properties of the solar wind obtained from white light data, Ulysses observations and a two-fluid model

    NASA Technical Reports Server (NTRS)

    Habbal, Shadia Rifai; Esser, Ruth; Guhathakurta, Madhulika; Fisher, Richard

    1995-01-01

    Using the empirical constraints provided by observations in the inner corona and in interplanetary space, we derive the flow properties of the solar wind using a two-fluid model. Density and scale height temperatures are derived from white light coronagraph observations on SPARTAN 201-1 and at Mauna Loa, from 1.16 to 5.5 solar radii, in the two polar coronal holes on 11-12 Apr. 1993. Interplanetary measurements of the flow speed and proton mass flux are taken from the Ulysses south polar passage. By comparing the results of the model computations that fit the empirical constraints in the two coronal hole regions, we show how line-of-sight effects influence the empirical inferences and, subsequently, the corresponding numerical results.

  7. On Estimation of Contamination from Hydrogen Cyanide in Carbon Monoxide Line-intensity Mapping

    NASA Astrophysics Data System (ADS)

    Chung, Dongwoo T.; Li, Tony Y.; Viero, Marco P.; Church, Sarah E.; Wechsler, Risa H.

    2017-09-01

    Line-intensity mapping surveys probe large-scale structure through spatial variations in molecular line emission from a population of unresolved cosmological sources. Future such surveys of carbon monoxide line emission, specifically the CO(1-0) line, face potential contamination from a disjointed population of sources emitting in a hydrogen cyanide emission line, HCN(1-0). This paper explores the potential range of the strength of HCN emission and its effect on the CO auto power spectrum, using simulations with an empirical model of the CO/HCN-halo connection. We find that effects on the observed CO power spectrum depend on modeling assumptions but are very small for our fiducial model, which is based on current understanding of the galaxy-halo connection. Given the fiducial model, we expect the bias in overall CO detection significance due to HCN to be less than 1%.

  8. On Estimation of Contamination from Hydrogen Cyanide in Carbon Monoxide Line-intensity Mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, Dongwoo T.; Li, Tony Y.; Viero, Marco P.

    Line-intensity mapping surveys probe large-scale structure through spatial variations in molecular line emission from a population of unresolved cosmological sources. Future such surveys of carbon monoxide line emission, specifically the CO(1-0) line, face potential contamination from a disjointed population of sources emitting in a hydrogen cyanide emission line, HCN(1-0). This paper explores the potential range of the strength of HCN emission and its effect on the CO auto power spectrum, using simulations with an empirical model of the CO/HCN–halo connection. We find that effects on the observed CO power spectrum depend on modeling assumptions but are very small for our fiducial model, which is based on current understanding of the galaxy–halo connection. Given the fiducial model, we expect the bias in overall CO detection significance due to HCN to be less than 1%.

  9. On Estimation of Contamination from Hydrogen Cyanide in Carbon Monoxide Line-intensity Mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, Dongwoo T.; Li, Tony Y.; Viero, Marco P.

    Here, line-intensity mapping surveys probe large-scale structure through spatial variations in molecular line emission from a population of unresolved cosmological sources. Future such surveys of carbon monoxide line emission, specifically the CO(1-0) line, face potential contamination from a disjointed population of sources emitting in a hydrogen cyanide emission line, HCN(1-0). This paper explores the potential range of the strength of HCN emission and its effect on the CO auto power spectrum, using simulations with an empirical model of the CO/HCN–halo connection. We find that effects on the observed CO power spectrum depend on modeling assumptions but are very small for our fiducial model, which is based on current understanding of the galaxy–halo connection. Given the fiducial model, we expect the bias in overall CO detection significance due to HCN to be less than 1%.

  10. On Estimation of Contamination from Hydrogen Cyanide in Carbon Monoxide Line-intensity Mapping

    DOE PAGES

    Chung, Dongwoo T.; Li, Tony Y.; Viero, Marco P.; ...

    2017-08-31

    Here, line-intensity mapping surveys probe large-scale structure through spatial variations in molecular line emission from a population of unresolved cosmological sources. Future such surveys of carbon monoxide line emission, specifically the CO(1-0) line, face potential contamination from a disjointed population of sources emitting in a hydrogen cyanide emission line, HCN(1-0). This paper explores the potential range of the strength of HCN emission and its effect on the CO auto power spectrum, using simulations with an empirical model of the CO/HCN–halo connection. We find that effects on the observed CO power spectrum depend on modeling assumptions but are very small for our fiducial model, which is based on current understanding of the galaxy–halo connection. Given the fiducial model, we expect the bias in overall CO detection significance due to HCN to be less than 1%.

  11. Empirical research in medical ethics: how conceptual accounts on normative-empirical collaboration may improve research practice.

    PubMed

    Salloch, Sabine; Schildmann, Jan; Vollmann, Jochen

    2012-04-13

    The methodology of medical ethics during the last few decades has shifted from a predominant use of normative-philosophical analyses to an increasing involvement of empirical methods. The articles which have been published in the course of this so-called 'empirical turn' can be divided into conceptual accounts of empirical-normative collaboration and studies which use socio-empirical methods to investigate ethically relevant issues in concrete social contexts. A considered reference to normative research questions can be expected from good quality empirical research in medical ethics. However, a significant proportion of empirical studies currently published in medical ethics lacks such linkage between the empirical research and the normative analysis. In the first part of this paper, we will outline two typical shortcomings of empirical studies in medical ethics with regard to a link between normative questions and empirical data: (1) The complete lack of normative analysis, and (2) cryptonormativity and a missing account with regard to the relationship between 'is' and 'ought' statements. Subsequently, two selected concepts of empirical-normative collaboration will be presented and how these concepts may contribute to improve the linkage between normative and empirical aspects of empirical research in medical ethics will be demonstrated. Based on our analysis, as well as our own practical experience with empirical research in medical ethics, we conclude with a sketch of concrete suggestions for the conduct of empirical research in medical ethics. High quality empirical research in medical ethics is in need of a considered reference to normative analysis. In this paper, we demonstrate how conceptual approaches of empirical-normative collaboration can enhance empirical research in medical ethics with regard to the link between empirical research and normative analysis.

  12. Empirical research in medical ethics: How conceptual accounts on normative-empirical collaboration may improve research practice

    PubMed Central

    2012-01-01

    Background The methodology of medical ethics during the last few decades has shifted from a predominant use of normative-philosophical analyses to an increasing involvement of empirical methods. The articles which have been published in the course of this so-called 'empirical turn' can be divided into conceptual accounts of empirical-normative collaboration and studies which use socio-empirical methods to investigate ethically relevant issues in concrete social contexts. Discussion A considered reference to normative research questions can be expected from good quality empirical research in medical ethics. However, a significant proportion of empirical studies currently published in medical ethics lacks such linkage between the empirical research and the normative analysis. In the first part of this paper, we will outline two typical shortcomings of empirical studies in medical ethics with regard to a link between normative questions and empirical data: (1) The complete lack of normative analysis, and (2) cryptonormativity and a missing account with regard to the relationship between 'is' and 'ought' statements. Subsequently, two selected concepts of empirical-normative collaboration will be presented and how these concepts may contribute to improve the linkage between normative and empirical aspects of empirical research in medical ethics will be demonstrated. Based on our analysis, as well as our own practical experience with empirical research in medical ethics, we conclude with a sketch of concrete suggestions for the conduct of empirical research in medical ethics. Summary High quality empirical research in medical ethics is in need of a considered reference to normative analysis. In this paper, we demonstrate how conceptual approaches of empirical-normative collaboration can enhance empirical research in medical ethics with regard to the link between empirical research and normative analysis. PMID:22500496

  13. Tensile and shear loading of four fcc high-entropy alloys: A first-principles study

    NASA Astrophysics Data System (ADS)

    Li, Xiaoqing; Schönecker, Stephan; Li, Wei; Varga, Lajos K.; Irving, Douglas L.; Vitos, Levente

    2018-03-01

    Ab initio density-functional calculations are used to investigate the response of four face-centered-cubic (fcc) high-entropy alloys (HEAs) to tensile and shear loading. The ideal tensile and shear strengths (ITS and ISS) of the HEAs are studied by employing first-principles alloy theory formulated within the exact muffin-tin orbital method in combination with the coherent-potential approximation. We benchmark the computational accuracy against literature data by studying the ITS under uniaxial [110] tensile loading and the ISS for the [112̄](111) shear deformation of pure fcc Ni and Al. For the HEAs, we uncover the alloying effect on the ITS and ISS. Under shear loading, relaxation reduces the ISS by ~50% for all considered HEAs. We demonstrate that the dimensionless tensile and shear strengths are significantly overestimated by two widely used empirical models in comparison with our ab initio calculations. In addition, our predicted relationship between the dimensionless shear strength and the shear instability is in line with the modified Frenkel model. Using the computed ISS, we derive the half-width of the dislocation core for the present HEAs. Employing the ratio of ITS to ISS, we discuss the intrinsic ductility of the HEAs and compare it with a common empirical criterion. We observe a strong linear correlation between the shear instability and the ratio of ITS to ISS, whereas only a weak positive correlation is found in the case of the empirical criterion.

  14. Analysis of quality control data of eight modern radiotherapy linear accelerators: the short- and long-term behaviours of the outputs and the reproducibility of quality control measurements

    NASA Astrophysics Data System (ADS)

    Kapanen, Mika; Tenhunen, Mikko; Hämäläinen, Tuomo; Sipilä, Petri; Parkkinen, Ritva; Järvinen, Hannu

    2006-07-01

    Quality control (QC) data of radiotherapy linear accelerators, collected by Helsinki University Central Hospital between the years 2000 and 2004, were analysed. The goal was to provide information for the evaluation and elaboration of QC of accelerator outputs and to propose a method for QC data analysis. Short- and long-term drifts in outputs were quantified by fitting empirical mathematical models to the QC measurements. Long-term drifts were normally well modelled (to within 1%) by either a straight line or a single-exponential function. A drift of 2% occurred in 18 ± 12 months. The shortest drift times, of only 2-3 months, were observed for some new accelerators just after commissioning, but these accelerators stabilized during their first 2-3 years. The short-term reproducibility and the long-term stability of local constancy checks, carried out with a sealed plane-parallel ion chamber, were also estimated by fitting empirical models to the QC measurements. The reproducibility was 0.2-0.5% depending on the positioning practice of a device. Long-term instabilities of about 0.3%/month were observed for some checking devices. The reproducibility of local absorbed dose measurements was estimated to be about 0.5%. The proposed empirical model fitting of QC data facilitates the recognition of erroneous QC measurements and of abnormal output behaviour caused by malfunctions, offering a tool to improve dose control.
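
    The model-fitting step described in the abstract (straight-line versus single-exponential drift) can be sketched as follows. This is a hedged illustration on synthetic data: the function names, parameter values, and the residual-comparison criterion are choices of this sketch, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def linear_drift(t, a, b):
    """Straight-line drift of the output (in %) versus time t (months)."""
    return a + b * t

def exp_drift(t, a, b, tau):
    """Single-exponential drift settling toward a stable output level."""
    return a + b * (1.0 - np.exp(-t / tau))

# Synthetic monthly output-constancy measurements (illustrative values only)
rng = np.random.default_rng(0)
t = np.arange(0.0, 48.0)                        # months since commissioning
y = exp_drift(t, 100.0, 2.0, 12.0) + rng.normal(0.0, 0.1, t.size)

p_lin, _ = curve_fit(linear_drift, t, y)
p_exp, _ = curve_fit(exp_drift, t, y, p0=(100.0, 1.0, 10.0))

# Compare residuals to decide which empirical model describes the drift better
rms_lin = np.sqrt(np.mean((y - linear_drift(t, *p_lin)) ** 2))
rms_exp = np.sqrt(np.mean((y - exp_drift(t, *p_exp)) ** 2))
print(rms_lin, rms_exp)
```

    A measurement lying far from the fitted curve would then flag either an erroneous QC reading or a genuine output anomaly, which is the use the abstract describes.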

  15. Application of empirical and mechanistic-empirical pavement design procedures to Mn/ROAD concrete pavement test sections

    DOT National Transportation Integrated Search

    1997-05-01

    Current pavement design procedures are based principally on empirical approaches. The current trend toward developing more mechanistic-empirical type pavement design methods led Minnesota to develop the Minnesota Road Research Project (Mn/ROAD), a lo...

  16. Measuring Associations of the Department of Veterans Affairs' Suicide Prevention Campaign on the Use of Crisis Support Services.

    PubMed

    Karras, Elizabeth; Lu, Naiji; Zuo, Guoxin; Tu, Xin M; Stephens, Brady; Draper, John; Thompson, Caitlin; Bossarte, Robert M

    2016-08-01

    Campaigns have become popular in public health approaches to suicide prevention; however, limited empirical investigation of their impact on behavior has been conducted. To address this gap, utilization patterns of crisis support services associated with the Department of Veterans Affairs' Veterans Crisis Line (VCL) suicide prevention campaign were examined. Daily call data for the National Suicide Prevention Lifeline, VCL, and 1-800-SUICIDE were modeled using a novel semi-varying coefficient method. Analyses reveal significant increases in call volume to both targeted and broad resources during the campaign. Findings underscore the need for further research to refine measurement of the effects of these suicide prevention efforts. © 2016 The American Association of Suicidology.

  17. Landé gJ factors for even-parity electronic levels in the holmium atom

    NASA Astrophysics Data System (ADS)

    Stefanska, D.; Werbowy, S.; Krzykowski, A.; Furmann, B.

    2018-05-01

    In this work the hyperfine structure of the Zeeman splitting for 18 even-parity levels in the holmium atom was investigated. The experimental method applied was laser-induced fluorescence in a hollow cathode discharge lamp. Twenty spectral lines were investigated, involving as lower levels odd-parity levels from the ground multiplet, for which Landé gJ factors are known with high precision; this greatly facilitated the evaluation of gJ factors for the upper levels. The gJ values for the even-parity levels considered are reported for the first time. They proved to compare fairly well with the values obtained recently in a semi-empirical analysis of the even-parity level system of Ho I.

  18. The Development of New Atmospheric Models for K and M Dwarf Stars with Exoplanets

    NASA Astrophysics Data System (ADS)

    Linsky, Jeffrey L.

    2018-01-01

    The ultraviolet and X-ray emissions of host stars play critical roles in the survival and chemical composition of the atmospheres of their exoplanets. The need to measure and understand this radiative output, in particular for K and M dwarfs, is the main rationale for computing a new generation of stellar models that includes magnetically heated chromospheres and coronae in addition to their photospheres. We describe our method for computing semi-empirical models that includes solutions of the statistical equilibrium equations for 52 atoms and ions and of the non-LTE radiative transfer equations for all important spectral lines. The code is an offspring of the Solar Radiation Physical Modelling system (SRPM) developed by Fontenla et al. (2007-2015) to compute one-dimensional models in hydrostatic equilibrium to fit high-resolution stellar X-ray to IR spectra. Also included are 20 diatomic molecules and their more than 2 million spectral lines. Our proof-of-concept model is for the M1.5 V star GJ 832 (Fontenla et al. ApJ 830, 154 (2016)). We will fit the line fluxes and profiles of X-ray lines and continua observed by Chandra and XMM-Newton, UV lines observed by the COS and STIS instruments on HST (N V, C IV, Si IV, Si III, Mg II, C II, and O I), optical lines (including Hα, Ca II, Na I), and continua. These models will allow us to compute extreme-UV spectra, which are unobservable but required to predict the hydrodynamic mass-loss rate from exoplanet atmospheres, and to predict panchromatic spectra of new exoplanet host stars discovered after the end of the HST mission. This work is supported by grant HST-GO-15038 from the Space Telescope Science Institute to the Univ. of Colorado.

  19. Fight the power: the limits of empiricism and the costs of positivistic rigor.

    PubMed

    Indick, William

    2002-01-01

    A summary of the influence of positivistic philosophy and empiricism on the field of psychology is followed by a critique of the empirical method. The dialectic process is advocated as an alternative method of inquiry. The main advantage of the dialectic method is that it is open to any logical argument, including empirical hypotheses, but unlike empiricism, it does not automatically reject arguments that are not based on observable data. Evolutionary and moral psychology are discussed as examples of important fields of study that could benefit from types of arguments that frequently do not conform to the empirical standards of systematic observation and falsifiability of hypotheses. A dialectic method is shown to be a suitable perspective for those fields of research, because it allows for logical arguments that are not empirical and because it fosters a functionalist perspective, which is indispensable for both evolutionary and moral theories. It is suggested that all psychologists may gain from adopting a dialectic approach, rather than restricting themselves to empirical arguments alone.

  20. The Use of Empirical Methods for Testing Granular Materials in Analogue Modelling

    PubMed Central

    Montanari, Domenico; Agostini, Andrea; Bonini, Marco; Corti, Giacomo; Del Ventisette, Chiara

    2017-01-01

    The behaviour of a granular material is mainly dependent on its frictional properties, namely the angle of internal friction and the cohesion, which, together with material density, are the key factors to be considered during the scaling procedure of analogue models. The frictional properties of a granular material are usually investigated by means of technical instruments such as a Hubbert-type apparatus and ring shear testers, which allow for investigating the response of the tested material to a wide range of applied stresses. Here we explore the possibility of determining material properties by means of different empirical methods applied to mixtures of quartz and K-feldspar sand. Empirical methods exhibit the great advantage of measuring the properties of a given analogue material under the experimental conditions, which are strongly sensitive to the handling techniques. Finally, the results obtained from the empirical methods have been compared with ring shear tests carried out on the same materials, showing a satisfactory agreement with those determined empirically. PMID:28772993

  1. Interpreting Methanol v(sub 2)-Band Emission in Comets Using Empirical Fluorescence g-Factors

    NASA Technical Reports Server (NTRS)

    DiSanti, Michael; Villanueva, G. L.; Bonev, B. P.; Mumma, M. J.; Paganini, L.; Gibb, E. L.; Magee-Sauer, K.

    2011-01-01

    For many years we have been developing the ability, through high-resolution spectroscopy targeting ro-vibrational emission in the approximately 3 - 5 micrometer region, to quantify a suite of (approximately 10) parent volatiles in comets using quantum mechanical fluorescence models. Our efforts are ongoing, and our latest work includes methanol (CH3OH). This is unique among traditionally targeted species in having lacked sufficiently robust models for its symmetric (v(sub 3) band) and asymmetric (v(sub 2) and v(sub 9) bands) C-H3 stretching modes, required to provide accurate predicted intensities for individual spectral lines and hence rotational temperatures and production rates. This has provided the driver for undertaking a detailed empirical study of line intensities, and has led to substantial progress regarding our ability to interpret CH3OH in comets. The present study concentrates on the spectral region from approximately 2970 - 3010 per centimeter (3.367 - 3.322 micrometer), which is dominated by emission in the v(sub 7) band of C2H6 and the v(sub 2) band of CH3OH, with minor contributions from CH3OH (v(sub 9) band), CH4 (v(sub 3)), and OH prompt emissions (v(sub 1) and v(sub 2) - v(sub 1)). Based on laboratory jet-cooled spectra (at a rotational temperature near 20 K) [1], we incorporated approximately 100 lines of the CH3OH v(sub 2) band, having known frequencies and lower state rotational energies, into our model. Line intensities were determined through comparison with several comets we observed with NIRSPEC at Keck 2, after removal of continuum and additional molecular emissions and correcting for atmospheric extinction. In addition to the above spectral region, NIRSPEC allows simultaneous sampling of the CH3OH v(sub 3) band (centered at 2844 per centimeter, or 3.516 micrometers) and several hot bands of H2O in the approximately 2.85 - 2.9 micrometer region, at a nominal spectral resolving power of approximately 25,000 [2]. 
Empirical g-factors for v(sub 2) lines were based on the production rate as determined from the v(sub 3) Q-branch intensity; application to comets spanning a range of rotational temperatures (approximately 50 - 90 K) will be reported. This work represents an extension of that presented for comet 21P/Giacobini-Zinner at the 2010 Division for Planetary Sciences meeting [3]. Our empirical study also allows for quantifying CH3OH in comets using IR spectrometers for which the v(sub 3) and v(sub 2) bands are not sampled simultaneously, for example CSHELL/NASA IRTF or CRIRES/VLT.

  2. Empirical deck for phased construction and widening [summary].

    DOT National Transportation Integrated Search

    2017-06-01

    The most common method used to design and analyze bridge decks, termed the traditional method, treats a deck slab as if it were made of strips supported by inflexible girders. An alternative, the empirical method, treats the deck slab as a ...

  3. A DEIM Induced CUR Factorization

    DTIC Science & Technology

    2015-09-18

    We derive a CUR approximate matrix factorization based on the Discrete Empirical Interpolation Method (DEIM), and compare the resulting approximations with CUR approximations based on leverage scores.
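
    As a rough illustration of the technique named in this record, here is a minimal NumPy sketch of DEIM index selection and the induced CUR factorization. The greedy selection follows the standard DEIM algorithm; the construction of the middle factor via pseudoinverses is one common choice and may differ in detail from the report.

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM point selection over the columns of a basis U (n x k)."""
    n, k = U.shape
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, k):
        # Interpolate column j at the already-selected points ...
        c = np.linalg.solve(U[p, :j], U[p, j])
        # ... and select the point where the interpolation residual is largest
        r = U[:, j] - U[:, :j] @ c
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

def deim_cur(A, k):
    """Rank-k CUR approximation A ~ C @ M @ R with DEIM-selected indices."""
    U, _, Vt = np.linalg.svd(A, full_matrices=False)
    rows = deim_indices(U[:, :k])       # row indices from left singular vectors
    cols = deim_indices(Vt[:k, :].T)    # column indices from right singular vectors
    C, R = A[:, cols], A[rows, :]
    M = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)  # one common middle-factor choice
    return C, M, R

# A matrix of exact rank 5 should be recovered (almost) exactly by a rank-5 CUR
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 40))
C, M, R = deim_cur(A, 5)
err = np.linalg.norm(A - C @ M @ R) / np.linalg.norm(A)
print(err)
```

    The appeal of CUR over a truncated SVD is that C and R consist of actual columns and rows of A, so the factors inherit properties such as sparsity and interpretability.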

  4. Primordial 4He abundance: a determination based on the largest sample of H II regions with a methodology tested on model H II regions

    NASA Astrophysics Data System (ADS)

    Izotov, Y. I.; Stasińska, G.; Guseva, N. G.

    2013-10-01

    We verified the validity of the empirical method to derive the 4He abundance used in our previous papers by applying it to CLOUDY (v13.01) models. Using newly published He i emissivities, for which we present convenient fits, as well as the output CLOUDY case B hydrogen and He i line intensities, we found that the empirical method is able to reproduce the input CLOUDY 4He abundance with an accuracy of better than 1%. The CLOUDY output data also allowed us to derive the non-recombination contribution to the intensities of the strongest Balmer hydrogen Hα, Hβ, Hγ, and Hδ emission lines and the ionisation correction factors for He. With these improvements we used our updated empirical method to derive the 4He abundances and to test corrections for several systematic effects in a sample of 1610 spectra of low-metallicity extragalactic H ii regions, the largest sample used so far. From this sample we extracted a subsample of 111 H ii regions with Hβ equivalent width EW(Hβ) ≥ 150 Å, with excitation parameter x = O2+/O ≥ 0.8, and with helium mass fraction Y derived with an accuracy of better than 3%. With this subsample we derived the primordial 4He mass fraction Yp = 0.254 ± 0.003 from the linear regression of Y on O/H. The derived value of Yp is higher at the 68% confidence level (CL) than that predicted by the standard big bang nucleosynthesis (SBBN) model, possibly implying the existence of additional neutrino species beyond the three known types of active neutrinos. Using the most recently derived primordial abundances D/H = (2.60 ± 0.12) × 10^-5 and Yp = 0.254 ± 0.003 and the χ^2 technique, we found that the best agreement between the abundances of these light elements is achieved in a cosmological model with baryon mass density Ωbh^2 = 0.0234 ± 0.0019 (68% CL) and an effective number of neutrino species Neff = 3.51 ± 0.35 (68% CL). 
Based on observations collected at the European Southern Observatory, Chile, programs 073.B-0283(A), 081.C-0113(A), 65.N-0642(A), 68.B-0310(A), 69.C-0203(A), 69.D-0174(A), 70.B-0717(A), 70.C-0008(A), 71.B-0055(A). Based on observations at the Kitt Peak National Observatory, National Optical Astronomical Observatory, operated by the Association of Universities for Research in Astronomy, Inc., under contract with the National Science Foundation. Tables 2 and 3 are available in electronic form at http://www.aanda.org

  5. A cost analysis of a broad-spectrum antibiotic therapy in the empirical treatment of health care-associated infections in cirrhotic patients

    PubMed Central

    Lucidi, Cristina; Di Gregorio, Vincenza; Ceccarelli, Giancarlo; Venditti, Mario; Riggio, Oliviero; Merli, Manuela

    2017-01-01

    Background Early diagnosis and appropriate treatment of infections in cirrhosis are crucial. As new guidelines would be needed in this context, particularly for health care-associated (HCA) infections, we performed a trial to document whether an empirical broad-spectrum antibiotic therapy is more effective than the standard one for these infections. Because of the higher daily cost of broad-spectrum than standard antibiotics, we performed a cost analysis to compare: 1) total drug costs, 2) profitability of hospital admissions. Methods This retrospective observational analysis was performed on patients enrolled in the trial NCT01820026, in which consecutive cirrhotic patients with HCA infections were randomly assigned to a standard vs a broad-spectrum treatment. Antibiotic daily doses, days of treatment, length of hospital stay, and DRG (diagnosis-related group) were recorded from the clinical trial medical records. The profitability of hospitalizations was calculated considering DRG tariffs divided by length of hospital stay. Results We considered 84 patients (42 for each group). The standard therapy yielded a lower first-line treatment cost than the broad-spectrum therapy. However, the latter, owing to its lower failure rate (19% vs 57.1%), resulted in savings in cumulative antibiotic costs (first- and second-line treatments). The mean cost saving per patient for the broad-spectrum arm was €44.18 (-37.6%), with a total cost saving of about €2,000. Compared to the standard group, we observed a statistically significant reduction in hospital stay from 17.8 to 11.8 days (p<0.002) for patients treated with broad-spectrum antibiotics. The distribution of DRG tariffs was similar in the two groups. According to DRG, the shorter length of hospital stay of the broad-spectrum group involved a higher mean profitable daily cost than the standard group (€345.61 vs €252.23; +37%). 
Conclusion Our study supports the idea that the use of a broad-spectrum empirical treatment for HCA infections in cirrhosis would be cost-saving and that hospitals need to be aware of the clinical and economic consequences of a wrong antibiotic treatment in this setting. PMID:28721080

  6. Mean Excess Function as a method of identifying sub-exponential tails: Application to extreme daily rainfall

    NASA Astrophysics Data System (ADS)

    Nerantzaki, Sofia; Papalexiou, Simon Michael

    2017-04-01

    Identifying precisely the distribution tail of a geophysical variable is difficult, or even impossible. First, the tail is the part of the distribution for which we have the least empirical information available; second, a universally accepted definition of the tail does not and cannot exist; and third, a tail may change over time due to long-term changes. Unfortunately, the tail is the most important part of the distribution, as it dictates the estimates of exceedance probabilities or return periods. Fortunately, based on their tail behavior, probability distributions can generally be categorized into two major families, i.e., sub-exponential (heavy-tailed) and hyper-exponential (light-tailed). This study aims to update the Mean Excess Function (MEF), providing a useful tool for assessing which type of tail better describes empirical data. The MEF is based on the mean value of a variable over a threshold and yields a zero-slope regression line when applied to the Exponential distribution. Here, we construct slope confidence intervals for the Exponential distribution as functions of sample size. Validation of the method using Monte Carlo techniques on four theoretical distributions covering the major tail cases (Pareto type II, Log-normal, Weibull and Gamma) revealed that it performs well, especially for large samples. Finally, the method is used to investigate the behavior of daily rainfall extremes; thousands of rainfall records from all over the world, with sample sizes of over 100 years, were examined, revealing that heavy-tailed distributions describe rainfall extremes more accurately.
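
    The threshold-and-regression idea behind the MEF can be sketched in a few lines. This is a hedged illustration with synthetic samples: the quantile range, the number of thresholds, and the helper names are choices of this sketch, not the paper's procedure (which additionally builds confidence intervals for the slope).

```python
import numpy as np

def mean_excess(x, thresholds):
    """Empirical mean excess e(u) = E[X - u | X > u] at each threshold u."""
    return np.array([np.mean(x[x > u] - u) for u in thresholds])

def mef_slope(x, n_thresh=30):
    """Slope of the regression line through (u, e(u)) over a threshold range."""
    u = np.quantile(x, np.linspace(0.50, 0.95, n_thresh))
    e = mean_excess(x, u)
    slope, _intercept = np.polyfit(u, e, 1)
    return slope

rng = np.random.default_rng(42)
light = rng.exponential(scale=10.0, size=20000)  # exponential (light) tail
heavy = 10.0 * rng.pareto(3.0, size=20000)       # Pareto II (heavy) tail

s_light = mef_slope(light)   # near zero: e(u) is constant for the Exponential
s_heavy = mef_slope(heavy)   # clearly positive: e(u) grows linearly with u
print(s_light, s_heavy)
```

    A slope statistically indistinguishable from zero points to an exponential-type tail, while a significantly positive slope points to a sub-exponential (heavy) tail.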

  7. Correlation between the line width and the line flux of the double-peaked broad Hα of 3C390.3

    NASA Astrophysics Data System (ADS)

    Zhang, Xue-Guang

    2013-03-01

    In this paper, we carefully check the correlation between the line width (second moment) and the line flux of the double-peaked broad Hα of the well-known mapped active galactic nucleus (AGN) 3C390.3, in order to show some further distinctions between double-peaked emitters and normal broad-line AGN. Based on the virialization assumption M_BH ∝ R_BLR × V²(BLR) and the empirical relation R_BLR ∝ L^0.5, a strong negative correlation between the line width and the line flux of the double-peaked broad lines should be expected for 3C390.3, such as the negative correlation confirmed for the mapped broad-line object NGC 5548, R_BLR × V²(BLR) ∝ L^0.5 × σ² = constant. However, based on the public spectra around 1995 from the AGN WATCH project for 3C390.3, a reliable positive correlation is found between the line width and the line flux of the double-peaked broad Hα. In the context of the proposed theoretical accretion disc model for double-peaked emitters, the unexpected positive correlation can be naturally explained by the different time delays for the inner and outer parts of the disc-like broad-line region (BLR) of 3C390.3. The virialization assumption itself is checked and found to still hold for 3C390.3. However, the time-varying size of the BLR of 3C390.3 is not reproduced by the empirical relation R_BLR ∝ L^0.5. In other words, the mean size of the BLR of 3C390.3 can be estimated from the continuum (or line) luminosity, but at individual epochs a strengthening continuum corresponds to a decreasing (not increasing) BLR size for 3C390.3. Finally, we compared our results for 3C390.3 with previous results reported in the literature for other double-peaked emitters, and found that until the effects of varying disc physical parameters (such as disc precession) are clearly corrected for in long-term observed line spectra, it is not meaningful to discuss correlations among the line parameters of double-peaked broad lines. 
Furthermore, because the probable `external' ionizing source has a so-far-unclear structure, it is hard to conclude that a positive correlation between the line width and the line flux will be found for all double-peaked emitters, even after the varying disc physical parameters are taken into account. Nevertheless, once a positive correlation of the broad-line parameters is found, an accretion disc origin of the broad line should be considered first.
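
    The negative correlation expected under the virialization assumption follows in one line from the two scalings quoted in the record, with the black-hole mass held constant:

```latex
M_{\rm BH} \propto R_{\rm BLR}\,\sigma^{2} = \mathrm{const},
\qquad
R_{\rm BLR} \propto L^{0.5}
\quad\Longrightarrow\quad
\sigma \propto L^{-0.25}
```

    so a brightening line should come with a narrower width; the measured positive correlation for 3C390.3 is what calls for the accretion-disc explanation.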

  8. Pancreatic cancer cell lines as patient-derived avatars: genetic characterisation and functional utility.

    PubMed

    Knudsen, Erik S; Balaji, Uthra; Mannakee, Brian; Vail, Paris; Eslinger, Cody; Moxom, Christopher; Mansour, John; Witkiewicz, Agnieszka K

    2018-03-01

    Pancreatic ductal adenocarcinoma (PDAC) is a therapy-recalcitrant disease with the worst survival rate of common solid tumours. Preclinical models that accurately reflect the genetic and biological diversity of PDAC will be important for delineating features of tumour biology and therapeutic vulnerabilities. Twenty-seven primary PDAC tumours were employed for genetic analysis and development of tumour models. Tumour tissue was used for derivation of xenografts and cell lines. Exome sequencing was performed on the originating tumour and developed models. RNA sequencing, histological and functional analyses were employed to determine the relationship of the patient-derived models to the clinical presentation of PDAC. The cohort employed captured the genetic diversity of PDAC. From most cases, both cell lines and xenograft models were developed. Exome sequencing confirmed preservation of the primary tumour mutations in developed cell lines, which remained stable with extended passaging. The level of genetic conservation in the cell lines was comparable to that observed with patient-derived xenograft (PDX) models. Unlike historically established PDAC cancer cell lines, patient-derived models recapitulated the histological architecture of the primary tumour and exhibited metastatic spread similar to that observed clinically. Detailed genetic analyses of tumours and derived models revealed features of ex vivo evolution and the clonal architecture of PDAC. Functional analysis was used to elucidate therapeutic vulnerabilities of relevance to treatment of PDAC. These data illustrate that with the appropriate methods it is possible to develop cell lines that maintain genetic features of PDAC. Such models serve as important substrates for analysing the significance of genetic variants and create a unique biorepository of annotated cell lines and xenografts that were established simultaneously from the same primary tumour. 
These models can be used to infer genetic and empirically determined therapeutic sensitivities that would be germane to the patient. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  9. 40 CFR Appendix C to Part 75 - Missing Data Estimation Procedures

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... certification of a parametric, empirical, or process simulation method or model for calculating substitute data... available process simulation methods and models. 1.2 Petition Requirements Continuously monitor, determine... desulfurization, a corresponding empirical correlation or process simulation parametric method using appropriate...

  10. Occasional Papers in Open and Distance Learning, Number 18.

    ERIC Educational Resources Information Center

    Donnan, Peter, Ed.

    Six papers examine innovations and trends in distance learning, frequently drawing upon empirical research or informal observations on distance learning students at Charles Sturt University (Australia). "On-Line Study Packages for Distance Education: Some Considerations of Conceptual Parameters" (Dirk M. R. Spennemann) discusses issues…

  11. Empirical-theoretical Survey of the Variety of Peculiarities and Anomalies in the Atmospheres Enveloping Actual Stars

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Phenomena observed in actual stellar atmospheres which contradict the speculative, standard thermal atmospheric model are discussed. Examples of stellar variability, emission line peculiarity, symbiotic stars and phenomena, extended atmosphere stars, superionization, and superthermic velocity are examined.

  12. Virtual Museum Learning

    ERIC Educational Resources Information Center

    Prosser, Dominic; Eddisford, Susan

    2004-01-01

    This paper examines children's and adults' attitudes to virtual representations of museum objects, drawing on empirical research data gained from two web-based digital learning environments. The paper explores the characteristics of on-line learning activities that move children from a sense of wonder into meaningful engagement with objects and…

  13. An empirical approach for estimating stress-coupling lengths for marine-terminating glaciers

    USGS Publications Warehouse

    Enderlin, Ellyn; Hamilton, Gordon S.; O'Neel, Shad; Bartholomaus, Timothy C.; Morlighem, Mathieu; Holt, John W.

    2016-01-01

    Here we present a new empirical method to estimate the stress-coupling length (SCL) for marine-terminating glaciers using high-resolution observations. We use the empirically determined periodicity in resistive-stress oscillations as a proxy for the SCL. Application of our empirical method to two well-studied tidewater glaciers (Helheim Glacier, SE Greenland, and Columbia Glacier, Alaska, USA) demonstrates that SCL estimates obtained using this approach are consistent with theory (i.e., they can be parameterized as a function of the ice thickness) and with prior, independent SCL estimates. In order to accurately resolve stress variations, we suggest that similar empirical stress-coupling parameterizations be employed in future analyses of glacier dynamics.

  14. A User's Guide to the Zwikker-Kosten Transmission Line Code (ZKTL)

    NASA Technical Reports Server (NTRS)

    Kelly, J. J.; Abu-Khajeel, H.

    1997-01-01

    This user's guide documents updates to the Zwikker-Kosten Transmission Line Code (ZKTL). The code was developed for analyzing new liner concepts designed to provide increased sound absorption. Contiguous arrays of multi-degree-of-freedom (MDOF) liner elements serve as the model for these liner configurations, and Zwikker and Kosten's theory of sound propagation in channels is used to predict the surface impedance. Transmission matrices for the various liner elements incorporate both analytical and semi-empirical methods. This allows standard matrix techniques to be employed in the code to systematically calculate the composite impedance due to the individual liner elements. The ZKTL code consists of four independent subroutines:
    1. Single channel impedance calculation - linear version (SCIC)
    2. Single channel impedance calculation - nonlinear version (SCICNL)
    3. Multi-channel, multi-segment, multi-layer impedance calculation - linear version (MCMSML)
    4. Multi-channel, multi-segment, multi-layer impedance calculation - nonlinear version (MCMSMLNL)
    Detailed examples, comments, and explanations for each liner impedance computation module are included. The guide also depicts the interactive execution, the input files, and the output files.
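
    The "standard matrix techniques" mentioned above can be sketched generically: each element contributes a 2x2 transmission (ABCD) matrix, the matrices of cascaded elements multiply, and the composite matrix maps a terminating impedance to the surface impedance. The segment form and all parameter values below are illustrative assumptions, not ZKTL's actual formulation.

```python
import numpy as np

def channel_matrix(k, Z_c, L):
    """ABCD (transmission) matrix of a uniform channel segment: length L,
    characteristic impedance Z_c, complex wavenumber k (losses in Im k)."""
    kL = k * L
    return np.array([[np.cos(kL), 1j * Z_c * np.sin(kL)],
                     [1j * np.sin(kL) / Z_c, np.cos(kL)]])

def surface_impedance(matrices, Z_term):
    """Cascade element matrices and map a terminating impedance to the surface."""
    T = np.eye(2, dtype=complex)
    for M in matrices:
        T = T @ M
    (A, B), (C, D) = T
    return (A * Z_term + B) / (C * Z_term + D)

# Two hypothetical liner segments in series over a hypothetical backing impedance
segs = [channel_matrix(k=2.0 + 0.05j, Z_c=1.2, L=0.02),
        channel_matrix(k=2.0 + 0.05j, Z_c=0.8, L=0.03)]
Z_surface = surface_impedance(segs, Z_term=0.5 - 1.0j)
print(Z_surface)
```

    For a rigid backing (infinite terminating impedance), the same composite matrix gives the familiar limit Z = A/C, which for a single lossless segment reduces to the rigid-backed cavity impedance -j Z_c cot(kL).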

  15. Exploring the Dimensionality of Ethnic Minority Adaptation in Britain: An Analysis across Ethnic and Generational Lines

    PubMed Central

    Lessard-Phillips, Laurence

    2015-01-01

    In this article I explore the dimensionality of the long-term experiences of the main ethnic minority groups (their adaptation) in Britain. Using recent British data, I apply factor analysis to uncover the underlying number of factors behind variables deemed to be representative of the adaptation experience within the literature. I then attempt to assess the groupings of adaptation present in the data, to see whether a typology of adaptation exists (i.e. whether adaptation in different dimensions can be concomitant with others). The analyses provide an empirical evidence base to reflect on: (1) the extent of group differences in the adaptation process, which may cut across ethnic and generational lines; and (2) whether the uncovered dimensions of adaptation match existing theoretical views and empirical evidence. Results suggest that adaptation should be regarded as a multi-dimensional phenomenon where clear typologies of adaptation based on specific trade-offs (mostly cultural) appear to exist. PMID:28502998

  16. Laser marking as a result of applying reverse engineering

    NASA Astrophysics Data System (ADS)

    Mihalache, Andrei; Nagîţ, Gheorghe; Rîpanu, Marius Ionuţ; Slǎtineanu, Laurenţiu; Dodun, Oana; Coteaţǎ, Margareta

    2018-05-01

    The elaboration of a modern manufacturing technology requires a certain amount of information concerning the part to be obtained. When the technology must be elaborated for an existing object, such information can be obtained by applying the principles of reverse engineering. Essentially, in this method, analysis of the surfaces and other characteristics of the part must provide enough information to elaborate the part's manufacturing technology. On the other hand, laser marking is a processing method able to transfer various inscriptions or drawings onto a part. Sometimes laser marking can be based on the analysis of an existing object, whose image can be used to generate the same object or an improved one. Many groups of factors can affect the results of the laser marking process. A theoretical analysis was proposed to show that the heights of triangles obtained by means of CNC marking equipment depend on the width of the line generated by the laser spot on the workpiece surface. An experimental program was designed and carried out to highlight the influence exerted by the line width and the angle of line intersections on the accuracy of the marking process. By mathematical processing of the experimental results, empirical mathematical models were determined. The power-type model and the graphical representation elaborated on the basis of this model offered an image of the influences exerted by the considered input factors on the marking process accuracy.
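    Empirical power-type models of the kind mentioned above can be fitted by ordinary least squares after a log transform: y = C * x1**a * x2**b becomes linear in (log C, a, b). A minimal sketch, where the factor names (line width w, intersection angle theta) and the synthetic data are assumptions standing in for the paper's measurements:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.uniform(0.05, 0.3, 40)       # line width, mm (hypothetical factor)
    theta = rng.uniform(15, 90, 40)      # intersection angle, deg (hypothetical factor)
    # Synthetic response following a known power law, with mild multiplicative noise
    y = 2.0 * w**0.8 * theta**-0.4 * rng.lognormal(0.0, 0.02, 40)

    # log y = log C + a*log w + b*log theta  ->  ordinary least squares
    A = np.column_stack([np.ones_like(w), np.log(w), np.log(theta)])
    coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
    C, a, b = np.exp(coef[0]), coef[1], coef[2]
    ```

    With low noise the fit recovers the exponents used to generate the data, which is the basic check one would apply before trusting such a model on real measurements.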

  17. EMPIRICALLY ESTIMATED FAR-UV EXTINCTION CURVES FOR CLASSICAL T TAURI STARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McJunkin, Matthew; France, Kevin; Schindhelm, Eric

    Measurements of extinction curves toward young stars are essential for calculating the intrinsic stellar spectrophotometric radiation. This flux determines the chemical properties and evolution of the circumstellar region, including the environment in which planets form. We develop a new technique using H2 emission lines pumped by stellar Lyα photons to characterize the extinction curve by comparing the measured far-ultraviolet H2 line fluxes with model H2 line fluxes. The difference between model and observed fluxes can be attributed to the dust attenuation along the line of sight through both the interstellar and circumstellar material. The extinction curves are fit by a Cardelli et al. (1989) model, and the A_V(H2) for the 10 targets studied with good extinction fits range from 0.5 to 1.5 mag, with R_V values ranging from 2.0 to 4.7. A_V and R_V are found to be highly degenerate, suggesting that one or the other needs to be calculated independently. Column densities and temperatures for the fluorescent H2 populations are also determined, with averages of log10(N(H2)) = 19.0 and T = 1500 K. This paper explores the strengths and limitations of the newly developed extinction curve technique in order to assess the reliability of the results and improve the method in the future.

  18. Accurate Theoretical Methane Line Lists in the Infrared up to 3000 K and Quasi-continuum Absorption/Emission Modeling for Astrophysical Applications

    NASA Astrophysics Data System (ADS)

    Rey, Michael; Nikitin, Andrei V.; Tyuterev, Vladimir G.

    2017-10-01

    Modeling atmospheres of hot exoplanets and brown dwarfs requires high-T databases that include methane as the major hydrocarbon. We report a complete theoretical line list of 12CH4 in the infrared range 0-13,400 cm-1 up to T max = 3000 K computed via a full quantum-mechanical method from ab initio potential energy and dipole moment surfaces. Over 150 billion transitions were generated with the lower rovibrational energy cutoff 33,000 cm-1 and intensity cutoff down to 10-33 cm/molecule to ensure convergent opacity predictions. Empirical corrections for 3.7 million of the strongest transitions permitted line position accuracies of 0.001-0.01 cm-1. Full data are partitioned into two sets. “Light lists” contain strong and medium transitions necessary for an accurate description of sharp features in absorption/emission spectra. For a fast and efficient modeling of quasi-continuum cross sections, billions of tiny lines are compressed in “super-line” libraries according to Rey et al. These combined data will be freely accessible via the TheoReTS information system (http://theorets.univ-reims.fr, http://theorets.tsu.ru), which provides a user-friendly interface for simulations of absorption coefficients, cross-sectional transmittance, and radiance. Comparisons with cold, room, and high-T experimental data show that the data reported here represent the first global theoretical methane lists suitable for high-resolution astrophysical applications.

  19. Simplified Model to Predict Deflection and Natural Frequency of Steel Pole Structures

    NASA Astrophysics Data System (ADS)

    Balagopal, R.; Prasad Rao, N.; Rokade, R. P.

    2018-04-01

    Steel pole structures are a suitable alternative to lattice transmission-line towers, given the difficulty of finding land for new rights of way for the installation of new lattice towers. Steel poles have a tapered cross section and are generally used for communication, power transmission and lighting purposes. Determining the deflection of a steel pole is important for verifying its serviceability: excessive deflection can cause signal attenuation in communication poles and short-circuit problems in transmission poles. In this paper, a simplified method is proposed to determine both primary and secondary deflection based on the dummy unit load/moment method. The deflection predicted by the proposed method is validated against full-scale experiments conducted on 8 m and 30 m high lighting masts and on 132 kV and 400 kV transmission poles, and is found to be in close agreement. Determination of the natural frequency is an important criterion for examining dynamic sensitivity. A simplified semi-empirical method using the static deflection from the proposed method is formulated to determine the natural frequency. The natural frequency predicted by the proposed method is validated against FE analysis results, and further against experimental results available in the literature.
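    Estimating a fundamental frequency from a static deflection, as the abstract describes, has a classical Rayleigh-type shortcut: f = (1/2π)·sqrt(g/δ). This is a textbook relation in the same spirit as the paper's method, not the authors' exact semi-empirical formula; the deflection value below is illustrative:

    ```python
    import math

    def natural_frequency_hz(static_deflection_m, g=9.81):
        """Approximate fundamental frequency (Hz) from the static deflection
        delta under the relevant load, via f = (1/2*pi) * sqrt(g / delta)."""
        return math.sqrt(g / static_deflection_m) / (2.0 * math.pi)

    f = natural_frequency_hz(0.05)   # 50 mm static deflection (illustrative)
    ```

    The appeal of such relations is exactly what the paper exploits: once the static deflection is known from a simplified analysis, the dynamic sensitivity check needs no separate eigenvalue computation.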

  20. Benefits of peer support groups in the treatment of addiction

    PubMed Central

    Tracy, Kathlene; Wallace, Samantha P

    2016-01-01

    Objective Peer support can be defined as the process of giving and receiving nonprofessional, nonclinical assistance from individuals with similar conditions or circumstances to achieve long-term recovery from psychiatric, alcohol, and/or other drug-related problems. Recently, there has been a dramatic rise in the adoption of alternative forms of peer support services to assist recovery from substance use disorders; however, peer support has often not been separated out as a formalized intervention component and rigorously empirically tested, making it difficult to determine its effects. This article reports the results of a literature review that was undertaken to assess the effects of peer support groups, one aspect of peer support services, in the treatment of addiction. Methods The authors of this article searched electronic databases of relevant peer-reviewed research literature including PubMed and MedLINE. Results Ten studies met our minimum inclusion criteria, including randomized controlled trials or pre-/post-data studies, adult participants, inclusion of group format, substance use-related, and US-conducted studies published in 1999 or later. Studies demonstrated associated benefits in the following areas: 1) substance use, 2) treatment engagement, 3) human immunodeficiency virus/hepatitis C virus risk behaviors, and 4) secondary substance-related behaviors such as craving and self-efficacy. Limitations were noted regarding the relative lack of rigorously tested empirical studies within the literature and the inability to disentangle the effects of the group treatment that is often included as a component of other services. Conclusion Peer support groups included in addiction treatment show much promise; however, the limited data relevant to this topic diminish the ability to draw definitive conclusions. More rigorous research is needed in this area to further expand on this important line of research. PMID:27729825

  1. Experimental and computational analysis of sound absorption behavior in needled nonwovens

    NASA Astrophysics Data System (ADS)

    Soltani, Parham; Azimian, Mehdi; Wiegmann, Andreas; Zarrebini, Mohammad

    2018-07-01

    This paper discusses the application of X-ray micro-computed tomography (μCT), together with fluid simulation techniques, to predict the sound absorption characteristics of needled nonwovens. Melt-spun polypropylene fibers of different fineness were made on an industrial-scale compact melt spinning line. A conventional batt forming-needling line was used to prepare the needled samples. The normal-incidence sound absorption coefficients were measured using the impedance tube method. Realistic 3D images of the samples at micron-level spatial resolution were obtained using μCT. The morphology of the fabrics was characterized in terms of porosity, fiber diameter distribution, fiber curliness and pore size distribution from the high-resolution 3D images using GeoDict software. To calculate the permeability and flow resistivity of the media, fluid flow was simulated by numerically solving incompressible laminar Newtonian flow through the 3D pore space of the realistic structures. Based on the flow resistivity, the frequency-dependent acoustic absorption coefficient of the needled nonwovens was predicted using the empirical model of Delany and Bazley (1970) and its associated modified models. The results were compared and validated against the corresponding experimental results. Based on the morphological analysis, it was concluded that for a given weight per unit area, finer fibers result in a higher number of fibers in the samples. This leads to the formation of smaller and more tortuous pores, which in turn increases the flow resistivity of the media. It was established that, among the empirical models, the Mechel modification to the Delany and Bazley model had superior predictive ability compared with the original Delany and Bazley model in the frequency range of 100-5000 Hz and is well suited to polypropylene needled nonwovens.
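    The Delany and Bazley (1970) model referenced above is fully specified by two empirical power laws in the dimensionless parameter X = ρ0·f/σ, where σ is the flow resistivity. A sketch of the normal-incidence absorption coefficient of a rigid-backed porous layer under the original model; the flow resistivity and thickness values are illustrative, not taken from the paper:

    ```python
    import numpy as np

    def delany_bazley_alpha(f, sigma, d, rho0=1.21, c0=343.0):
        """Normal-incidence absorption coefficient of a rigid-backed layer of
        thickness d (m) with flow resistivity sigma (Pa*s/m^2), using the
        original Delany & Bazley (1970) power-law fits."""
        X = rho0 * f / sigma                       # dimensionless frequency parameter
        Zc = rho0 * c0 * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)
        k = (2 * np.pi * f / c0) * (1 + 0.0978 * X**-0.700 - 1j * 0.189 * X**-0.595)
        Zs = -1j * Zc / np.tan(k * d)              # rigid backing: Zs = -j*Zc*cot(k*d)
        R = (Zs - rho0 * c0) / (Zs + rho0 * c0)    # pressure reflection coefficient
        return 1 - np.abs(R)**2

    freqs = np.linspace(250, 5000, 50)             # keep X inside the model's validity range
    alpha = delany_bazley_alpha(freqs, sigma=20000.0, d=0.03)
    ```

    The fits are only valid for roughly 0.01 < X < 1, which is why refinements such as the Mechel modification mentioned in the abstract exist for the edges of that range.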

  2. A High Resolution Genome-Wide Scan for Significant Selective Sweeps: An Application to Pooled Sequence Data in Laying Chickens

    PubMed Central

    Qanbari, Saber; Strom, Tim M.; Haberer, Georg; Weigend, Steffen; Gheyas, Almas A.; Turner, Frances; Burt, David W.; Preisinger, Rudolf; Gianola, Daniel; Simianer, Henner

    2012-01-01

    In most studies aimed at localizing footprints of past selection, outliers at the tails of the empirical distribution of a given test statistic are assumed to reflect locus-specific selective forces. Significance cutoffs are subjectively determined, rather than being related to a clear set of hypotheses. Here, we define an empirical p-value for the summary statistic by means of a permutation method that uses the observed SNP structure in the real data. To illustrate the methodology, we applied our approach to a panel of 2.9 million autosomal SNPs identified from re-sequencing a pool of 15 individuals from a brown egg layer line. We scanned the genome for local reductions in heterozygosity, suggestive of selective sweeps. We also employed a modified sliding window approach that accounts for gaps in the sequence and increases scanning resolution by moving the overlapping windows by steps of one SNP only, and suggest calling this a “creeping window” strategy. The approach confirmed selective sweeps in the region of previously described candidate genes, i.e. TSHR, PRL, PRLHR, INSR, LEPR, IGF1, and NRAMP1, when used as positive controls. The genome scan revealed 82 distinct regions with strong evidence of selection (genome-wide p-value<0.001), including genes known to be associated with eggshell structure and the immune system, such as CALB1 and the GAL cluster, respectively. A substantial proportion of signals was found in gene-poor regions, including the most extreme signal on chromosome 1. The observation of multiple signals in a highly selected layer line of chicken is consistent with the hypothesis that egg production is a complex trait controlled by many genes. PMID:23209582
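    The "creeping window" idea above can be sketched directly: a fixed-size window is advanced by one SNP at a time and a heterozygosity statistic is computed per window. The sketch below uses expected heterozygosity 2p(1-p) averaged over the window as the statistic; the paper's exact pooled-heterozygosity formulation and window definition may differ, and the allele frequencies are simulated:

    ```python
    import numpy as np

    def creeping_window_het(freqs, window=50):
        """Mean expected heterozygosity 2p(1-p) in windows of `window`
        consecutive SNPs, advanced one SNP per step (position order)."""
        het = 2 * freqs * (1 - freqs)
        kernel = np.ones(window) / window
        return np.convolve(het, kernel, mode="valid")

    rng = np.random.default_rng(1)
    p = rng.uniform(0.05, 0.5, 1000)     # simulated minor-allele frequencies
    p[400:500] *= 0.1                    # simulated swept region: reduced diversity
    scan = creeping_window_het(p, window=50)
    low = int(np.argmin(scan))           # window with the strongest reduction
    ```

    Moving by one SNP rather than one window width is what gives the method its resolution: the sweep boundary is located to single-SNP precision at the cost of strongly correlated neighboring windows, which is exactly why the permutation-based empirical p-value described in the abstract is needed.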

  3. H2-, He-, and CO2-line broadening coefficients and pressure shifts for the HITRAN database

    NASA Astrophysics Data System (ADS)

    Wilzewski, Jonas; Gordon, Iouli E.; Rothman, Laurence S.

    2014-06-01

    To increase the potential of the HITRAN database in astronomy, experimental and theoretical line broadening coefficients and line shifts of molecules of planetary interest broadened by H2, He, and CO2 have been assembled from available peer-reviewed sources. Since H2 and He are major constituents in the atmospheres of gas giants, and CO2 predominates in atmospheres of some rocky planets with volcanic activity, these spectroscopic data are important for studying planetary atmospheres. The collected data were used to create semi-empirical models for complete data sets from the microwave to the UV part of the spectrum of the studied molecules. The presented work will help identify the need for further investigations of broadening and shifting of spectral lines.
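    Broadening coefficients of the kind collected here are applied following the usual HITRAN conventions: the Lorentzian half-width scales with pressure and a temperature power law, and the line centre is pressure-shifted. A minimal sketch; the numerical values are placeholders, not database entries:

    ```python
    def lorentz_halfwidth(gamma_ref, p_atm, T, n, T_ref=296.0):
        """Pressure-broadened Lorentzian HWHM (cm^-1):
        gamma(p, T) = gamma_ref * (T_ref / T)**n * p, with p in atm."""
        return gamma_ref * (T_ref / T) ** n * p_atm

    def shifted_center(nu0, delta, p_atm):
        """Pressure-shifted line centre (cm^-1): nu = nu0 + delta * p."""
        return nu0 + delta * p_atm

    # Illustrative values only (e.g. an H2-broadened line in a cool atmosphere)
    gamma = lorentz_halfwidth(gamma_ref=0.07, p_atm=0.5, T=150.0, n=0.6)
    nu = shifted_center(nu0=1000.0, delta=-0.005, p_atm=0.5)
    ```

    The per-perturber coefficients (here hypothetical gamma_ref, n, delta) are exactly the quantities the abstract says were assembled for H2, He, and CO2.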

  4. Shape modeling with family of Pearson distributions: Langmuir waves

    NASA Astrophysics Data System (ADS)

    Vidojevic, Sonja

    2014-10-01

    Two major effects of the influence of Langmuir-wave electric fields on spectral line shapes are the appearance of depressions shifted from the unperturbed line and an additional dynamical line broadening. More realistic and accurate models of Langmuir waves are needed to study these effects with more confidence. In this article we present distribution shapes of a high-quality data set of Langmuir-wave electric fields observed by the WIND satellite. Using well-developed numerical techniques, the distributions of the empirical measurements are modeled by the family of Pearson distributions. The results suggest that energy conversion between an electron beam and the surrounding plasma is more complex than existing theoretical models assume. Once the processes of Langmuir wave generation are better understood, the influence of Langmuir waves on spectral line shapes can be modeled more accurately.

  5. On the altitude-variation of electron acceleration by HF radio-waves in the F-region

    NASA Astrophysics Data System (ADS)

    Gustavsson, Bjorn

    2016-07-01

    I will talk about artificial aurora, the descending layers we have observed at HAARP and the altitude-variations we have observed in enhanced ion and plasma-lines with the EISCAT UHF-radar, and present an empirical model describing these phenomena.

  6. The Zeeman Effect in the 44 GHz Class I Methanol Maser Line toward DR21(OH)

    NASA Astrophysics Data System (ADS)

    Momjian, E.; Sarma, A. P.

    2017-01-01

    We report detection of the Zeeman effect in the 44 GHz Class I methanol maser line, toward the star-forming region DR21(OH). In a 219 Jy beam-1 maser centered at an LSR velocity of 0.83 km s-1, we find a 20-σ detection of zBlos = 53.5 ± 2.7 Hz. If 44 GHz methanol masers are excited at n ˜ 107-8 cm-3, then the B versus n1/2 relation would imply, from comparison with Zeeman effect detections in the CN(1 - 0) line toward DR21(OH), that magnetic fields traced by 44 GHz methanol masers in DR21(OH) should be ˜10 mG. Combined with our detected zBlos = 53.5 Hz, this would imply that the value of the 44 GHz methanol Zeeman splitting factor z is ˜5 Hz mG-1. Such small values of z would not be a surprise, as the methanol molecule is non-paramagnetic, like H2O. Empirical attempts to determine z, as demonstrated, are important because there currently are no laboratory measurements or theoretically calculated values of z for the 44 GHz CH3OH transition. Data from observations of a larger number of sources are needed to make such empirical determinations robust.
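    The empirical determination of the splitting factor described above is, at its core, simple arithmetic: z is the ratio of the measured splitting to the assumed line-of-sight field. A sketch using the numbers quoted in the abstract (53.5 Hz splitting, a field of order 10 mG inferred from the CN comparison):

    ```python
    def zeeman_factor(z_blos_hz, b_los_mG):
        """Zeeman splitting factor z (Hz/mG) from a measured zB_los (Hz)
        and an independently assumed line-of-sight field B_los (mG)."""
        return z_blos_hz / b_los_mG

    z = zeeman_factor(53.5, 10.0)   # consistent with the quoted z of ~5 Hz/mG
    ```

    The point of the exercise is that the field, not z, is the assumed quantity: with no laboratory value of z for the 44 GHz CH3OH transition, the CN-derived field estimate is what anchors the calibration.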

  7. Resistance Elasticity of Antibiotic Demand in Intensive Care.

    PubMed

    Heister, Thomas; Hagist, Christian; Kaier, Klaus

    2017-07-01

    The emergence and spread of antimicrobial resistance (AMR) is still an unresolved problem worldwide. In intensive care units (ICUs), first-line antibiotic therapy is highly standardized and widely empiric while treatment failure because of AMR often has severe consequences. Simultaneously, there is a limited number of reserve antibiotics, whose prices and/or side effects are substantially higher than first-line therapy. This paper explores the implications of resistance-induced substitution effects in ICUs. The extent of such substitution effects is shown in a dynamic fixed effect regression analysis using a panel of 66 German ICUs with monthly antibiotic use and resistance data between 2001 and 2012. Our findings support the hypothesis that demand for reserve antibiotics substantially increases when resistance towards first-line agents rises. For some analyses the lagged effect of resistance is also significant, supporting the conjecture that part of the substitution effect is caused by physicians changing antibiotic choices in empiric treatment by adapting their resistance expectation to new information on resistance prevalence. The available information about resistance rates allows physicians to efficiently balance the trade-off between exacerbating resistance and ensuring treatment success. However, resistance-induced substitution effects are not free of charge. These effects should be considered an indirect burden of AMR. Copyright © 2016 John Wiley & Sons, Ltd.

  8. ExoMol line lists XXVIII: The rovibronic spectrum of AlH

    NASA Astrophysics Data System (ADS)

    Yurchenko, Sergei N.; Williams, Henry; Leyland, Paul C.; Lodi, Lorenzo; Tennyson, Jonathan

    2018-06-01

    A new line list for AlH is produced. The WYLLoT line list spans two electronic states X 1Σ+ and A 1Π. A diabatic model is used to model the shallow potential energy curve of the A 1Π state, which has a strong pre-dissociative character with only two bound vibrational states. Both potential energy curves are empirical and were obtained by fitting to experimentally derived energies of the X 1Σ+ and A 1Π electronic states using the diatomic nuclear motion codes DPOTFIT and DUO. High temperature line lists plus partition functions and lifetimes for three isotopologues 27AlH, 27AlD and 26AlH were generated using ab initio dipole moments. The line lists cover both the X-X and A-X systems and are made available in electronic form at the CDS and ExoMol databases.

  9. An atlas of synthetic line profiles of Planetary Nebulae

    NASA Astrophysics Data System (ADS)

    Morisset, C.; Stasinska, G.

    2008-04-01

    We have constructed a grid of photoionization models of spherical, elliptical and bipolar planetary nebulae. Assuming different velocity fields, we have computed line profiles corresponding to different orientations, slit sizes and positions. The atlas is meant both for didactic purposes and for the interpretation of data on real nebulae. As an application, we have shown that line profiles are often degenerate, and that recovering the geometry and velocity field from observations requires lines from ions with different masses and different ionization potentials. We have also shown that the empirical way to measure mass-weighted expansion velocities from observed line widths is reasonably accurate when the HWHM is used. For distant nebulae, entirely covered by the slit, the unknown geometry and orientation do not alter the measured velocities statistically. The atlas is freely accessible from the internet. The Cloudy_3D suite and the associated VISNEB tool are available on request.

  10. Internal Variations in Empirical Oxygen Abundances for Giant H II Regions in the Galaxy NGC 2403

    NASA Astrophysics Data System (ADS)

    Mao, Ye-Wei; Lin, Lin; Kong, Xu

    2018-02-01

    This paper presents a spectroscopic investigation of 11 H II regions in the nearby galaxy NGC 2403. The H II regions are observed with a long-slit spectrograph mounted on the 2.16 m telescope at XingLong station of the National Astronomical Observatories of China. For each of the H II regions, spectra are extracted at different nebular radii along the slit coverage. Oxygen abundances are empirically estimated from the strong-line indices R23, N2O2, O3N2, and N2 for each spectrophotometric unit, with both observation- and model-based calibrations adopted into the derivation. Radial profiles of these diversely estimated abundances are drawn for each nebula. In the results, the oxygen abundances separately estimated with the prescriptions on the basis of observations and models, albeit from the same spectral index, systematically deviate from each other; at the same time, the spectral indices R23 and N2O2 are distributed with flat profiles, whereas N2 and O3N2 exhibit apparent gradients with the nebular radius. Because our study naturally samples various ionization levels, which inherently decline at larger radii within individual H II regions, the radial distributions indicate not only the robustness of R23 and N2O2 against ionization variations but also the sensitivity of N2 and O3N2 to the ionization parameter. The results in this paper provide observational corroboration of the theoretical prediction about the deviation in the empirical abundance diagnostics. Our future work is planned to investigate metal-poor H II regions with measurable Te, in an attempt to recalibrate the strong-line indices and consequently disclose the cause of the discrepancies between the empirical oxygen abundances.
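    Two of the strong-line diagnostics named above have widely used empirical calibrations from Pettini & Pagel (2004). A sketch of those two, for orientation; the input line fluxes are illustrative, and the paper's own set of observation- and model-based calibrations is broader than this:

    ```python
    import math

    def oh_from_N2(f_nii6584, f_halpha):
        """12 + log(O/H) from the N2 index, Pettini & Pagel (2004) linear fit:
        N2 = log10([N II]6584 / Halpha)."""
        N2 = math.log10(f_nii6584 / f_halpha)
        return 8.90 + 0.57 * N2

    def oh_from_O3N2(f_oiii5007, f_hbeta, f_nii6584, f_halpha):
        """12 + log(O/H) from the O3N2 index, Pettini & Pagel (2004):
        O3N2 = log10(([O III]5007/Hbeta) / ([N II]6584/Halpha))."""
        O3N2 = math.log10((f_oiii5007 / f_hbeta) / (f_nii6584 / f_halpha))
        return 8.73 - 0.32 * O3N2

    # Illustrative line fluxes relative to Halpha = Hbeta = 1
    oh_n2 = oh_from_N2(0.35, 1.0)
    oh_o3n2 = oh_from_O3N2(1.2, 1.0, 0.35, 1.0)
    ```

    Because N2 and O3N2 both involve [N II]/Hα, they inherit the ionization-parameter sensitivity the abstract reports, whereas R23 and N2O2 largely cancel it.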

  11. Cough: are children really different to adults?

    PubMed Central

    Chang, Anne B

    2005-01-01

    Worldwide paediatricians advocate that children should be managed differently from adults. In this article, similarities and differences between children and adults related to cough are presented. Physiologically, the cough pathway is closely linked to the control of breathing (the central respiratory pattern generator). As respiratory control and associated reflexes undergo a maturation process, it is expected that the cough would likewise undergo developmental stages as well. Clinically, the 'big three' causes of chronic cough in adults (asthma, post-nasal drip and gastroesophageal reflux) are far less common causes of chronic cough in children. This has been repeatedly shown by different groups in both clinical and epidemiological studies. Therapeutically, some medications used empirically for cough in adults have little role in paediatrics. For example, anti-histamines (in particular H1 antagonists) recommended as a front-line empirical treatment of chronic cough in adults have no effect in paediatric cough. Instead it is associated with adverse reactions and toxicity. Similarly, codeine and its derivatives used widely for cough in adults are not efficacious in children and are contraindicated in young children. Corticosteroids, the other front-line empirical therapy recommended for adults, are also minimally (if at all) efficacious for treating non-specific cough in children. In summary, current data support that management guidelines for paediatric cough should be different to those in adults as the aetiological factors and treatment in children significantly differ to those in adults. PMID:16270937

  12. Collecting the Missing Piece of the Puzzle: The Wind Temperatures of Arcturus (K2 III) and Aldebaran (K5 III)

    NASA Astrophysics Data System (ADS)

    Harper, Graham

    2017-08-01

    Unravelling the poorly understood processes that drive mass loss from red giant stars requires that we empirically constrain the intimately coupled momentum and energy balance. Hubble high-spectral-resolution observations of wind-scattered line profiles, from neutral and singly ionized species, have provided measures of wind acceleration, turbulence, terminal speeds, and mass-loss rates. These wind properties inform us about the force-momentum balance; however, the spectra have not yielded measures of the much-needed wind temperatures, which constrain the energy balance. We proposed to remedy this omission with STIS E140H observations of the Si III 1206 Ang. resonance emission line for two of the best-studied red giants: Arcturus (alpha Boo: K2 III) and Aldebaran (alpha Tau: K5 III), both of which have detailed semi-empirical wind velocity models. The relative optical depths of wind-scattered absorption in Si III 1206 Ang., the O I 1303 Ang. triplet, C II 1335 Ang., and existing Mg II h & k and Fe II profiles give the wind temperatures through the thermally controlled ionization balance. The new temperature constraints will be used to test existing semi-empirical models by comparison with multi-frequency JVLA radio fluxes, and also to constrain the flux-tube geometry and wave energy spectrum of magnetic wave-driven winds.

  13. Determination of astrophysical parameters of quasars within the Gaia mission

    NASA Astrophysics Data System (ADS)

    Delchambre, L.

    2018-01-01

    We describe methods designed to determine the astrophysical parameters of quasars based on spectra coming from the red and blue spectrophotometers of the Gaia satellite. These methods principally rely on two already published algorithms: the weighted principal component analysis and the weighted phase correlation. The presented approach benefits from a fast implementation, an intuitive interpretation and strong diagnostic tools on the potential errors that may arise during predictions. The production of a semi-empirical library of spectra as they will be observed by Gaia is also covered and subsequently used for validation purposes. We detail the pre-processing that is necessary in order for these spectra to be fully exploitable by our algorithms, along with the procedures used to predict the redshifts of the quasars, their continuum slopes, the total equivalent width of their emission lines and whether these are broad absorption line (BAL) quasars or not. The performance of these procedures was assessed in comparison with the extremely randomized trees learning method and proven to provide better results on the redshift predictions and on the ratio of correctly classified observations, though the probability of detection of BAL quasars remains restricted by the low resolution of these spectra as well as by their limited signal-to-noise ratio. Finally, the triggering of some warning flags allows us to obtain an extremely pure subset of redshift predictions, where approximately 99 per cent of the observations come with absolute errors below 0.1.

  14. Astrophysics Meets Atomic Physics: Fe I Line Identifications and Templates for Old Stellar Populations from Warm and Hot Stellar UV Spectra

    NASA Astrophysics Data System (ADS)

    Peterson, Ruth

    2017-08-01

    Imaging surveys from the ultraviolet to the infrared are recording ever more distant astronomical sources. Needed to interpret them are high-resolution ultraviolet spectral templates at all metallicities for both old and intermediate-age stars, and the atomic physics data essential to model their spectra. To this end we are proposing new UV spectra of four warm and hot stars spanning a wide range of metallicity. These will provide observational templates of old and young metal-poor turnoff stars, and the laboratory source for the identification of thousands of lines of neutral iron that appear in stellar spectra but are not identified in laboratory spectra. By matching existing and new stellar spectra to calculations of energy levels, line wavelengths, and gf-values, Peterson & Kurucz (2015) and Peterson, Kurucz, & Ayres (2017) identified 124 Fe I levels with energies up to 8.4 eV. These provided 3000 detectable Fe I lines from 1600 Å to 5.4 μm, and yielded empirical gf-values for 640 of these. Here we propose high-resolution UV spectra reaching 1780 Å for the first time at the turnoff, to detect and identify the strongest Fe I lines at 1800-1850 Å. This should add 250 new Fe I levels. These spectra, plus one at lower resolution reaching 1620 Å, will also provide empirical UV templates for turnoff stars at high redshifts as well as low. This is essential to deriving age and metallicity independently for globular clusters and old galaxies out to z ≈ 3. It will also improve abundances of trace elements in metal-poor stars, constraining nucleosynthesis at early epochs and aiding the reconstruction of the populations of the Milky Way halo and of nearby globular clusters.

  15. Power in health care organizations: contemplations from the first-line management perspective.

    PubMed

    Isosaari, Ulla

    2011-01-01

    The aim of this paper is to examine health care organizations' power structures from the first-line management perspective. What power structures are likely to derive from the theoretical bases of bureaucratic, professional and result-based organizations, and what power type do health care organizations represent, according to the empirical data? The paper seeks to perform an analysis using Mintzberg's power configurations of instrument, closed system, meritocracy and political arena. The empirical study was executed at the end of 2005 through a survey in ten Finnish hospital districts in both specialized and primary care. Respondents were all first-line managers in the area and a sample of staff members from internal disease, surgical and psychiatric units, as well as out-patient and primary care units. The number of respondents was 1,197 and the response rate was 38%. The data were analyzed statistically. As a result, it can be seen that a certain kind of organization structure supports the generation of a certain power type. A bureaucratic organization generates an instrument or closed system organization, a professional organization generates meritocracy and also political arena, and a result-based organization has a connection to political arena and meritocracy. First-line managers regarded health care organizations as instruments, whereas staff regarded them mainly as meritocracies with features of political arena. Managers felt their position to be limited by rules, whereas staff members regarded their position as having considerable space and potential for influence. If organizations seek innovative and active managers at the unit level, they should change the organizational structure and redistribute the work so that there is more space for meaningful management. This research adds to the literature and gives helpful suggestions that will be of interest to those in first-line management positions in health care.

  16. HuH-7 reference genome profile: complex karyotype composed of massive loss of heterozygosity.

    PubMed

    Kasai, Fumio; Hirayama, Noriko; Ozawa, Midori; Satoh, Motonobu; Kohara, Arihiro

    2018-05-17

    Human cell lines represent a valuable resource as in vitro experimental models. A hepatoma cell line, HuH-7 (JCRB0403), has been used extensively in various research fields, and a number of studies using this line have been published continuously since it was established in 1982. However, an accurate genome profile, which can serve as a reliable reference, has not been available. In this study, we performed M-FISH, SNP microarray and amplicon sequencing to characterize the cell line. Single cell analysis of metaphases revealed a high level of heterogeneity with a mode of 60 chromosomes. Cytogenetic results demonstrated chromosome abnormalities involving every chromosome, in addition to a massive loss of heterozygosity, which accounts for 55.3% of the genome, consistent with the homozygous variants seen in the sequence analysis. We provide empirical data that the HuH-7 cell line is composed of highly heterogeneous cell populations, suggesting that besides cell line authentication, the quality of cell lines needs to be taken into consideration in the future use of tumor cell lines.

  17. Tracer kinetics of forearm endothelial function: comparison of an empirical method and a quantitative modeling technique.

    PubMed

    Zhao, Xueli; Arsenault, Andre; Lavoie, Kim L; Meloche, Bernard; Bacon, Simon L

    2007-01-01

    Forearm Endothelial Function (FEF) is a marker that has been shown to discriminate patients with cardiovascular disease (CVD). FEF has been assessed using several parameters: the Rate of Uptake Ratio (RUR), the Elbow-to-Wrist Uptake Ratio (EWUR) and the Elbow-to-Wrist Relative Uptake Ratio (EWRUR). However, FEF modeling requires more robust models. The present study was designed to compare an empirical method with quantitative modeling techniques to better estimate the physiological parameters and understand the complex dynamic processes. The fitted time activity curves of the forearms, estimating blood and muscle components, were assessed using both an empirical method and a two-compartment model. Correlational analyses suggested a good correlation between the methods for RUR (r=.90) and EWUR (r=.79), but not for EWRUR (r=.34); however, Bland-Altman plots showed poor agreement between the methods for all three parameters. These results indicate that there is a large discrepancy between the empirical and computational methods for FEF. Further work is needed to establish the physiological and mathematical validity of the two modeling methods.
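The contrast the record draws (high correlation but poor agreement) is exactly what a Bland-Altman analysis exposes. The sketch below uses synthetic data with a hypothetical scale error, not the study's measurements, and the conventional 1.96·SD limits of agreement.

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman statistics for two measurement methods.

    Returns the mean difference (bias) and the 95% limits of agreement.
    High correlation between a and b does not imply small bias or narrow
    limits, which is why this analysis complements the correlation r.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# two hypothetical estimates of the same quantity: correlated, but the
# second carries a 20% scale error
rng = np.random.default_rng(0)
truth = rng.uniform(1.0, 3.0, 100)
empirical = truth + rng.normal(0.0, 0.05, 100)
modelled = 1.2 * truth + rng.normal(0.0, 0.05, 100)

bias, (lo, hi) = bland_altman(empirical, modelled)
r = np.corrcoef(empirical, modelled)[0, 1]
```

Despite r above 0.9, the bias is clearly non-zero, mirroring the pattern the abstract reports.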

  18. Modelling of sedimentation and remobilization in in-line storage sewers for stormwater treatment.

    PubMed

    Frehmann, T; Flores, C; Luekewille, F; Mietzel, T; Spengler, B; Geiger, W F

    2005-01-01

    A special arrangement of combined sewer overflow tanks is the in-line storage sewer with downstream discharge (ISS-down). This layout has the advantage that, apart from the sewer system itself, no other structures are required for stormwater treatment. The efficiency with respect to the processes of sedimentation and remobilization of sediment within the in-line storage sewer with downstream discharge is verified through a combination of a field study and a pilot plant study. The model study was carried out using a pilot plant scaled 1:13. This paper presents selected results of the pilot plant study and of the mathematical empirical modelling of the sedimentation and remobilization processes.

  19. Treatment of Anxiety and Depression in the Preschool Period

    PubMed Central

    Luby, Joan L.

    2013-01-01

    Objective Empirical studies have now established that clinical anxiety and depressive disorders may arise in preschool children as early as age 3. As empirical studies validating and characterizing these disorders in preschoolers are relatively recent, less work has been done on the development and testing of age-appropriate treatments. Method A comprehensive literature search revealed several small randomized controlled trials (RCTs) of psychotherapeutic treatments for preschool anxiety and depression. The literature also contains case series of behavioral and psychopharmacologic interventions for specific anxiety disorders. However, to date, no large-scale RCTs of treatment for any anxiety or depressive disorder specifically targeting preschool populations have been published. Results Several age-adapted forms of cognitive behavioral therapy have been developed and preliminarily tested in small RCTs, and appear promising for a variety of forms of preschool anxiety disorders. Notably, these adaptations centrally involve primary caregivers and use age-adjusted methods such as cartoon-based materials and co-constructed drawings or narratives. Modified forms of Parent-Child Interaction Therapy (PCIT) have been tested and appear promising for both anxiety and depression. While preventive interventions that target parenting have shown significant promise in anxiety, these methods have not been explored in the area of early childhood depression. Studies of the impact of parental treatment on infants suggest that direct treatment of the youngest children may be necessary to effect long-term change. Conclusions Recommendations are made for the clinical treatment of these disorders, with psychotherapy as the first line of intervention. PMID:23582866

  20. Modified complementary ensemble empirical mode decomposition and intrinsic mode functions evaluation index for high-speed train gearbox fault diagnosis

    NASA Astrophysics Data System (ADS)

    Chen, Dongyue; Lin, Jianhui; Li, Yanping

    2018-06-01

    Complementary ensemble empirical mode decomposition (CEEMD) was developed to address the mode-mixing problem of the empirical mode decomposition (EMD) method. Compared to ensemble empirical mode decomposition (EEMD), the CEEMD method reduces residue noise in the signal reconstruction. Both CEEMD and EEMD need a sufficiently large ensemble number to reduce the residue noise, which entails a high computational cost. Moreover, the selection of intrinsic mode functions (IMFs) for further analysis usually depends on experience. A modified CEEMD method and an IMF evaluation index are proposed with the aim of reducing the computational cost and selecting IMFs automatically. A simulated signal and in-service high-speed train gearbox vibration signals are employed to validate the proposed method. The results demonstrate that the modified CEEMD can decompose the signal efficiently at a lower computational cost, and that the IMF evaluation index can select the meaningful IMFs automatically.
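The core loop that CEEMD wraps in noise-pair ensembles is the EMD sift: detect extrema, spline the envelopes, subtract the mean envelope. The sketch below is a minimal plain-EMD implementation (not the authors' modified CEEMD); the fixed sift count and the near-monotonic-residue stopping rule are simplifying assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift(x, t, n_sifts=10):
    """Extract one IMF candidate with a fixed number of sifting passes."""
    h = x.astype(float).copy()
    for _ in range(n_sifts):
        # interior local extrema
        maxima = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
        minima = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
        if len(maxima) < 4 or len(minima) < 4:
            break
        upper = CubicSpline(t[maxima], h[maxima], bc_type="natural")(t)
        lower = CubicSpline(t[minima], h[minima], bc_type="natural")(t)
        h = h - 0.5 * (upper + lower)   # subtract the mean envelope
    return h

def emd(x, t, max_imfs=6):
    """Plain EMD: peel off IMFs until the residue is near-monotonic."""
    imfs, residue = [], x.astype(float).copy()
    for _ in range(max_imfs):
        imf = sift(residue, t)
        imfs.append(imf)
        residue = residue - imf
        turning = np.sum(np.diff(np.sign(np.diff(residue))) != 0)
        if turning < 2:                 # residue has (almost) no oscillation left
            break
    return imfs, residue

t = np.linspace(0.0, 1.0, 2000)
x = np.sin(2 * np.pi * 40 * t) + np.sin(2 * np.pi * 5 * t)
imfs, residue = emd(x, t)
```

By construction the IMFs and residue sum back to the input exactly; CEEMD runs this decomposition on an ensemble of signal-plus-noise and signal-minus-noise pairs and averages the IMFs so the injected noise cancels.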

  1. Categories of Large Numbers in Line Estimation

    ERIC Educational Resources Information Center

    Landy, David; Charlesworth, Arthur; Ottmar, Erin

    2017-01-01

    How do people stretch their understanding of magnitude from the experiential range to the very large quantities and ranges important in science, geopolitics, and mathematics? This paper empirically evaluates how and whether people make use of numerical categories when estimating relative magnitudes of numbers across many orders of magnitude. We…

  2. Improvement of a Nonlinear Internal Wave Tactical Decision Aid

    DTIC Science & Technology

    2009-01-01

    solitons originating in the Luzon Strait and propagating across the South China Sea, as well as the solitons in the Sulu and Celebes Seas. The prediction...lines are wave observations from moorings B1 (dark) and P1 (light). IMPACT/APPLICATIONS An empirical model for estimating the geographic

  3. Below the Salary Line: Employee Engagement of Non-Salaried Employees

    ERIC Educational Resources Information Center

    Shuck, Brad; Albornoz, Carlos

    2007-01-01

    This exploratory empirical phenomenological study examines employee engagement using Kahn's (1990) and Maslow's (1970) motivational theories to understand the experience of non-salaried employees. The study finds four themes that appear to affect employee engagement: work environment, the employee's supervisor, individual characteristics of the employee,…

  4. Amalgamation of Future Time Orientation, Epistemological Beliefs, Achievement Goals and Study Strategies: Empirical Evidence Established

    ERIC Educational Resources Information Center

    Phan, Huy P.

    2009-01-01

    Background: Recently research evidence emphasizes two main lines of inquiry, namely the relations between future time perspective (FTP), achievement goals (mastery, performance-approach, and performance-avoidance) and study processing strategies, and the relations between epistemological beliefs, achievement goals and study processing strategies.…

  5. Toward an Integrated Approach to Positive Development: Implications for Intervention

    ERIC Educational Resources Information Center

    Tolan, Patrick; Ross, Katherine; Arkin, Nora; Godine, Nikki; Clark, Erin

    2016-01-01

    Positive development models shift focus for intervention from avoiding problems, deficits, or psychopathology to promoting skills, assets, and psychological well-being as the critical interests in development and intervention. The field can be characterized as multiple parallel lines of empirical inquiry from four frameworks: Social Competence,…

  6. The (Campus) Empire Strikes Back

    ERIC Educational Resources Information Center

    Archibald, Fred

    2008-01-01

    When it comes to anti-malware protection, today's university IT departments have their work cut out for them. Network managers must walk the fine line between enabling a highly collaborative, non-restrictive environment, and ensuring the confidentiality, integrity, and availability of data and computing resources. This is no easy task, especially…

  7. Two Attentional Models of Classical Conditioning: Variations in CS Effectiveness Revisited.

    DTIC Science & Technology

    1987-04-03

    probability is in closer agreement with empirical expectations, tending to lie on a line with slope equal to 1. Experiments in pigeon autoshaping have shown...Gibbon, J., Farrell, L., Locurto, C.M., Duncan, H., & Terrace, H.S. (1980). Partial reinforcement in autoshaping with pigeons. Animal Learning and

  8. One-dimensional analysis of supersonic two-stage HVOF process

    NASA Astrophysics Data System (ADS)

    Katanoda, Hiroshi; Hagi, Junichi; Fukuhara, Minoru

    2009-12-01

    A one-dimensional calculation of the gas/particle flows of a supersonic two-stage high-velocity oxy-fuel (HVOF) thermal spray process was performed. The internal gas flow was solved by numerically integrating the equations of quasi-one-dimensional flow, including the effects of pipe friction and heat transfer. For the supersonic jet flow, semi-empirical equations were used to obtain the gas velocity and temperature along the center line. The velocity and temperature of the particles were obtained by a one-way coupling method. The spray particle material selected in this study is ultra-high-molecular-weight polyethylene (UHMWPE). The temperature distributions in spherical UHMWPE particles of 50 and 150 μm accelerated and heated by the supersonic gas flow were clarified.

  9. ExoMol line lists - XXIX. The rotation-vibration spectrum of methyl chloride up to 1200 K

    NASA Astrophysics Data System (ADS)

    Owens, A.; Yachmenev, A.; Thiel, W.; Fateev, A.; Tennyson, J.; Yurchenko, S. N.

    2018-06-01

    Comprehensive rotation-vibration line lists are presented for the two main isotopologues of methyl chloride, ¹²CH₃³⁵Cl and ¹²CH₃³⁷Cl. The line lists, OYT-35 and OYT-37, are suitable for temperatures up to T = 1200 K and consider transitions with rotational excitation up to J = 85 in the wavenumber range 0-6400 cm⁻¹ (wavelengths λ > 1.56 μm). Over 166 billion transitions between 10.2 million energy levels have been calculated variationally for each line list using a new empirically refined potential energy surface, determined by refining to 739 experimentally derived energy levels up to J = 5, and an established ab initio dipole moment surface. The OYT line lists show excellent agreement with newly measured high-temperature infrared absorption cross-sections, reproducing both strong and weak intensity features across the spectrum. The line lists are available from the ExoMol database and the CDS database.

  10. Measurements of Electron Impact Excitation Cross Sections at the Harvard-Smithsonian Center for Astrophysics

    NASA Technical Reports Server (NTRS)

    Gardner, L. D.; Kohl, J. L.

    2006-01-01

    The analysis of absolute spectral line intensities and intensity ratios with spectroscopic diagnostic techniques provides empirical determinations of chemical abundances, electron densities and temperatures in astrophysical objects. Since spectral line intensities and their ratios are controlled by the excitation rate coefficients for the electron temperature of the observed astrophysical structure, it is imperative that one have accurate values for the relevant rate coefficients. Here at the Harvard-Smithsonian Center for Astrophysics, we have been carrying out measurements of electron impact excitation (EIE) for more than 25 years.

  11. Spectral analysis of early-type stars using a genetic algorithm based fitting method

    NASA Astrophysics Data System (ADS)

    Mokiem, M. R.; de Koter, A.; Puls, J.; Herrero, A.; Najarro, F.; Villamariz, M. R.

    2005-10-01

    We present the first automated fitting method for the quantitative spectroscopy of O- and early B-type stars with stellar winds. The method combines the non-LTE stellar atmosphere code fastwind from Puls et al. (2005, A&A, 435, 669) with the genetic algorithm based optimization routine pikaia from Charbonneau (1995, ApJS, 101, 309), allowing for a homogeneous analysis of upcoming large samples of early-type stars (e.g. Evans et al. 2005, A&A, 437, 467). In this first implementation we use continuum normalized optical hydrogen and helium lines to determine photospheric and wind parameters. We have assigned weights to these lines accounting for line blends with species not taken into account, lacking physics, and/or potential problems in the model atmosphere code. We find the method to be robust, fast, and accurate. Using our method we analysed seven O-type stars in the young cluster Cyg OB2 and five other Galactic stars with high rotational velocities and/or low mass loss rates (including 10 Lac, ζ Oph, and τ Sco) that have been studied in detail with a previous version of fastwind. The fits are found to have a quality that is comparable to or even better than that produced by the classical “by eye” method. We define error bars on the model parameters based on the maximum variations of these parameters in the models that cluster around the global optimum. Using this concept, for the investigated dataset we are able to recover mass-loss rates down to ~6 × 10⁻⁸ M⊙ yr⁻¹ to within an error of a factor of two, ignoring possible systematic errors due to uncertainties in the continuum normalization. Comparison of our derived spectroscopic masses with those derived from stellar evolutionary models shows very good agreement, i.e. based on the limited sample that we have studied we do not find indications for a mass discrepancy. For three stars we find significantly higher surface gravities than previously reported. We identify this to be due to differences in the weighting of Balmer line wings between our automated method and “by eye” fitting and/or an improved multidimensional optimization of the parameters. The empirical modified wind momentum relation constructed on the basis of the stars analysed here agrees to within the error bars with the theoretical relation predicted by Vink et al. (2000, A&A, 362, 295), including those cases for which the winds are weak (i.e. less than a few times 10⁻⁷ M⊙ yr⁻¹).

  12. Chronic Fatigue Syndrome and Myalgic Encephalomyelitis: Toward An Empirical Case Definition

    PubMed Central

    Jason, Leonard A.; Kot, Bobby; Sunnquist, Madison; Brown, Abigail; Evans, Meredyth; Jantke, Rachel; Williams, Yolonda; Furst, Jacob; Vernon, Suzanne D.

    2015-01-01

    Current case definitions of Myalgic Encephalomyelitis (ME) and chronic fatigue syndrome (CFS) have been based on consensus methods, but empirical methods could be used to identify core symptoms and thereby improve the reliability. In the present study, several methods (i.e., continuous scores of symptoms, theoretically and empirically derived cut off scores of symptoms) were used to identify core symptoms best differentiating patients from controls. In addition, data mining with decision trees was conducted. Our study found a small number of core symptoms that have good sensitivity and specificity, and these included fatigue, post-exertional malaise, a neurocognitive symptom, and unrefreshing sleep. Outcomes from these analyses suggest that using empirically selected symptoms can help guide the creation of a more reliable case definition. PMID:26029488

  13. Theoretical Stark broadening parameters for spectral lines arising from the 2p⁵ns, 2p⁵np and 2p⁵nd electronic configurations of Mg III

    NASA Astrophysics Data System (ADS)

    Colón, C.; Moreno-Díaz, C.; Alonso-Medina, A.

    2013-10-01

    In the present work we report theoretical Stark widths and shifts calculated using the Griem semi-empirical approach, corresponding to 237 spectral lines of Mg III. Data are presented for an electron density of 10¹⁷ cm⁻³ and temperatures T = 0.5-10.0 × 10⁴ K. The matrix elements used in these calculations have been determined from 23 configurations of Mg III: 2s²2p⁶, 2s²2p⁵3p, 2s²2p⁵4p, 2s²2p⁵4f and 2s²2p⁵5f for even parity and 2s²2p⁵ns (n = 3-6), 2s²2p⁵nd (n = 3-9), 2s²2p⁵5g and 2s2p⁶np (n = 3-8) for odd parity. For the intermediate coupling (IC) calculations, we use the standard method of least-squares fitting from experimental energy levels by means of the Cowan computer code. Also, in order to test the matrix elements used in our calculations, we present calculated values of 70 transition probabilities of Mg III spectral lines and 14 calculated values of radiative lifetimes of Mg III levels. There is good agreement between our calculations and experimental radiative lifetimes. Spectral lines of Mg III are relevant in astrophysics and also play an important role in the spectral analysis of laboratory plasmas. Theoretical trends of the Stark broadening parameter versus temperature for relevant lines are presented. No previously published values of these Stark parameters could be found in the literature for comparison.

  14. A survey on hematology-oncology pediatric AIEOP centers: prophylaxis, empirical therapy and nursing prevention procedures of infectious complications

    PubMed Central

    Livadiotti, Susanna; Milano, Giuseppe Maria; Serra, Annalisa; Folgori, Laura; Jenkner, Alessandro; Castagnola, Elio; Cesaro, Simone; Rossi, Mario R.; Barone, Angelica; Zanazzo, Giulio; Nesi, Francesca; Licciardello, Maria; De Santis, Raffaella; Ziino, Ottavio; Cellini, Monica; Porta, Fulvio; Caselli, Desiree; Pontrelli, Giuseppe

    2012-01-01

    A nationwide questionnaire-based survey was designed to evaluate the management and prophylaxis of febrile neutropenia in pediatric patients admitted to hematology-oncology and hematopoietic stem cell transplant units. Of the 34 participating centers, 40 and 63%, respectively, continue to prescribe antibacterial and antimycotic prophylaxis in low-risk subjects, and 78 and 94% in transplant patients. Approximately half of the centers prescribe a combination antibiotic regimen as first-line therapy in low-risk patients, and up to 81% in high-risk patients. When initial empirical therapy fails after seven days, 63% of the centers add empirical antimycotic therapy in low- and 81% in high-risk patients. Overall management varies significantly across centers. Preventive nursing procedures are in accordance with international guidelines. This survey is the first to focus on prescribing practices in children with cancer and could help to implement practice guidelines. PMID:21993676

  15. Preequating with Empirical Item Characteristic Curves: An Observed-Score Preequating Method

    ERIC Educational Resources Information Center

    Zu, Jiyun; Puhan, Gautam

    2014-01-01

    Preequating is in demand because it reduces score reporting time. In this article, we evaluated an observed-score preequating method: the empirical item characteristic curve (EICC) method, which makes preequating without item response theory (IRT) possible. EICC preequating results were compared with a criterion equating and with IRT true-score…

  16. A Comparison of Two Scoring Methods for an Automated Speech Scoring System

    ERIC Educational Resources Information Center

    Xi, Xiaoming; Higgins, Derrick; Zechner, Klaus; Williamson, David

    2012-01-01

    This paper compares two alternative scoring methods--multiple regression and classification trees--for an automated speech scoring system used in a practice environment. The two methods were evaluated on two criteria: construct representation and empirical performance in predicting human scores. The empirical performance of the two scoring models…

  17. Apparent and microscopic dynamic contact angles in confined flows

    NASA Astrophysics Data System (ADS)

    Omori, Takeshi; Kajishima, Takeo

    2017-11-01

    An abundance of empirical correlations between a dynamic contact angle and a capillary number representing a translational velocity of a contact line have been provided for the last decades. The experimentally obtained dynamic contact angles are inevitably apparent contact angles but often undistinguished from microscopic contact angles formed right on the wall. As Bonn et al. ["Wetting and spreading," Rev. Mod. Phys. 81, 739-805 (2009)] pointed out, however, most of the experimental studies simply report values of angles recorded at some length scale which is quantitatively unknown. It is therefore hard to evaluate or judge the physical validity and the generality of the empirical correlations. The present study is an attempt to clear this clutter regarding the dynamic contact angle by measuring both the apparent and the microscopic dynamic contact angles from the identical data sets in a well-controlled manner, by means of numerical simulation. The numerical method was constructed so that it reproduced the fine details of the flow with a moving contact line predicted by molecular dynamics simulations [T. Qian, X. Wang, and P. Sheng, "Molecular hydrodynamics of the moving contact line in two-phase immiscible flows," Commun. Comput. Phys. 1, 1-52 (2006)]. We show that the microscopic contact angle as a function of the capillary number has the same form as Blake's molecular-kinetic model [T. Blake and J. Haynes, "Kinetics of liquid/liquid displacement," J. Colloid Interface Sci. 30, 421-423 (1969)], regardless of the way the flow is driven, the channel width, the mechanical properties of the receding fluid, and the value of the equilibrium contact angle under the conditions where the Reynolds and capillary numbers are small. We have also found that the apparent contact angle obtained by the arc-fitting of the interface behaves surprisingly universally as claimed in experimental studies in the literature [e.g., X. Li et al., "An experimental study on dynamic pore wettability," Chem. Eng. Sci. 104, 988-997 (2013)], although the angle deviates significantly from the microscopic contact angle. It leads to a practically important point that it suffices to measure arc-fitted contact angles to make formulae to predict flow rates in capillary tubes.

  18. From empirical Bayes to full Bayes : methods for analyzing traffic safety data.

    DOT National Transportation Integrated Search

    2004-10-24

    Traffic safety engineers are among the early adopters of Bayesian statistical tools for : analyzing crash data. As in many other areas of application, empirical Bayes methods were : their first choice, perhaps because they represent an intuitively ap...
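The abstract above is truncated, but the empirical Bayes approach it refers to is commonly written as a weighted average of a site's observed crash count and a safety-performance-function (SPF) prediction, with the weight coming from the SPF's negative binomial overdispersion parameter (Hauer's formulation). A minimal sketch with hypothetical numbers:

```python
def eb_expected_crashes(observed, predicted, overdispersion):
    """Empirical Bayes estimate of a site's long-run crash frequency.

    Shrinks the observed count toward the SPF prediction; the weight
    follows the usual negative binomial form
        w = 1 / (1 + k * predicted),
    where k is the overdispersion parameter of the SPF.
    """
    w = 1.0 / (1.0 + overdispersion * predicted)
    return w * predicted + (1.0 - w) * observed

# hypothetical site: 9 crashes observed, SPF predicts 4.2, k = 0.6
site_estimate = eb_expected_crashes(9, 4.2, 0.6)
```

With k = 0 (no overdispersion) the estimate collapses to the SPF prediction; as k grows, it trusts the observed count, which is the shrinkage behavior that made EB methods popular with safety engineers.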

  19. Comparison of two interpolation methods for empirical mode decomposition based evaluation of radiographic femur bone images.

    PubMed

    Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan

    2013-01-01

    Analysis of bone strength in radiographic images is an important component of estimating bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze bone architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular femur bone architecture of radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as radial basis function multiquadric and hierarchical B-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations of femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study appears to be clinically useful.
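The step the comparison above targets is interpolating a smooth surface through scattered extrema. A minimal sketch of the radial basis function multiquadric variant, using SciPy's legacy `Rbf` interpolator on synthetic scattered points rather than radiograph data:

```python
import numpy as np
from scipy.interpolate import Rbf

# scattered "extrema" samples of a synthetic 2-D surface
rng = np.random.default_rng(3)
x, y = rng.random(60), rng.random(60)
z = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

# multiquadric RBF surface: exact interpolation through the scattered points
envelope = Rbf(x, y, z, function="multiquadric")

# evaluate the envelope surface on a regular grid, as an envelope step
# of bi-dimensional EMD would
gx, gy = np.meshgrid(np.linspace(0.0, 1.0, 32), np.linspace(0.0, 1.0, 32))
gz = envelope(gx, gy)
```

By default `Rbf` passes exactly through the data points, which is the property an envelope surface in bi-dimensional EMD needs; the hierarchical B-spline alternative trades that exactness for locality and speed.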

  20. xEMD procedures as a data - Assisted filtering method

    NASA Astrophysics Data System (ADS)

    Machrowska, Anna; Jonak, Józef

    2018-01-01

    The article presents the possibility of using the Empirical Mode Decomposition (EMD), Ensemble Empirical Mode Decomposition (EEMD), Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Improved Complete Ensemble Empirical Mode Decomposition (ICEEMD) algorithms for mechanical system condition monitoring applications. Results are presented for the xEMD procedures applied to vibration signals of a system in different states of wear.

  1. Spectroscopy of Solid State Laser Materials

    NASA Technical Reports Server (NTRS)

    Buoncristiani, A. M.

    1994-01-01

    We retrieved the vertical distribution of ozone from a series of 0.005-0.013 cm⁻¹ resolution infrared solar spectra recorded with the McMath Fourier Transform spectrometer at the Kitt Peak National Solar Observatory. The analysis is based on a multi-layer line-by-line forward model and a semi-empirical version of the optimal estimation inversion method by Rodgers. The 1002.6-1003.2 cm⁻¹ spectral interval was selected for the analysis on the basis of synthetic spectrum calculations. The characterization and error analysis of the method have been performed. It was shown that for the Kitt Peak spectral resolution and typical signal-to-noise ratio (greater than or equal to 100) the retrieval is stable, with a vertical resolution of approximately 5 km attainable near the surface, degrading to approximately 10 km in the stratosphere. Spectra recorded from 1980 through 1993 have been analyzed. The retrieved total ozone and vertical profiles have been compared with total ozone mapping spectrometer (TOMS) satellite total columns for the location and dates of the Kitt Peak measurements, and with about 100 ozonesoundings and Brewer total column measurements from Palestine, Texas, from 1979 to 1985. The total ozone measurements agree to within +/- 2%. The retrieved profiles reproduce the seasonally averaged variations with altitude, including the ozone spring maximum and fall minimum measured by the Palestine sondes, but up to 15% differences in the absolute values are obtained.

  2. Improving the quality of marine geophysical track line data: Along-track analysis

    NASA Astrophysics Data System (ADS)

    Chandler, Michael T.; Wessel, Paul

    2008-02-01

    We have examined 4918 track line geophysics cruises archived at the U.S. National Geophysical Data Center (NGDC) using comprehensive error checking methods. Each cruise was checked for observation outliers, excessive gradients, metadata consistency, and general agreement with satellite altimetry-derived gravity and predicted bathymetry grids. Thresholds for error checking were determined empirically through inspection of histograms for all geophysical values, gradients, and differences with gridded data sampled along ship tracks. Robust regression was used to detect systematic scale and offset errors found by comparing ship bathymetry and free-air anomalies to the corresponding values from global grids. We found many recurring error types in the NGDC archive, including poor navigation, inappropriately scaled or offset data, excessive gradients, and extended offsets in depth and gravity when compared to global grids. While ~5-10% of bathymetry and free-air gravity records fail our conservative tests, residual magnetic errors may exceed twice this proportion. These errors hinder the effective use of the data and may lead to mistakes in interpretation. To enable the removal of gross errors without over-writing original cruise data, we developed an errata system that concisely reports all errors encountered in a cruise. With such errata files, scientists may share cruise corrections, thereby preventing redundant processing. We have implemented these quality control methods in the modified MGD77 supplement to the Generic Mapping Tools software suite.
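The scale-and-offset check described above can be sketched as a robust straight-line fit of ship values against grid values, so that gross errors do not drag the estimate. The sketch below uses iteratively reweighted least squares with Huber weights; the thresholds and synthetic "cruise" are illustrative, not the authors' actual procedure.

```python
import numpy as np

def robust_scale_offset(grid, ship, iters=20, c=1.345):
    """Fit ship ~ offset + scale * grid by iteratively reweighted least
    squares with Huber weights, limiting the influence of gross outliers."""
    grid = np.asarray(grid, float)
    ship = np.asarray(ship, float)
    A = np.column_stack([np.ones_like(grid), grid])
    w = np.ones_like(grid)
    offset = scale = 0.0
    for _ in range(iters):
        sw = np.sqrt(w)
        offset, scale = np.linalg.lstsq(A * sw[:, None], ship * sw, rcond=None)[0]
        r = ship - (offset + scale * grid)
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust scale (MAD)
        u = np.abs(r) / (c * s)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)                     # Huber weights
    return offset, scale

# synthetic cruise: true offset 50 m and scale 1.8, plus a few gross errors
rng = np.random.default_rng(7)
grid_depth = rng.uniform(100.0, 5000.0, 300)
ship_depth = 50.0 + 1.8 * grid_depth + rng.normal(0.0, 5.0, 300)
ship_depth[:15] += 3000.0
offset, scale = robust_scale_offset(grid_depth, ship_depth)
```

A fitted scale far from 1 or a large offset flags an inappropriately scaled or offset cruise of the kind the survey reports.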

  3. Robust imaging and gene delivery to study human lymphoblastoid cell lines.

    PubMed

    Jolly, Lachlan A; Sun, Ying; Carroll, Renée; Homan, Claire C; Gecz, Jozef

    2018-06-20

    Lymphoblastoid cell lines (LCLs) have been by far the most prevalent cell type used to study the genetics underlying normal and disease-relevant human phenotypic variation, across personal to epidemiological scales. In contrast, only a few studies have explored the use of LCLs in functional genomics and mechanistic studies. Two major reasons are technical: (1) interrogating the sub-cellular spatial information of LCLs is challenged by their non-adherent nature, and (2) LCLs are refractory to gene transfection. Methodological details relating to techniques that overcome these limitations are scarce and largely inadequate without additional knowledge and expertise, and optimisation has never been described. Here we compare, optimise, and convey such methods in depth. We provide a robust method to adhere LCLs to coverslips, which maintained cellular integrity and morphology and permitted visualisation of sub-cellular structures and protein localisation. Next, we developed the use of lentiviral-based gene delivery to LCLs. Through empirical and combinatorial testing of multiple transduction conditions, we improved transduction efficiency from 3% up to 48%. Furthermore, we established strategies to purify transduced cells, achieving sustainable cultures containing >85% transduced cells. Collectively, our methodologies provide a vital resource that enables the use of LCLs in functional cell and molecular biology experiments. Potential applications include the characterisation of genetic variants of unknown significance, the interrogation of cellular disease pathways and mechanisms, and high-throughput discovery of genetic modifiers of disease states, among others.

  4. Probabilistic analysis of tsunami hazards

    USGS Publications Warehouse

    Geist, E.L.; Parsons, T.

    2006-01-01

    Determining the likelihood of a disaster is a key component of any comprehensive hazard assessment. This is particularly true for tsunamis, even though most tsunami hazard assessments have in the past relied on scenario or deterministic type models. We discuss probabilistic tsunami hazard analysis (PTHA) from the standpoint of integrating computational methods with empirical analysis of past tsunami runup. PTHA is derived from probabilistic seismic hazard analysis (PSHA), with the main difference being that PTHA must account for far-field sources. The computational methods rely on numerical tsunami propagation models rather than empirical attenuation relationships as in PSHA in determining ground motions. Because a number of source parameters affect local tsunami runup height, PTHA can become complex and computationally intensive. Empirical analysis can function in one of two ways, depending on the length and completeness of the tsunami catalog. For site-specific studies where there is sufficient tsunami runup data available, hazard curves can primarily be derived from empirical analysis, with computational methods used to highlight deficiencies in the tsunami catalog. For region-wide analyses and sites where there are little to no tsunami data, a computationally based method such as Monte Carlo simulation is the primary method to establish tsunami hazards. Two case studies that describe how computational and empirical methods can be integrated are presented for Acapulco, Mexico (site-specific) and the U.S. Pacific Northwest coastline (region-wide analysis).
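The Monte Carlo branch of PTHA described above can be sketched end to end: sample sources, map each to a runup, and tabulate annual exceedance probabilities. The Gutenberg-Richter sampling below is standard; the magnitude-to-runup scaling is a hypothetical placeholder for the numerical propagation model a real PTHA would use, and all rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_magnitudes(n, b=1.0, m_min=7.0, m_max=9.0):
    """Draw magnitudes from a truncated Gutenberg-Richter distribution
    by inverse-CDF sampling."""
    beta = b * np.log(10.0)
    c = 1.0 - np.exp(-beta * (m_max - m_min))
    u = rng.random(n)
    return m_min - np.log(1.0 - u * c) / beta

def runup_from_magnitude(m):
    """Hypothetical log-linear magnitude-to-runup scaling (metres); a real
    PTHA replaces this with numerical tsunami propagation runs."""
    return 10.0 ** (0.8 * (m - 7.5))

def exceedance_curve(runups, rate_per_year, thresholds):
    """Annual probability that runup exceeds each threshold, assuming
    Poissonian earthquake occurrence at the given rate."""
    frac = np.array([(runups > h).mean() for h in thresholds])
    return 1.0 - np.exp(-rate_per_year * frac)

mags = sample_magnitudes(50_000)
thresholds = np.array([0.5, 1.0, 2.0, 4.0])
curve = exceedance_curve(runup_from_magnitude(mags), 0.1, thresholds)
```

The resulting hazard curve decreases with threshold height; for site-specific studies the abstract notes the same curve can instead be built empirically from the runup catalog.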

  5. ExoMol line lists - XXII. The rotation-vibration spectrum of silane up to 1200 K

    NASA Astrophysics Data System (ADS)

    Owens, A.; Yachmenev, A.; Thiel, W.; Tennyson, J.; Yurchenko, S. N.

    2017-11-01

    A variationally computed 28SiH4 rotation-vibration line list applicable for temperatures up to T = 1200 K is presented. The line list, called OY2T, considers transitions with rotational excitation up to J = 42 in the wavenumber range 0-5000 cm-1 (wavelengths λ > 2 μm). Just under 62.7 billion transitions have been calculated between 6.1 million energy levels. Rovibrational calculations have utilized a new `spectroscopic' potential energy surface determined by empirical refinement to 1452 experimentally derived energy levels up to J = 6, and a previously reported ab initio dipole moment surface. The temperature-dependent partition function of silane, the OY2T line list format, and the temperature dependence of the OY2T line list are discussed. Comparisons with the PNNL spectral library and other experimental sources indicate that the OY2T line list is robust and able to accurately reproduce weaker intensity features. The full line list is available from the ExoMol data base and the CDS data base.
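    The temperature-dependent partition function mentioned above has the standard form Q(T) = Σᵢ gᵢ exp(−c₂Eᵢ/T) for level energies in cm⁻¹; a sketch with a toy two-level system (invented energies, not the real silane levels):

```python
import numpy as np

C2 = 1.438777  # second radiation constant hc/k, in cm*K

def partition_function(energies_cm, degeneracies, T):
    """Q(T) = sum_i g_i * exp(-c2 * E_i / T) over rovibrational levels (E_i in cm^-1)."""
    E = np.asarray(energies_cm, dtype=float)
    g = np.asarray(degeneracies, dtype=float)
    return float(np.sum(g * np.exp(-C2 * E / T)))

# Toy two-level example at room temperature.
Q = partition_function([0.0, 100.0], [1, 3], 296.0)
```

    Q(T) grows with temperature as higher levels become thermally populated, which is why a hot line list needs a much larger energy cutoff than a room-temperature one.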

  6. [Connotation characterization and evaluation of ecological well-being based on ecosystem service theory].

    PubMed

    Zang, Zheng; Zou, Xin-Qing

    2016-04-22

    China is advocating ecological civilization construction. Further research on the relation between ecosystem services and human well-being is therefore of both theoretical and practical significance. Drawing on related research, this paper defined the concept and connotation of ecological well-being based on ecosystem service theory. Referencing the theory of national economic accounting and related research, evaluation indicators of ecological well-being supply and consumption were established. A quantitative characterization and evaluation method for the regional ecological well-being red line was proposed on the basis of the location quotient. The evaluation of ecological well-being in mainland China in 2012 was then taken as an example for empirical research. The results showed that the net product values of 6 ecosystems, including cultivated land, forest land, grassland, wetland, water area and unused land, were respectively 1481.925, 8194.806, 4176.277, 4245.760, 3177.084 and 133.762 billion CNY. Spatial heterogeneity of ecosystem net product across provinces was distinct. Ecological well-being per capita of forest land, grassland, wetland, cultivated land and unused land in eastern and middle provinces was under the red line and less than the national average. The spatial distribution of 9 kinds of ecological well-being per capita split at Hu's line, with high values in the northwest and low values in the southeast, a contrast aggravated by differences in population density and land resource endowment.
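    The location-quotient construction used above compares a region's share of some quantity with its share of a reference total; a sketch with invented numbers (the paper's exact indicator definitions may differ):

```python
def location_quotient(regional_value, regional_total, national_value, national_total):
    """LQ = (regional_value / regional_total) / (national_value / national_total).
    LQ < 1 flags a region whose share falls below the national benchmark, which
    is how a 'red line' threshold can be operationalized."""
    return (regional_value / regional_total) / (national_value / national_total)

# Hypothetical province: 2% of the national forest-land ecological product value
# but 5% of the national population.
lq = location_quotient(2.0, 100.0, 5.0, 100.0)
```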

  7. High Speed Jet Noise Prediction Using Large Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Lele, Sanjiva K.

    2002-01-01

    Current methods for predicting the noise of high speed jets are largely empirical. These empirical methods are based on jet noise data gathered primarily by varying the jet flow speed and jet temperature for a fixed nozzle geometry. Efforts have been made to fold the noise data of co-annular (multi-stream) jets, and the changes associated with forward flight, into these empirical correlations. But ultimately these empirical methods fail to provide suitable guidance in the selection of new, low-noise nozzle designs. This motivates the development of a new class of prediction methods based on computational simulations, in an attempt to remove the empiricism of present day noise predictions.

  8. Barriers, facilitators and preferences for the physical activity of school children. Rationale and methods of a mixed study

    PubMed Central

    2012-01-01

    Background Physical activity interventions in the school environment appear to have shown some effectiveness in controlling the current obesity epidemic in children. However, the complexity of the behaviours and the diversity of influences related to this problem suggest that we urgently need new lines of insight into how to support comprehensive population-level intervention strategies. The aim of this study was to determine the perceptions of children from Cuenca regarding the environmental barriers, facilitators and preferences for their physical activity. Methods/Design We used a mixed-method design combining two qualitative methods (analysis of individual drawings and focus groups) with the quantitative measurement of physical activity through accelerometers, in a theoretical sample of 121 children aged 9 and 11 years from schools in the province of Cuenca, Spain. Conclusions A mixed-method study is an appropriate strategy for understanding children's perceptions of barriers and facilitators for physical activity, using qualitative methods to gain a deeper understanding of their points of view and quantitative methods to triangulate the participants' discourse with empirical data. We consider this an innovative approach that could provide knowledge for the development of more effective interventions to prevent childhood overweight. PMID:22978490

  9. Analysis of airborne MAIS imaging spectrometric data for mineral exploration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Jinnian; Zheng Lanfen; Tong Qingxi

    1996-11-01

    The high spectral resolution imaging spectrometric system makes quantitative analysis and mapping of surface composition possible. The key issue is the quantitative approach for analysis of surface parameters from imaging spectrometer data. This paper describes the methods and the stages of quantitative analysis. (1) Extracting surface reflectance from the imaging spectrometer image. Laboratory and in-flight field measurements are conducted for calibration of imaging spectrometer data, and atmospheric correction has also been used to obtain ground reflectance using the empirical line method and radiative transfer modeling. (2) Determining the quantitative relationship between absorption band parameters from the imaging spectrometer data and the chemical composition of minerals. (3) Spectral comparison between spectra from a spectral library and spectra derived from the imagery. A wavelet analysis-based spectrum-matching technique for quantitative analysis of imaging spectrometer data has been developed. Airborne MAIS imaging spectrometer data were used for analysis, and the results have been applied to mineral and petroleum exploration in the Tarim Basin area, China. 8 refs., 8 figs.
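    The empirical line method in step (1) reduces, per band, to a linear fit between at-sensor radiance and field-measured reflectance over calibration targets; all numbers below are invented, not MAIS data:

```python
import numpy as np

# Hypothetical per-band calibration: at-sensor radiance over a dark and a
# bright field target, with their ground-measured reflectances.
radiance = np.array([12.0, 55.0])      # at-sensor radiance (arbitrary units)
reflectance = np.array([0.05, 0.45])   # field-measured surface reflectance

# Empirical line method: reflectance = gain * radiance + offset,
# solved here from the two calibration targets.
gain, offset = np.polyfit(radiance, reflectance, 1)

# Apply the per-band correction to an image band.
band = np.array([[12.0, 30.0],
                 [55.0, 20.0]])
surface_refl = gain * band + offset
```

    With more than two targets, the same `np.polyfit` call performs a least-squares fit per band instead of an exact two-point solution.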

  10. Deriving photometric redshifts using fuzzy archetypes and self-organizing maps - I. Methodology

    NASA Astrophysics Data System (ADS)

    Speagle, Joshua S.; Eisenstein, Daniel J.

    2017-07-01

    We propose a method to substantially increase the flexibility and power of template fitting-based photometric redshifts by transforming a large number of galaxy spectral templates into a corresponding collection of 'fuzzy archetypes' using a suitable set of perturbative priors designed to account for empirical variation in dust attenuation and emission-line strengths. To bypass widely separated degeneracies in parameter space (e.g. the redshift-reddening degeneracy), we train self-organizing maps (SOMs) on large 'model catalogues' generated from Monte Carlo sampling of our fuzzy archetypes to cluster the predicted observables in a topologically smooth fashion. Subsequent sampling over the SOM then allows full reconstruction of the relevant probability distribution functions (PDFs). This combined approach enables the multimodal exploration of known variation among galaxy spectral energy distributions with minimal modelling assumptions. We demonstrate the power of this approach to recover full redshift PDFs using discrete Markov chain Monte Carlo sampling methods combined with SOMs constructed from Large Synoptic Survey Telescope ugrizY and Euclid YJH mock photometry.
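    A minimal 1-D self-organizing map in the spirit described above (the actual work trains larger 2-D SOMs on photometric model catalogues); the toy data, map size, and schedules are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a "model catalogue": 2-D features drawn from two clusters.
data = np.vstack([rng.normal(0.0, 0.1, (200, 2)),
                  rng.normal(1.0, 0.1, (200, 2))])

# Minimal 1-D self-organizing map with 10 nodes.
n_nodes = 10
weights = rng.random((n_nodes, 2))
positions = np.arange(n_nodes)

for epoch in range(20):
    lr = 0.5 * (1.0 - epoch / 20.0)            # decaying learning rate
    sigma = 3.0 * (1.0 - epoch / 20.0) + 0.5   # decaying neighbourhood width
    for x in rng.permutation(data):
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))      # best-matching unit
        h = np.exp(-((positions - bmu) ** 2) / (2.0 * sigma**2))  # neighbourhood kernel
        weights += lr * h[:, None] * (x - weights)                # pull nodes toward x

# Quantization error: mean distance from each point to its nearest node.
qe = float(np.mean([np.min(np.linalg.norm(weights - x, axis=1)) for x in data]))
```

    The neighbourhood kernel is what keeps the mapping topologically smooth: nearby nodes are updated together, so similar observables end up in adjacent cells.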

  11. Computing frequency by using generalized zero-crossing applied to intrinsic mode functions

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2006-01-01

    This invention presents a method for computing instantaneous frequency by applying Empirical Mode Decomposition to a signal and using Generalized Zero-Crossing (GZC) and Extrema Sifting. The GZC approach is the most direct and local, and also the most accurate in the mean. Furthermore, this approach also gives a statistical measure of the scattering of the frequency value. For most practical applications, this mean frequency localized down to a quarter of a wave period is already a well-accepted result. As this method physically measures the period, or part of it, the values obtained can serve as the best local mean over the period to which it applies. Through Extrema Sifting, instead of cubic spline fitting, this invention constructs the upper and lower envelopes by connecting local maxima points and local minima points of the signal with straight lines, respectively, when extracting a collection of Intrinsic Mode Functions (IMFs) from a signal under consideration.
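    A minimal sketch of the zero-crossing part of the idea: locate crossings by sign changes, refine them by linear interpolation, and convert each half-period into a local frequency estimate (the full GZC also folds in extrema and whole periods); the test signal is invented:

```python
import numpy as np

fs = 1000.0                        # sampling rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2.0 * np.pi * 7.0 * t)  # 7 Hz test tone

# Locate zero crossings as sign changes between adjacent samples, then refine
# each crossing time by linear interpolation between the bracketing samples.
idx = np.where(np.signbit(x[:-1]) != np.signbit(x[1:]))[0]
crossings = t[idx] - x[idx] * (t[idx + 1] - t[idx]) / (x[idx + 1] - x[idx])

# Successive crossings bound half-periods; each yields a local frequency estimate.
half_periods = np.diff(crossings)
inst_freq = 1.0 / (2.0 * half_periods)
```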

  12. Protonation effects on the UV/Vis absorption spectra of imatinib: a theoretical and experimental study.

    PubMed

    Grante, Ilze; Actins, Andris; Orola, Liana

    2014-08-14

    An experimental and theoretical investigation of protonation effects on the UV/Vis absorption spectra of imatinib showed systematic changes of absorption depending on the pH, and a new absorption band appeared below pH 2. These changes in the UV/Vis absorption spectra were interpreted using quantum chemical calculations. The geometry of various imatinib cations in the gas phase and in ethanol solution was optimized with the DFT/B3LYP method. The resultant geometries were compared to the experimentally determined crystal structures of imatinib salts. The semi-empirical ZINDO-CI method was employed to calculate the absorption lines and electronic transitions. Our study suggests that the formation of the extra near-UV absorption band resulted from an increase of the imatinib trication concentration in the solution, while the rapid increase of the first absorption maximum could be attributed to the formation of both the imatinib trication and the tetracation. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. An empirical inferential method of estimating nitrogen deposition to Mediterranean-type ecosystems: the San Bernardino Mountains case study

    Treesearch

    A. Bytnerowicz; R.F. Johnson; L. Zhang; G.D. Jenerette; M.E. Fenn; S.L. Schilling; I. Gonzalez-Fernandez

    2015-01-01

    The empirical inferential method (EIM) allows for spatially and temporally-dense estimates of atmospheric nitrogen (N) deposition to Mediterranean ecosystems. This method, set within a GIS platform, is based on ambient concentrations of NH3, NO, NO2 and HNO3; surface conductance of NH4...

  14. A New Sample Size Formula for Regression.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.

    The focus of this research was to determine the efficacy of a new method of selecting sample sizes for multiple linear regression. A Monte Carlo simulation was used to study both empirical predictive power rates and empirical statistical power rates of the new method and seven other methods: those of C. N. Park and A. L. Dudycha (1974); J. Cohen…

  15. CLASP2: The Chromospheric LAyer Spectro-Polarimeter

    NASA Technical Reports Server (NTRS)

    Mckenzie, D. E.; Ishikawa, R.; Bueno, J. Trujillo; Auchere, F.; Rachmeler, L.; Kubo, M.; Kobayashi, K.; Winebarger, A.; Bethge, C.; Narukage, N.

    2017-01-01

    A major remaining challenge for heliophysics is to decipher the magnetic structure of the chromosphere, due to its "large role in defining how energy is transported into the corona and solar wind" (NASA's Heliophysics Roadmap). Recent observational advances enabled by the Interface Region Imaging Spectrometer (IRIS) have revolutionized our view of the critical role this highly dynamic interface between the photosphere and corona plays in energizing and structuring the outer solar atmosphere. Despite these advances, a major impediment to better understanding the solar atmosphere is our lack of empirical knowledge regarding the direction and strength of the magnetic field in the upper chromosphere. Such measurements are crucial to address several major unresolved issues in solar physics: for example, to constrain the energy flux carried by the Alfven waves propagating through the chromosphere (De Pontieu et al., 2014), and to determine the height at which the plasma β = 1 transition occurs, which has important consequences for the braiding of magnetic fields (Cirtain et al., 2013; Guerreiro et al., 2014), for propagation and mode conversion of waves (Tian et al., 2014a; Straus et al., 2008) and for non-linear force-free extrapolation methods that are key to determining what drives instabilities such as flares or coronal mass ejections (e.g., De Rosa et al., 2009). The most reliable method used to determine the solar magnetic field vector is the observation and interpretation of polarization signals in spectral lines, associated with the Zeeman and Hanle effects. Magnetically sensitive ultraviolet spectral lines formed in the upper chromosphere and transition region provide a powerful tool with which to probe this key boundary region (e.g., Trujillo Bueno, 2014). Probing the magnetic nature of the chromosphere requires measurement of the Stokes I, Q, U and V profiles of the relevant spectral lines (of which Q, U and V encode the magnetic field information).

  17. Brain coordination dynamics: True and false faces of phase synchrony and metastability

    PubMed Central

    Tognoli, Emmanuelle; Kelso, J.A. Scott

    2009-01-01

    Understanding the coordination of multiple parts in a complex system such as the brain is a fundamental challenge. We present a theoretical model of cortical coordination dynamics that shows how brain areas may cooperate (integration) and at the same time retain their functional specificity (segregation). This model expresses a range of desirable properties that the brain is known to exhibit, including self-organization, multi-functionality, metastability and switching. Empirically, the model motivates a thorough investigation of collective phase relationships among brain oscillations in neurophysiological data. The most serious obstacle to interpreting coupled oscillations as genuine evidence of inter-areal coordination in the brain stems from volume conduction of electrical fields. Spurious coupling due to volume conduction gives rise to zero-lag (in-phase) and antiphase synchronization whose magnitude and persistence obscure the subtle expression of real synchrony. Through forward modeling and the help of a novel colorimetric method, we show how true synchronization can be deciphered from continuous EEG patterns. Developing empirical efforts along the lines of continuous EEG analysis constitutes a major response to the challenge of understanding how different brain areas work together. Key predictions of cortical coordination dynamics can now be tested thereby revealing the essential modus operandi of the intact living brain. PMID:18938209
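    Phase relationships of this kind are commonly quantified with a phase-locking value (PLV); the sketch below (not the authors' colorimetric method, and with invented test signals) contrasts a genuinely phase-locked pair with a drifting one, using an FFT-based analytic signal:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (numpy-only stand-in for scipy.signal.hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def plv(x, y):
    """Phase-locking value |<exp(i*(phi_x - phi_y))>|: 1 = perfect locking.
    Caveat from the abstract: volume conduction also yields high PLV at zero
    or pi lag, so high PLV alone does not prove genuine coordination."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

t = np.arange(0.0, 2.0, 0.001)
rng = np.random.default_rng(2)
a = np.sin(2.0 * np.pi * 10.0 * t)
b = np.sin(2.0 * np.pi * 10.0 * t + 0.7)  # constant lag: genuine locking
c = np.sin(2.0 * np.pi * 10.0 * t + np.cumsum(rng.normal(0.0, 0.2, t.size)))  # drifting phase
```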

  18. The Ca II infrared triplet's performance as an activity indicator compared to Ca II H and K. Empirical relations to convert Ca II infrared triplet measurements to common activity indices

    NASA Astrophysics Data System (ADS)

    Martin, J.; Fuhrmeister, B.; Mittag, M.; Schmidt, T. O. B.; Hempelmann, A.; González-Pérez, J. N.; Schmitt, J. H. M. M.

    2017-09-01

    Aims: A large number of Calcium infrared triplet (IRT) spectra are expected from the Gaia and CARMENES missions. Conversion of these spectra into known activity indicators will allow analysis of their temporal evolution to a better degree. We set out to find such a conversion formula and to determine its robustness. Methods: We have compared 2274 Ca II IRT spectra of active main-sequence F to K stars taken by the TIGRE telescope with those of inactive stars of the same spectral type. After normalizing and applying rotational broadening, we subtracted the comparison spectra to find the chromospheric excess flux caused by activity. We obtained the total excess flux, and compared it to established activity indices derived from the Ca II H and K lines, the spectra of which were obtained simultaneously to the infrared spectra. Results: The excess flux in the Ca II IRT is found to correlate well with R'HK and R+HK, as well as SMWO, if the B - V dependency is taken into account. We find an empirical conversion formula to calculate the corresponding value of one activity indicator from the measurement of another, by comparing groups of data points of stars with similar B - V.

  19. Mirroring, Gestaltswitching and Transformative Social Learning: Stepping Stones for Developing Sustainability Competence

    ERIC Educational Resources Information Center

    Wals, Arjen E. J.

    2010-01-01

    Purpose: The purpose of this paper is to identify components and educational design principles for strengthening sustainability competence in and through higher education. Design/methodology/approach: This is a conceptual paper that uses an exemplary autobiographical empirical case study in order to illustrate and support a line of reasoning.…

  20. Mindfulness and Behavioral Parent Training: Commentary

    ERIC Educational Resources Information Center

    Eyberg, Sheila M.; Graham-Pole, John R.

    2005-01-01

    We review the description of mindfulness-based parent training (MBPT) and the argument that mindfulness practice offers a way to bring behavioral parent training (BPT) in line with current empirical knowledge. The strength of the proposed MBPT model is the attention it draws to process issues in BPT. We suggest, however, that it may not be…

  1. Undoing Bad Upbringing through Contemplation: An Aristotelian Reconstruction

    ERIC Educational Resources Information Center

    Kristjánsson, Kristján

    2014-01-01

    The aim of this article is to reconstruct two counter-intuitive Aristotelian theses--about contemplation as the culmination of the good life and about the impossibility of undoing bad upbringing--to bring them into line with current empirical research, as well as with the essentials of an overall Aristotelian approach to moral education. I start…

  2. Crossing the Line: When Pedagogical Relationships Go Awry

    ERIC Educational Resources Information Center

    Johnson, Tara Star

    2010-01-01

    Background/Context: Very little empirical research has been conducted on the issue of educator sexual misconduct (ESM) in secondary settings. The few reports available typically treat a larger social issue, such as sexual harassment or child abuse; therefore, data on ESM specifically must be extrapolated. When such data are obtained, the focus has…

  3. Encouraging Student Participation in an On-Line Course Using "Pull" Initiatives

    ERIC Educational Resources Information Center

    Peachey, Paul; Jones, Paul; Jones, Amanda

    2006-01-01

    This paper presents an empirical study involving initiatives that encouraged students to log onto online courses in entrepreneurship delivered by the University of Glamorgan. The aim of the research was to explore items of interest to the online students that may increase participation in the forums and hence potentially enhanced engagement with…

  4. Behaviour Change Policy Agendas for "Vulnerable" Subjectivities: The Dangers of Therapeutic Governance and Its New Entrepreneurs

    ERIC Educational Resources Information Center

    Ecclestone, Kathryn

    2017-01-01

    Apocalyptic crisis discourses of mental health problems and psycho-emotional dysfunction are integral to behaviour change agendas across seemingly different policy arenas. Bringing these agendas together opens up new theoretical and empirical lines of enquiry about the symbioses and contradictions surrounding the human subjects they target. The…

  5. Boosting the Potency of Resistance: Combining the Motivational Forces of Inoculation and Psychological Reactance

    ERIC Educational Resources Information Center

    Miller, Claude H.; Ivanov, Bobi; Sims, Jeanetta; Compton, Josh; Harrison, Kylie J.; Parker, Kimberly A.; Parker, James L.; Averbeck, Joshua M.

    2013-01-01

    The efficacy of inoculation theory has been confirmed by decades of empirical research, yet optimizing its effectiveness remains a vibrant line of investigation. The present research turns to psychological reactance theory for a means of enhancing the core mechanisms of inoculation--threat and refutational preemption. Findings from a multisite…

  6. Environmental Moderators of Genetic Influences on Adolescent Delinquent Involvement and Victimization

    ERIC Educational Resources Information Center

    Beaver, Kevin M.

    2011-01-01

    A growing body of empirical research reveals that genetic factors account for a substantial amount of variance in measures of antisocial behaviors. At the same time, evidence is also emerging indicating that certain environmental factors moderate the effects that genetic factors have on antisocial outcomes. Despite this line of research, much…

  7. An Empirical Typology of the Latent Programmatic Structure of Community College Student Success Programs

    ERIC Educational Resources Information Center

    Hatch, Deryl K.; Bohlig, E. Michael

    2016-01-01

    The definition and description of student success programs in the literature (e.g., orientation, first-year seminars, learning communities, etc.) suggest underlying programmatic similarities. Yet researchers to date typically depend on ambiguous labels to delimit studies, resulting in loosely related but separate research lines and few…

  8. Route Planning and Route Choice: An Empirical Investigation into Information Processing and Decision Making in Orienteering.

    ERIC Educational Resources Information Center

    Seiler, Roland

    1989-01-01

    Investigates kinds of map information selected and supplementary information desired by experienced orienteers. Reports, based on lab and field studies, that contour lines were the most important map information, followed by information reducing physical or technical requirements. Concludes action theory is applicable to decision-making…

  9. Discovery of Par 1802 as a Low-Mass, Pre-Main-Sequence Eclipsing Binary in the Orion Star-Forming Region

    NASA Astrophysics Data System (ADS)

    Cargile, P. A.; Stassun, K. G.; Mathieu, R. D.

    2008-02-01

    We report the discovery of a pre-main-sequence (PMS), low-mass, double-lined, spectroscopic, eclipsing binary in the Orion star-forming region. We present our observations, including radial velocities derived from optical high-resolution spectroscopy, and present an orbit solution that permits the determination of precise empirical masses for both components of the system. We find that Par 1802 is composed of two equal-mass (0.39 +/- 0.03, 0.40 +/- 0.03 M⊙) stars in a circular, 4.7 day orbit. There is strong evidence, such as the system exhibiting strong Li lines and a center-of-mass velocity consistent with cluster membership, that this system is a member of the Orion star-forming region and quite possibly the Orion Nebula Cluster, and therefore has an age of only a few million years. As there are currently only a few empirical mass and radius measurements for low-mass, PMS stars, this system presents an interesting test for the predictions of current theoretical models of PMS stellar evolution.
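    For a double-lined spectroscopic binary, the component masses (times sin³i) follow directly from the period and the two radial-velocity semi-amplitudes; a sketch using the abstract's 4.7-day circular orbit but invented semi-amplitudes:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
MSUN = 1.989e30    # solar mass, kg

def msin3i(P_days, K1_kms, K2_kms, e=0.0):
    """(m1 sin^3 i, m2 sin^3 i) in solar masses for a double-lined binary:
    m1 sin^3 i = (1 - e^2)^(3/2) * P * K2 * (K1 + K2)^2 / (2*pi*G), and
    symmetrically for m2 with K1."""
    P = P_days * 86400.0
    K1, K2 = K1_kms * 1.0e3, K2_kms * 1.0e3
    f = (1.0 - e**2) ** 1.5 * P / (2.0 * math.pi * G)
    return (f * K2 * (K1 + K2) ** 2 / MSUN,
            f * K1 * (K1 + K2) ** 2 / MSUN)

# Hypothetical equal semi-amplitudes, as expected for equal-mass components.
m1, m2 = msin3i(4.7, 55.0, 55.0)
```

    An eclipsing geometry pins sin i near 1, which is why eclipsing double-lined systems yield essentially model-independent masses.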

  10. The importance of nuclear quantum effects in spectral line broadening of optical spectra and electrostatic properties in aromatic chromophores.

    PubMed

    Law, Y K; Hassanali, A A

    2018-03-14

    In this work, we examine the importance of nuclear quantum effects on capturing the line broadening and vibronic structure of optical spectra. We determine the absorption spectra of three aromatic molecules indole, pyridine, and benzene using time dependent density functional theory with several molecular dynamics sampling protocols: force-field based empirical potentials, ab initio simulations, and finally path-integrals for the inclusion of nuclear quantum effects. We show that the absorption spectrum for all these chromophores are similarly broadened in the presence of nuclear quantum effects regardless of the presence of hydrogen bond donor or acceptor groups. We also show that simulations incorporating nuclear quantum effects are able to reproduce the heterogeneous broadening of the absorption spectra even with empirical force fields. The spectral broadening associated with nuclear quantum effects can be accounted for by the broadened distribution of chromophore size as revealed by a particle in the box model. We also highlight the role that nuclear quantum effects have on the underlying electronic structure of aromatic molecules as probed by various electrostatic properties.
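    The particle-in-a-box argument can be made concrete: the lowest transition energy scales as 1/L², so a broader distribution of effective chromophore lengths L maps directly onto a broader line; the Gaussian size distributions below are invented stand-ins for classical versus quantum sampling:

```python
import numpy as np

H = 6.62607015e-34     # Planck constant, J s
ME = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19   # J per eV

def gap_ev(L_nm):
    """Lowest particle-in-a-box transition (n=1 -> n=2): E = 3 h^2 / (8 m L^2)."""
    L = np.asarray(L_nm, dtype=float) * 1e-9
    return 3.0 * H**2 / (8.0 * ME * L**2) / EV

rng = np.random.default_rng(4)
classical_L = rng.normal(0.50, 0.01, 100_000)  # nm, narrow size distribution
quantum_L = rng.normal(0.50, 0.03, 100_000)    # nm, broadened by zero-point motion
width_classical = float(np.std(gap_ev(classical_L)))
width_quantum = float(np.std(gap_ev(quantum_L)))
```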

  12. Combining quantitative trait loci analysis with physiological models to predict genotype-specific transpiration rates.

    PubMed

    Reuning, Gretchen A; Bauerle, William L; Mullen, Jack L; McKay, John K

    2015-04-01

    Transpiration is controlled by evaporative demand and stomatal conductance (gs), and there can be substantial genetic variation in gs. A key parameter in empirical models of transpiration is minimum stomatal conductance (g0), a trait that can be measured and has a large effect on gs and transpiration. In Arabidopsis thaliana, g0 exhibits both environmental and genetic variation, and quantitative trait loci (QTL) have been mapped. We used this information to create a genetically parameterized empirical model to predict transpiration of genotypes. For the parental lines, this worked well. However, in a recombinant inbred population, the predictions proved less accurate. When based only upon their genotype at a single g0 QTL, genotypes were less distinct than our model predicted. Follow-up experiments indicated that both genotype by environment interaction and a polygenic inheritance complicate the application of genetic effects into physiological models. The use of ecophysiological or 'crop' models for predicting transpiration of novel genetic lines will benefit from incorporating further knowledge of the genetic control and degree of independence of core traits/parameters underlying gs variation. © 2014 John Wiley & Sons Ltd.
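    A common empirical conductance form with an explicit g0 intercept is the Ball-Berry model; this sketch (assumed parameter values and a crude gs·VPD/P transpiration proxy, not the paper's actual model) shows how two genotypes differing only in g0 separate:

```python
def ball_berry_gs(A, hs, cs, g0, g1):
    """Ball-Berry style empirical stomatal conductance: gs = g0 + g1 * A * hs / cs,
    where g0 is the (genotype-specific) minimum conductance intercept, A is net
    assimilation, hs relative humidity, and cs CO2 at the leaf surface."""
    return g0 + g1 * A * hs / cs

def transpiration(gs, vpd_kpa, p_kpa=101.3):
    """Crude leaf transpiration proxy: E = gs * VPD / P (mol m^-2 s^-1)."""
    return gs * vpd_kpa / p_kpa

# Two hypothetical genotypes differing only in their g0 allele (invented values).
gs_low = ball_berry_gs(A=10.0, hs=0.6, cs=400.0, g0=0.01, g1=9.0)
gs_high = ball_berry_gs(A=10.0, hs=0.6, cs=400.0, g0=0.05, g1=9.0)
```

    In this additive form, a g0 QTL shifts gs by a constant offset across environments, which is exactly the genotype effect the study tried to propagate into a physiological model.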

  13. Accurate Theoretical Methane Line Lists in the Infrared up to 3000 K and Quasi-continuum Absorption/Emission Modeling for Astrophysical Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rey, Michael; Tyuterev, Vladimir G.; Nikitin, Andrei V., E-mail: michael.rey@univ-reims.fr

    Modeling atmospheres of hot exoplanets and brown dwarfs requires high-T databases that include methane as the major hydrocarbon. We report a complete theoretical line list of ¹²CH₄ in the infrared range 0–13,400 cm⁻¹ up to T_max = 3000 K computed via a full quantum-mechanical method from ab initio potential energy and dipole moment surfaces. Over 150 billion transitions were generated with the lower rovibrational energy cutoff 33,000 cm⁻¹ and intensity cutoff down to 10⁻³³ cm/molecule to ensure convergent opacity predictions. Empirical corrections for 3.7 million of the strongest transitions permitted line position accuracies of 0.001–0.01 cm⁻¹. Full data are partitioned into two sets. “Light lists” contain strong and medium transitions necessary for an accurate description of sharp features in absorption/emission spectra. For a fast and efficient modeling of quasi-continuum cross sections, billions of tiny lines are compressed in “super-line” libraries according to Rey et al. These combined data will be freely accessible via the TheoReTS information system (http://theorets.univ-reims.fr, http://theorets.tsu.ru), which provides a user-friendly interface for simulations of absorption coefficients, cross-sectional transmittance, and radiance. Comparisons with cold, room, and high-T experimental data show that the data reported here represent the first global theoretical methane lists suitable for high-resolution astrophysical applications.
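    Super-line compression amounts to binning huge numbers of weak lines onto a fixed wavenumber grid and summing their intensities per bin; a sketch with randomly generated lines (not the actual CH₄ list):

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented weak-line list: positions (cm^-1) and intensities (cm/molecule).
positions = rng.uniform(0.0, 100.0, 1_000_000)
intensities = 10.0 ** rng.uniform(-33.0, -27.0, positions.size)

# Super-line compression: sum the intensities of all lines falling in each
# narrow wavenumber bin and represent the bin by one effective line at its centre.
edges = np.linspace(0.0, 100.0, 10001)  # 0.01 cm^-1 grid
super_intensity, _ = np.histogram(positions, bins=edges, weights=intensities)
super_position = 0.5 * (edges[:-1] + edges[1:])
```

    The total opacity is conserved while the line count drops from millions to one entry per grid bin, which is what makes quasi-continuum cross-section modeling fast.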

  14. A polychromatic adaption of the Beer-Lambert model for spectral decomposition

    NASA Astrophysics Data System (ADS)

    Sellerer, Thorsten; Ehn, Sebastian; Mechlem, Korbinian; Pfeiffer, Franz; Herzen, Julia; Noël, Peter B.

    2017-03-01

    We present a semi-empirical forward model for spectral photon-counting CT which is fully compatible with state-of-the-art maximum-likelihood estimators (MLE) for basis material line integrals. The model relies on a minimal calibration effort, making the method applicable in routine clinical set-ups that require periodic re-calibration. In this work we present an experimental verification of the proposed method. The method uses an adapted Beer-Lambert model that describes the energy-dependent attenuation of a polychromatic x-ray spectrum using additional exponential terms. In an experimental dual-energy photon-counting CT setup based on a CdTe detector, the model accurately predicts the registered counts for an attenuated polychromatic spectrum. Deviations between model and measurement data lie within the Poisson statistical limit of the performed acquisitions, providing an effectively unbiased forward model. The experimental data also show that the model is capable of handling spectral distortions introduced by the photon-counting detector and CdTe sensor. The simplicity and high accuracy of the proposed model make it a viable forward model for MLE-based spectral decomposition methods without the need for costly and time-consuming characterization of the system response.
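
The adapted Beer-Lambert model described above, in which the registered counts in each energy bin are a sum of exponential terms in the basis-material line integrals, can be sketched as follows. This is a minimal illustration rather than the authors' implementation; the weight and effective-attenuation arrays are invented placeholders for values that a real calibration would supply.

```python
import numpy as np

def expected_counts(line_integrals, weights, eff_attn):
    """Semi-empirical polychromatic Beer-Lambert forward model (sketch).

    Each detector energy bin b is modelled as a sum of exponential terms:
        lambda_b = sum_k w[b, k] * exp(-sum_m mu[b, k, m] * A_m)
    where A_m are the basis-material line integrals. The weights w and
    effective attenuation coefficients mu would come from calibration.
    """
    A = np.asarray(line_integrals, dtype=float)          # shape (M,)
    expo = np.exp(-np.einsum("bkm,m->bk", eff_attn, A))  # shape (B, K)
    return np.sum(weights * expo, axis=1)                # shape (B,)

# Hypothetical two-bin, two-term, two-material calibration:
w = np.array([[1e5, 2e4], [8e4, 1e4]])
mu = np.array([[[0.2, 0.5], [0.4, 1.0]],
               [[0.1, 0.3], [0.2, 0.6]]])
print(expected_counts([0.0, 0.0], w, mu))   # unattenuated: row sums of w
```

In an MLE-based decomposition, a forward model of this shape would supply the expected counts inside the Poisson likelihood being maximized over the line integrals.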

  15. Point- and line-based transformation models for high resolution satellite image rectification

    NASA Astrophysics Data System (ADS)

    Abd Elrahman, Ahmed Mohamed Shaker

    Rigorous mathematical models with the aid of satellite ephemeris data can present the relationship between the satellite image space and the object space. With government-funded satellites, access to calibration and ephemeris data has allowed the development and use of these models. However, for commercial high-resolution satellites, which have recently been launched, these data are withheld from users, and therefore alternative empirical models should be used. In general, the existing empirical models are based on the use of control points and involve linking points in the image space and the corresponding points in the object space. But the lack of control points in some remote areas and the questionable accuracy of the identified discrete conjugate points provide a catalyst for the development of algorithms based on features other than control points. This research, concerned with image rectification and 3D geo-positioning determination using High-Resolution Satellite Imagery (HRSI), has two major objectives. First, the effects of satellite sensor characteristics, number of ground control points (GCPs), and terrain elevation variations on the performance of several point-based empirical models are studied. Second, a new mathematical model, using only linear features as control features, or linear features with a minimum number of GCPs, is developed. To meet the first objective, several experiments for different satellites such as Ikonos, QuickBird, and IRS-1D have been conducted using different point-based empirical models. Various data sets covering different terrain types are presented, and results from representative sets of the experiments are shown and analyzed. The results demonstrate the effectiveness and the superiority of these models under certain conditions. From the results obtained, several alternatives to circumvent the effects of the satellite sensor characteristics, the number of GCPs, and the terrain elevation variations are introduced. 
To meet the second objective, a new model named the Line Based Transformation Model (LBTM) is developed for HRSI rectification. The model has the flexibility to either solely use linear features or use linear features and a number of control points to define the image transformation parameters. Unlike point features, which must be explicitly defined, linear features have the advantage that they can be implicitly defined by any segment along the line. (Abstract shortened by UMI.)

  16. Soft X-ray-assisted detection method for airborne molecular contaminations (AMCs)

    NASA Astrophysics Data System (ADS)

    Kim, Changhyuk; Zuo, Zhili; Finger, Hartmut; Haep, Stefan; Asbach, Christof; Fissan, Heinz; Pui, David Y. H.

    2015-03-01

    Airborne molecular contaminations (AMCs) represent a wide range of gaseous contaminants in cleanrooms. Due to the unintentional nanoparticle or haze formation, as well as doping, caused by AMCs, improved monitoring and control methods for AMCs are urgently needed in the semiconductor industry. However, measuring ultra-low concentrations of AMCs in cleanrooms is difficult, especially behind a gas filter. In this study, a novel detection method for AMCs, which is on-line, economical, and applicable to diverse AMCs, was developed by employing gas-to-particle conversion with soft X-rays and then measuring the generated nanoparticles. A feasibility study of this method was conducted through evaluations of granular activated carbons (GACs), which are widely used AMC filter media. Sulfur dioxide (SO2) was used as the test AMC. Using this method, ultra-low concentrations of SO2 behind GACs were determined in terms of the concentrations of generated sulfuric acid (H2SO4) nanoparticles. By calculating SO2 concentrations from the nanoparticle concentrations using empirical correlation equations between them, the method showed remarkable sensitivity to SO2, down to parts-per-trillion levels, which are too low to detect using commercial gas sensors. The calculated SO2 concentrations also showed good agreement with those measured simultaneously by a commercial SO2 monitor at parts-per-billion levels.

  17. An Empirical Method for Deriving Grade Equivalence for University Entrance Qualifications: An Application to A Levels and the International Baccalaureate

    ERIC Educational Resources Information Center

    Green, Francis; Vignoles, Anna

    2012-01-01

    We present a method to compare different qualifications for entry to higher education by studying students' subsequent performance. Using this method for students holding either the International Baccalaureate (IB) or A-levels gaining their degrees in 2010, we estimate an "empirical" equivalence scale between IB grade points and UCAS…

  18. An Empirical Method Permitting Rapid Determination of the Area, Rate and Distribution of Water-Drop Impingement on an Airfoil of Arbitrary Section at Subsonic Speeds

    NASA Technical Reports Server (NTRS)

    Bergrun, N. R.

    1951-01-01

    An empirical method for the determination of the area, rate, and distribution of water-drop impingement on airfoils of arbitrary section is presented. The procedure represents an initial step toward the development of a method which is generally applicable in the design of thermal ice-prevention equipment for airplane wing and tail surfaces. Results given by the proposed empirical method are expected to be sufficiently accurate for the purpose of heated-wing design, and can be obtained from a few numerical computations once the velocity distribution over the airfoil has been determined. The empirical method presented for incompressible flow is based on results of extensive water-drop trajectory computations for five airfoil cases which consisted of 15-percent-thick airfoils encompassing a moderate lift-coefficient range. The differential equations pertaining to the paths of the drops were solved by a differential analyzer. The method developed for incompressible flow is extended to the calculation of area and rate of impingement on straight wings in subsonic compressible flow to indicate the probable effects of compressibility for airfoils at low subsonic Mach numbers.
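
The drop-path differential equations mentioned above, originally solved on a differential analyzer, can be illustrated with a simple numerical sketch. The linearized drag law, response time, and uniform freestream below are assumptions for illustration only; the actual method uses the computed velocity distribution over the airfoil.

```python
import numpy as np

def drop_path(v_air, tau, r0, v0, dt=1e-3, steps=2000):
    """Integrate a water-drop trajectory with linearized (Stokes-like) drag:
        dv/dt = (v_air(r) - v) / tau
    tau is the drop's response time; v_air maps position -> air velocity.
    Forward-Euler integration is enough for a sketch."""
    r, v = np.array(r0, dtype=float), np.array(v0, dtype=float)
    path = [r.copy()]
    for _ in range(steps):
        v += dt * (v_air(r) - v) / tau   # drag accelerates drop toward air
        r += dt * v                      # advance position
        path.append(r.copy())
    return np.array(path)

# Uniform freestream along +x, an illustrative stand-in for the airfoil
# velocity field the method assumes has already been determined:
path = drop_path(lambda r: np.array([50.0, 0.0]), tau=0.05,
                 r0=[0.0, 0.0], v0=[0.0, 0.0])
```

With a nonuniform velocity field, drops launched upstream of a body would follow curved paths, and the impingement area and rate follow from which trajectories strike the surface.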

  19. Memory, reasoning, and categorization: parallels and common mechanisms

    PubMed Central

    Hayes, Brett K.; Heit, Evan; Rotello, Caren M.

    2014-01-01

    Traditionally, memory, reasoning, and categorization have been treated as separate components of human cognition. We challenge this distinction, arguing that there is broad scope for crossover between the methods and theories developed for each task. The links between memory and reasoning are illustrated in a review of two lines of research. The first takes theoretical ideas (two-process accounts) and methodological tools (signal detection analysis, receiver operating characteristic curves) from memory research and applies them to important issues in reasoning research: relations between induction and deduction, and the belief bias effect. The second line of research introduces a task in which subjects make either memory or reasoning judgments for the same set of stimuli. Other than broader generalization for reasoning than memory, the results were similar for the two tasks, across a variety of experimental stimuli and manipulations. It was possible to simultaneously explain performance on both tasks within a single cognitive architecture, based on exemplar-based comparisons of similarity. The final sections explore evidence for empirical and processing links between inductive reasoning and categorization and between categorization and recognition. An important implication is that progress in all three of these fields will be expedited by further investigation of the many commonalities between these tasks. PMID:24987380

  1. Probing the Galactic Structure of the Milky Way with H II Regions

    NASA Astrophysics Data System (ADS)

    Red, Wesley Alexander; Wenger, Trey V.; Balser, Dana; Anderson, Loren; Bania, Thomas

    2018-01-01

    Mapping the structure of the Milky Way is challenging since we reside within the Galactic disk and distances are difficult to determine. Elemental abundances provide important constraints on theories of the formation and evolution of the Milky Way. HII regions are the brightest objects in the Galaxy at radio wavelengths and are detected across the entire Galactic disk. We use the Jansky Very Large Array (VLA) to observe the radio recombination line (RRL) and continuum emission of 120 Galactic HII regions located across the Galactic disk. In thermal equilibrium, metal abundances are expected to set the nebular electron temperature with high abundances producing low temperatures. We derive the metallicity of HII regions using an empirical relation between an HII region's radio recombination line-to-continuum ratio and nebular metallicity. Here we focus on a subset of 20 HII regions from our sample that have been well studied with the Green Bank Telescope (GBT) to test our data reduction pipeline and analysis methods. Our goal is to expand this study to the Southern skies with the Australia Telescope Compact Array and create a metallicity map of the entire Galactic disk.

  2. The line-locking hypothesis, absorption by intervening galaxies, and the z = 1.95 peak in redshifts

    NASA Technical Reports Server (NTRS)

    Burbidge, G.

    1978-01-01

    The controversy over whether the absorption spectrum in QSOs is intrinsic or extrinsic is approached with attention to the peak of redshifts at z = 1.95. Also considered are the line-locking and the intervening galaxy hypotheses. The line-locking hypothesis is based on observations that certain ratios found in absorption line QSOs are preferred, and leads inevitably to the conclusion that the absorption line systems are intrinsic. The intervening galaxy hypothesis is based on absorption redshifts resulting from given absorption cross-sections of galactic clusters and the intergalactic medium, and would lead to the theoretical conclusion that most QSOs show strong absorption, a conclusion which is not supported by empirical data. The 1.95 peak, on the other hand, is most probably an intrinsic property of QSOs. The peak is enhanced by redshift, and it is noted that both an emission and an absorption redshift peak are seen at 1.95.

  3. An Ab Initio Based Potential Energy Surface for Water

    NASA Technical Reports Server (NTRS)

    Partridge, Harry; Schwenke, David W.; Langhoff, Stephen R. (Technical Monitor)

    1996-01-01

    We report a new determination of the water potential energy surface. A high-quality ab initio potential energy surface (PES) and dipole moment function of water have been computed. This PES is empirically adjusted to improve the agreement between the computed line positions and those from the HITRAN 92 database. The adjustment is small; nonetheless, including an estimate of core (oxygen 1s) electron correlation greatly improves the agreement with experiment. Of the 27,245 assigned transitions in the HITRAN 92 database for H2(O-16), the overall root mean square (rms) deviation between the computed and observed line positions is 0.125 cm⁻¹. However, the deviations do not correspond to a normal distribution: 69% of the lines have errors less than 0.05 cm⁻¹. Overall, the agreement between the line intensities computed in the present work and those contained in the database is quite good; however, there are a significant number of line strengths which differ greatly.

  4. Hunting for extremely metal-poor emission-line galaxies in the Sloan Digital Sky Survey: MMT and 3.5 m APO observations

    NASA Astrophysics Data System (ADS)

    Izotov, Y. I.; Thuan, T. X.; Guseva, N. G.

    2012-10-01

    We present 6.5 m MMT and 3.5 m APO spectrophotometry of 69 H ii regions in 42 low-metallicity emission-line galaxies, selected from data release 7 of the Sloan Digital Sky Survey to have mostly [O iii]λ4959/Hβ ≲ 1 and [N ii]λ6583/Hβ ≲ 0.1. The electron temperature-sensitive emission line [O iii] λ4363 is detected in 53 H ii regions, allowing a direct abundance determination. The oxygen abundance in the remaining 16 H ii regions is derived using a semi-empirical method. The oxygen abundance of the galaxies in our sample ranges from 12 + log O/H ~ 7.1 to ~7.9, with 14 H ii regions in 7 galaxies having 12 + log O/H ≤ 7.35. In 5 of the latter galaxies, the oxygen abundance is derived here for the first time. Including other known extremely metal-deficient emission-line galaxies from the literature, e.g. SBS 0335-052W, SBS 0335-052E and I Zw 18, we have compiled a sample of the 17 most metal-deficient (with 12 + log O/H ≤ 7.35) emission-line galaxies known in the local universe. There appears to be a metallicity floor at 12 + log O/H ~ 6.9, suggesting that the matter from which dwarf emission-line galaxies formed was pre-enriched to that level by, e.g., Population III stars. Based on observations with the Multiple Mirror Telescope (MMT) and the 3.5 m Apache Point Observatory (APO) telescope. The MMT is operated by the MMT Observatory (MMTO), a joint venture of the Smithsonian Institution and the University of Arizona. The Apache Point Observatory 3.5 m telescope is owned and operated by the Astrophysical Research Consortium. Figures 1-3 and Tables 2-8 are available in electronic form at http://www.aanda.org

  5. Determination of the line shapes of atomic nitrogen resonance lines by magnetic scans

    NASA Technical Reports Server (NTRS)

    Lawrence, G. M.; Stone, E. J.; Kley, D.

    1976-01-01

    A technique is given for calibrating an atomic nitrogen resonance lamp for use in determining column densities of atoms in specific states. A discharge lamp emitting the NI multiplets at 1200 A and 1493 A is studied by obtaining absorption by atoms in a magnetic field (0-2.5 T). This magnetic scanning technique enables the determination of the absorbing atom column density, and an empirical curve of growth is obtained because the atomic f-value is known. Thus, the calibrated lamp can be used in the determination of atomic column densities.

  6. RNAi Screening in Spodoptera frugiperda.

    PubMed

    Ghosh, Subhanita; Singh, Gatikrushna; Sachdev, Bindiya; Kumar, Ajit; Malhotra, Pawan; Mukherjee, Sunil K; Bhatnagar, Raj K

    2016-01-01

    RNA interference is a potent and precise reverse-genetic approach for carrying out large-scale functional genomic studies in a given organism. During the past decade, RNAi has also emerged as an important investigative tool to understand the process of viral pathogenesis. Our laboratory has successfully generated transgenic reporter and RNAi sensor lines of Spodoptera frugiperda (Sf21) cells and developed a reversal-of-silencing assay via siRNA- or shRNA-guided screening to investigate RNAi factors or viral pathogenic factors with extraordinary fidelity. Here we describe empirical approaches and the conceptual understanding needed to execute successful RNAi screening in the Spodoptera frugiperda 21 cell line.

  7. Cultural transmission of civic attitudes.

    PubMed

    Miles-Touya, Daniel; Rossi, Máximo

    2016-01-01

    In this empirical paper we attempt to measure the separate influences on civic engagement of educational attainment and the cultural transmission of civic attitudes. Unlike most previous empirical work on this issue, we are able to approximate the cultural transmission of civic attitudes. We observe that civic returns to education are overstated when the transmission of civic attitudes is ignored. Moreover, the transmission of civic attitudes significantly enhances civic involvement and reinforces civic returns to education. Our findings are in line with the proposals of civic virtue theorists and grass-roots movements who suggest that citizenship education should be included in compulsory school curricula since, if not, families or local communities will only transmit their particular view of the world.

  8. Influence of strain on dislocation core in silicon

    NASA Astrophysics Data System (ADS)

    Pizzagalli, L.; Godet, J.; Brochard, S.

    2018-05-01

    First-principles, density-functional-based tight-binding, and semi-empirical interatomic potential calculations are performed to analyse the influence of large strains on the structure and stability of a 60° dislocation in silicon. Such strains typically arise during the mechanical testing of nanostructures like nanopillars or nanoparticles. We focus on bi-axial strains in the plane normal to the dislocation line. Our calculations surprisingly reveal that the dislocation core structure largely depends on the applied strain, for strain levels of about 5%. In the particular case of bi-axial compression, the transformation of the dislocation to a locally disordered configuration occurs for similar strain magnitudes. The formation of an opening, however, requires larger strains, of about 7.5%. Furthermore, our results suggest that electronic structure methods should be favoured to model dislocation cores in case of large strains whenever possible.

  9. Hilbert-Huang Spectrum as a new field for the identification of EEG event related de-/synchronization for BCI applications.

    PubMed

    Panoulas, Konstantinos I; Hadjileontiadis, Leontios J; Panas, Stavros M

    2008-01-01

    Brain-Computer Interfaces (BCI) usually utilize the suppression of the mu-rhythm during actual or imagined motor activity. In order to create a BCI system, a signal processing method is required to extract features upon which the discrimination is based. In this article, the Empirical Mode Decomposition along with the Hilbert-Huang Spectrum (HHS) is found to contain the necessary information to be considered as an input to a discriminator. Also, since the HHS defines amplitude and instantaneous frequency for each sample, it can be used for an online BCI system. Experimental results when the HHS is applied to EEG signals from an on-line database (BCI Competition III) show the potential of the proposed analysis to capture imagined motor activity, contributing to enhanced BCI performance.
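
A small sketch of the Hilbert step behind this analysis, using a synthetic signal whose 10 Hz "mu-rhythm" amplitude drops halfway through as a stand-in for event-related desynchronization. The sampling rate and signal are invented for illustration; in the full method, each intrinsic mode function produced by Empirical Mode Decomposition would be transformed this way to build the HHS.

```python
import numpy as np
from scipy.signal import hilbert

fs = 250.0                        # assumed EEG sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
# Toy mu-rhythm surrogate: a 10 Hz oscillation whose amplitude drops
# halfway through, mimicking event-related desynchronization (ERD).
amp = np.where(t < 1.0, 1.0, 0.3)
x = amp * np.sin(2 * np.pi * 10.0 * t)

analytic = hilbert(x)                        # analytic signal
inst_amp = np.abs(analytic)                  # instantaneous amplitude
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)  # Hz, per sample
```

The per-sample amplitude and instantaneous frequency are exactly the quantities the abstract notes make the HHS usable for an online discriminator: the amplitude track reveals the mu-suppression, time-localized to the sample.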

  10. Ab Initio and Improved Empirical Potentials for the Calculation of the Anharmonic Vibrational States and Intramolecular Mode Coupling of N-Methylacetamide

    NASA Technical Reports Server (NTRS)

    Gregurick, Susan K.; Chaban, Galina M.; Gerber, R. Benny; Kwak, Dochou (Technical Monitor)

    2001-01-01

    The second-order Moller-Plesset ab initio electronic structure method is used to compute points for the anharmonic mode-coupled potential energy surface of N-methylacetamide (NMA) in the trans(sub ct) configuration, including all degrees of freedom. The vibrational states and the spectroscopy are directly computed from this potential surface using the Correlation Corrected Vibrational Self-Consistent Field (CC-VSCF) method. The results are compared with CC-VSCF calculations using both the standard and improved empirical Amber-like force fields and available low-temperature experimental matrix data. Analysis of our calculated spectroscopic results shows that: (1) the excellent agreement between the ab initio CC-VSCF calculated frequencies and the experimental data suggests that the computed anharmonic potentials for N-methylacetamide are of a very high quality; (2) for most transitions, the vibrational frequencies obtained from the ab initio CC-VSCF method are superior to those obtained using the empirical CC-VSCF methods, when compared with experimental data, although the improved empirical force field yields better agreement with the experimental frequencies than a standard AMBER-type force field; (3) the empirical force field in particular overestimates anharmonic couplings for the amide-2 mode, the methyl asymmetric bending modes, the out-of-plane methyl bending modes, and the methyl distortions; (4) disagreement between the ab initio and empirical anharmonic couplings is greater than the disagreement between the frequencies, and thus the anharmonic part of the empirical potential seems to be less accurate than the harmonic contribution; and (5) both the empirical and ab initio CC-VSCF calculations predict a negligible anharmonic coupling between the amide-1 and other internal modes. The implication of this is that the intramolecular energy flow between the amide-1 and the other internal modes may be smaller than anticipated. 
These results may have important implications for the anharmonic force fields of peptides, for which N-methylacetamide is a model.

  11. A New and Fast Method for Smoothing Spectral Imaging Data

    NASA Technical Reports Server (NTRS)

    Gao, Bo-Cai; Liu, Ming; Davis, Curtiss O.

    1998-01-01

    The Airborne Visible Infrared Imaging Spectrometer (AVIRIS) acquires spectral imaging data covering the 0.4-2.5 micron wavelength range in 224 10-nm-wide channels from a NASA ER-2 aircraft at 20 km. More than half of the spectral region is affected by atmospheric gaseous absorption. Over the past decade, several techniques have been used to remove atmospheric effects from AVIRIS data for the derivation of surface reflectance spectra. An operational atmosphere removal algorithm (ATREM), which is based on theoretical modeling of atmospheric absorption and scattering effects, has been developed and updated for deriving surface reflectance spectra from AVIRIS data. Due to small errors in assumed wavelengths and errors in line parameters compiled in the HITRAN database, small spikes (particularly near the centers of the 0.94- and 1.14-micron water vapor bands) are present in the derived spectra. Similar small spikes are systematically present in entire ATREM output cubes. These spikes have distracted geologists who are interested in studying surface mineral features. A method based on the "global" fitting of spectra with low-order polynomials or other functions for removing these weak spikes has recently been developed by Boardman (this volume). In this paper, we describe another technique, which fits spectra "locally" based on cubic spline smoothing, for quick post-processing of ATREM apparent reflectance spectra derived from AVIRIS data. Results from our analysis of AVIRIS data acquired over the Cuprite mining district in Nevada in June 1995 are given. Comparisons between our smoothed spectra and those derived with the empirical line method are presented.
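
A cubic-spline smoothing pass of the kind described can be sketched as follows. The toy spectrum, spike positions, and smoothing factor are invented for illustration and are not the ATREM data; the spikes are placed at the 0.94- and 1.14-micron water-vapor band centers the abstract mentions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Toy apparent-reflectance spectrum with narrow residual spikes near
# the water-vapor band centers (shape and values are illustrative).
wl = np.linspace(0.4, 2.5, 224)            # microns, AVIRIS-like grid
smooth_truth = 0.3 + 0.1 * np.sin(3 * wl)  # assumed spike-free spectrum
spectrum = smooth_truth.copy()
spectrum[np.argmin(np.abs(wl - 0.94))] += 0.05   # spike at 0.94 um
spectrum[np.argmin(np.abs(wl - 1.14))] -= 0.05   # spike at 1.14 um

# Cubic smoothing spline: the smoothing factor s bounds the total
# squared residual, so isolated single-channel spikes are ignored
# while the broad spectral shape is preserved.
spl = UnivariateSpline(wl, spectrum, k=3, s=0.01)
smoothed = spl(wl)
```

The design choice mirrors the abstract's point: a local spline fit suppresses narrow artifacts without the global polynomial fit's risk of distorting broad mineral absorption features.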

  12. Empirical Data Collection and Analysis Using Camtasia and Transana

    ERIC Educational Resources Information Center

    Thorsteinsson, Gisli; Page, Tom

    2009-01-01

    One of the possible techniques for collecting empirical data is video recordings of a computer screen with specific screen capture software. This method for collecting empirical data shows how students use the BSCWII (Be Smart Cooperate Worldwide--a web based collaboration/groupware environment) to coordinate their work and collaborate in…

  13. Empirical Evidence or Intuition? An Activity Involving the Scientific Method

    ERIC Educational Resources Information Center

    Overway, Ken

    2007-01-01

    Students need a basic understanding of the scientific method during their introductory science classes, and for this purpose an activity was devised involving a game based on the famous Monty Hall problem. This particular activity allowed students to banish or confirm their intuition based on empirical evidence.
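
The Monty Hall game lends itself to exactly this kind of empirical check. A short simulation (a generic sketch, not the article's classroom materials) shows that switching wins about two-thirds of the time while staying wins about one-third:

```python
import random

def monty_hall(trials, switch, rng):
    """Simulate Monty Hall games and return the empirical win rate."""
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)    # door hiding the car
        pick = rng.randrange(3)   # contestant's first pick
        # Host opens a goat door that is neither the pick nor the car.
        # (Taking the lowest such door does not change the win rates.)
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

rng = random.Random(0)
stay = monty_hall(10_000, switch=False, rng=rng)   # ~1/3
swap = monty_hall(10_000, switch=True, rng=rng)    # ~2/3
```

Running the two strategies side by side is the empirical-evidence step: intuition often says the two closed doors are 50/50, and the simulated frequencies contradict it.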

  14. A Comparison of Approaches for Setting Proficiency Standards.

    ERIC Educational Resources Information Center

    Koffler, Stephen L.

    This research compared the cut-off scores estimated from an empirical procedure (Contrasting group method) to those determined from a more theoretical process (Nedelsky method). A methodological and statistical framework was also provided for analysis of the data to obtain the most appropriate standard using the empirical procedure. Data were…

  15. Undergraduate Instruction in Empirical Research Methods in Communication: Assessment and Recommendations

    ERIC Educational Resources Information Center

    Parks, Malcolm R.; Faw, Meara; Goldsmith, Daena

    2011-01-01

    This study assesses the current state of undergraduate instruction in empirical research methods in communication and offers recommendations for enhancing such instruction. Responses to an online questionnaire were received from 149 communication-related programs at four-year colleges and universities. Just over 85% of responding programs offered…

  16. The Impact of Collaboration, Empowerment, and Choice: An Empirical Examination of the Collaborative Course Development Method

    ERIC Educational Resources Information Center

    Aiken, K. Damon; Heinze, Timothy C.; Meuter, Matthew L.; Chapman, Kenneth J.

    2017-01-01

    This research empirically tests collaborative course development (CCD)-a pedagogy presented in the 2016 "Marketing Education Review Special Issue on Teaching Innovations". A team of researchers taught experimental courses using CCD methods (employing various techniques including syllabus building, "flex-tures," free-choice…

  17. Sensitivity of ab Initio vs Empirical Methods in Computing Structural Effects on NMR Chemical Shifts for the Example of Peptides.

    PubMed

    Sumowski, Chris Vanessa; Hanni, Matti; Schweizer, Sabine; Ochsenfeld, Christian

    2014-01-14

    The structural sensitivity of NMR chemical shifts as computed by quantum chemical methods is compared to a variety of empirical approaches for the example of a prototypical peptide, the 38-residue kaliotoxin KTX comprising 573 atoms. Despite the simplicity of empirical chemical shift prediction programs, the agreement with experimental results is rather good, underlining their usefulness. However, we show in our present work that they are highly insensitive to structural changes, which renders their use for validating predicted structures questionable. In contrast, quantum chemical methods show the expected high sensitivity to structural and electronic changes. This appears to be independent of the quantum chemical approach or the inclusion of solvent effects. For the latter, explicit solvent simulations with increasing numbers of snapshots were performed for two conformers of an eight-amino-acid sequence. In conclusion, the empirical approaches provide neither the expected magnitude nor the patterns of NMR chemical shifts that the clearly more costly ab initio methods yield upon structural changes. This restricts the use of empirical prediction programs in studies where peptide and protein structures are utilized for NMR chemical shift evaluation, such as in NMR refinement processes, structural model verifications, or calculations of NMR nuclear spin relaxation rates.

  18. Atmospheric transmission of CO2 laser radiation with application to laser Doppler systems

    NASA Technical Reports Server (NTRS)

    Murty, S. S. R.

    1975-01-01

    The molecular absorption coefficients of carbon dioxide, water vapor, and nitrous oxide are calculated at the P16, P18, P20, P22, and P24 lines of the CO2 laser for temperatures from 200 to 300 K and for pressures from 100 to 1100 mb. The temperature variation of the continuum absorption coefficient of water vapor is taken into account semi-empirically from Burch's data. The total absorption coefficient from the present calculations falls within ±20 percent of the results of McClatchey and Selby. The transmission loss experienced by the CO2 pulsed laser Doppler system was calculated for flight test conditions for the five P-lines. The total transmission loss is approximately 7 percent higher at the P16 line and 10 percent lower at the P24 line compared to the P20 line. Comparison of the CO2 laser with HF and DF laser transmission reveals that the P2(8) line of the DF laser at 3.8 micrometers is much better from the transmission point of view for altitudes below 10 km.

  19. An empirical Bayes method for updating inferences in analysis of quantitative trait loci using information from related genome scans.

    PubMed

    Zhang, Kui; Wiener, Howard; Beasley, Mark; George, Varghese; Amos, Christopher I; Allison, David B

    2006-08-01

    Individual genome scans for quantitative trait loci (QTL) mapping often suffer from low statistical power and imprecise estimates of QTL location and effect. This lack of precision yields large confidence intervals for QTL location, which are problematic for subsequent fine mapping and positional cloning. In prioritizing areas for follow-up after an initial genome scan and in evaluating the credibility of apparent linkage signals, investigators typically examine the results of other genome scans of the same phenotype and informally update their beliefs about which linkage signals in their scan most merit confidence and follow-up via a subjective-intuitive integration approach. A method that acknowledges the wisdom of this general paradigm but formally borrows information from other scans, increasing confidence and objectivity, would be a benefit. We developed an empirical Bayes analytic method to integrate information from multiple genome scans. The linkage statistic obtained from a single genome scan study is updated by incorporating statistics from other genome scans as prior information. This technique does not require that all studies have an identical marker map or a common estimated QTL effect. The updated linkage statistic can then be used for the estimation of QTL location and effect. We evaluate the performance of our method by using extensive simulations based on actual marker spacing and allele frequencies from available data. Results indicate that the empirical Bayes method can account for between-study heterogeneity, estimate the QTL location and effect more precisely, and provide narrower confidence intervals than results from any single individual study. We also compared the empirical Bayes method with a method originally developed for meta-analysis (a closely related but distinct purpose). In the face of marked heterogeneity among studies, the empirical Bayes method outperforms the comparator.
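
The flavor of such an update can be sketched with a simple precision-weighted shrinkage step. The function below is an illustrative stand-in for the authors' estimator, not a reproduction of it; it assumes a common locus and comparable statistics across scans, which the actual method does not require.

```python
import numpy as np

def eb_update(z_own, var_own, z_others):
    """Empirical-Bayes update of a linkage statistic (illustrative sketch).

    The statistic from one's own scan (z_own, with sampling variance
    var_own) is shrunk toward the mean of statistics at the same locus
    from other genome scans; the prior variance is estimated empirically
    from their spread.
    """
    z_others = np.asarray(z_others, dtype=float)
    prior_mean = z_others.mean()
    prior_var = max(z_others.var(ddof=1), 1e-12)
    # Precision-weighted (shrinkage) combination of own data and prior:
    w = prior_var / (prior_var + var_own)
    post_mean = w * z_own + (1 - w) * prior_mean
    post_var = prior_var * var_own / (prior_var + var_own)
    return post_mean, post_var

# Own scan gives z = 3.0; three other scans gave 1.5, 2.5, 2.0:
m, v = eb_update(3.0, var_own=1.0, z_others=[1.5, 2.5, 2.0])
```

The posterior variance is always smaller than the own-study sampling variance, which is the formal counterpart of the narrower confidence intervals reported in the abstract.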

  20. Syllabification of Final Consonant Clusters: A Salient Pronunciation Problem of Kurdish EFL Learners

    ERIC Educational Resources Information Center

    Keshavarz, Mohammad Hossein

    2017-01-01

    While there is a plethora of research on pronunciation problems of EFL learners with different L1 backgrounds, published empirical studies on syllabification errors of Iraqi Kurdish EFL learners are scarce. Therefore, to contribute to this line of research, the present study set out to investigate difficulties of this group of learners in the…

  1. Innovative Education? A Test of Specialist Mimicry or Generalist Assimilation in Trends in Charter School Specialization over Time

    ERIC Educational Resources Information Center

    Renzulli, Linda A.; Barr, Ashley B.; Paino, Maria

    2015-01-01

    By most media accounts, charter schools are innovative schools. But little empirical work interrogates this idea. We examine the growth and decline of specialist charter school mission statements as one indicator of innovation. In line with theories of resource partitioning, we find that specialist charter school missions--those asserting…

  2. Labor Market Frictions and Production Efficiency in Public Schools. Working Paper 163

    ERIC Educational Resources Information Center

    Kim, Dongwoo; Koedel, Cory; Ni, Shawn; Podgursky, Michael

    2016-01-01

    State-specific licensing policies and pension plans create mobility costs for educators who cross state lines. We empirically test whether these costs affect production in schools--a hypothesis that follows directly from economic theory on labor frictions--using geo-coded data from the lower-48 states. We find that achievement is lower in…

  3. Reversing Language Shift: Theoretical and Empirical Foundations of Assistance to Threatened Languages. Multilingual Matters 76.

    ERIC Educational Resources Information Center

    Fishman, Joshua A.

    On the basis of detailed analyses of 10 threatened language-in-society constellations and three formerly endangered but now secure constellations, this book develops a closely argued theory of worldwide efforts on behalf of reversing language shift (RLS). It also applies this same line of reasoning to the problems of maintaining the…

  4. Logical Functional Analysis in the Assessment and Treatment of Eating Disorders

    ERIC Educational Resources Information Center

    Ghaderi, Ata

    2007-01-01

    Cognitive behaviour therapy (CBT) is now suggested to be the treatment of choice for bulimia nervosa. However, it is also known that no more than approximately 50% of patients recover after receiving CBT. When the first-line manual-based treatment fails, the therapist should use other empirically supported treatments, and if they do not work or…

  5. Development and Validation of the Child Post-Traumatic Cognitions Inventory (CPTCI)

    ERIC Educational Resources Information Center

    Meiser-Stedman, Richard; Smith, Patrick; Bryant, Richard; Salmon, Karen; Yule, William; Dalgleish, Tim; Nixon, Reginald D. V.

    2009-01-01

    Background: Negative trauma-related cognitions have been found to be a significant factor in the maintenance of post-traumatic stress disorder (PTSD) in adults. Initial studies of such appraisals in trauma-exposed children and adolescents suggest that this is an important line of research in youth, yet empirically validated measures for use with…

  6. Training Addiction Counselors to Implement an Evidence-Based Intervention: Strategies for Increasing Organizational and Provider Acceptance

    ERIC Educational Resources Information Center

    Woo, Stephanie M.; Hepner, Kimberly A.; Gilbert, Elizabeth A.; Osilla, Karen Chan; Hunter, Sarah B.; Munoz, Ricardo F.; Watkins, Katherine E.

    2013-01-01

    One barrier to widespread public access to empirically supported treatments (ESTs) is the limited availability and high cost of professionals trained to deliver them. Our earlier work from 2 clinical trials demonstrated that front-line addiction counselors could be trained to deliver a manualized, group-based cognitive behavioral therapy (GCBT)…

  7. Who's Afraid Now? Reconstructing Canadian Citizenship Education through Transdisciplinarity

    ERIC Educational Resources Information Center

    Mitchell, Richard C.

    2010-01-01

    Viewed through the lenses of the United Nations Convention on the Rights of the Child (CRC), this article critically evaluates the growing controversy surrounding the teaching of human rights in Canada. In line with critiques and with previous empirical studies on the implementation of the United Nations (UN) Convention on the Rights of the Child…

  8. Text-Based On-Line Conferencing: A Conceptual and Empirical Analysis Using a Minimal Prototype.

    ERIC Educational Resources Information Center

    McCarthy, John C.; And Others

    1993-01-01

    Analyzes requirements for text-based online conferencing through the use of a minimal prototype. Topics discussed include prototyping with a minimal system; text-based communication; the system as a message passer versus the system as a shared data structure; and three exercises that showed how users worked with the prototype. (Contains 61…

  9. Racial Threat, Suspicion, and Police Behavior: The Impact of Race and Place in Traffic Enforcement

    ERIC Educational Resources Information Center

    Novak, Kenneth J.; Chamlin, Mitchell B.

    2012-01-01

    Racial bias in traffic enforcement has become a popular line of inquiry, but examinations into explanations for the disparity have been scant. The current research integrates theoretical insights from the racial threat hypothesis with inferences drawn from the empirical analyses of the factors that stimulate officer suspicion. The most intriguing…

  10. Exploring Pedagogical Leadership in Early Years Education in Saudi Arabia

    ERIC Educational Resources Information Center

    Alameen, Lubna; Male, Trevor; Palaiologou, Ioanna

    2015-01-01

    The empirical research for this paper was undertaken with leaders of early years setting in the Kingdom of Saudi Arabia (KSA). The investigation sought to establish to what extent it was possible to behave in line with the concept of pedagogical leadership in the twenty-first century in an Arab Muslim monarchy, dominated by Islam, where directive…

  11. Reading for Repetition and Reading for Translation: Do They Involve the Same Processes?

    ERIC Educational Resources Information Center

    Macizo, Pedro; Bajo, M. Teresa

    2006-01-01

    Theories of translation differ in the role assigned to the reformulation process. One view, the "horizontal" approach, considers that translation involves on-line searches for matches between linguistic entries in the two languages involved [Gerver, D. (1976). Empirical studies of simultaneous interpretation: A review and a model. In R. W.…

  12. Channel Effects and Non-Verbal Properties of Media Messages: A State of the Art Review.

    ERIC Educational Resources Information Center

    McCain, Thomas A.; White, Sylvia

    The purposes of this paper are to compile and describe the published empirical studies that have examined nonverbal visual production variables, to offer a critique of the lines of inquiry, and to suggest some areas for continued research. The studies are presented in two major sections: intravisual and intermedia. The intravisual section…

  13. Researching Design Practices and Design Cognition: Contexts, Experiences and Pedagogical Knowledge-in-Pieces

    ERIC Educational Resources Information Center

    Kali, Yael; Goodyear, Peter; Markauskaite, Lina

    2011-01-01

    If research and development in the field of learning design is to have a serious and sustained impact on education, then technological innovation needs to be accompanied--and probably guided--by good empirical studies of the design practices and design thinking of those who develop these innovations. This article synthesises two related lines of…

  14. Interdisciplinary Research on Education and Its Disciplines: Processes of Change and Lines of Conflict in Unstable Academic Expert Cultures: Germany as an Example

    ERIC Educational Resources Information Center

    Terhart, Ewald

    2017-01-01

    This article discusses problems of reconstructing the recent development in the field of empirical research on education ("empirische Bildungsforschung"), especially problems resulting from its interdisciplinary character, its divergent institutional contexts and its multimethod approach. The article looks at the position of various…

  15. Prosodic Markers of Saliency in Humorous Narratives

    ERIC Educational Resources Information Center

    Pickering, Lucy; Corduas, Marcella; Eisterhold, Jodi; Seifried, Brenna; Eggleston, Alyson; Attardo, Salvatore

    2009-01-01

    Much of what we think we know about the performance of humor relies on our intuitions about prosody (e.g., "it's all about timing"); however, this has never been empirically tested. Thus, the central question addressed in this article is whether speakers mark punch lines in jokes prosodically and, if so, how. To answer this question,…

  16. An empirical NaKCa geothermometer for natural waters

    USGS Publications Warehouse

    Fournier, R.O.; Truesdell, A.H.

    1973-01-01

    An empirical method of estimating the last temperature of water-rock interaction has been devised. It is based upon molar Na, K and Ca concentrations in natural waters from temperature environments ranging from 4 to 340 °C. The data for most geothermal waters cluster near a straight line when plotted as the function log(Na/K) + β log[√(Ca)/Na] vs the reciprocal of absolute temperature, where β is either 1/3 or 4/3 depending upon whether the water equilibrated above or below 100 °C. For most waters tested, the method gives better results than the Na/K methods suggested by other workers. The ratio Na/K should not be used to estimate temperature if √(M_Ca)/M_Na is greater than 1. The Na/K values of such waters generally yield calculated temperatures much higher than the actual temperature at which the water interacted with the rock. A comparison of the composition of boiling hot-spring water with that obtained from a nearby well (170 °C) in Yellowstone Park shows that continued water-rock reactions may occur during ascent of water even though that ascent is so rapid that little or no heat is lost to the country rock, i.e. the water cools adiabatically. As a result of such continued reaction, waters which dissolve additional Ca as they ascend from the aquifer to the surface will yield estimated aquifer temperatures that are too low. On the other hand, waters initially having enough Ca to deposit calcium carbonate during ascent may yield estimated aquifer temperatures that are too high if aqueous Na and K are prevented from further reaction with country rock owing to armoring by calcite or silica minerals. The Na-K-Ca geothermometer is of particular interest to those prospecting for geothermal energy. The method also may be of use in interpreting compositions of fluid inclusions.
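
    The straight-line relation described above can be inverted to give a temperature estimate. The sketch below uses the coefficients commonly quoted for the Fournier-Truesdell Na-K-Ca geothermometer (1647 and 2.24, concentrations assumed molal, T in kelvin) and the usual application rule of trying β = 4/3 first; the exact constants and procedure should be checked against the original paper before use.

```python
from math import log10, sqrt

def na_k_ca_temperature(na, k, ca):
    """Na-K-Ca geothermometer temperature estimate in degrees C.

    Inverts log(Na/K) + beta*log(sqrt(Ca)/Na) = 1647/T(K) - 2.24,
    trying beta = 4/3 first and falling back to beta = 1/3 when the
    4/3 result exceeds 100 C or the Ca term is negative (the usual
    application procedure for this geothermometer).
    """
    def temp(beta):
        denom = log10(na / k) + beta * log10(sqrt(ca) / na) + 2.24
        return 1647.0 / denom - 273.15

    t = temp(4.0 / 3.0)
    if t > 100.0 or log10(sqrt(ca) / na) < 0.0:
        t = temp(1.0 / 3.0)
    return t

# Hypothetical water analysis (molal concentrations, chosen for illustration).
t_est = na_k_ca_temperature(na=0.01, k=0.0005, ca=0.001)
```

    Because only concentration ratios enter, the estimate is insensitive to dilution, which is part of why the method works for waters that cooled adiabatically during ascent.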

  17. Calibrating the mental number line.

    PubMed

    Izard, Véronique; Dehaene, Stanislas

    2008-03-01

    Human adults are thought to possess two dissociable systems to represent numbers: an approximate quantity system akin to a mental number line, and a verbal system capable of representing numbers exactly. Here, we study the interface between these two systems using an estimation task. Observers were asked to estimate the approximate numerosity of dot arrays. We show that, in the absence of calibration, estimates are largely inaccurate: responses increase monotonically with numerosity, but underestimate the actual numerosity. However, insertion of a few inducer trials, in which participants are explicitly (and sometimes misleadingly) told that a given display contains 30 dots, is sufficient to calibrate their estimates on the whole range of stimuli. Based on these empirical results, we develop a model of the mapping between the numerical symbols and the representations of numerosity on the number line.
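
    The numerosity-to-estimate mapping is often modeled as a power function, and calibration can then be viewed as rescaling that function so the inducer display maps to its stated value. The sketch below is a minimal illustration of that idea, not the paper's actual model; the exponent and data are invented.

```python
import numpy as np

def fit_power_law(numerosity, estimate):
    """Fit estimate = a * numerosity**b by least squares in log-log space."""
    b, log_a = np.polyfit(np.log(numerosity), np.log(estimate), 1)
    return np.exp(log_a), b

def calibrate(a, b, inducer_n, stated_value):
    """Rescale the mapping so the inducer display maps to the stated value."""
    scale = stated_value / (a * inducer_n**b)
    return a * scale, b

# Simulated pre-calibration estimates (exponent 0.8 is illustrative only:
# responses grow monotonically but compressively with numerosity).
n = np.array([10.0, 20.0, 40.0, 80.0])
est = 1.2 * n**0.8
a, b = fit_power_law(n, est)
# One inducer trial: a 30-dot display labeled "30 dots".
a_cal, b_cal = calibrate(a, b, inducer_n=30, stated_value=30)
```

    A single multiplicative rescaling shifts the whole response range at once, which is consistent with the observation that a few inducer trials calibrate estimates across all stimuli.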

  18. The Rise and Fall of Andean Empires: El Nino History Lessons.

    ERIC Educational Resources Information Center

    Wright, Kenneth R.

    2000-01-01

    Provides information on El Nino and the methods for investigating ancient climate record. Traces the rise and fall of the Andean empires focusing on the climatic forces that each empire (Tiwanaku, Wari, Moche, and Inca) endured. States that modern societies should learn from the experiences of these ancient civilizations. (CMK)

  19. SEMI-EMPIRICAL MODELING OF THE PHOTOSPHERE, CHROMOSPHERE, TRANSITION REGION, AND CORONA OF THE M-DWARF HOST STAR GJ 832

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fontenla, J. M.; Linsky, Jeffrey L.; Witbrod, Jesse

    Stellar radiation from X-rays to the visible provides the energy that controls the photochemistry and mass loss from exoplanet atmospheres. The important extreme ultraviolet (EUV) region (10–91.2 nm) is inaccessible and should be computed from a reliable stellar model. It is essential to understand the formation regions and physical processes responsible for the various stellar emission features to predict how the spectral energy distribution varies with age and activity levels. We compute a state-of-the-art semi-empirical atmospheric model and the emergent high-resolution synthetic spectrum of the moderately active M2 V star GJ 832 as the first of a series of models for stars with different activity levels. We construct a simple one-dimensional model for the physical structure of the star's chromosphere, chromosphere-corona transition region, and corona using non-LTE radiative transfer techniques and many molecular lines. The synthesized spectrum for this model fits the continuum and lines across the UV-to-optical spectrum. Particular emphasis is given to the emission lines at wavelengths shorter than 300 nm observed with the Hubble Space Telescope, which have important effects on the photochemistry of the exoplanet atmospheres. The FUV line ratios indicate that the transition region of GJ 832 is more biased to hotter material than that of the quiet Sun. The excellent agreement of our computed EUV luminosity with that obtained by two other techniques indicates that our model predicts reliable EUV emission from GJ 832. We find that the unobserved EUV flux of GJ 832, which heats the outer atmospheres of exoplanets and drives their mass loss, is comparable to that of the active Sun.

  20. Empirical linelist of 13CH4 at 1.67 micron with lower state energies using intensities at 296 and 81 K

    NASA Astrophysics Data System (ADS)

    Lyulin, O. M.; Kassi, S.; Campargue, A.; Sung, K.; Brown, L. R.

    2010-04-01

    The high resolution absorption spectra of 13CH4 were recorded at 81 K by differential absorption spectroscopy using a cryogenic cell and a series of Distributed Feed Back (DFB) diode lasers, and at room temperature by Fourier transform spectroscopy. The investigated spectral region corresponds to the 13CH4 tetradecad containing 2nu3 near 5988 cm-1. Empirical linelists were constructed for 1629 transitions at 81 K (5852-6124 cm-1) and for 3488 features at room temperature (5850-6150 cm-1); the smallest observed intensity was 3×10-26 cm/molecule at 81 K. The lower state energy values were derived for 1208 13CH4 transitions using line intensities at 81 K and 296 K. Over 400 additional features were seen only at 81 K. The quality of the resulting empirical lower state energy values is demonstrated by the excellent agreement with the already-assigned transitions and the clear propensity of the empirical low J values to be close to integers. The two line lists at 81 K and at 296 K, provided as Supplementary Material, will enable future theoretical analyses of the upper 13CH4 tetradecad. Acknowledgements: O.M.L. (IAO, Tomsk) is grateful to the French Embassy in Moscow for supporting a two-month visit at Grenoble University. This work is part of the ANR project "CH4@Titan" (ref: BLAN08-2_321467). Support from RFBR (Grant RFBR 09-05-92508-ИК_а), CRDF (grant RUG1-2954-TO-09) and the Groupement de Recherche International SAMIA between CNRS (France), RFBR (Russia) and CAS (China) is acknowledged. Part of the research described in this paper was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
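
    The two-temperature derivation of lower state energies follows from the Boltzmann factor in the line intensity. A minimal sketch, neglecting stimulated emission and the temperature dependence of the frequency factor (the partition sums must be supplied; the values used in the example are rough assumptions, not the paper's):

```python
from math import log

C2 = 1.4387769  # second radiation constant hc/k, in cm*K

def lower_state_energy(s_cold, s_warm, q_cold, q_warm,
                       t_cold=81.0, t_warm=296.0):
    """Lower state energy E'' (cm^-1) from intensities at two temperatures.

    Uses S(T) proportional to exp(-C2*E''/T) / Q(T), where Q is the total
    partition sum, and solves the ratio S(t_cold)/S(t_warm) for E''.
    """
    ratio = log(s_cold / s_warm) - log(q_warm / q_cold)
    return -ratio / (C2 * (1.0 / t_cold - 1.0 / t_warm))
```

    With intensities measured at 81 K and 296 K, the exponential ratio is steep enough that E'' (and hence an approximate lower J) can be pinned down for each matched transition.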

  1. The place for itraconazole in treatment.

    PubMed

    Maertens, Johan; Boogaerts, Marc

    2005-09-01

    The incidence of systemic fungal infections has risen sharply in the last two decades, reflecting a rise in the number of patients who are predisposed to these diseases because they are immunosuppressed or immunocompromised. The growing use of intensive chemotherapy to treat cancer, highly immunosuppressive drug regimens (not only in transplant recipients), widespread prophylactic or empirical broad-spectrum antibiotics, prolonged parenteral nutrition, long-term indwelling lines, improved survival in neonatal and other intensive care units, together with the AIDS epidemic have led to an upsurge in the number of patients at risk. In addition, there have been changes in the epidemiology of systemic fungal infections, with Aspergillus spp. and Candida spp. other than Candida albicans becoming increasingly common causes. These changes have affected the selection of drugs for first-line or prophylactic use, as not all agents have the critical spectrum of activity required. The management of systemic fungal infections can be divided into four main strategies: prophylaxis, early empirical use, pre-emptive and definite therapy. Antifungal prophylaxis is given based on the patient risk factors, but in the absence of infection. Empirical antifungal therapy is given in patients at risk with signs of infection of unclear aetiology (usually persistent fever) but of possible fungal origin. Therapy is given pre-emptively in patients at risk with additional evidence for the presence of an infective agent in a way predisposing for infection (e.g. Aspergillus colonization; high Candida colonization index). Finally, definite treatment is used in patients with confirmed fungal infection. 
The distinction between risk-adapted prophylaxis, early empirical therapy, and pre-emptive use of antifungals often becomes unclear and clinical decision making depends largely on local epidemiology and resistance patterns, adequate definition of patient risk categories, early diagnosis and the calculation of cost-benefit ratios. This article addresses the use of itraconazole in the treatment of invasive fungal infections in the haematology patient.

  2. VLT/X-shooter observations of the low-metallicity blue compact dwarf galaxy PHL 293B including a luminous blue variable star

    NASA Astrophysics Data System (ADS)

    Izotov, Y. I.; Guseva, N. G.; Fricke, K. J.; Henkel, C.

    2011-09-01

    Context. We present VLT/X-shooter spectroscopic observations in the wavelength range λλ3000-23 000 Å of the extremely metal-deficient blue compact dwarf (BCD) galaxy PHL 293B containing a luminous blue variable (LBV) star and compare them with previous data. Aims: This BCD is one of the two lowest-metallicity galaxies where LBV stars were detected, allowing us to study the LBV phenomenon in the extremely low metallicity regime. Methods: We determine abundances of nitrogen, oxygen, neon, sulfur, argon, and iron by analyzing the fluxes of narrow components of the emission lines using empirical methods and study the properties of the LBV from the fluxes and widths of broad emission lines. Results: We derive an interstellar oxygen abundance of 12+log O/H = 7.71 ± 0.02, which is in agreement with previous determinations. The observed fluxes of narrow Balmer, Paschen and Brackett hydrogen lines correspond to the theoretical recombination values after correction for extinction with a single value C(Hβ) = 0.225. This implies that the star-forming region observed in the optical range is the only source of ionisation, with no additional source that is seen in the NIR range but hidden in the optical range. We detect three v = 1-0 vibrational lines of molecular hydrogen. Their flux ratios and the non-detection of v = 2-1 and 3-1 emission lines suggest that collisional excitation is the main source producing the H2 lines. For the LBV star in PHL 293B we find broad emission with P Cygni profiles in several Balmer hydrogen emission lines and, for the first time, in several Paschen hydrogen lines and in several He I emission lines, implying temporal evolution of the LBV on a time scale of 8 years. The Hα luminosity of the LBV star is one order of magnitude higher than that of the LBV star in NGC 2363 ≡ Mrk 71, which has a slightly higher metallicity of 12+log O/H = 7.87.
    The terminal velocity of the stellar wind in the low-metallicity LBV of PHL 293B is high, ~800 km s-1, and is comparable to that seen in spectra of some extragalactic LBVs during outbursts. We find that the averaged terminal velocities derived from the Paschen and He I emission lines are some ~40-60 km s-1 lower than those derived from the Balmer emission lines, which probably indicates a wind accelerating outward. Based on observations collected at the European Southern Observatory, Chile, ESO program 60.A-9442(A). The reduced data in Figures 1 and 2 are available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/533/A25

  3. System and methods for determining masking signals for applying empirical mode decomposition (EMD) and for demodulating intrinsic mode functions obtained from application of EMD

    DOEpatents

    Senroy, Nilanjan [New Delhi, IN; Suryanarayanan, Siddharth [Littleton, CO

    2011-03-15

    A computer-implemented method of signal processing is provided. The method includes generating one or more masking signals based upon a computed Fourier transform of a received signal. The method further includes determining one or more intrinsic mode functions (IMFs) of the received signal by performing a masking-signal-based empirical mode decomposition (EMD) using the one or more masking signals.
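
    The first step, generating a masking signal from the Fourier transform of the received signal, can be sketched as follows. The placement-at-the-dominant-bin rule and the amplitude factor are common heuristics assumed here for illustration, not details taken from the patent text.

```python
import numpy as np

def masking_signal(x, fs, amp_factor=1.6):
    """Construct a masking sinusoid from the dominant FFT component of x.

    Heuristic: place the mask at the highest-magnitude (non-DC) frequency
    bin and scale its amplitude relative to that component, so the mask
    can separate closely spaced modes during the subsequent EMD sifting.
    """
    n = len(x)
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmax(np.abs(spec[1:])) + 1          # skip the DC bin
    f_dom = freqs[k]
    a_dom = 2.0 * np.abs(spec[k]) / n            # amplitude of that component
    t = np.arange(n) / fs
    return amp_factor * a_dom * np.sin(2 * np.pi * f_dom * t), f_dom
```

    In a masking-signal EMD, the received signal plus and minus this mask are each sifted, and the IMF is recovered by averaging the two results; the mask prevents mode mixing between nearby frequencies.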

  4. Empirical Bayes methods for smoothing data and for simultaneous estimation of many parameters.

    PubMed Central

    Yanagimoto, T; Kashiwagi, N

    1990-01-01

    A recent successful development is found in a series of innovative, new statistical methods for smoothing data that are based on the empirical Bayes method. This paper emphasizes their practical usefulness in medical sciences and their theoretically close relationship with the problem of simultaneous estimation of parameters, depending on strata. The paper also presents two examples of analyzing epidemiological data obtained in Japan using the smoothing methods to illustrate their favorable performance. PMID:2148512

  5. Development of a Coordinate Transformation method for direct georeferencing in map projection frames

    NASA Astrophysics Data System (ADS)

    Zhao, Haitao; Zhang, Bing; Wu, Changshan; Zuo, Zhengli; Chen, Zhengchao

    2013-03-01

    This paper develops a novel Coordinate Transformation method (CT-method), with which the orientation angles (roll, pitch, heading) of the local tangent frame of the GPS/INS system are transformed into those (omega, phi, kappa) of the map projection frame for direct georeferencing (DG). In particular, the orientation angles in the map projection frame are derived from a sequence of coordinate transformations. The effectiveness of the orientation angle transformation was verified by comparison with DG results obtained from conventional methods (the Legat method and the POSPac method) using empirical data, and the CT-method was also validated with simulated data. One advantage of the proposed method is that the orientation angles can be acquired simultaneously while calculating the position elements of the exterior orientation (EO) parameters and auxiliary point coordinates by coordinate transformation. The three methods were demonstrated and compared using empirical data. Empirical results show that the CT-method is as sound and effective as the Legat method. Compared with the POSPac method, the CT-method is more suitable for calculating EO parameters for DG in map projection frames. The DG accuracy of the CT-method and the Legat method is at the same level. The DG results of all three methods have systematic errors in height due to inconsistent length projection distortion in the vertical and horizontal components; these errors can be significantly reduced using the EO height correction technique in Legat's approach. As with the empirical data, the effectiveness of the CT-method was also demonstrated with simulated data. POSPac method: the method is presented in an Applanix POSPac software technical note (Hutton and Savina, 1997) and is implemented in the POSEO module of POSPac software.
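
    The core of such a chain of coordinate transformations is composing rotation matrices and reading the map-frame angles back out of the composed matrix. The sketch below assumes the photogrammetric parameterization R = Rx(omega) @ Ry(phi) @ Rz(kappa); actual GPS/INS and map-projection conventions vary, so this illustrates only the compose-then-extract step, not the paper's full transformation chain.

```python
import numpy as np

def rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def opk_from_matrix(R):
    """Extract (omega, phi, kappa), assuming R = Rx(omega) @ Ry(phi) @ Rz(kappa)
    and |phi| < 90 degrees (no gimbal lock)."""
    phi = np.arcsin(R[0, 2])
    omega = np.arctan2(-R[1, 2], R[2, 2])
    kappa = np.arctan2(-R[0, 1], R[0, 0])
    return omega, phi, kappa
```

    In a CT-style pipeline, the attitude matrix built from (roll, pitch, heading) would be pre-multiplied by the rotation between the local tangent frame and the map projection frame before the extraction step shown here.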

  6. Empirical Histograms in Item Response Theory with Ordinal Data

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2007-01-01

    The purpose of this research is to describe, test, and illustrate a new implementation of the empirical histogram (EH) method for ordinal items. The EH method involves the estimation of item response model parameters simultaneously with the approximation of the distribution of the random latent variable (theta) as a histogram. Software for the EH…

  7. Threats and Aggression Directed at Soccer Referees: An Empirical Phenomenological Psychological Study

    ERIC Educational Resources Information Center

    Friman, Margareta; Nyberg, Claes; Norlander, Torsten

    2004-01-01

    A descriptive qualitative analysis of in-depth interviews involving seven provincial Soccer Association referees was carried out in order to find out how referees experience threats and aggression directed to soccer referees. The Empirical Phenomenological Psychological method (EPP-method) was used. The analysis resulted in thirty categories which…

  8. An Empirical Review of Research Methodologies and Methods in Creativity Studies (2003-2012)

    ERIC Educational Resources Information Center

    Long, Haiying

    2014-01-01

    Based on the data collected from 5 prestigious creativity journals, research methodologies and methods of 612 empirical studies on creativity, published between 2003 and 2012, were reviewed and compared to those in gifted education. Major findings included: (a) Creativity research was predominantly quantitative and psychometrics and experiment…

  9. Using Loss Functions for DIF Detection: An Empirical Bayes Approach.

    ERIC Educational Resources Information Center

    Zwick, Rebecca; Thayer, Dorothy; Lewis, Charles

    2000-01-01

    Studied a method for flagging differential item functioning (DIF) based on loss functions. Builds on earlier research that led to the development of an empirical Bayes enhancement to the Mantel-Haenszel DIF analysis. Tested the method through simulation and found its performance better than some commonly used DIF classification systems. (SLD)

  10. An Empirical Study of Atmospheric Correction Procedures for Regional Infrasound Amplitudes with Ground Truth.

    NASA Astrophysics Data System (ADS)

    Howard, J. E.

    2014-12-01

    This study focuses on improving methods of accounting for atmospheric effects on infrasound amplitudes observed on arrays at regional distances in the southwestern United States. Recordings at ranges of 150 to nearly 300 km from a repeating ground-truth source of small HE explosions are used. The explosions range in actual weight from approximately 2000 to 4000 lbs. and are detonated year-round, which provides signals for a wide range of atmospheric conditions. Three methods of correcting the observed amplitudes for atmospheric effects are investigated with the data set. The first corrects amplitudes for upper stratospheric wind as developed by Mutschlecner and Whitaker (1999) and uses the average wind speed between 45-55 km altitude in the direction of propagation to derive an empirical correction formula. This approach was developed using large chemical and nuclear explosions and is tested with the smaller explosions, for which shorter wavelengths cause the energy to be scattered by the smaller-scale structure of the atmosphere. The second approach is a semi-empirical method using ray tracing to determine wind speed at ray turning heights, where the wind estimates replace the wind values in the existing formula. Finally, parabolic equation (PE) modeling is used to predict the amplitudes at the arrays at 1 Hz. The PE amplitudes are compared to the observed amplitudes with a narrow-band filter centered at 1 Hz. An analysis is performed of the conditions under which the empirical and semi-empirical methods fail and full wave methods must be used.
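
    An empirical wind correction of this kind amounts to a regression of log amplitude on the 45-55 km wind component along the path. The sketch below fits and applies such a relation; the coefficient and data are synthetic, and the published Mutschlecner-Whitaker coefficients are deliberately not reproduced here.

```python
import numpy as np

def fit_wind_correction(amplitudes, wind_speeds):
    """Fit log10(A) = a + k * v by least squares and return (a, k).

    v is the average 45-55 km wind component in the propagation
    direction; the fitted k defines the wind-corrected amplitude.
    """
    k, a = np.polyfit(wind_speeds, np.log10(amplitudes), 1)
    return a, k

def wind_corrected(amplitude, wind_speed, k):
    """Remove the fitted wind effect: A_corrected = A * 10**(-k * v)."""
    return amplitude * 10.0 ** (-k * wind_speed)
```

    After correction, amplitudes recorded under different stratospheric wind conditions collapse toward a common level, which is what makes year-round repeating shots useful for testing the formula.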

  11. An empirical method for computing leeside centerline heating on the Space Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Helms, V. T., III

    1981-01-01

    An empirical method is presented for computing top centerline heating on the Space Shuttle Orbiter at simulated reentry conditions. It is shown that the Shuttle's top centerline can be thought of as being under the influence of a swept cylinder flow field. The effective geometry of the flow field, as well as top centerline heating, are directly related to oil-flow patterns on the upper surface of the fuselage. An empirical turbulent swept cylinder heating method was developed based on these considerations. The method takes into account the effects of the vortex-dominated leeside flow field without actually having to compute the detailed properties of such a complex flow. The heating method closely predicts experimental heat-transfer values on the top centerline of a Shuttle model at Mach numbers of 6 and 10 over a wide range in Reynolds number and angle of attack.

  12. Map LineUps: Effects of spatial structure on graphical inference.

    PubMed

    Beecham, Roger; Dykes, Jason; Meulemans, Wouter; Slingsby, Aidan; Turkay, Cagatay; Wood, Jo

    2017-01-01

    Fundamental to the effective use of visualization as an analytic and descriptive tool is the assurance that presenting data visually provides the capability of making inferences from what we see. This paper explores two related approaches to quantifying the confidence we may have in making visual inferences from mapped geospatial data. We adapt Wickham et al.'s 'Visual Line-up' method as a direct analogy with Null Hypothesis Significance Testing (NHST) and propose a new approach for generating more credible spatial null hypotheses. Rather than using as a spatial null hypothesis the unrealistic assumption of complete spatial randomness, we propose spatially autocorrelated simulations as alternative nulls. We conduct a set of crowdsourced experiments (n=361) to determine the just noticeable difference (JND) between pairs of choropleth maps of geographic units controlling for spatial autocorrelation (Moran's I statistic) and geometric configuration (variance in spatial unit area). Results indicate that people's abilities to perceive differences in spatial autocorrelation vary with baseline autocorrelation structure and the geometric configuration of geographic units. These results allow us, for the first time, to construct a visual equivalent of statistical power for geospatial data. Our JND results add to those provided in recent years by Klippel et al. (2011), Harrison et al. (2014) and Kay & Heer (2015) for correlation visualization. Importantly, they provide an empirical basis for an improved construction of visual line-ups for maps and the development of theory to inform geospatial tests of graphical inference.

  13. M101: Spectral Observations of H II Regions and Their Physical Properties

    NASA Astrophysics Data System (ADS)

    Hu, Ning; Wang, Enci; Lin, Zesen; Kong, Xu; Cheng, Fuzhen; Fan, Zou; Fang, Guangwen; Lin, Lin; Mao, Yewei; Wang, Jing; Zhou, Xu; Zhou, Zhiming; Zhu, Yinan; Zou, Hu

    2018-02-01

    Using Hectospec on the 6.5 m Multiple Mirror Telescope and the 2.16 m telescope of the National Astronomical Observatories of the Chinese Academy of Sciences, we obtained 188 high signal-to-noise ratio spectra of H II regions in the nearby galaxy M101, the largest spectroscopic sample of H II regions for this galaxy so far. These spectra cover a wide range of regions across M101, which enables us to analyze two-dimensional distributions of its physical properties. The physical parameters are derived from emission lines or stellar continua, and include stellar population age, electron temperature, oxygen abundance, etc. The oxygen abundances are derived using two empirical methods based on the O3N2 and R23 indicators, as well as the direct Te method when [O III] λ4363 is available. By applying harmonic decomposition analysis to the velocity field, we obtained a line-of-sight rotation velocity of 71 km s−1 and a position angle of 36°. The stellar age profile shows an old stellar population in the galaxy center and a relatively young stellar population in outer regions, suggesting an old bulge and a young disk. The oxygen abundance profile exhibits a clear break at ∼18 kpc, with a gradient of −0.0364 dex kpc−1 in the inner region and −0.00686 dex kpc−1 in the outer region. Our results agree with the “inside-out” disk growth scenario of M101.
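
    The O3N2 empirical method mentioned above maps a single line-ratio index to an oxygen abundance. A minimal sketch using the widely used Pettini & Pagel (2004) linear calibration (those coefficients are assumed here for illustration and are not necessarily the exact calibration adopted by the authors):

```python
import math

def oxygen_abundance_o3n2(oiii_5007, hbeta, nii_6584, halpha):
    """12 + log(O/H) from the O3N2 index, Pettini & Pagel (2004) form."""
    o3n2 = math.log10((oiii_5007 / hbeta) / (nii_6584 / halpha))
    return 8.73 - 0.32 * o3n2

# equal line ratios give O3N2 = 0, i.e. 12 + log(O/H) = 8.73
solar_like = oxygen_abundance_o3n2(1.0, 1.0, 1.0, 1.0)
```

    Stronger [N II] relative to [O III] lowers O3N2 and thus raises the inferred abundance, which is the sense of the radial gradient reported above.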

  14. Building on crossvalidation for increasing the quality of geostatistical modeling

    USGS Publications Warehouse

    Olea, R.A.

    2012-01-01

    The random function is a mathematical model commonly used in the assessment of uncertainty associated with a spatially correlated attribute that has been partially sampled. There are multiple algorithms for modeling such random functions, all sharing the requirement of specifying various parameters that have critical influence on the results. As these algorithms have grown in number and complexity, so has the importance of finding ways to compare the methods and to set parameters so that the results better model uncertainty. Crossvalidation has been used in spatial statistics, mostly in kriging, for the analysis of mean square errors. An appeal of this approach is its ability to work with the same empirical sample available for running the algorithms. This paper goes beyond checking estimates by formulating a function sensitive to conditional bias. Under ideal conditions, such a function turns into a straight line, which can be used as a reference for preparing measures of performance. Applied to kriging, deviations from the ideal line provide a sensitivity to the semivariogram that is lacking in crossvalidation of kriging errors, and are more sensitive to conditional bias than analyses of errors. For stochastic simulation, in addition to finding better parameters, the deviations allow comparison of the realizations resulting from the application of different methods. Examples show improvements of about 30% in the deviations and approximately 10% in the square root of the mean square error between a reasonable starting model and the solutions obtained with the new criteria. © 2011 US Government.
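
    The crossvalidation loop at the core of the paper can be sketched as leave-one-out re-estimation: each sampled value is predicted from the remaining data, and the (estimate, observation) pairs are then inspected against the ideal 45° line for conditional bias. The sketch below substitutes inverse-distance weighting for a kriging estimator, so it illustrates the loop only, not the authors' geostatistical machinery:

```python
def idw_estimate(target, coords, values, power=2.0):
    """Inverse-distance-weighted estimate at `target` (stand-in for kriging)."""
    weights = []
    for x, y in coords:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        weights.append(1.0 / (d2 ** (power / 2) + 1e-12))
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total

def loo_crossvalidate(coords, values):
    """Return (estimate, observed) pairs from leave-one-out re-estimation."""
    pairs = []
    for i in range(len(values)):
        rest_c = coords[:i] + coords[i + 1:]
        rest_v = values[:i] + values[i + 1:]
        pairs.append((idw_estimate(coords[i], rest_c, rest_v), values[i]))
    return pairs

coords = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
values = [1.0, 1.2, 1.4, 1.6, 1.8]  # smooth trend along a transect
pairs = loo_crossvalidate(coords, values)
```

    Plotting the pairs against the 45° line, or regressing observed on estimated values, exposes the conditional bias the paper's performance measure is built around.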

  15. Implementation and Use of a Crisis Hotline During the Treatment as Usual and Universal Screening Phases of a Suicide Intervention Study

    PubMed Central

    Arias, Sarah A.; Sullivan, Ashley F.; Miller, Ivan; Camargo, Carlos A.; Boudreaux, Edwin D.

    2015-01-01

    Background Although research suggests that crisis hotlines are an effective means of mitigating suicide risk, lack of empirical evidence may limit the use of this method as a research safety protocol. Purpose This study describes the use of a crisis hotline to provide clinical backup for research assessments. Methods Data were analyzed from participants in the Emergency Department Safety and Follow-up Evaluation (ED-SAFE) study (n=874). Socio-demographics, call completion data, and data available on suicide attempts occurring in relation to the crisis counseling call were analyzed. Pearson chi-squared tests for differences in proportions were conducted to compare characteristics of patients receiving versus not receiving crisis counseling. P<0.05 was considered statistically significant. Results Overall, there were 163 counseling calls (6% of total assessment calls) from 135 (16%) of the enrolled subjects, who were transferred to the crisis line because of suicide risk identified during the research assessment. For those transferred to the crisis line, the median age was 40 years (interquartile range 27–48), with 67% female, 80% white, and 11% Hispanic. Conclusions Increasing demand for suicide interventions in diverse healthcare settings warrants consideration of crisis hotlines as a safety protocol mechanism. Our findings provide background on how a crisis hotline was implemented as a safety measure, as well as the type of patients who may utilize this safety protocol. PMID:26341724
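
    The Pearson chi-squared comparison of proportions described in the Methods reduces to a 2×2 contingency computation. A minimal sketch (plain Python, no continuity correction; the function and variable names are illustrative):

```python
def chi2_two_proportions(hits_a, n_a, hits_b, n_b):
    """Pearson chi-squared statistic for a 2x2 table of two proportions."""
    rows = [(hits_a, n_a - hits_a), (hits_b, n_b - hits_b)]
    total = n_a + n_b
    col_totals = (hits_a + hits_b, total - hits_a - hits_b)
    chi2 = 0.0
    for row, row_n in zip(rows, (n_a, n_b)):
        for observed, col_total in zip(row, col_totals):
            expected = row_n * col_total / total
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# identical proportions give exactly 0; with df = 1 the statistic is
# compared against the critical value 3.84 for p < 0.05
same = chi2_two_proportions(10, 100, 10, 100)
different = chi2_two_proportions(30, 100, 10, 100)
```

    In practice a library routine such as SciPy's contingency-table test would be used; the hand computation just shows what the statistic measures.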

  16. 14NH_3 Line Positions and Intensities in the Far-Infrared Comparison of Ft-Ir Measurements to Empirical Hamiltonian Model Predictions

    NASA Astrophysics Data System (ADS)

    Sung, Keeyoon; Yu, Shanshan; Pearson, John; Pirali, Olivier; Kwabia Tchana, F.; Manceron, Laurent

    2016-06-01

    We have analyzed multiple spectra of a high-purity (99.5%) normal ammonia sample recorded at room temperature using the FT-IR spectrometer at the AILES beamline of Synchrotron SOLEIL, France. More than 2830 line positions and intensities were measured for inversion-rotation and rovibrational transitions in the 50-660 cm−1 region. Quantum assignments were made for 2047 transitions from eight bands, including four inversion-rotation bands (gs(a-s), ν2(a-s), 2ν2(a-s), and ν4(a-s)) and four rovibrational bands (ν2 - gs, 2ν2 - gs, ν4 - ν2, and 2ν2 - ν4), covering more than 300 lines of ΔK = 3 forbidden transitions. Of the eight bands, 2ν2 - ν4 is not listed in the HITRAN 2012 database. The measured line positions for the assigned transitions are in excellent agreement (typically better than 0.001 cm−1) with predictions from the empirical Hamiltonian model [S. Yu, J.C. Pearson, B.J. Drouin, et al. (2010)] over a wide range of J and K for all eight bands. The comparison with the HITRAN 2012 database is also satisfactory, although systematic offsets are seen for transitions with high J and K and those from weak bands. However, differences of about 20% are seen in line intensities for allowed transitions between the measurements and the model predictions, depending on the band. We have also noticed that most of the intensity outliers in the Hamiltonian model predictions belong to transitions from the gs(a-s) band. We present the final results of the FT-IR measurements of line positions and intensities, together with their comparisons to the model predictions and the HITRAN 2012 database. Research described in this paper was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contracts and cooperative agreements with the National Aeronautics and Space Administration.

  17. Empirical Approaches to the Birthday Problem

    ERIC Educational Resources Information Center

    Flores, Alfinio; Cauto, Kevin M.

    2012-01-01

    This article will describe two activities in which students conduct experiments with random numbers so they can see that having at least one repeated birthday in a group of 40 is not unusual. The first empirical approach was conducted by author Cauto in a secondary school methods course. The second empirical approach was used by author Flores with…

  18. Stark widths and shifts for spectral lines of Sn IV

    NASA Astrophysics Data System (ADS)

    de Andrés-García, I.; Alonso-Medina, A.; Colón, C.

    2016-01-01

    In this paper, we present theoretical Stark widths and shifts for 66 spectral lines of Sn IV, calculated using the Griem semi-empirical approach and the COWAN computer code. For the intermediate-coupling calculations, the standard method of least-squares fitting to experimental energy levels was used. Data are presented for an electron density of 10¹⁷ cm⁻³ and temperatures T = 1.1–5.0 × 10⁴ K. The matrix elements used in these calculations have been determined from 34 configurations of Sn IV: 4d¹⁰ns (n = 5–10), 4d¹⁰nd (n = 5–8), 4d⁹5s², 4d⁹5p², 4d⁹5s5d, 4d⁸5s5p² and 4d¹⁰5g for even parity, and 4d¹⁰np (n = 5–8), 4d¹⁰nf (n = 4–6), 4d⁹5snp (n = 5–8), 4d⁸5s²5p and 4d⁹5snf (n = 4–10) for odd parity. In order to test the matrix elements used in our calculations, we also present calculated radiative lifetimes for 14 levels of Sn IV; there is good agreement between our calculations and the experimental radiative lifetimes available in the literature. The spectral lines of Sn IV are observed in UV spectra of HD 149499 B obtained with the Far Ultraviolet Spectroscopic Explorer, the Goddard High Resolution Spectrograph and the International Ultraviolet Explorer. Theoretical trends of the Stark broadening parameters versus temperature for relevant lines are presented, and our values of the Stark broadening parameters are compared with the data available in the literature.

  19. Biosciences within the pre-registration (pre-requisite) curriculum: an integrative literature review of curriculum interventions 1990-2012.

    PubMed

    McVicar, Andrew; Andrew, Sharon; Kemble, Ross

    2014-04-01

    The learning of biosciences is well documented to be problematic, as students find the subjects amongst the most difficult and anxiety-provoking of their pre-registration programme. Studies suggest that learning consequently is not at the level anticipated by the profession. Curriculum innovations might improve the situation, but the effectiveness of applied interventions has not been evaluated. Objective: to undertake an integrative review and narrative synthesis of curriculum interventions and evaluate their effect on the learning of biosciences by pre-registration student nurses. Review methods: a systematic search of the electronic databases CINAHL, Medline, British Nursing Index and Google Scholar for empirical research studies designed to evaluate the introduction of a curriculum intervention related to the biosciences, published in 1990-2012. Studies were evaluated for design, receptivity of the intervention and impact on bioscience learning. The search generated fourteen papers that met the inclusion criteria. Seven studies introduced on-line learning packages, five introduced an active learning format into classroom teaching or practical sessions, and two applied Audience Response Technology as an exercise in self-testing and reflection. Almost all studies reported a high level of student satisfaction, though in some there were access/utilization issues for students using on-line learning. Self-reporting suggested positive experiences, but objective evaluation suggests that impacts on learning were variable and unconvincing, even where an effect on course progress was identified. Adjunct on-line programmes also show promise for supporting basic science or language acquisition. Published studies of curriculum interventions, including on-line support, have focused too heavily on the perceived benefit to students rather than on objective measures of impact on actual learning. Future studies should include rigorous assessment evaluations within their design if interventions are to be adopted to reduce the 'bioscience problem'. © 2013.

  20. Analyzing the Multiscale Processes in Tropical Cyclone Genesis Associated with African Easterly Waves using the PEEMD. Part I: Downscaling Processes

    NASA Astrophysics Data System (ADS)

    Wu, Y.; Shen, B. W.; Cheung, S.

    2016-12-01

    Recent advances in high-resolution global hurricane simulations and visualizations have collectively suggested the importance of both downscaling and upscaling processes in the formation and intensification of TCs. To reveal multiscale processes in massive volumes of global data spanning multiple years, a scalable Parallel Ensemble Empirical Mode Decomposition (PEEMD) method has been developed for the analysis. In this study, the PEEMD is applied to 10 years (2004-2013) of ERA-Interim global 0.75° resolution reanalysis data to explore the role of downscaling processes in tropical cyclogenesis associated with African Easterly Waves (AEWs). Using the PEEMD, raw data are decomposed into oscillatory Intrinsic Mode Functions (IMFs), which represent atmospheric systems of various length scales, and a trend mode, which represents the non-oscillatory large-scale environmental flow. Among the oscillatory modes, results suggest that the third mode (IMF3) is statistically correlated with the TC/AEW-scale systems; IMF3 and the trend mode are therefore analyzed in detail. Our 10-year analysis shows that more than 50% of the AEW-associated hurricanes reveal an association of storm formation with significant downscaling shear transfer from the larger-scale trend mode to the smaller-scale IMF3. Future work will apply the PEEMD to higher-resolution datasets to explore the role of the upscaling processes provided by convection (or the TC) in the development of the TC (or AEW). Figure caption: The tendency of horizontal wind shear for the total winds (black line), IMF3 (blue line), and trend mode (red line), and SLP (black dotted line), along the storm track of Helene (2006).

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendigutía, I.; Brittain, S.; Eiroa, C.

    This work presents X-Shooter/Very Large Telescope spectra of the prototypical, isolated Herbig Ae stars HD 31648 (MWC 480) and HD 163296 over five epochs separated by timescales ranging from days to months. Each spectrum spans a wide wavelength range, from 310 to 2475 nm. We have monitored the continuum excess in the Balmer region of the spectra and the luminosity of 12 ultraviolet, optical, and near-infrared spectral lines that are commonly used as accretion tracers for T Tauri stars. The observed strengths of the Balmer excesses have been reproduced with a magnetospheric accretion shock model, providing mean mass accretion rates of 1.11 × 10⁻⁷ and 4.50 × 10⁻⁷ M☉ yr⁻¹ for HD 31648 and HD 163296, respectively. Accretion rate variations are observed, being more pronounced for HD 31648 (up to 0.5 dex). However, comparison with previous results shows that the accretion rate of HD 163296 has increased by more than 1 dex on a timescale of ∼15 yr. Averaged accretion luminosities derived from the Balmer excess are consistent with those inferred from the empirical calibrations with the emission-line luminosities, indicating that those calibrations can be extrapolated to HAe stars. In spite of that, the accretion rate variations do not generally coincide with those estimated from the line luminosities, suggesting that the empirical calibrations are not useful for accurately quantifying accretion rate variability.

  2. THE ZEEMAN EFFECT IN THE 44 GHZ CLASS I METHANOL MASER LINE TOWARD DR21(OH)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Momjian, E.; Sarma, A. P., E-mail: emomjian@nrao.edu, E-mail: asarma@depaul.edu

    2017-01-10

    We report detection of the Zeeman effect in the 44 GHz Class I methanol maser line toward the star-forming region DR21(OH). In a 219 Jy beam⁻¹ maser centered at an LSR velocity of 0.83 km s⁻¹, we find a 20-σ detection of zB_los = 53.5 ± 2.7 Hz. If 44 GHz methanol masers are excited at n ∼ 10⁷⁻⁸ cm⁻³, then the B versus n^(1/2) relation would imply, by comparison with Zeeman effect detections in the CN(1−0) line toward DR21(OH), that magnetic fields traced by 44 GHz methanol masers in DR21(OH) should be ∼10 mG. Combined with our detected zB_los = 53.5 Hz, this would imply that the value of the 44 GHz methanol Zeeman splitting factor z is ∼5 Hz mG⁻¹. Such small values of z would not be a surprise, as the methanol molecule is non-paramagnetic, like H₂O. Empirical attempts to determine z, as demonstrated here, are important because there are currently no laboratory measurements or theoretically calculated values of z for the 44 GHz CH₃OH transition. Data from observations of a larger number of sources are needed to make such empirical determinations robust.
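
    The implied splitting factor quoted above follows directly from the two numbers in the abstract:

```python
zb_los_hz = 53.5           # detected zB_los, Hz
b_los_mg = 10.0            # field inferred from the CN(1-0) comparison, mG
z = zb_los_hz / b_los_mg   # implied Zeeman splitting factor, Hz/mG
# ~5.35 Hz/mG, consistent with the quoted z ~ 5 Hz/mG
```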

  3. Climate Prediction for Brazil's Nordeste: Performance of Empirical and Numerical Modeling Methods.

    NASA Astrophysics Data System (ADS)

    Moura, Antonio Divino; Hastenrath, Stefan

    2004-07-01

    Comparisons of the performance of climate forecast methods require consistency in the predictand and a long common reference period. For Brazil's Nordeste, empirical methods developed at the University of Wisconsin use preseason (October–January) rainfall and January indices of the fields of the meridional wind component and sea surface temperature (SST) in the tropical Atlantic and the equatorial Pacific as input to stepwise multiple regression and neural networks. These are used to predict the March–June rainfall at a network of 27 stations. An experiment at the International Research Institute for Climate Prediction, Columbia University, with a numerical model (ECHAM4.5) used global SST information through February to predict the March–June rainfall at three grid points in the Nordeste. The predictands for the empirical and numerical model forecasts are correlated at +0.96, and the period common to the independent portion of record of the empirical prediction and the numerical modeling is 1968–99. Over this period, predicted versus observed rainfall is evaluated in terms of correlation, root-mean-square error, absolute error, and bias. Performance is high for both approaches. Numerical modeling produces a correlation of +0.68, moderate errors, and a strong negative bias. For the empirical methods, errors and bias are small, and correlations of +0.73 and +0.82 are reached between predicted and observed rainfall.


  4. Comparison of artificial intelligence methods and empirical equations to estimate daily solar radiation

    NASA Astrophysics Data System (ADS)

    Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan

    2016-08-01

    In the present research, three artificial intelligence methods, namely Gene Expression Programming (GEP), Artificial Neural Networks (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), as well as 48 empirical equations (10 temperature-based, 12 sunshine-based and 26 based on other meteorological parameters) were used to estimate daily solar radiation in Kerman, Iran over the period 1992-2009. To develop the GEP, ANN and ANFIS models, and depending on the empirical equations used, various combinations of minimum air temperature, maximum air temperature, mean air temperature, extraterrestrial radiation, actual sunshine duration, maximum possible sunshine duration, sunshine duration ratio, relative humidity and precipitation were considered as inputs to the intelligent methods. To compare the accuracy of the empirical equations and the intelligent models, the root mean square error (RMSE), mean absolute error (MAE), mean absolute relative error (MARE) and determination coefficient (R²) were used. The results showed that, in general, the sunshine-based and meteorological-parameters-based scenarios of the ANN and ANFIS models were more accurate than the empirical equations. The most accurate method in the studied region was the ANN11 scenario with five inputs, for which the RMSE, MAE, MARE and R² values were 1.850 MJ m⁻² day⁻¹, 1.184 MJ m⁻² day⁻¹, 9.58% and 0.935, respectively.
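
    The four comparison indices used in the study have standard definitions. A minimal sketch (plain Python; R² is taken here as the coefficient of determination, 1 − SS_res/SS_tot, which may differ slightly from the authors' exact formulation):

```python
import math

def error_metrics(observed, predicted):
    """Return (RMSE, MAE, MARE, R2) for paired observations/predictions."""
    n = len(observed)
    errors = [p - o for o, p in zip(observed, predicted)]
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mae = sum(abs(e) for e in errors) / n
    mare = sum(abs(e) / abs(o) for e, o in zip(errors, observed)) / n
    mean_obs = sum(observed) / n
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return rmse, mae, mare, 1.0 - ss_res / ss_tot
```

    A perfect model scores (0, 0, 0, 1); the ANN11 figures quoted above can be read against these definitions.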

  5. Cycling Empirical Antibiotic Therapy in Hospitals: Meta-Analysis and Models

    PubMed Central

    Abel, Sören; Viechtbauer, Wolfgang; Bonhoeffer, Sebastian

    2014-01-01

    The rise of resistance, together with the shortage of new broad-spectrum antibiotics, underlines the urgency of optimizing the use of available drugs to minimize disease burden. Theoretical studies suggest that coordinating the empirical usage of antibiotics in a hospital ward can contain the spread of resistance. However, theoretical and clinical studies have come to different conclusions regarding the usefulness of rotating first-line therapy (cycling). Here, we performed a quantitative pathogen-specific meta-analysis of clinical studies comparing cycling to standard practice. We searched PubMed and Google Scholar and identified 46 clinical studies addressing the effect of cycling on nosocomial infections, of which 11 met our selection criteria. We employed a method for multivariate meta-analysis using incidence rates as endpoints and find that cycling reduced the incidence rate per 1000 patient days of both total infections, by 4.95 [9.43–0.48], and resistant infections, by 7.2 [14.00–0.44]. This positive effect was observed in most pathogens despite a large variance between individual species. Our findings remain robust in uni- and multivariate meta-regressions. We used theoretical models that reflect various infections and hospital settings to compare cycling to random assignment to different drugs (mixing). We make the realistic assumption that therapy is changed when first-line treatment is ineffective, which we call “adjustable cycling/mixing”. In concordance with earlier theoretical studies, we find that in strict regimens cycling is detrimental. However, in adjustable regimens single resistance is suppressed and cycling is successful in most settings. Both a meta-regression and our theoretical model indicate that “adjustable cycling” is especially useful for suppressing the emergence of multiple resistance. While our model predicts that cycling periods of one month perform well, we expect that overly long cycling periods are detrimental. Our results suggest that “adjustable cycling” suppresses multiple resistance and warrants further investigation that allows comparison across various diseases and hospital settings. PMID:24968123

  6. The ionized gas at the centre of IC 10: a possible localized chemical pollution by Wolf-Rayet stars

    NASA Astrophysics Data System (ADS)

    López-Sánchez, Á. R.; Mesa-Delgado, A.; López-Martín, L.; Esteban, C.

    2011-03-01

    We present results from integral field spectroscopy with the Potsdam Multi-Aperture Spectrograph at the 3.5-m telescope at Calar Alto Observatory of the intense star-forming region [HL90] 111 at the centre of the starburst galaxy IC 10. We have obtained maps with a spatial sampling of 1 × 1 arcsec² = 3.9 × 3.9 pc² of different emission lines and analysed the extinction, physical conditions, nature of the ionization and chemical abundances of the ionized gas, and determined locally the age of the most recent star formation event. By defining several apertures, we study the main integrated properties of some regions within [HL90] 111. Two contiguous spaxels show an unambiguous detection of the broad He II λ4686 emission line, a feature that seems to be produced by a single late-type WN star. We also report a probable N and He enrichment in the precise spaxels where the Wolf-Rayet (WR) features are detected. The enrichment pattern is roughly consistent with that expected for pollution by the ejecta of a single or a very small number of WR stars. Furthermore, this chemical pollution is very localized (∼2 arcsec, or ∼7.8 pc), and it would be difficult to detect in star-forming galaxies beyond the Local Volume. We also discuss the use of the most common empirical calibrations to estimate the oxygen abundances of the ionized gas in nearby galaxies from 2D spectroscopic data. The ionization degree of the gas plays an important role when applying these empirical methods, as they tend to give lower oxygen abundances with increasing ionization degree. Based on observations collected at the Centro Astronómico Hispano Alemán (CAHA) at Calar Alto, operated jointly by the Max-Planck Institut für Astronomie and the Instituto de Astrofísica de Andalucía (CSIC). Visiting Astronomer at the Instituto de Astrofísica de Canarias.

  7. VBA: A Probabilistic Treatment of Nonlinear Models for Neurobiological and Behavioural Data

    PubMed Central

    Daunizeau, Jean; Adam, Vincent; Rigoux, Lionel

    2014-01-01

    This work is in line with an ongoing effort toward a computational (quantitative and refutable) understanding of human neuro-cognitive processes. Many sophisticated models for behavioural and neurobiological data have flourished during the past decade. Most of these models are partly unspecified (i.e. they have unknown parameters) and nonlinear, which makes them difficult to pair with a formal statistical data analysis framework. In turn, this compromises the reproducibility of model-based empirical studies. This work presents a software toolbox that provides generic, efficient and robust probabilistic solutions to the three problems of model-based analysis of empirical data: (i) data simulation, (ii) parameter estimation/model selection, and (iii) experimental design optimization. PMID:24465198

  8. Brief report: Factor structure of parenting behaviour in early adolescence.

    PubMed

    Spithoven, Annette W M; Bijttebier, Patricia; Van Leeuwen, Karla; Goossens, Luc

    2016-12-01

    Researchers have traditionally relied on a tripartite model of parenting behaviour, consisting of the dimensions parental support, psychological control, and behavioural control. However, some scholars have argued for distinguishing two dimensions of behavioural control, namely reactive control and proactive control. In line with earlier work, the current study found empirical evidence for these distinct behavioural control dimensions. In addition, the study showed that the four parenting dimensions of parental support, psychological control, reactive control, and proactive control were differentially related to peer-related loneliness as well as parent-related loneliness. The current study thus not only provides empirical evidence for the distinction between the various parenting dimensions, but also shows the utility of this differentiation. Copyright © 2016. Published by Elsevier Ltd.

  9. An Empirical Model of the Variation of the Solar Lyman-α Spectral Irradiance

    NASA Astrophysics Data System (ADS)

    Kretzschmar, Matthieu; Snow, Martin; Curdt, Werner

    2018-03-01

    We propose a simple model that computes the spectral profile of the solar irradiance in the hydrogen Lyman-α line, H Ly-α (121.567 nm), from 1947 to the present. Such a model is relevant for the study of many astronomical environments, from planetary atmospheres to the interplanetary medium. The empirical model is based on the SOlar and Heliospheric Observatory/Solar Ultraviolet Measurement of Emitted Radiation (SOHO/SUMER) observations of the Ly-α irradiance over solar cycle 23 and on the Ly-α disk-integrated irradiance composite. The model reproduces the temporal variability of the spectral profile and matches the independent SOlar Radiation and Climate Experiment/SOLar-STellar Irradiance Comparison Experiment (SORCE/SOLSTICE) spectral observations from 2003 to 2007 with an accuracy better than 10%.

  10. Empirical calibration of the near-infrared Ca II triplet - III. Fitting functions

    NASA Astrophysics Data System (ADS)

    Cenarro, A. J.; Gorgas, J.; Cardiel, N.; Vazdekis, A.; Peletier, R. F.

    2002-02-01

    Using a near-infrared stellar library of 706 stars with a wide coverage of atmospheric parameters, we study the behaviour of the Ca II triplet strength in terms of effective temperature, surface gravity and metallicity. Empirical fitting functions for recently defined line-strength indices, namely CaT*, CaT and PaT, are provided. These functions can be easily implemented into stellar population models to provide accurate predictions for integrated Ca II strengths. We also present a thorough study of the various error sources and their relation to the residuals of the derived fitting functions. Finally, the derived functional forms and the behaviour of the predicted Ca II strengths are compared with those of previous works in the field.
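
    Fitting functions of this kind express an index as a low-order polynomial in the atmospheric parameters (commonly via θ = 5040/Teff, log g and [Fe/H]), with coefficients obtained by least squares over the library. A toy sketch on noise-free synthetic data (the linear form and the coefficients below are invented for illustration; the paper's actual functional forms are more elaborate):

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(0.6, 1.6, 200)   # theta = 5040 / Teff
logg = rng.uniform(0.5, 5.0, 200)    # surface gravity
feh = rng.uniform(-2.5, 0.5, 200)    # metallicity [Fe/H]

# synthetic CaT-like index built from known coefficients
index = 3.0 + 4.0 * theta - 0.3 * logg + 1.2 * feh

# least-squares fit of the same linear form recovers the coefficients
design = np.column_stack([np.ones_like(theta), theta, logg, feh])
coeffs, *_ = np.linalg.lstsq(design, index, rcond=None)
```

    With real library data the residuals of such a fit, analysed per error source, are exactly what the paper's error study examines.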

  11. Parametric and Nonparametric Statistical Methods for Genomic Selection of Traits with Additive and Epistatic Genetic Architectures

    PubMed Central

    Howard, Réka; Carriquiry, Alicia L.; Beavis, William D.

    2014-01-01

    Parametric and nonparametric methods have been developed for purposes of predicting phenotypes. These methods are based on retrospective analyses of empirical data consisting of genotypic and phenotypic scores. Recent reports have indicated that parametric methods are unable to predict phenotypes of traits with known epistatic genetic architectures. Herein, we review parametric methods including least squares regression, ridge regression, Bayesian ridge regression, least absolute shrinkage and selection operator (LASSO), Bayesian LASSO, best linear unbiased prediction (BLUP), Bayes A, Bayes B, Bayes C, and Bayes Cπ. We also review nonparametric methods including Nadaraya-Watson estimator, reproducing kernel Hilbert space, support vector machine regression, and neural networks. We assess the relative merits of these 14 methods in terms of accuracy and mean squared error (MSE) using simulated genetic architectures consisting of completely additive or two-way epistatic interactions in an F2 population derived from crosses of inbred lines. Each simulated genetic architecture explained either 30% or 70% of the phenotypic variability. The greatest impact on estimates of accuracy and MSE was due to genetic architecture. Parametric methods were unable to predict phenotypic values when the underlying genetic architecture was based entirely on epistasis. Parametric methods were slightly better than nonparametric methods for additive genetic architectures. Distinctions among parametric methods for additive genetic architectures were incremental. Heritability, i.e., proportion of phenotypic variability, had the second greatest impact on estimates of accuracy and MSE. PMID:24727289
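
    Among the parametric methods reviewed, ridge regression has a closed form that makes the shrinkage explicit. A minimal sketch on a small simulated additive architecture (numpy; the marker coding, effect sizes and penalty λ are illustrative choices, not the paper's simulation settings):

```python
import numpy as np

def ridge_effects(X, y, lam=1.0):
    """Closed-form ridge solution (X'X + lam*I) b = X'y,
    shrinking marker effects toward zero (RR-BLUP-style)."""
    n, p = X.shape
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(80, 20)).astype(float)  # genotypes coded 0/1/2
true_b = np.zeros(20)
true_b[:4] = [1.0, -0.8, 0.6, 0.5]                   # four additive QTL
y = X @ true_b + rng.normal(0.0, 0.5, 80)            # additive phenotype + noise
b_hat = ridge_effects(X, y, lam=1.0)
accuracy = np.corrcoef(X @ b_hat, y)[0, 1]           # in-sample accuracy
```

    Under a purely additive architecture such a linear predictor performs well; the paper's point is that it fails when the architecture is entirely epistatic, since no linear combination of single-marker effects captures the interactions.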

  12. Self-help on-line: an outcome evaluation of breast cancer bulletin boards.

    PubMed

    Lieberman, Morton A; Goldstein, Benjamin A

    2005-11-01

    Many breast cancer patients find help from on-line self-help groups consisting of self-directed, asynchronous bulletin boards. These have yet to be empirically evaluated. Upon joining a group and 6 months later, new members (N=114) of breast cancer bulletin boards completed measures of depression (CES-D), growth (PTGI) and psychosocial wellbeing (FACT-B). Improvement was statistically significant on all three measures. This serves as a first validation of Internet bulletin boards as a source of support and help for breast cancer patients. These boards are of particular interest because they are free and accessible, and support comes from peers rather than professional facilitators.

  13. Piezoelectric line moment actuator for active radiation control from light-weight structures

    NASA Astrophysics Data System (ADS)

    Jandak, Vojtech; Svec, Petr; Jiricek, Ondrej; Brothanek, Marek

    2017-11-01

    This article outlines the design of a piezoelectric line moment actuator used for active structural acoustic control. Actuators produce a dynamic bending moment that appears in the controlled structure resulting from the inertial forces when the attached piezoelectric stripe actuators start to oscillate. The article provides a detailed theoretical analysis necessary for the practical realization of these actuators, including considerations concerning their placement, a crucial factor in the overall system performance. Approximate formulas describing the dependency of the moment amplitude on the frequency and the required electric voltage are derived. Recommendations applicable for the system's design based on both theoretical and empirical results are provided.

  14. Students' Ability to Connect Function Properties to Different Types of Elementary Functions: An Empirical Study on the Role of External Representations

    ERIC Educational Resources Information Center

    De Bock, Dirk; Neyens, Deborah; Van Dooren, Wim

    2017-01-01

    Recent research on the phenomenon of improper proportional reasoning focused on students' understanding of elementary functions and their external representations. So far, the role of basic function properties in students' concept images of functions remained unclear. We add to this research line by investigating how accurate students are in…

  15. An Empirical Analysis of Social Capital and Economic Growth in Europe (1980-2000)

    ERIC Educational Resources Information Center

    Neira, Isabel; Vazquez, Emilia; Portela, Marta

    2009-01-01

    It is of paramount concern for economists to uncover the factors that determine economic growth and social development. In recent years a new field of investigation has come to the fore in which social capital is analysed in order to determine its effect on economic growth. Along these lines the work presented here examines the relationships that…

  16. Teachers' Interests in Geography Topics and Regions--How They Differ from Students' Interests? Empirical Findings

    ERIC Educational Resources Information Center

    Hemmer, Ingrid; Hemmer, Michael

    2017-01-01

    Teachers' interest is a key influencing factor in geography classes, in the development of curricula and in the writing of textbooks, yet little is known about it. A cross-sectional study along the lines of interest theory originating from educational psychology was carried out in Germany in the summer of 2015, in which 141 teachers at secondary schools…

  17. The Educational Value of Visual Cues and 3D-Representational Format in a Computer Animation under Restricted and Realistic Conditions

    ERIC Educational Resources Information Center

    Huk, Thomas; Steinke, Mattias; Floto, Christian

    2010-01-01

    Within the framework of cognitive learning theories, instructional design manipulations have primarily been investigated under tightly controlled laboratory conditions. We carried out two experiments, where the first experiment was conducted in a restricted system-paced setting and is therefore in line with the majority of empirical studies in the…

  18. New Multiple-Choice Measures of Historical Thinking: An Investigation of Cognitive Validity

    ERIC Educational Resources Information Center

    Smith, Mark D.

    2018-01-01

    History education scholars have recognized the need for test validity research in recent years and have called for empirical studies that explore how to best measure historical thinking processes. The present study was designed to help answer this call and to provide a model that others can adapt to carry this line of research forward. It employed…

  19. Exploring the Role of Digital Data in Contemporary Schools and Schooling--"200,000 Lines in an Excel Spreadsheet"

    ERIC Educational Resources Information Center

    Selwyn, Neil; Henderson, Michael; Chao, Shu-Hua

    2015-01-01

    The generation, processing and circulation of data in digital form is now an integral aspect of contemporary schooling. Based upon empirical study of two secondary school settings in Australia, this paper considers the different forms of digitally-based "data work" engaged in by school leaders, managers, administrators and teachers. In…

  20. Dynamic Assessment and Response to Intervention: Two Sides of One Coin

    ERIC Educational Resources Information Center

    Grigorenko, Elena L.

    2009-01-01

    This article compares and contrasts the main features of dynamic testing and assessment (DT/A) and response to intervention (RTI). The comparison is carried out along the following lines: (a) historical and empirical roots of both concepts, (b) premises underlying DT/A and RTI, (c) terms used in these concepts, (d) use of these concepts, (e)…

  1. An improved strategy for regression of biophysical variables and Landsat ETM+ data.

    Treesearch

    Warren B. Cohen; Thomas K. Maiersperger; Stith T. Gower; David P. Turner

    2003-01-01

    Empirical models are important tools for relating field-measured biophysical variables to remote sensing data. Regression analysis has been a popular empirical method of linking these two types of data to provide continuous estimates for variables such as biomass, percent woody canopy cover, and leaf area index (LAI). Traditional methods of regression are not...

  2. Trends in Research Methods in Applied Linguistics: China and the West.

    ERIC Educational Resources Information Center

    Yihong, Gao; Lichun, Li; Jun, Lu

    2001-01-01

    Examines and compares current trends in applied linguistics (AL) research methods in China and the West. Reviews AL articles in four Chinese journals, from 1978-1997, and four English journals from 1985 to 1997. Articles are categorized and subcategorized. Results show that in China, AL research is heading from non-empirical toward empirical, with…

  3. On the signal-to-noise ratio in IUE high-dispersion spectra

    NASA Technical Reports Server (NTRS)

    Leckrone, David S.; Adelman, Saul J.

    1989-01-01

    An observational and data reduction technique for fixed pattern noise (FPN) and random noise (RN) in fully extracted IUE high-dispersion spectra is described in detail, along with actual empirical values of the signal-to-noise ratio (S/N) achieved. A co-addition procedure, involving SWP and LWR camera observations of the same spectrum at different positions in the image format, provides a basis for disentangling FPN from RN, allowing the average amplitude of each, within a given wavelength interval, to be estimated as a function of average flux number. Empirical curves, derived with the noise algorithm, make it possible to estimate the S/N in individual spectra at the wavelengths investigated. The average S/N at the continuum level in well-exposed stellar spectra varies from 10 to 20, for the orders analyzed, depending on position in the spectral format. The co-addition procedure yields an improvement in S/N by factors ranging from 2.3 to 2.9. Direct measurements of S/N in narrow, line-free wavelength intervals of individual and co-added spectra for weak-lined stars yield comparable, or in some cases somewhat higher, S/N values and improvement factors.
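
    The rationale for shifting the spectrum within the image format can be sketched numerically: fixed-pattern noise repeats at the same detector pixels from exposure to exposure, so co-adding exposures taken at different positions averages it down together with the random noise. The noise amplitudes, shift sizes, and number of exposures below are assumptions for illustration, not the IUE values.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_exp, signal = 2000, 8, 100.0
fpn = rng.normal(0.0, 5.0, n_pix)   # fixed pattern tied to detector pixels

def observe(shift):
    """One exposure: the fixed pattern moves with the spectrum's position
    on the detector, while random noise is drawn fresh each time."""
    return signal + np.roll(fpn, shift) + rng.normal(0.0, 5.0, n_pix)

def snr(spectrum):
    return spectrum.mean() / spectrum.std()

single = observe(0)
# co-add exposures placed at different positions in the image format so the
# fixed pattern no longer lines up pixel-for-pixel
coadd = np.mean([observe(37 * k) for k in range(n_exp)], axis=0)
improvement = snr(coadd) / snr(single)
```

    With both noise terms decorrelated across exposures, the improvement approaches the square root of the number of exposures, consistent in spirit with the 2.3 to 2.9 factors reported above.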

  4. Beyond integrating social sciences: Reflecting on the place of life sciences in empirical bioethics methodologies.

    PubMed

    Mertz, Marcel; Schildmann, Jan

    2018-06-01

    Empirical bioethics is commonly understood as integrating empirical research with normative-ethical research in order to address an ethical issue. Methodological analyses in empirical bioethics mainly focus on the integration of socio-empirical sciences (e.g. sociology or psychology) and normative ethics. But while there are numerous multidisciplinary research projects combining life sciences and normative ethics, there is little explicit methodological reflection on how to integrate the two fields, or on the goals and rationales of such interdisciplinary cooperation. In this paper we review some drivers behind the tendency of empirical bioethics methodologies to focus on the collaboration of normative ethics with the social sciences in particular. Subsequently, we argue that the ends of empirical bioethics, not the empirical methods, are decisive for the question of which empirical disciplines can contribute to empirical bioethics in a meaningful way. Using already existing types of research integration as a springboard, five possible types of research encompassing life sciences and normative analysis illustrate how such cooperation can be conceptualized from a methodological perspective within empirical bioethics. We conclude with a reflection on the limitations and challenges of empirical bioethics research that integrates life sciences.

  5. Small area estimation for semicontinuous data.

    PubMed

    Chandra, Hukum; Chambers, Ray

    2016-03-01

    Survey data often contain measurements for variables that are semicontinuous in nature, i.e. they either take a single fixed value (we assume this is zero) or they have a continuous, often skewed, distribution on the positive real line. Standard methods for small area estimation (SAE) based on the use of linear mixed models can be inefficient for such variables. We discuss SAE techniques for semicontinuous variables under a two part random effects model that allows for the presence of excess zeros as well as the skewed nature of the nonzero values of the response variable. In particular, we first model the excess zeros via a generalized linear mixed model fitted to the probability of a nonzero, i.e. strictly positive, value being observed, and then model the response, given that it is strictly positive, using a linear mixed model fitted on the logarithmic scale. Empirical results suggest that the proposed method leads to efficient small area estimates for semicontinuous data of this type. We also propose a parametric bootstrap method to estimate the MSE of the proposed small area estimator. These bootstrap estimates of the MSE are compared to the true MSE in a simulation study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
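
    A stripped-down sketch of the two-part idea, with the covariates and random effects of the actual model omitted: estimate the probability of a nonzero response and the back-transformed lognormal mean of the positive part separately, then multiply. The simulated areas and parameter values are illustrative only.

```python
import numpy as np

def two_part_mean(y):
    """Two-part estimate of E[y] for semicontinuous data: P(y > 0) times
    the lognormal mean exp(mu + sigma^2 / 2) of the positive part."""
    p_hat = np.mean(y > 0)
    logs = np.log(y[y > 0])
    return p_hat * np.exp(logs.mean() + logs.var() / 2.0)

rng = np.random.default_rng(2)
results = []
# three hypothetical small areas: excess exact zeros plus skewed positives
for p_pos, mu in [(0.3, 1.0), (0.6, 1.5), (0.5, 0.5)]:
    nonzero = rng.random(2000) < p_pos
    y = np.where(nonzero, rng.lognormal(mu, 0.4, 2000), 0.0)
    results.append((two_part_mean(y), y.mean()))
```

    In the actual SAE setting the two parts are replaced by a generalized linear mixed model for the zero/nonzero indicator and a linear mixed model on the log scale, which is what lets small areas borrow strength from each other.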

  6. Towards a model of surgeons' leadership in the operating room.

    PubMed

    Henrickson Parker, Sarah; Yule, Steven; Flin, Rhona; McKinley, Aileen

    2011-07-01

    There is widespread recognition that leadership skills are essential for effective performance in the workplace, but the evidence detailing effective leadership behaviours for surgeons during operations is unclear. Boolean searches of four on-line databases and detailed hand search of relevant references were conducted. A four stage screening process was adopted stipulating that articles presented empirical data on surgeons' intraoperative leadership behaviours. Ten relevant articles were identified and organised by method of investigation into (i) observation, (ii) questionnaire and (iii) interview studies. This review summarises the limited literature on surgeons' intraoperative leadership, and proposes a preliminary theoretically based structure for intraoperative leadership behaviours. This structure comprises seven categories with corresponding leadership components and covers two overarching themes related to task- and team-focus. Selected leadership theories which may be applicable to the operating room environment are also discussed. Further research is required to determine effective intraoperative leadership behaviours for safe surgical practice.

  7. Key future research questions on mediators and moderators of behaviour change processes for substance abuse.

    PubMed

    Rehm, Jürgen

    2008-06-01

    In summarizing the key themes and results of the second meeting of the German Addiction Research Network 'Understanding Addiction: Mediators and Moderators of Behaviour Change Process', the following concrete steps forward were laid out to improve knowledge. The steps included pleas to (1) redefine substance abuse disorders, especially the concepts of abuse and harmful use; (2) increase the use of longitudinal and life-course studies with more adequate statistical methods such as latent growth modelling; (3) empirically test more specific and theoretically derived common factors and mechanisms of behavioural change processes; and (4) better exploit cross-regional and cross-cultural differences. Funding agencies are urged to support these developments by specifically supporting interdisciplinary research along the lines specified above. This may include improved forms of international funding of groups of researchers from different countries, where each national group conducts a specific part of an integrated proposal. 2008 John Wiley & Sons, Ltd.

  8. Quantitative Structure-Cytotoxicity Relationship of Bioactive Heterocycles by the Semi-empirical Molecular Orbital Method with the Concept of Absolute Hardness

    NASA Astrophysics Data System (ADS)

    Ishihara, Mariko; Sakagami, Hiroshi; Kawase, Masami; Motohashi, Noboru

    The relationship between the cytotoxicity of N-heterocycles (13 4-trifluoromethylimidazole, 15 phenoxazine and 12 5-trifluoromethyloxazole derivatives), O-heterocycles (11 3-formylchromone and 20 coumarin derivatives) and seven vitamin K2 derivatives against eight tumor cell lines (HSC-2, HSC-3, HSC-4, T98G, HSG, HepG2, HL-60, MT-4) and a maximum of 15 chemical descriptors was investigated using the CAChe Worksystem 4.9 project reader. After determining the conformation of these compounds and approximating the molecular form present in vivo (biomimetic) with CONFLEX5, the most stable structure was determined by CAChe Worksystem 4.9 MOPAC (PM3). The present study demonstrates that the cytotoxic activity of these compounds is best related to their molecular shape or molecular weight. Their biological activities can be estimated by hardness and softness, and by using η-χ activity diagrams.

  9. Learning from examples - Generation and evaluation of decision trees for software resource analysis

    NASA Technical Reports Server (NTRS)

    Selby, Richard W.; Porter, Adam A.

    1988-01-01

    A general solution method for the automatic generation of decision (or classification) trees is investigated. The approach is to provide insights through in-depth empirical characterization and evaluation of decision trees for software resource data analysis. The trees identify classes of objects (software modules) that had high development effort. Sixteen software systems ranging from 3,000 to 112,000 source lines were selected for analysis from a NASA production environment. The collection and analysis of 74 attributes (or metrics), for over 4,700 objects, captured information about the development effort, faults, changes, design style, and implementation style. A total of 9,600 decision trees were automatically generated and evaluated. The trees correctly identified 79.3 percent of the software modules that had high development effort or faults, and the trees generated from the best parameter combinations correctly identified 88.4 percent of the modules on the average.
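
    The flavor of such automatically generated classifiers can be conveyed with a one-level decision tree (a stump) over hypothetical module metrics; the metric names, thresholds, and effort labels below are invented for illustration and are not the NASA data.

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical module metrics: source lines of code and number of changes
n = 300
sloc = rng.integers(50, 5000, n)
changes = rng.integers(0, 60, n)
# "high development effort" label loosely tied to both metrics, with noise
high_effort = (sloc + 40 * changes + rng.normal(0.0, 800.0, n)) > 3500

def best_stump(feature, labels):
    """One-level decision tree: pick the split threshold on a single
    metric that maximizes classification accuracy."""
    best_acc, best_t = 0.0, None
    for t in np.unique(feature):
        acc = np.mean((feature > t) == labels)
        if acc > best_acc:
            best_acc, best_t = acc, t
    return best_acc, best_t

acc, threshold = best_stump(sloc, high_effort)
```

    A full tree-generation algorithm recurses on each side of the split and scores candidate attributes by an information measure rather than raw accuracy, but the classification decision at each node has this same form.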

  10. Extension of the energy range of the experimental activation cross-section data of longer-lived products of proton-induced nuclear reactions on dysprosium up to 65 MeV.

    PubMed

    Tárkányi, F; Ditrói, F; Takács, S; Hermanne, A; Ignatyuk, A V

    2015-04-01

    Activation cross-section data for longer-lived products of proton-induced nuclear reactions on dysprosium were extended up to 65 MeV by using the stacked-foil irradiation and gamma spectrometry experimental methods. Experimental cross-section data for the formation of the radionuclides (159)Dy, (157)Dy, (155)Dy, (161)Tb, (160)Tb, (156)Tb, (155)Tb, (154m2)Tb, (154m1)Tb, (154g)Tb, (153)Tb, (152)Tb and (151)Tb are reported in the 36-65 MeV energy range, and compared with an old dataset from 1964. The experimental data were also compared with the results of cross-section calculations of the ALICE and EMPIRE nuclear model codes and of the TALYS nuclear reaction model code as listed in the latest on-line library TENDL-2013. Copyright © 2015. Published by Elsevier Ltd.

  11. Winds from stripped low-mass helium stars and Wolf-Rayet stars

    NASA Astrophysics Data System (ADS)

    Vink, Jorick S.

    2017-11-01

    We present mass-loss predictions from Monte Carlo radiative transfer models for helium (He) stars as a function of stellar mass, down to 2 M⊙. Our study includes both massive Wolf-Rayet (WR) stars and low-mass He stars that have lost their envelope through interaction with a companion. For these low-mass He stars we predict mass-loss rates that are an order of magnitude smaller than extrapolation of empirical WR mass-loss rates would suggest. Our lower mass-loss rates make it harder for these elusive stripped stars to be discovered via line emission, and we should attempt to find these stars through alternative methods instead. Moreover, lower mass-loss rates make it less likely that low-mass He stars provide stripped-envelope supernovae (SNe) of type Ibc. We express our mass-loss predictions as a function of L and Z and not as a function of the He abundance, as we do not consider this physically astute given our earlier work. The exponent of the Ṁ versus Z dependence is found to be 0.61, which is less steep than relationships derived from recent empirical atmospheric modelling. Our shallower exponent will make it more challenging to produce "heavy" black holes of order 40 M⊙, as recently discovered in the gravitational wave event GW 150914, making low metallicity for these types of events even more necessary.

  12. Principles and application of LIMS in mouse clinics.

    PubMed

    Maier, Holger; Schütt, Christine; Steinkamp, Ralph; Hurt, Anja; Schneltzer, Elida; Gormanns, Philipp; Lengger, Christoph; Griffiths, Mark; Melvin, David; Agrawal, Neha; Alcantara, Rafael; Evans, Arthur; Gannon, David; Holroyd, Simon; Kipp, Christian; Raj, Navis Pretheeba; Richardson, David; LeBlanc, Sophie; Vasseur, Laurent; Masuya, Hiroshi; Kobayashi, Kimio; Suzuki, Tomohiro; Tanaka, Nobuhiko; Wakana, Shigeharu; Walling, Alison; Clary, David; Gallegos, Juan; Fuchs, Helmut; de Angelis, Martin Hrabě; Gailus-Durner, Valerie

    2015-10-01

    Large-scale systemic mouse phenotyping, as performed by mouse clinics for more than a decade, requires thousands of mice from a multitude of different mutant lines to be bred, individually tracked and subjected to phenotyping procedures according to a standardised schedule. All these efforts are typically organised in overlapping projects, running in parallel. In terms of logistics, data capture, data analysis, result visualisation and reporting, new challenges have emerged from such projects. These challenges could hardly be met with traditional methods such as pen & paper colony management, spreadsheet-based data management and manual data analysis. Hence, different Laboratory Information Management Systems (LIMS) have been developed in mouse clinics to facilitate or even enable mouse and data management in the described order of magnitude. This review shows that general principles of LIMS can be empirically deduced from LIMS used by different mouse clinics, although these have evolved differently. Supported by LIMS descriptions and lessons learned from seven mouse clinics, this review also shows that the unique LIMS environment in a particular facility strongly influences strategic LIMS decisions and LIMS development. As a major conclusion, this review states that there is no universal LIMS for the mouse research domain that fits all requirements. Still, empirically deduced general LIMS principles can serve as a master decision support template, which is provided as a hands-on tool for mouse research facilities looking for a LIMS.

  13. The NIR Ca ii triplet at low metallicity. Searching for extremely low-metallicity stars in classical dwarf galaxies

    NASA Astrophysics Data System (ADS)

    Starkenburg, E.; Hill, V.; Tolstoy, E.; González Hernández, J. I.; Irwin, M.; Helmi, A.; Battaglia, G.; Jablonka, P.; Tafelmeyer, M.; Shetrone, M.; Venn, K.; de Boer, T.

    2010-04-01

    The NIR Ca ii triplet absorption lines have proven to be an important tool for quantitative spectroscopy of individual red giant branch stars in the Local Group, providing a better understanding of metallicities of stars in the Milky Way and dwarf galaxies and thereby an opportunity to constrain their chemical evolution processes. An interesting puzzle in this field is the significant lack of extremely metal-poor stars, below [Fe/H] = -3, found in classical dwarf galaxies around the Milky Way using this technique. The question arises whether these stars are really absent, or if the empirical Ca ii triplet method used to study these systems is biased in the low-metallicity regime. Here we present results of synthetic spectral analysis of the Ca ii triplet, that is focused on a better understanding of spectroscopic measurements of low-metallicity giant stars. Our results start to deviate strongly from the widely-used and linear empirical calibrations at [Fe/H] < -2. We provide a new calibration for Ca ii triplet studies which is valid for -0.5 ≥ [Fe/H] ≥ -4. We subsequently apply this new calibration to current data sets and suggest that the classical dwarf galaxies are not so devoid of extremely low-metallicity stars as was previously thought. Using observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile proposal 171.B-0588.
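
    The kind of bias at stake can be illustrated with two toy calibration curves: a linear relation of the traditional empirical form [Fe/H] = a + b·W′, and a variant that departs from it at weak line strengths. Both the functional forms and the coefficients are invented for illustration; they are not the calibration derived in the paper.

```python
def feh_linear(w, a=-2.9, b=0.19):
    """Linear empirical-style calibration: [Fe/H] = a + b * W' (toy values)."""
    return a + b * w

def feh_nonlinear(w, a=-2.9, b=0.19):
    """Toy variant that falls below the linear relation for weak lines,
    mimicking a calibration that deviates in the metal-poor regime."""
    return a + b * w - 1.0 / (w + 0.5)

# at strong lines the two nearly agree; at weak lines the linear form
# assigns a higher (less metal-poor) [Fe/H] to the same line strength
gap_weak = feh_linear(0.5) - feh_nonlinear(0.5)
gap_strong = feh_linear(6.0) - feh_nonlinear(6.0)
```

    Under a revised curve of this shape, stars previously pinned near [Fe/H] = -3 by a linear relation would scatter to lower metallicities, which is the sense of the paper's reinterpretation of the dwarf-galaxy samples.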

  14. Temperature dependence of Er³⁺ ionoluminescence and photoluminescence in Gd₂O₃:Bi nanopowder.

    PubMed

    Boruc, Zuzanna; Gawlik, Grzegorz; Fetliński, Bartosz; Kaczkan, Marcin; Malinowski, Michał

    2014-06-01

    Ionoluminescence (IL) and photoluminescence (PL) of trivalent erbium ions (Er(3+)) in a Gd2O3 nanopowder host activated with Bi(3+) ions have been studied in order to establish the link between changes in the luminescence spectra and the temperature of the sample material. IL measurements have been performed with a 100 keV H2(+) ion beam bombarding the target material for a few seconds, while PL spectra have been collected for temperatures ranging from 20 °C to 700 °C. The PL data were used as a reference in determining the temperature corresponding to the IL spectra. The collected data enabled the definition of an empirical formula based on the Boltzmann distribution, which allows the temperature to be determined with a maximum sensitivity of 9.7 × 10(-3) °C(-1). The analysis of the Er(3+) energy level structure, in terms of the tendency of the system to stay in thermal equilibrium, explained the different behaviors of the line intensities. This work led to the conclusion that temperature changes during ion excitation can be readily determined with separately collected PL spectra. The final result, an empirical formula describing the dependence of the fluorescence intensity ratio on temperature, suggests an application of the method in temperature control during processes such as ion implantation and some nuclear applications.
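
    The Boltzmann-type relation behind such a formula can be sketched as a two-level fluorescence intensity ratio R(T) = C·exp(-ΔE/(kB·T)), which inverts cleanly into a thermometer; the ΔE and C values below are placeholders, not the fitted Er3+:Gd2O3 parameters.

```python
import math

KB = 0.695  # Boltzmann constant in cm^-1 per kelvin

def intensity_ratio(T, delta_e=800.0, C=8.0):
    """Ratio of emission from two thermally coupled levels separated by
    delta_e (cm^-1), following the Boltzmann distribution."""
    return C * math.exp(-delta_e / (KB * T))

def temperature(R, delta_e=800.0, C=8.0):
    """Invert a measured intensity ratio back to a temperature in kelvin."""
    return delta_e / (KB * math.log(C / R))

T_recovered = temperature(intensity_ratio(500.0))
# relative sensitivity S = (1/R) dR/dT = delta_e / (KB * T^2)
sensitivity = 800.0 / (KB * 500.0**2)
```

    The relative sensitivity S = ΔE/(kB·T²) falls with temperature, which is why such thermometers quote a maximum sensitivity rather than a single figure.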

  15. Self-absorption characteristics of measured laser-induced plasma line shapes

    NASA Astrophysics Data System (ADS)

    Parigger, C. G.; Surmick, D. M.; Gautam, G.

    2017-02-01

    The determination of electron density and temperature is reported from line-of-sight measurements of laser-induced plasma. Experiments are conducted in standard ambient temperature and pressure air and in a cell containing ultra-high-purity hydrogen slightly above atmospheric pressure. Spectra of the hydrogen Balmer series lines can be measured in laboratory air, due to residual moisture, following optical breakdown generated with 13 to 14 nanosecond pulsed Nd:YAG laser radiation. Comparisons with spectra obtained in hydrogen gas yield Abel-inverted line shapes whose appearance indicates the occurrence of self-absorption. The electron density and temperature distributions along the line of sight show near-spherical rings, expanding at or near the speed of sound in the hydrogen gas experiments. The temperatures in the hydrogen studies are obtained using the Balmer series alpha, beta and gamma profiles. Over and above the application of empirical formulae to derive the electron density from the hydrogen alpha width and shift, and from the hydrogen beta width and peak separation, so-called escape factors and the use of a doubling mirror are discussed.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fortman, Sarah M.; Neese, Christopher F.; De Lucia, Frank C.

    The results of an experimental approach to the identification and characterization of the astrophysical weed vinyl cyanide in the 210-270 GHz region are reported. This approach is based on spectrally complete, intensity-calibrated spectra taken at more than 400 different temperatures and is used to produce catalogs in the usual astrophysical format: line frequency, line strength, and lower state energy. As in our earlier study of ethyl cyanide, we also include the results of a frequency point-by-point analysis, which is especially well suited for characterizing weak lines and blended lines in crowded spectra. This study shows substantial incompleteness in the quantum-mechanical (QM) models used to calculate astrophysical catalogs, primarily due to their omission of many low-lying vibrational states of vinyl cyanide, but also due to the exclusion of perturbed rotational transitions. Unlike ethyl cyanide, the QM catalogs for vinyl cyanide include analyses of perturbed excited vibrational states, whose modeling is more challenging. Accordingly, we include an empirical study of the frequency accuracy of these QM models. We observe modest frequency differences for some vibrationally excited lines.

  17. Line overlap and self-shielding of molecular hydrogen in galaxies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gnedin, Nickolay Y.; Draine, Bruce T., E-mail: gnedin@fnal.gov, E-mail: andrey@oddjob.uchicago.edu, E-mail: draine@astro.princeton.edu

    2014-11-01

    The effect of line overlap in the Lyman and Werner bands, often ignored in galactic studies of the atomic-to-molecular transition, greatly enhances molecular hydrogen self-shielding in low metallicity environments and dominates over dust shielding for metallicities below about 10% solar. We implement that effect in cosmological hydrodynamics simulations with an empirical model, calibrated against the observational data, and provide fitting formulae for the molecular hydrogen fraction as a function of gas density on various spatial scales and in environments with varied dust abundance and interstellar radiation field. We find that line overlap, while important for detailed radiative transfer in the Lyman and Werner bands, has only a minor effect on star formation on galactic scales, which, to a much larger degree, is regulated by stellar feedback.

  18. Stationary multifaceted asymmetric radiation from the edge and improved confinement mode in a superconducting tokamak.

    PubMed

    Gao, X; Xie, J K; Wan, Y X; Ushigusa, K; Wan, B N; Zhang, S Y; Li, J; Kuang, G L

    2002-01-01

    Stationary multifaceted asymmetric radiation from the edge (MARFE) is studied by gas-puffing feedback control according to an empirical MARFE critical density (approximately 1.8 x 10(13) cm(-3)) in the HT-7 Ohmic discharges (where the plasma current I(p) is about 170 kA, loop voltage V(loop)=2-3 V, toroidal field B(T)=1.9 T, and Z(eff)=3-4). It is observed that an improved confinement mode, characterized by a drop in D(alpha) line emission and an increase in the line-averaged density, is triggered in the stationary MARFE discharges. The mode is not a symmetric "detachment" state, because the quasi-steady-state poloidally asymmetric radiation (e.g., C III line emissions) still exists. This phenomenon has not been predicted by the current MARFE theory.

  19. HITRAN2016 Database Part II: Overview of the Spectroscopic Parameters of the Trace Gases

    NASA Astrophysics Data System (ADS)

    Tan, Yan; Gordon, Iouli E.; Rothman, Laurence S.; Kochanov, Roman V.; Hill, Christian

    2017-06-01

    The 2016 edition of the HITRAN database is now available. The new edition takes advantage of the new database structure and can be accessed through HITRANonline (www.hitran.org). The line-by-line lists for almost all of the trace atmospheric species were updated relative to the previous edition, HITRAN2012. This extended update covers not only revisions to selected transitions of certain molecules, but also complete replacements of whole line lists, as well as the introduction of new spectroscopic parameters for non-Voigt line shapes. The new line lists for NH_3, HNO_3, OCS, HCN, CH_3Cl, C_2H_2, C_2H_6, PH_3, C_2H_4, CH_3CN, CF_4, C_4H_2, and SO_3 feature substantial expansion of the spectral and dynamic ranges in addition to improved accuracy of the parameters for already existing lines. A semi-empirical procedure was developed to update the air-broadening and self-broadening coefficients of N_2O, SO_2, NH_3, CH_3Cl, H_2S, and HO_2. We draw particular attention to flaws in the commonly used expression n_{air}=0.79n_{N_2}+0.21n_{O_2} for determining the air-broadening temperature-dependence exponent of the power law from those for nitrogen and oxygen broadening; a more meaningful approach will be presented. Semi-empirical line widths, pressure shifts and temperature-dependence exponents of CO, NH_3, HF, HCl, OCS, C_2H_2, and SO_2 perturbed by H_2, He, and CO_2 have been added to the database based on the algorithm described in Wilzewski et al. New spectroscopic parameters for the Hartmann-Tran (HT) line profile were implemented into the database for the hydrogen molecule. The HITRAN database is supported by the NASA AURA program grant NNX14AI55G and NASA PDART grant NNX16AG51G. I. E. Gordon, L. S. Rothman, et al., J Quant Spectrosc Radiat Transf 2017; submitted. Hill C, et al., J Quant Spectrosc Radiat Transf 2013;130:51-61. Wilzewski JS, et al., J Quant Spectrosc Radiat Transf 2016;168:193-206. Wcislo P, et al., J Quant Spectrosc Radiat Transf 2016;177:75-91.
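
    The flaw called out above can be demonstrated numerically: the power-law half-width γ(T) = γ(T0)·(T0/T)^n must be mixed across broadeners at the half-width level, after which an effective air exponent can be fitted; directly averaging the exponents gives a different answer. The half-widths and exponents below are illustrative values, not HITRAN parameters.

```python
import math

T0 = 296.0  # HITRAN reference temperature in kelvin

def gamma(T, g0, n):
    """Power-law temperature dependence of a pressure-broadened half-width."""
    return g0 * (T0 / T) ** n

# illustrative single-transition parameters (not from the database)
g_n2, n_n2 = 0.10, 0.75   # N2 broadening: half-width at T0, exponent
g_o2, n_o2 = 0.06, 0.50   # O2 broadening: half-width at T0, exponent

def gamma_air(T):
    """Mix the two broadeners at the half-width level, by mole fraction."""
    return 0.79 * gamma(T, g_n2, n_n2) + 0.21 * gamma(T, g_o2, n_o2)

# effective exponent implied by the actual temperature behaviour ...
T1 = 200.0
n_eff = math.log(gamma_air(T1) / gamma_air(T0)) / math.log(T0 / T1)
# ... versus the commonly used direct mixing of the exponents
n_naive = 0.79 * n_n2 + 0.21 * n_o2
```

    Because the nitrogen term contributes more than 79% of the air half-width here, the fitted exponent sits closer to n_N2 than the mole-fraction average of the exponents does, which is the kind of discrepancy the abstract flags.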

  20. Empirical relationships between gas abundances and UV selective extinction

    NASA Technical Reports Server (NTRS)

    Joseph, Charles L.

    1990-01-01

    Several studies of gas-phase abundances in lines of sight through the outer edges of dense clouds are summarized. These lines of sight have 0.4 < E(B-V) < 1.1 and have inferred spatial densities of a few hundred cm(-3). The primary thrust of these studies has been to compare gaseous abundances in interstellar clouds that have various types of peculiar selective extinction. To date, the most notable result has been an empirical relationship between the CN/Fe I abundance ratio and the depth of the 2200 A extinction bump. It is not clear at the present time, however, whether these two parameters are linearly correlated or the data are organized into two discrete ensembles. Based on 19 samples and assuming the clouds form discrete ensembles, lines of sight that have a CN/Fe I abundance ratio greater than 0.3 (dex) appear to have a shallow bump of 2.57 ± 0.55, compared to 3.60 ± 0.36 for other dense clouds and to the 3.6 Seaton (1979) average. The difference in the strength of the extinction bump between these two ensembles is 1.03 ± 0.23. Although a high-resolution IUE survey of dense clouds is far from complete, the few lines of sight with shallow extinction bumps all show preferential depletion of certain elements, while those lines of sight with normal 2200 A bumps do not. Ca II, Cr II, and Mn II appear to exhibit the strongest preferential depletion compared to S II, P II, and Mg II. Fe II and Si II depletions also appear to be enhanced somewhat in the shallow-bump lines of sight. It should be noted that Copernicus data suggest all elements, including the so-called nondepletors, deplete in diffuse clouds (Snow and Jenkins 1980, Joseph 1988). Those lines of sight through dense clouds that have normal 2200 A extinction bumps appear to be extensions of the depletions found in the diffuse interstellar medium. That is, the overall level of depletion is enhanced, but the element-to-element abundances are similar to those in diffuse clouds. In a separate study, the abundances of neutral atoms were studied in a dense cloud having a shallow 2200 A bump and in one with a normal strength bump.

  1. Effective Temperatures for Young Stars in Binaries

    NASA Astrophysics Data System (ADS)

    Muzzio, Ryan; Avilez, Ian; Prato, Lisa A.; Biddle, Lauren I.; Allen, Thomas; Wright-Garba, Nuria Meilani Laure; Wittal, Matthew

    2017-01-01

    We have observed about 100 multi-star systems within the star-forming regions Taurus and Ophiuchus to investigate the individual stellar and circumstellar properties of both components in young T Tauri binaries. Near-infrared spectra were collected using the Keck II telescope's NIRSPEC spectrograph, and imaging data were taken with Keck II's NIRC2 camera, both behind adaptive optics. Some properties are straightforward to measure; however, determining effective temperature is challenging because the standard method of estimating a spectral type and relating it to effective temperature can be subjective and unreliable. We therefore looked for a relationship between the effective temperatures empirically determined by Mann et al. (2015) and the equivalent width ratios of H-band Fe and OH lines for main-sequence spectral-type templates common to both our infrared observations and the sample of Mann et al. We find a fit over a wide range of temperatures and are currently testing the validity of this method as a way to determine effective temperature robustly. Support for this research was provided by an REU supplement to NSF award AST-1313399.
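
The calibration described in this record amounts to a least-squares relation between effective temperature and an equivalent-width ratio. A minimal sketch of that kind of fit, with invented template values (not the Mann et al. data):

```python
import numpy as np

# Hypothetical template measurements: EW(Fe)/EW(OH) ratios and known Teff.
# All numbers are invented for illustration only.
ew_ratio = np.array([0.8, 1.0, 1.3, 1.7, 2.1])             # assumed ratios
teff = np.array([3300.0, 3450.0, 3650.0, 3900.0, 4150.0])  # assumed Teff, K

# first-order least-squares fit of Teff against the line ratio
slope, intercept = np.polyfit(ew_ratio, teff, 1)

def teff_from_ratio(r):
    """Estimate Teff (K) from a measured equivalent-width ratio."""
    return slope * r + intercept
```

Once calibrated on templates, the fit would be inverted in the same way for binary components with measured ratios.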

  2. Semiempirical studies of atomic structure. Progress report, 1 July 1983-1 June 1984

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis, L.J.

    1984-01-01

    A program of studies of the properties of the heavy and highly ionized atomic systems which often occur as contaminants in controlled fusion devices is continuing. The project combines experimental measurements by fast ion beam excitation with semiempirical data parametrizations to identify and exploit regularities in the properties of these very heavy and very highly ionized systems. The increasing use of spectroscopic line intensities as diagnostics for determining thermonuclear plasma temperatures and densities requires laboratory observation and analysis of such spectra, often to accuracies that exceed the capabilities of ab initio theoretical methods for these highly relativistic many-electron systems. Through the acquisition and systematization of empirical data, remarkably precise methods for predicting excitation energies, transition wavelengths, transition probabilities, level lifetimes, ionization potentials, core polarizabilities, and core penetrabilities are being developed and applied. Although the data base for heavy, highly ionized atoms is still sparse, parametrized extrapolations and interpolations along isoelectronic, homologous, and Rydberg sequences are providing predictions for large classes of quantities, with a precision that is sharpened by subsequent measurements.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rey, Michaël, E-mail: michael.rey@univ-reims.fr; Tyuterev, Vladimir G.; Nikitin, Andrei V.

    Accurate variational high-resolution spectra calculations in the range 0-8000 cm{sup −1} are reported for the first time for monodeuterated methane ({sup 12}CH{sub 3}D). Global calculations were performed using recent ab initio surfaces for line positions and line intensities derived from the main isotopologue {sup 12}CH{sub 4}. The calculation of excited vibrational levels and high-J rovibrational states is described using the normal-mode Eckart-Watson Hamiltonian combined with irreducible tensor formalism and appropriate numerical procedures for solving the quantum nuclear motion problem. The isotopic H→D substitution is studied in detail by means of symmetry and nonlinear normal mode coordinate transformations. Theoretical spectra predictions are given up to J = 25 and compared with the HITRAN 2012 database, a compilation of line lists derived from analyses of experimental spectra. The results are in very good agreement with the available empirical data, suggesting that a large number of as yet unassigned lines in observed spectra could be identified and modeled using the present approach.

  4. H2, He, and CO2 line-broadening coefficients, pressure shifts and temperature-dependence exponents for the HITRAN database. Part 1: SO2, NH3, HF, HCl, OCS and C2H2

    NASA Astrophysics Data System (ADS)

    Wilzewski, Jonas S.; Gordon, Iouli E.; Kochanov, Roman V.; Hill, Christian; Rothman, Laurence S.

    2016-01-01

    To increase the potential for use of the HITRAN database in astronomy, experimental and theoretical line-broadening coefficients, line shifts and temperature-dependence exponents of molecules of planetary interest broadened by H2, He, and CO2 have been assembled from available peer-reviewed sources. The collected data were used to create semi-empirical models so that every HITRAN line of the studied molecules has corresponding parameters. Since H2 and He are major constituents in the atmospheres of gas giants, and CO2 predominates in atmospheres of some rocky planets with volcanic activity, these spectroscopic data are important for remote sensing studies of planetary atmospheres. In this paper we make the first step in assembling complete sets of these parameters, thereby creating datasets for SO2, NH3, HF, HCl, OCS and C2H2.
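
The temperature-dependence exponents assembled here enter line-shape calculations through the standard HITRAN power-law convention for pressure broadening. A one-line sketch with illustrative placeholder values (not database entries):

```python
# HITRAN-style temperature scaling of a pressure-broadening half-width:
# gamma(T) = gamma(T_ref) * (T_ref / T) ** n.
# The gamma and n values below are invented placeholders, not database data.
def gamma_at(T, gamma_ref, n, T_ref=296.0):
    """Half-width at temperature T given the reference value at T_ref."""
    return gamma_ref * (T_ref / T) ** n

# hypothetical H2-broadened line: gamma(296 K) = 0.08 cm-1/atm, n = 0.7
g_200 = gamma_at(200.0, 0.08, 0.7)
```

Colder gas gives a broader line when n > 0, which is why the exponent matters for outer-planet atmospheres.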

  5. Efficiency and economic benefits of skipjack pole and line (huhate) in central Moluccas, Indonesia

    NASA Astrophysics Data System (ADS)

    Siahainenia, Stevanus M.; Hiariey, Johanis; Baskoro, Mulyono S.; Waeleruny, Wellem

    2017-10-01

    Excess fishing capacity is a crucial problem in marine capture fisheries, and this phenomenon needs to be investigated with regard to the sustainability and development of the fishery. This research was aimed at analyzing the technical efficiency (TE) and computing financial aspects of the skipjack pole and line fishery. Primary data were collected from the owners of fishing units of different gross tonnage (GT) classes, while secondary data were gathered from official publications related to this research. A data envelopment analysis (DEA) approach was applied to estimate technical efficiency, whereas a selected financial analysis was utilized to calculate the economic benefits of the skipjack pole and line business. The fishing units of 26-30 GT provided a higher TE value and also achieved larger economic benefits than the other fishing units. The empirical results indicate that the skipjack pole and line in the 26-30 GT class is a good fishing gear for business development in central Moluccas.
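
A DEA technical-efficiency score is the optimum of a small linear program per decision-making unit. The sketch below solves the input-oriented CCR envelopment form with `scipy.optimize.linprog`; the three-fleet input/output table is invented, not the study's data:

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR DEA, envelopment form: for each unit k,
# minimize theta subject to sum_j lam_j X_j <= theta X_k,
#                           sum_j lam_j Y_j >= Y_k, lam >= 0.
def dea_ccr(X, Y):
    """X: (n, m) inputs; Y: (n, s) outputs. Returns efficiency per unit."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for k in range(n):
        c = np.r_[1.0, np.zeros(n)]                  # variables: [theta, lam]
        A_in = np.c_[-X[k][:, None], X.T]            # inputs scaled by theta
        A_out = np.c_[np.zeros((s, 1)), -Y.T]        # outputs at least Y_k
        A_ub = np.r_[A_in, A_out]
        b_ub = np.r_[np.zeros(m), -Y[k]]
        bounds = [(None, None)] + [(0.0, None)] * n
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        scores.append(res.x[0])
    return np.array(scores)

# three hypothetical fleets: inputs (crew, fuel), output (catch, tonnes)
X = np.array([[10.0, 50.0], [12.0, 60.0], [8.0, 80.0]])
Y = np.array([[100.0], [100.0], [90.0]])
eff = dea_ccr(X, Y)
```

Here the second fleet is dominated by a scaled copy of the first, so its score falls below 1 while the two frontier fleets score exactly 1.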

  6. Developing Skills in Counselling and Psychotherapy: A Scoping Review of Interpersonal Process Recall and Reflecting Team Methods in Initial Therapist Training

    ERIC Educational Resources Information Center

    Meekums, Bonnie; Macaskie, Jane; Kapur, Tricia

    2016-01-01

    The authors conducted a scoping review of the peer-reviewed literature associated with Interpersonal Process Recall (IPR) and Reflecting Team (RT) methods in order to find evidence for their use within skills development in therapist trainings. Inclusion criteria were: empirical research, reviews of empirical research, and responses to these; RT…

  7. How Many Classroom Observations Are Sufficient? Empirical Findings in the Context of a Longitudinal Study

    ERIC Educational Resources Information Center

    Shih, Jeffrey C.; Ing, Marsha; Tarr, James E.

    2013-01-01

    One method to investigate classroom quality is for a person to observe what is happening in the classroom. However, this method raises practical and technical concerns such as how many observations to collect, when to collect these observations and who should collect these observations. The purpose of this study is to provide empirical evidence to…

  8. Revenge versus rapport: Interrogation, terrorism, and torture.

    PubMed

    Alison, Laurence; Alison, Emily

    2017-04-01

    This review begins with the historical context of harsh interrogation methods that have been used repeatedly since the Second World War, despite the legal, ethical and moral sanctions against them and the lack of evidence for their efficacy. Revenge-motivated interrogations (Carlsmith & Sood, 2009) regularly occur in high-conflict, high-uncertainty situations and where there is dehumanization of the enemy. These methods are diametrically opposed to the humanization process required for adopting rapport-based methods, for which there is an increasing corpus of studies evidencing their efficacy. We review this emerging field of study and show how rapport-based methods rely on building alliances and involve a specific set of interpersonal skills on the part of the interrogator. We conclude with 2 key propositions: (a) for psychologists to firmly maintain the Hippocratic Oath of "first do no harm," irrespective of perceived threat and uncertainty, and (b) for wider recognition of the empirical evidence that rapport-based approaches work and revenge tactics do not. Proposition (a) is directly in line with fundamental ethical principles of practice for anyone in a caring profession. Proposition (b) is based on the requirement for psychology to protect and promote human welfare and to base conclusions on objective evidence. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. Calibration-free wavelength-modulation spectroscopy based on a swiftly determined wavelength-modulation frequency response function of a DFB laser.

    PubMed

    Zhao, Gang; Tan, Wei; Hou, Jiajia; Qiu, Xiaodong; Ma, Weiguang; Li, Zhixin; Dong, Lei; Zhang, Lei; Yin, Wangbao; Xiao, Liantuan; Axner, Ove; Jia, Suotang

    2016-01-25

    A methodology for calibration-free wavelength modulation spectroscopy (CF-WMS), based upon an extensive empirical description of the wavelength-modulation frequency response (WMFR) of a DFB laser, is presented. An assessment of the WMFR of a DFB laser by use of an etalon confirms that it consists of two parts: a 1st harmonic component whose amplitude is linear in the sweep and a nonlinear 2nd harmonic component with constant amplitude. Simulations show that, among the various factors that affect the line shape of a background-subtracted peak-normalized 2f signal, such as concentration, the phase shifts between intensity modulation and frequency modulation, and the WMFR, only the last has a decisive impact. Based on this, and to avoid the impractical use of an etalon, a novel method to pre-determine the parameters of the WMFR by fitting to a background-subtracted peak-normalized 2f signal has been developed. The accuracy of the new scheme for determining the WMFR is demonstrated and compared with that of conventional CF-WMS methods by detection of trace acetylene. The results show that the new method provides a four times smaller fitting error than the conventional methods and retrieves concentration more accurately.
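
The 2f signal at the heart of WMS is just the second-harmonic Fourier component of the detected intensity as the laser wavelength is modulated across the line. A toy numpy illustration with invented line and modulation parameters (this only demonstrates the demodulation step, not the paper's WMFR fitting scheme):

```python
import numpy as np

# Toy WMS: a wavelength-modulated laser samples an absorption line, and
# lock-in style demodulation extracts the harmonics. All values are invented.
fm = 5.0e3                                        # modulation frequency, Hz
t = np.linspace(0.0, 2e-3, 20000, endpoint=False) # exactly 10 periods
nu = 0.3 * np.cos(2 * np.pi * fm * t)             # detuning from center (a.u.)
transmission = np.exp(-0.5 / (1.0 + nu**2))       # Lorentzian-like line at nu=0

# project the detected signal onto 1f and 2f reference waveforms
s1f = 2.0 * np.mean(transmission * np.cos(2 * np.pi * fm * t))
s2f = 2.0 * np.mean(transmission * np.cos(2 * np.pi * 2 * fm * t))
```

Because this transmission is an even function of the detuning and the modulation is centered on the line, the odd harmonics vanish and the absorption information appears in the 2f component.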

  10. Hard-Rock Stability Analysis for Span Design in Entry-Type Excavations with Learning Classifiers

    PubMed Central

    García-Gonzalo, Esperanza; Fernández-Muñiz, Zulima; García Nieto, Paulino José; Bernardo Sánchez, Antonio; Menéndez Fernández, Marta

    2016-01-01

    The mining industry relies heavily on empirical analysis for design and prediction. An empirical design method, called the critical span graph, was developed specifically for rock stability analysis in entry-type excavations, based on an extensive case-history database of cut and fill mining in Canada. This empirical span design chart plots the critical span against rock mass rating for the observed case histories and has been accepted by many mining operations for the initial span design of cut and fill stopes. Different types of analysis have been used to classify the observed cases into stable, potentially unstable and unstable groups. The main purpose of this paper is to present a new method for defining rock stability areas of the critical span graph, which applies machine learning classifiers (support vector machine and extreme learning machine). The results show a reasonable correlation with previous guidelines. These machine learning methods are good tools for developing empirical methods, since they make no assumptions about the regression function. With this software, it is easy to add new field observations to a previous database, improving prediction output with the addition of data that consider the local conditions for each mine. PMID:28773653

  11. Hard-Rock Stability Analysis for Span Design in Entry-Type Excavations with Learning Classifiers.

    PubMed

    García-Gonzalo, Esperanza; Fernández-Muñiz, Zulima; García Nieto, Paulino José; Bernardo Sánchez, Antonio; Menéndez Fernández, Marta

    2016-06-29

    The mining industry relies heavily on empirical analysis for design and prediction. An empirical design method, called the critical span graph, was developed specifically for rock stability analysis in entry-type excavations, based on an extensive case-history database of cut and fill mining in Canada. This empirical span design chart plots the critical span against rock mass rating for the observed case histories and has been accepted by many mining operations for the initial span design of cut and fill stopes. Different types of analysis have been used to classify the observed cases into stable, potentially unstable and unstable groups. The main purpose of this paper is to present a new method for defining rock stability areas of the critical span graph, which applies machine learning classifiers (support vector machine and extreme learning machine). The results show a reasonable correlation with previous guidelines. These machine learning methods are good tools for developing empirical methods, since they make no assumptions about the regression function. With this software, it is easy to add new field observations to a previous database, improving prediction output with the addition of data that consider the local conditions for each mine.
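
The classification step described in this record (and the PMC duplicate above) can be sketched without any ML library. The toy stand-in below trains a linear soft-margin SVM by hinge-loss subgradient descent on invented (rock mass rating, span) cases; the stability rule and data are hypothetical, not the paper's case-history database, and the paper's extreme learning machine is omitted:

```python
import numpy as np

# Invented dataset: 200 cases of (rock mass rating RMR, span in m),
# labeled stable (+1) or unstable (-1) by a made-up linear rule.
rng = np.random.default_rng(0)
X = rng.uniform([20.0, 2.0], [80.0, 30.0], size=(200, 2))
y = np.where(X[:, 0] - 2.0 * X[:, 1] > 10.0, 1.0, -1.0)

Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize features
w, b, lam, eta = np.zeros(2), 0.0, 1e-3, 0.1
for _ in range(2000):
    margins = y * (Xs @ w + b)
    mask = margins < 1.0                    # points violating the margin
    grad_w = lam * w
    grad_b = 0.0
    if mask.any():
        grad_w = grad_w - (y[mask][:, None] * Xs[mask]).mean(axis=0)
        grad_b = -y[mask].mean()
    w -= eta * grad_w
    b -= eta * grad_b

train_acc = float(np.mean(np.sign(Xs @ w + b) == y))
```

The fitted hyperplane plays the role of the stable/unstable boundary on the critical span graph; in practice one would use a maintained library and the real case histories.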

  12. Test/semi-empirical analysis of a carbon/epoxy fabric stiffened panel

    NASA Technical Reports Server (NTRS)

    Spier, E. E.; Anderson, J. A.

    1990-01-01

    The purpose of this work in progress is to present a semi-empirical analysis method developed to predict the buckling and crippling loads of carbon/epoxy fabric blade-stiffened panels in compression. This is a hand-analysis method composed of well-known, accepted techniques, logical engineering judgments, and experimental data, which yields conservative solutions. In order to verify this method, a stiffened panel was fabricated and tested. Both the test and analysis results are presented.

  13. The First Empirical Determination of the Fe10+ and Fe13+ Freeze-in Distances in the Solar Corona

    NASA Astrophysics Data System (ADS)

    Boe, Benjamin; Habbal, Shadia; Druckmüller, Miloslav; Landi, Enrico; Kourkchi, Ehsan; Ding, Adalbert; Starha, Pavel; Hutton, Joseph

    2018-06-01

    Heavy ions are markers of the physical processes responsible for the density and temperature distribution throughout the fine-scale magnetic structures that define the shape of the solar corona. One of their properties, whose empirical determination has remained elusive, is the “freeze-in” distance (Rf) where they reach fixed ionization states that are adhered to during their expansion with the solar wind. We present the first empirical inference of Rf for Fe10+ and Fe13+, derived from multi-wavelength imaging observations of the corresponding Fe XI (Fe10+) 789.2 nm and Fe XIV (Fe13+) 530.3 nm emission acquired during the 2015 March 20 total solar eclipse. We find that the two ions freeze in at different heliocentric distances. In polar coronal holes (CHs), Rf is around 1.45 R⊙ for Fe10+ and below 1.25 R⊙ for Fe13+. Along open field lines in streamer regions, Rf ranges from 1.4 to 2 R⊙ for Fe10+ and from 1.5 to 2.2 R⊙ for Fe13+. These first empirical Rf values: (1) reflect the differing plasma parameters between CHs and streamers and structures within them, including prominences and coronal mass ejections; (2) are well below the currently quoted values derived from empirical model studies; and (3) place doubt on the reliability of plasma diagnostics based on the assumption of ionization equilibrium beyond 1.2 R⊙.

  14. Novel Methods for Analysing Bacterial Tracks Reveal Persistence in Rhodobacter sphaeroides

    PubMed Central

    Rosser, Gabriel; Fletcher, Alexander G.; Wilkinson, David A.; de Beyer, Jennifer A.; Yates, Christian A.; Armitage, Judith P.; Maini, Philip K.; Baker, Ruth E.

    2013-01-01

    Tracking bacteria using video microscopy is a powerful experimental approach to probe their motile behaviour. The trajectories obtained contain much information relating to the complex patterns of bacterial motility. However, methods for the quantitative analysis of such data are limited. Most swimming bacteria move in approximately straight lines, interspersed with random reorientation phases. It is therefore necessary to segment observed tracks into swimming and reorientation phases to extract useful statistics. We present novel robust analysis tools to discern these two phases in tracks. Our methods comprise a simple and effective protocol for removing spurious tracks from tracking datasets, followed by analysis based on a two-state hidden Markov model, taking advantage of the availability of mutant strains that exhibit swimming-only or reorientating-only motion to generate an empirical prior distribution. Using simulated tracks with varying levels of added noise, we validate our methods and compare them with an existing heuristic method. To our knowledge this is the first example of a systematic assessment of analysis methods in this field. The new methods are substantially more robust to noise and introduce less systematic bias than the heuristic method. We apply our methods to tracks obtained from the bacterial species Rhodobacter sphaeroides and Escherichia coli. Our results demonstrate that R. sphaeroides exhibits persistence over the course of a tumbling event, which is a novel result with important implications in the study of this and similar species. PMID:24204227
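
The two-state segmentation described above can be illustrated with a minimal Viterbi decoder over a speed series, using Gaussian emission models for "run" and "reorientation" phases. All parameters and the toy track are invented; the paper's method additionally learns its priors from mutant-strain data:

```python
import numpy as np

def viterbi_two_state(speeds, means=(1.0, 10.0), sd=1.5, p_stay=0.95):
    """Most likely state sequence (0 = reorientation, 1 = run) for a speed series."""
    speeds = np.asarray(speeds, dtype=float)
    # log emission probabilities under a Gaussian speed model per state
    log_em = -0.5 * ((speeds[:, None] - np.asarray(means)) / sd) ** 2
    log_trans = np.log(np.array([[p_stay, 1.0 - p_stay],
                                 [1.0 - p_stay, p_stay]]))
    T = len(speeds)
    delta = np.zeros((T, 2))
    psi = np.zeros((T, 2), dtype=int)
    delta[0] = np.log(0.5) + log_em[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans   # scores[i, j]: state i -> j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_em[t]
    states = np.zeros(T, dtype=int)
    states[-1] = int(delta[-1].argmax())
    for t in range(T - 2, -1, -1):                   # backtrack
        states[t] = psi[t + 1, states[t + 1]]
    return states

speeds = [9, 10, 11, 10, 1, 2, 1, 10, 9, 11]   # toy run-tumble-run track
states = viterbi_two_state(speeds)
```

The sticky transition matrix (p_stay near 1) is what suppresses spurious single-frame state flips in noisy tracks.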

  15. Method of empirical dependences in estimation and prediction of activity of creatine kinase isoenzymes in cerebral ischemia

    NASA Astrophysics Data System (ADS)

    Sergeeva, Tatiana F.; Moshkova, Albina N.; Erlykina, Elena I.; Khvatova, Elena M.

    2016-04-01

    Creatine kinase is a key enzyme of energy metabolism in the brain. Cytoplasmic and mitochondrial creatine kinase isoenzymes are known, and mitochondrial creatine kinase exists as a mixture of two oligomeric forms, dimer and octamer. The aim of the investigation was to study the catalytic properties of cytoplasmic and mitochondrial creatine kinase and the use of the method of empirical dependences for predicting the activity of these enzymes in cerebral ischemia. Ischemia was found to be accompanied by changes in the activity of the creatine kinase isoenzymes and in the oligomeric state of the mitochondrial isoform. Multiple regression models were constructed that permit the activity of the creatine kinase system in cerebral ischemia to be studied by calculation. The mathematical method of empirical dependences can therefore be applied for estimation and prediction of the functional state of the brain from the activity of creatine kinase isoenzymes in cerebral ischemia.

  16. Evapotranspiration Calculations for an Alpine Marsh Meadow Site in Three-river Headwater Region

    NASA Astrophysics Data System (ADS)

    Zhou, B.; Xiao, H.

    2016-12-01

    Daily radiation and meteorological data were collected at an alpine marsh meadow site in the Three-River Headwater Region (THR) and used to assess radiation models, after comparing the performance of the Zuo model with the model recommended by FAO56 P-M. Four methods (FAO56 P-M, Priestley-Taylor, Hargreaves, and Makkink) were applied to determine daily reference evapotranspiration (ETr) for the growing season, and empirical models for estimating daily actual evapotranspiration (ETa) were built between the ETr derived from the four methods and the evapotranspiration derived from the Bowen ratio method for alpine marsh meadow in this region. Comparing the performance of the four empirical models by RMSE, MAE, and AI showed that all of them give acceptable estimates of daily ETa for alpine marsh meadow in this region, with the FAO56 P-M and Makkink empirical models performing better than the Priestley-Taylor and Hargreaves models.
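
The model ranking above rests on three standard skill metrics. A small sketch of those metrics on invented ETa values (not the site's data):

```python
import numpy as np

def rmse(obs, est):
    """Root-mean-square error of estimates against observations."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    return float(np.sqrt(np.mean((est - obs) ** 2)))

def mae(obs, est):
    """Mean absolute error."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    return float(np.mean(np.abs(est - obs)))

def agreement_index(obs, est):
    """Willmott's index of agreement (AI); 1 indicates a perfect match."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    num = np.sum((est - obs) ** 2)
    den = np.sum((np.abs(est - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return float(1.0 - num / den)

obs = [3.1, 2.8, 3.5, 4.0, 3.3]   # Bowen-ratio ETa, mm/day (invented)
est = [3.0, 3.0, 3.4, 3.8, 3.5]   # empirical-model ETa, mm/day (invented)
```

A better model shows smaller RMSE and MAE and an AI closer to 1 against the Bowen-ratio reference.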

  17. A Rapid Empirical Method for Estimating the Gross Takeoff Weight of a High Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.

    1999-01-01

    During the cruise segment of the flight mission, aircraft flying at supersonic speeds generate sonic booms that are usually maximum at the beginning of cruise. The pressure signature whose shocks cause these perceived booms can be predicted if the aircraft's geometry, Mach number, altitude, angle of attack, and cruise weight are known. Most methods for estimating aircraft weight, especially beginning-cruise weight, are empirical and based on least-squares-fit equations that best represent a body of component weight data. The empirical method discussed in this report used simplified weight equations based on a study of performance and weight data from conceptual and real transport aircraft. Like other weight-estimation methods, it determines weights at several points in the mission. While these additional weights were found to be useful, it is the determination of beginning-cruise weight that is most important for the prediction of the aircraft's sonic-boom characteristics.
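
The mission bookkeeping behind a beginning-cruise weight estimate can be sketched in a few lines: takeoff weight less an assumed pre-cruise fuel fraction, with cruise fuel burn from the Breguet range equation. All numbers below are invented illustrations, not the report's equations:

```python
import math

# Hypothetical HSCT-like mission weights (lb) and cruise parameters.
W_takeoff = 750_000.0      # gross takeoff weight (assumed)
f_precruise = 0.06         # fuel fraction burned in taxi/takeoff/climb (assumed)
W_begin_cruise = W_takeoff * (1.0 - f_precruise)

# Breguet range equation: W_end = W_begin * exp(-R * c / (V * L_D))
R = 5000.0                 # cruise range, nmi (assumed)
V = 1600.0                 # cruise speed, kt (assumed)
c = 1.2                    # thrust specific fuel consumption, 1/hr (assumed)
L_D = 9.0                  # cruise lift-to-drag ratio (assumed)
W_end_cruise = W_begin_cruise * math.exp(-R * c / (V * L_D))
```

It is the beginning-cruise value that would feed a sonic-boom signature prediction.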

  18. Comparison of safety effect estimates obtained from empirical Bayes before-after study, propensity scores-potential outcomes framework, and regression model with cross-sectional data.

    PubMed

    Wood, Jonathan S; Donnell, Eric T; Porter, Richard J

    2015-02-01

    A variety of different study designs and analysis methods have been used to evaluate the performance of traffic safety countermeasures. The most common study designs and methods include observational before-after studies using the empirical Bayes method and cross-sectional studies using regression models. The propensity scores-potential outcomes framework has recently been proposed as an alternative traffic safety countermeasure evaluation method to address the challenges associated with selection biases that can be part of cross-sectional studies. Crash modification factors derived from the application of all three methods have not yet been compared. This paper compares the results of retrospective, observational evaluations of a traffic safety countermeasure using both before-after and cross-sectional study designs. The paper describes the strengths and limitations of each method, focusing primarily on how each addresses site selection bias, which is a common issue in observational safety studies. The Safety Edge paving technique, which seeks to mitigate crashes related to roadway departure events, is the countermeasure used in the present study to compare the alternative evaluation methods. The results indicated that all three methods yielded results that were consistent with each other and with previous research. The empirical Bayes results had the smallest standard errors. It is concluded that the propensity scores with potential outcomes framework is a viable alternative analysis method to the empirical Bayes before-after study. It should be considered whenever a before-after study is not possible or practical. Copyright © 2014 Elsevier Ltd. All rights reserved.
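
The empirical Bayes step at the core of the before-after design weights a safety performance function (SPF) prediction against the site's observed crash count. A schematic with invented numbers (the study's SPFs and counts are not reproduced here):

```python
# Empirical Bayes expected crash count for one site:
# a weighted average of the SPF prediction and the observed count,
# with the weight set by the negative binomial overdispersion parameter.
def eb_expected(observed, spf_predicted, overdispersion):
    """EB estimate of the expected crash count at a site."""
    w = 1.0 / (1.0 + overdispersion * spf_predicted)   # weight on the SPF
    return w * spf_predicted + (1.0 - w) * observed

# hypothetical site: 12 observed crashes, SPF predicts 8, dispersion k = 0.2
eb_before = eb_expected(12, 8.0, 0.2)
```

The shrinkage toward the SPF is what corrects for the regression-to-the-mean bias that plagues naive before-after comparisons.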

  19. Empirical likelihood-based confidence intervals for mean medical cost with censored data.

    PubMed

    Jeyarajah, Jenny; Qin, Gengsheng

    2017-11-10

    In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with that of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performances than existing methods. Finally, we illustrate our proposed methods with a relevant example. Copyright © 2017 John Wiley & Sons, Ltd.
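
The jackknife ingredient of the proposal can be illustrated on the simplest case, the mean cost: leave-one-out pseudo-values give an estimate and a normal-theory interval. This is only a sketch of the jackknife step with invented costs; the paper's empirical likelihood intervals and censoring adjustments are more involved:

```python
import math

# Jackknife pseudo-value confidence interval for a mean (hypothetical costs).
def jackknife_ci(data, z=1.96):
    n = len(data)
    total = sum(data)
    loo = [(total - x) / (n - 1) for x in data]        # leave-one-out means
    theta = total / n
    pseudo = [n * theta - (n - 1) * m for m in loo]    # pseudo-values
    est = sum(pseudo) / n
    var = sum((p - est) ** 2 for p in pseudo) / (n * (n - 1))
    half = z * math.sqrt(var)
    return est - half, est + half

costs = [1200, 950, 3100, 780, 1500, 2200, 640, 1800]  # invented costs
lo, hi = jackknife_ci(costs)
```

For the mean, the pseudo-values reduce to the data themselves, so the interval is centered on the sample mean; the construction pays off for less tractable statistics.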

  20. Comparison of measured efficiencies of nine turbine designs with efficiencies predicted by two empirical methods

    NASA Technical Reports Server (NTRS)

    English, Robert E; Cavicchi, Richard H

    1951-01-01

    Empirical methods of Ainley and of Kochendorfer and Nettles were used to predict the performances of nine turbine designs, and measured and predicted performances were compared. Appropriate values of the blade-loss parameter were determined for the method of Kochendorfer and Nettles. The measured design-point efficiencies were lower than predicted by as much as 0.09 (Ainley) and 0.07 (Kochendorfer and Nettles). For the method of Kochendorfer and Nettles, appropriate values of the blade-loss parameter ranged from 0.63 to 0.87, and the off-design performance was accurately predicted.

  1. Application of empirical mode decomposition with local linear quantile regression in financial time series forecasting.

    PubMed

    Jaber, Abobaker M; Ismail, Mohd Tahir; Altaher, Alsaidi M

    2014-01-01

    This paper mainly forecasts the daily closing price of stock markets. We propose a two-stage technique that combines empirical mode decomposition (EMD) with the nonparametric method of local linear quantile regression (LLQ). We use the proposed technique, EMD-LLQ, to forecast two stock index time series. Detailed experiments are implemented for the proposed method, in which the EMD-LLQ, EMD, and Holt-Winters methods are compared. The proposed EMD-LLQ model is determined to be superior to the EMD and Holt-Winters methods in predicting the stock closing prices.
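
One of the baselines named above, Holt's linear-trend smoother (the non-seasonal core of Holt-Winters), fits in a few lines. The smoothing constants and toy price series are invented; this is a baseline sketch, not the paper's EMD-LLQ method:

```python
# Holt's linear-trend exponential smoothing: maintain a level and a trend,
# then extrapolate. alpha/beta values are illustrative choices.
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=1):
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend

prices = [100, 102, 101, 105, 107, 110, 112]   # toy closing prices
next_price = holt_forecast(prices)
```

On this upward-trending toy series the one-step forecast continues the trend beyond the last observation.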

  2. Atomic Data for the K-vacancy States of Fe XXIV

    NASA Technical Reports Server (NTRS)

    Bautista, M. A.; Mendoza, C.; Kallman, T. R.; Palmeri, P.

    2003-01-01

    As part of a project to compute improved atomic data for the spectral modeling of iron K lines, we report extensive calculations and comparisons of atomic data for K-vacancy states in Fe XXIV. The data sets include: (i) energy levels, line wavelengths, radiative and Auger rates; (ii) inner-shell electron impact excitation rates and (iii) fine structure inner-shell photoionization cross sections. The calculations of energy levels and radiative and Auger rates have involved a detailed study of orbital representations, core relaxation, configuration interaction, relativistic corrections, cancellation effects and semi-empirical corrections. It is shown that a formal treatment of the Breit interaction is essential to render the important magnetic correlations that take part in the decay pathways of this ion. As a result, the accuracy of the present A-values is firmly ranked at better than 10% while that of the Auger rates at only 15%. The calculations of collisional excitation and photoionization cross sections take into account the effects of radiation and spectator Auger dampings. In the former, these effects cause significant attenuation of resonances leading to a good agreement with a simpler method where resonances are excluded. In the latter, resonances converging to the K threshold display symmetric profiles of constant width that causes edge smearing.

  3. Atomic Data for the K-Vacancy States of Fe XXIV

    NASA Technical Reports Server (NTRS)

    Bautista, M. A.; Mendoza, C.; Kallman, T. R.; Palmeri, P.

    2002-01-01

    As part of a project to compute improved atomic data for the spectral modeling of iron K lines, we report extensive calculations and comparisons of atomic data for K-vacancy states in Fe XXIV. The data sets include: (i) energy levels, line wavelengths, radiative and Auger rates; (ii) inner-shell electron impact excitation rates and (iii) fine structure inner-shell photoionization cross sections. The calculations of energy levels and radiative and Auger rates have involved a detailed study of orbital representations, core relaxation, configuration interaction, relativistic corrections, cancellation effects and semi-empirical corrections. It is shown that a formal treatment of the Breit interaction is essential to render the important magnetic correlations that take part in the decay pathways of this ion. As a result, the accuracy of the present A-values is firmly ranked at better than 10% while that of the Auger rates at only 15%. The calculations of collisional excitation and photoionization cross sections take into account the effects of radiation and spectator Auger dampings. In the former, these effects cause significant attenuation of resonances leading to a good agreement with a simpler method where resonances are excluded. In the latter, resonances converging to the K threshold display symmetric profiles of constant width that causes edge smearing.

  4. Origin of Lβ20 satellite in higher Z elements

    NASA Astrophysics Data System (ADS)

    Trivedi, Rajeev K.; Kendurkar, Renuka; Shrivastava, B. D.

    2017-05-01

    One of the satellite lines accompanying the intense diagram line Lβ2 (L3-N5) on its higher-energy side is the satellite β20, observed in the elements from 71Lu to 84Po and in 88Ra, 90Th and 92U. Shahlot and Soni have investigated this satellite theoretically and found all the possible transitions in the jj coupling scheme using Hartree-Fock-Slater formulae. A perusal of their results shows that in some cases the agreement between theoretical and experimental values is not good. Hence, in the present investigation we have tried alternative calculations using the tables of Parente et al. While these calculations are relativistic ab initio calculations, those of Shahlot and Soni are non-relativistic semi-empirical calculations. Considering the same grouping of transition schemes as assigned by Shahlot and Soni, we have carried out calculations using the tables of Parente et al., which give transition energies for only 11 elements; transition energies for the intermediate elements have been obtained by linear interpolation. Our calculations show better agreement with the experimental values than those of Shahlot and Soni. However, in some cases our calculations also do not yield good results, and this is discussed.
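
The interpolation step for intermediate elements is a one-call operation once the tabulated atomic numbers and energies are in hand. A sketch with invented numbers (not the Parente et al. tables):

```python
import numpy as np

# Transition energies tabulated at a few atomic numbers Z (values invented),
# linearly interpolated for an intermediate element as described above.
Z_tab = np.array([71, 74, 78, 82, 84])            # tabulated elements (example)
E_tab = np.array([10.1, 11.0, 12.3, 13.7, 14.4])  # energies, keV (invented)

Z_query = 80
E_est = float(np.interp(Z_query, Z_tab, E_tab))
```

For Z = 80 this falls halfway between the Z = 78 and Z = 82 entries, giving the midpoint energy.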

  5. Multiple pure tone noise prediction

    NASA Astrophysics Data System (ADS)

    Han, Fei; Sharma, Anupam; Paliath, Umesh; Shieh, Chingwei

    2014-12-01

    This paper presents a fully numerical method for predicting multiple pure tones, also known as “Buzzsaw” noise. It consists of three steps that account for noise source generation, nonlinear acoustic propagation with hard as well as lined walls inside the nacelle, and linear acoustic propagation outside the engine. Noise generation is modeled by steady, part-annulus computational fluid dynamics (CFD) simulations. A linear superposition algorithm is used to construct full-annulus shock/pressure pattern just upstream of the fan from part-annulus CFD results. Nonlinear wave propagation is carried out inside the duct using a pseudo-two-dimensional solution of Burgers' equation. Scattering from nacelle lip as well as radiation to farfield is performed using the commercial solver ACTRAN/TM. The proposed prediction process is verified by comparing against full-annulus CFD simulations as well as against static engine test data for a typical high bypass ratio aircraft engine with hardwall as well as lined inlets. Comparisons are drawn against nacelle unsteady pressure transducer measurements at two axial locations as well as against near- and far-field microphone array measurements outside the duct. This is the first fully numerical approach (no experimental or empirical input is required) to predict multiple pure tone noise generation, in-duct propagation and far-field radiation. It uses measured blade coordinates to calculate MPT noise.

  6. Theoretical geology

    NASA Astrophysics Data System (ADS)

    Mikeš, Daniel

    2010-05-01

    Theoretical geology. Present-day geology is mostly empirical in nature. I claim that geology is by nature complex and that the empirical approach is bound to fail. Let us consider the input to be the set of ambient conditions and the output to be the sedimentary rock record. The output can only be deduced from the input if the relation from input to output is known. The fundamental question is therefore the following: can one predict the output from the input, i.e., can one predict the behaviour of a sedimentary system? If one can, then the empirical/deductive method has a chance; if one cannot, then that method is bound to fail. The fundamental problem to solve is therefore: how can the behaviour of a sedimentary system be predicted? It is interesting to observe that this question is never asked, and many a study is conducted by the empirical/deductive method; it seems that this method has been accepted as appropriate without question. It is, however, easy to argue that a sedimentary system is by nature complex, that several input parameters vary at the same time, and that they can create similar output in the rock record. It follows trivially from these first principles that in such a case the deductive solution cannot be unique. At the same time, several geological methods depart precisely from the assumption that one particular variable is the dictator/driver and that the others are constant, even though the data do not support such an assumption. The method of "sequence stratigraphy" is a typical example of such a dogma. It can easily be argued that all interpretation resulting from a method built on uncertain or wrong assumptions is erroneous. Still, this method has survived for many years, notwithstanding all the criticism it has received. This is just one example from the present-day geological world, and it is not unique. 
Even the alternative methods criticising sequence stratigraphy depart from the same erroneous assumptions and do not solve the fundamental issue that lies at the base of the problem. This problem is straightforward and obvious: a sedimentary system is inherently four-dimensional (three spatial dimensions plus one temporal dimension). Any method using fewer dimensions is bound to fail to describe the evolution of a sedimentary system. It is indicative of the present-day geological world that such fundamental issues are overlooked; the only explanation one can point to is the so-called "rationality" of today's society. Simple common sense leads to the conclusion that in this case the empirical method is bound to fail and that the only method that can solve the problem is the theoretical approach. This reasoning is completely trivial for the traditional exact sciences, such as physics and mathematics, and for applied sciences such as engineering; not so, however, for geology, a science that was traditionally descriptive and jumped to empirical science, skipping the stage of theoretical science. I argue that the gap of theoretical geology has been left open and needs to be filled. Every discipline in geology lacks a theoretical base. This base can only be supplied by the theoretical/inductive approach, not by the empirical/deductive approach. Once a critical mass of geologists realises this flaw in today's geology, we can start solving the fundamental problems of geology.

  7. In My Own Time: Tuition Fees, Class Time and Student Effort in Non-Formal (Or Continuing) Education

    ERIC Educational Resources Information Center

    Bolli, Thomas; Johnes, Geraint

    2015-01-01

    We develop and empirically test a model which examines the impact of changes in class time and tuition fees on student effort in the form of private study. The data come from the European Union's Adult Education Survey, conducted over the period 2005-2008. We find, in line with theoretical predictions, that the time students devote to private…

  8. Astronomy in Inca Empire: a Ceque Based Calendar

    NASA Astrophysics Data System (ADS)

    Correa, Nathalia Silva Gomes; de Nader, R. V.

    2007-08-01

    This work is a brief report on different kinds of arrangements and organization of the Inca astronomical calendar, examining archaeological vestiges in Cuzco such as observatories aligned with celestial objects that were observed for the reckoning of time. We also analyze the ceque lines that can be associated with these techniques of Inca astronomical observation, according to the chroniclers and to research in archaeoastronomy.

  9. Arts Education as a Vehicle for Social Change: An Empirical Study of Eco Arts in the K-12 Classroom

    ERIC Educational Resources Information Center

    Sams, Jeniffer; Sams, Doreen

    2017-01-01

    Arts education has been part of the United States K-12 educational system for over a century. However, recent administrative policy decisions addressed the economic bottom line and the 1983 report, "A Nation at Risk," and complied with the "No Child Left Behind (NCLB) Act of 2001" (U.S. Department of Education, 2001). These…

  10. A multiple indicator solution approach to endogeneity in discrete-choice models for environmental valuation.

    PubMed

    Mariel, Petr; Hoyos, David; Artabe, Alaitz; Guevara, C Angelo

    2018-08-15

    Endogeneity is an often neglected issue in empirical applications of discrete choice modelling despite its severe consequences in terms of inconsistent parameter estimation and biased welfare measures. This article analyses the performance of the multiple indicator solution method to deal with endogeneity arising from omitted explanatory variables in discrete choice models for environmental valuation. We also propose and illustrate a factor analysis procedure for the selection of the indicators in practice. Additionally, the performance of this method is compared with the recently proposed hybrid choice modelling framework. In an empirical application we find that the multiple indicator solution method and the hybrid model approach provide similar results in terms of welfare estimates, although the multiple indicator solution method is more parsimonious and notably easier to implement. The empirical results open a path to explore the performance of this method when endogeneity is thought to have a different cause or under a different set of indicators. Copyright © 2018 Elsevier B.V. All rights reserved.
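The omitted-variable endogeneity and the indicator-based correction described above can be illustrated with a much-simplified linear-regression analogue (not the paper's discrete-choice multiple-indicator estimator); all data below are simulated:

```python
import numpy as np

# Simplified linear-model analogue of endogeneity from an omitted variable,
# and its mitigation with a noisy indicator (proxy). Simulated data only;
# this is not the discrete-choice multiple indicator solution estimator.
rng = np.random.default_rng(0)
n = 20000

q = rng.normal(size=n)                            # latent (omitted) variable
price = 0.8 * q + rng.normal(size=n)              # endogenous regressor
y = -1.0 * price + 1.5 * q + rng.normal(size=n)   # true price effect is -1.0

indicator = q + rng.normal(scale=0.5, size=n)     # noisy indicator of q

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_naive = ols(np.column_stack([np.ones(n), price]), y)[1]
b_proxy = ols(np.column_stack([np.ones(n), price, indicator]), y)[1]
# b_naive is badly biased; b_proxy moves much closer to the true -1.0
```

With these parameters the naive estimate sits far above the true effect, while conditioning on the indicator removes most (though not all) of the bias, which is the intuition behind indicator-based corrections.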

  11. Identification of Variables and Factors Impacting Consumer Behavior in On-line Shopping in India: An Empirical Study

    NASA Astrophysics Data System (ADS)

    Chhikara, Sudesh

    On-line shopping is a recent phenomenon in the field of e-business and is definitely going to be the future of shopping in the world. Most companies run on-line portals to sell their products and services. Though on-line shopping is very common outside India, its growth in the Indian market, a large and strategic consumer market, is still not in line with the global market. The potential growth of on-line shopping triggered the idea of conducting a study of on-line shopping in India. The present research paper uses an exploratory study to highlight the categories of factors and variables affecting consumer behavior towards on-line shopping in India. The data were collected through in-depth interviews with a sample of 41 respondents from Delhi, Mumbai, Chennai, and Bangalore. The results of the study show that on-line shopping in India is shaped by five categories of factors: demographic, psychographic, on-line shopping features and policies, technological, and security factors. The results are used to present a comprehensive model of on-line shopping that researchers and practitioners could use in future studies in this area. Brief operational definitions of all the factors and variables affecting on-line shopping in India are also given, and the practical implications of the study are elucidated.

  12. The star formation rate cookbook at 1 < z < 3: Extinction-corrected relations for UV and [OII]λ3727 luminosities

    NASA Astrophysics Data System (ADS)

    Talia, M.; Cimatti, A.; Pozzetti, L.; Rodighiero, G.; Gruppioni, C.; Pozzi, F.; Daddi, E.; Maraston, C.; Mignoli, M.; Kurk, J.

    2015-10-01

    Aims: In this paper we use a well-controlled spectroscopic sample of galaxies at 1 < z < 3…

  13. High Velocity Jet Noise Source Location and Reduction. Task 3 - Experimental Investigation of Suppression Principles. Volume I. Suppressor Concepts Optimization

    DTIC Science & Technology

    1978-12-01

    multinational corporation in the 1960s placed extreme emphasis on the need for effective and efficient noise suppression devices. Phase I of work...through model and engine testing applicable to an afterburning turbojet engine. Suppressor designs were based primarily on empirical methods. Phase II...using "ray" acoustics. This method is in contrast to the purely empirical method, which consists of the curve-fitting of normalized data. In order to

  14. Analysis methods for Kevlar shield response to rotor fragments

    NASA Technical Reports Server (NTRS)

    Gerstle, J. H.

    1977-01-01

    Several empirical and analytical approaches to rotor burst shield sizing are compared and principal differences in metal and fabric dynamic behavior are discussed. The application of transient structural response computer programs to predict Kevlar containment limits is described. For preliminary shield sizing, present analytical methods are useful if insufficient test data for empirical modeling are available. To provide other information useful for engineering design, analytical methods require further developments in material characterization, failure criteria, loads definition, and post-impact fragment trajectory prediction.

  15. Empirical Methods for Identifying Specific Peptide-protein Interactions for Smart Reagent Development

    DTIC Science & Technology

    2012-09-01

    orientated immobilization of proteins,” Biotechnology Progress, 22(2), 401-405 (2006). [26] J. M. Kogot, D. A. Sarkes, I. Val-Addo et al...Empirical Methods for Identifying Specific Peptide-protein Interactions for Smart Reagent Development by Joshua M. Kogot, Deborah A. Sarkes...Peptide-protein Interactions for Smart Reagent Development Joshua M. Kogot, Deborah A. Sarkes, Dimitra N. Stratis-Cullum, and Paul M

  16. Information-Processing Theory and Perspectives on Development: A Look at Concepts and Methods--The View of a Developmental Ethologist.

    ERIC Educational Resources Information Center

    Jesness, Bradley

    This paper examines concepts in information-processing theory which are likely to be relevant to development and characterizes the methods and data upon which the concepts are based. Among the concepts examined are those which have slight empirical grounds. Other concepts examined are those which seem to have empirical bases but which are…

  17. Delimiting the Unconceived

    NASA Astrophysics Data System (ADS)

    Dawid, Richard

    2018-01-01

    It has been argued in Dawid (String theory and the scientific method, Cambridge University Press, Cambridge, [4]) that physicists at times generate substantial trust in an empirically unconfirmed theory based on observations that lie beyond the theory's intended domain. A crucial role in the reconstruction of this argument of "non-empirical confirmation" is played by limitations to scientific underdetermination. The present paper discusses the question as to how generic the role of limitations to scientific underdetermination really is. It is argued that assessing such limitations is essential for generating trust in any theory's predictions, be it empirically confirmed or not. The emerging view suggests that empirical and non-empirical confirmation are more closely related to each other than one may expect at first glance.

  19. Sliding down the U-shape? A dynamic panel investigation of the age-well-being relationship, focusing on young adults.

    PubMed

    Piper, Alan T

    2015-10-01

    Much of the work within economics attempting to understand the relationship between age and well-being has focused on the U-shape, whether it exists and, more recently, potential reasons for its existence. This paper focuses on one part of the lifecycle rather than the whole: young people. This focus offers a better understanding of the age-well-being relationship for young people, and helps with increasing general understanding regarding the U-shape itself. The empirical estimations employ both static and dynamic panel estimations, with the latter preferred for several reasons. The empirical results are in line with the U-shape, and the results from the dynamic analysis indicate that this result is a lifecycle effect. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. An Empirical Model of the Variations of the Solar Lyman-Alpha Spectral Irradiance

    NASA Astrophysics Data System (ADS)

    Kretzschmar, M.; Snow, M. A.; Curdt, W.

    2017-12-01

    We propose a simple model that computes the spectral profile of the solar irradiance in the hydrogen Lyman-alpha line, H Ly-α (121.567 nm), from 1947 to the present. Such a model is relevant for the study of many astronomical environments, from planetary atmospheres to the interplanetary medium, and can be used to improve the analysis of data from missions like MAVEN or GOES-16. This empirical model is based on the SOHO/SUMER observations of the Ly-α irradiance over solar cycle 23, which we analyze in detail, and relies on the Ly-α integrated-irradiance composite. The model reproduces the temporal variability of the spectral profile and matches the independent SORCE/SOLSTICE spectral observations from 2003 to 2007 with an accuracy better than 10%.
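A toy version of such an irradiance-driven profile model might rescale a fixed self-reversed line shape so that it integrates to the composite irradiance value; the shape parameters below are invented placeholders, not the published SUMER-based fit:

```python
import numpy as np

# Toy irradiance-driven profile model: a fixed self-reversed Ly-alpha shape
# rescaled so its integral matches the composite irradiance. The shape
# parameters are invented, not the published SUMER-based fit.
wl = np.linspace(121.2, 121.9, 400)                 # wavelength grid (nm)
dw = wl[1] - wl[0]
core = np.exp(-0.5 * ((wl - 121.567) / 0.03) ** 2)
reversal = 0.3 * np.exp(-0.5 * ((wl - 121.567) / 0.008) ** 2)
shape = core - reversal                             # self-reversed profile

def spectral_irradiance(e_total):
    """Scale the fixed shape so its integral equals the composite e_total."""
    return e_total * shape / (shape.sum() * dw)

spec = spectral_irradiance(6.5e-3)    # e.g. a 6.5 mW/m^2 composite value
```

A real model would also let the shape parameters vary with solar activity; here only the amplitude responds to the composite.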

  1. Point Cloud Generation from Aerial Image Data Acquired by a Quadrocopter Type Micro Unmanned Aerial Vehicle and a Digital Still Camera

    PubMed Central

    Rosnell, Tomi; Honkavaara, Eija

    2012-01-01

    The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter-type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one based on BAE Systems’ classical commercial photogrammetric software SOCET SET, and another built on Microsoft®’s Photosynth™ web service. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, although some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, largely because the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and show that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations on imaging-sensor properties and on the collection and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479
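The geometric core of point-cloud generation from matched images is ray intersection (triangulation). A minimal linear (DLT) two-view triangulation sketch, using simple synthetic pinhole cameras rather than calibrated UAV ones:

```python
import numpy as np

# Linear (DLT) triangulation of a single tie point from two views, the core
# geometric step behind point-cloud generation from overlapping images. The
# camera matrices are synthetic pinhole cameras, not calibrated UAV ones.
def triangulate(P1, P2, x1, x2):
    """Intersect two image rays; x1 and x2 are (u, v) image observations."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]       # null vector of A (homogeneous point)
    return X[:3] / X[3]

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # one unit to the side
X_true = np.array([0.2, -0.1, 5.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)   # recovers X_true for noise-free input
```

Dense matchers repeat this intersection (with noise-robust refinements and bundle adjustment) for millions of matched pixels.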

  3. ECVAM and new technologies for toxicity testing.

    PubMed

    Bouvier d'Yvoire, Michel; Bremer, Susanne; Casati, Silvia; Ceridono, Mara; Coecke, Sandra; Corvi, Raffaella; Eskes, Chantra; Gribaldo, Laura; Griesinger, Claudius; Knaut, Holger; Linge, Jens P; Roi, Annett; Zuang, Valérie

    2012-01-01

    The development of alternative empirical (testing) and non-empirical (non-testing) methods to traditional toxicological tests for complex human health effects is a tremendous task. Toxicants may potentially interfere with a vast number of physiological mechanisms thereby causing disturbances on various levels of complexity of human physiology. Only a limited number of mechanisms relevant for toxicity ('pathways' of toxicity) have been identified with certainty so far and, presumably, many more mechanisms by which toxicants cause adverse effects remain to be identified. Recapitulating in empirical model systems (i.e., in vitro test systems) all those relevant physiological mechanisms prone to be disturbed by toxicants and relevant for causing the toxicity effect in question poses an enormous challenge. First, the mechanism(s) of action of toxicants in relation to the most relevant adverse effects of a specific human health endpoint need to be identified. Subsequently, these mechanisms need to be modeled in reductionist test systems that allow assessing whether an unknown substance may operate via a specific (array of) mechanism(s). Ideally, such test systems should be relevant for the species of interest, i.e., based on human cells or modeling mechanisms present in humans. Since much of our understanding about toxicity mechanisms is based on studies using animal model systems (i.e., experimental animals or animal-derived cells), designing test systems that model mechanisms relevant for the human situation may be limited by the lack of relevant information from basic research. New technologies from molecular biology and cell biology, as well as progress in tissue engineering, imaging techniques and automated testing platforms hold the promise to alleviate some of the traditional difficulties associated with improving toxicity testing for complex endpoints. 
Such new technologies are expected (1) to accelerate the identification of toxicity pathways with human relevance that need to be modeled in test methods for toxicity testing; (2) to enable the reconstruction of reductionist test systems modeling, at a reduced level of complexity, the target system/organ of interest (e.g., through tissue engineering, use of human-derived cell lines and stem cells, etc.); (3) to allow the measurement of specific mechanisms relevant for a given health endpoint in such test methods (e.g., through gene and protein expression, changes in metabolites, receptor activation, changes in neural activity, etc.); and (4) to allow toxicity mechanisms to be measured at higher throughput rates through the use of automated testing. In this chapter, we discuss the potential impact of new technologies on the development, optimization and use of empirical testing methods, grouped according to important toxicological endpoints. We highlight, from an ECVAM perspective, the areas of topical toxicity, skin absorption, reproductive and developmental toxicity, carcinogenicity/genotoxicity, sensitization, hematopoiesis and toxicokinetics, and discuss strategic developments including ECVAM's database service on alternative methods. Neither the areas of toxicity discussed nor the highlighted new technologies represent comprehensive listings, which would be an impossible endeavor in the context of a book chapter. However, we feel that these areas are of utmost importance, and we predict that new technologies are likely to contribute significantly to test development in these fields. We summarize which new technologies are expected to contribute to the development of new alternative testing methods over the next few years and point out current and planned ECVAM projects for each of these areas.

  4. X-shooter spectroscopy of young stellar objects. VI. H I line decrements

    NASA Astrophysics Data System (ADS)

    Antoniucci, S.; Nisini, B.; Giannini, T.; Rigliaco, E.; Alcalá, J. M.; Natta, A.; Stelzer, B.

    2017-03-01

    Context. Hydrogen recombination emission lines commonly observed in accreting young stellar objects are a powerful tracer of the gas conditions in the circumstellar structures (accretion columns, and winds or jets). Aims: Here we study the H I decrements and line profiles of the Balmer and Paschen H I lines detected in the X-shooter spectra of a homogeneous sample of 36 T Tauri objects in Lupus, whose accretion and stellar properties were derived in a previous work. We aim to obtain information on the physical conditions of the H I gas in order to delineate a consistent picture of the H I emission mechanisms in pre-main-sequence low-mass stars (M∗ < 2 M⊙). Methods: We empirically classified the sources based on their H I line profiles and decrements. We identified four Balmer decrement types (classified as 1, 2, 3, and 4) and three Paschen decrement types (A, B, and C), characterised by different shapes. We first discussed the connection between the decrement types and the source properties and then compared the observed decrements with predictions from recently published local line excitation models. Results: We identify a few groups of sources that display similar H I properties. One third of the objects show lines with narrow symmetric profiles and present similar Balmer and Paschen decrements (straight decrements, types 2 and A). Lines in these sources are consistent with optically thin emission from gas with hydrogen densities of order 10⁹ cm⁻³ and 5000 K < T < 15 000 K. These objects are associated with low mass accretion rates. Type 4 (L-shaped) Balmer and type B Paschen decrements are found in conjunction with very wide line profiles and are characteristic of strong accretors, with optically thick emission from high-density gas (log nH > 11, with nH in cm⁻³). 
Type 1 (curved) Balmer decrements are observed only in three sub-luminous sources viewed edge-on, so we speculate that these are actually type 2 decrements that appear reddened because a residual amount of extinction in the line emission region has been neglected. About 20% of the objects present type 3 (bumpy) Balmer decrements, which, however, cannot be reproduced with current models. Based on observations collected at the European Southern Observatory at Paranal, Chile, under programmes 084.C-0269(A), 085.C-238(A), 086.C-0173(A), 087.C-0244(A), and 089.C-0143(A).
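A standard use of a Balmer decrement, distinct from the type-1 to type-4 shape classification above, is estimating reddening by comparing the observed F(Hα)/F(Hβ) ratio with the Case B value of 2.86; the extinction-curve coefficients below are typical assumed values, not ones taken from this paper:

```python
import math

# Balmer-decrement reddening estimate: compare the observed F(Halpha)/F(Hbeta)
# ratio with the Case B value of 2.86 assumed for typical nebular conditions.
# The extinction-curve values k(Hbeta), k(Halpha) are assumed, Cardelli-like.
K_HBETA, K_HALPHA = 3.61, 2.53

def ebv_from_decrement(f_halpha, f_hbeta, intrinsic=2.86):
    """Return E(B-V) in magnitudes from observed Halpha and Hbeta fluxes."""
    ratio = f_halpha / f_hbeta
    return 2.5 / (K_HBETA - K_HALPHA) * math.log10(ratio / intrinsic)
```

For example, an observed ratio of 4.0 gives roughly E(B-V) ≈ 0.34 mag, while a ratio equal to the intrinsic 2.86 gives zero reddening.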

  5. Device-associated infections in the pediatric intensive care unit at the American University of Beirut Medical Center.

    PubMed

    Ismail, Ali; El-Hage-Sleiman, Abdul-Karim; Majdalani, Marianne; Hanna-Wakim, Rima; Kanj, Souha; Sharara-Chami, Rana

    2016-06-30

    Device-associated healthcare-associated infections (DA-HAIs) are a principal threat to patient safety in intensive care units (ICUs). The primary objective of this study was to identify the most common DA-HAIs in the pediatric intensive care unit (PICU) at the American University of Beirut Medical Center (AUBMC). Length of stay (LOS) and mortality, antimicrobial resistance patterns, and the suitability of empiric antibiotic choices for DA-HAIs according to local resistance patterns were also studied. This retrospective study included all patients admitted to the PICU at AUBMC between January 2007 and December 2011 who had a central line, an endotracheal tube, and/or a Foley catheter in place. Data were extracted from the patients' medical records through chart review. A total of 22 patients were identified with 25 central line-associated bloodstream infections (CLABSI), 25 ventilator-associated pneumonias (VAP), and 9 catheter-associated urinary tract infections (CAUTIs). The causative organisms, their resistance patterns, and the appropriateness of empiric antimicrobial therapy were reported. Gram-negative pathogens were found in 53% of the DA-HAIs, Gram-positive ones in 27%, and fungal organisms in 20%. A total of 80% of K. pneumoniae isolates were extended-spectrum beta-lactamase (ESBL) producers, and 30% of Pseudomonas isolates were multidrug resistant. No methicillin-resistant Staphylococcus aureus (MRSA) or vancomycin-resistant enterococci (VRE) were isolated. Based on culture results, the choice of empiric antimicrobial therapy was appropriate in 64% of the DA-HAIs. Once a care bundle approach is adopted in our PICU, DA-HAIs are expected to decrease further.

  6. Finding the Onset of Convection in Main Sequence Stars

    NASA Technical Reports Server (NTRS)

    Simon, Theodore

    2003-01-01

    The primary goal of the work performed under this grant was to locate, if possible, the onset of subphotospheric convection zones in normal main sequence stars by using the presence of emission in high temperature lines in far ultraviolet spectra from the FUSE spacecraft as a proxy for convection. The change in stellar structure represented by this boundary between radiative and convective stars has always been difficult to find by other empirical means. A search was conducted through observations of a sample of A-type stars, which were somewhat hotter and more massive than the Sun, and which were carefully chosen to bridge the theoretically expected radiative/convective boundary line along the main sequence.

  7. An isocenter estimation tool for proton gantry alignment

    NASA Astrophysics Data System (ADS)

    Hansen, Peter; Hu, Dongming

    2017-12-01

    A novel tool has been developed to automate the process of locating the isocenter, center of rotation, and sphere of confusion of a proton therapy gantry. The tool uses a Radian laser tracker to estimate how the coordinate frame of the front-end beam-line components changes as the gantry rotates. The coordinate frames serve as an empirical model of gantry flexing. Using this model, the alignment of the front and back-end beam-line components can be chosen to minimize the sphere of confusion, improving the overall beam positioning accuracy of the gantry. This alignment can be performed without the beam active, improving the efficiency of installing new systems at customer sites.
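The least-squares flavour of such an isocenter estimate can be sketched as the point minimizing the summed squared distance to the measured beam axes; the axes below are synthetic examples, not laser-tracker data:

```python
import numpy as np

# Least-squares isocenter: the point minimizing summed squared distance to a
# set of measured beam axes (anchor point a_i, unit direction d_i) taken at
# several gantry angles. The axes below are synthetic, not tracker output.
def isocenter(points, dirs):
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for a, d in zip(points, dirs):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projector orthogonal to the axis
        A += M
        b += M @ a
    return np.linalg.solve(A, b)

# Four beam axes (gantry at 0, 90, 180, 270 degrees) all passing through c.
c = np.array([0.5, -0.3, 0.2])
angles = np.deg2rad([0.0, 90.0, 180.0, 270.0])
dirs = [np.array([np.cos(t), np.sin(t), 0.0]) for t in angles]
points = [c - 2.0 * d for d in dirs]
center = isocenter(points, dirs)         # recovers c for these ideal axes
```

With real, flexing gantries the axes do not intersect exactly, and the residual distances from the fitted point to the axes characterize the sphere of confusion.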

  8. The globular cluster system of NGC 1316. IV. Nature of the star cluster complex SH2

    NASA Astrophysics Data System (ADS)

    Richtler, T.; Husemann, B.; Hilker, M.; Puzia, T. H.; Bresolin, F.; Gómez, M.

    2017-05-01

    Context. The light of the merger remnant NGC 1316 (Fornax A) is dominated by old and intermediate-age stars. The only sign of current star formation in this big galaxy is the H II region SH2, an isolated star cluster complex with a ring-like morphology and an estimated age of 0.1 Gyr at a galactocentric distance of about 35 kpc. A nearby intermediate-age globular cluster, surrounded by weak line emission and a few more young star clusters, is kinematically associated. The origin of this complex is enigmatic. Aims: We want to investigate the nature of this star cluster complex. The nebular emission lines permit a metallicity determination that can discriminate between a dwarf galaxy and other possible precursors. Methods: We used the Integral Field Unit (IFU) of the VIMOS instrument at the Very Large Telescope of the European Southern Observatory in high dispersion mode to study the morphology, kinematics, and metallicity, employing line maps, velocity maps, and line diagnostics of a few characteristic spectra. Results: The line ratios of different spectra vary, indicating highly structured H II regions, but define a locus of uniform metallicity. The strong-line diagnostic diagrams and empirical calibrations point to a nearly solar or even super-solar oxygen abundance. The velocity dispersion of the gas is highest in the region offset from the bright clusters. Star formation may be active at a low level. There is evidence for a large-scale disk-like structure in the region of SH2, which would make the similar radial velocity of the nearby globular cluster easier to understand. Conclusions: The high metallicity does not fit a dwarf galaxy as progenitor. We favour the scenario of a free-floating gaseous complex having its origin in the merger 2 Gyr ago. Over a long period the densities increased secularly until finally the threshold for star formation was reached. 
SH2 illustrates how massive star clusters can form outside starbursts and without a considerable field population. Based on observations taken at the European Southern Observatory, Cerro Paranal, Chile, under the programme 082.B-0680, 076.B-0154, 065.N-0166, 065.N-0459.
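As an example of the empirical strong-line calibrations the abstract refers to, the widely used O3N2 calibration of Pettini & Pagel (2004) maps two emission-line ratios to an oxygen abundance; the paper may well use other calibrations, and the input ratios below are illustrative:

```python
import math

# One widely used empirical strong-line calibration (Pettini & Pagel 2004,
# O3N2 index), shown as an example of the calibrations mentioned in the
# abstract; the input line ratios here are illustrative values.
def oxygen_abundance_o3n2(oiii_hbeta, nii_halpha):
    """Return 12 + log(O/H) from [OIII]5007/Hbeta and [NII]6584/Halpha."""
    o3n2 = math.log10(oiii_hbeta / nii_halpha)
    return 8.73 - 0.32 * o3n2

oh = oxygen_abundance_o3n2(1.5, 1.2)   # close to the solar value of ~8.69
```

Ratios near unity in both lines thus indicate near-solar oxygen abundance, the regime the abstract reports for SH2.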

  9. The Universe Is Reionizing at z ∼ 7: Bayesian Inference of the IGM Neutral Fraction Using Lyα Emission from Galaxies

    NASA Astrophysics Data System (ADS)

    Mason, Charlotte A.; Treu, Tommaso; Dijkstra, Mark; Mesinger, Andrei; Trenti, Michele; Pentericci, Laura; de Barros, Stephane; Vanzella, Eros

    2018-03-01

    We present a new flexible Bayesian framework for directly inferring the fraction of neutral hydrogen in the intergalactic medium (IGM) during the Epoch of Reionization (EoR, z ∼ 6–10) from detections and non-detections of Lyman Alpha (Lyα) emission from Lyman Break galaxies (LBGs). Our framework combines sophisticated reionization simulations with empirical models of the interstellar medium (ISM) radiative transfer effects on Lyα. We assert that the Lyα line profile emerging from the ISM has an important impact on the resulting transmission of photons through the IGM, and that these line profiles depend on galaxy properties. We model this effect by considering the peak velocity offset of Lyα lines from host galaxies’ systemic redshifts, which are empirically correlated with UV luminosity and redshift (or halo mass at fixed redshift). We use our framework on the sample of LBGs presented in Pentericci et al. and infer a global neutral fraction at z ∼ 7 of x̄_HI = 0.59^{+0.11}_{-0.15}, consistent with other robust probes of the EoR and confirming that reionization is ongoing ∼700 Myr after the Big Bang. We show that using the full distribution of Lyα equivalent width detections and upper limits from LBGs places tighter constraints on the evolving IGM than the standard Lyα emitter fraction, and that larger samples are within reach of deep spectroscopic surveys of gravitationally lensed fields and James Webb Space Telescope NIRSpec.

  10. Lifetime measurements and oscillator strengths in singly ionized scandium and the solar abundance of scandium

    NASA Astrophysics Data System (ADS)

    Pehlivan Rhodin, A.; Belmonte, M. T.; Engström, L.; Lundberg, H.; Nilsson, H.; Hartman, H.; Pickering, J. C.; Clear, C.; Quinet, P.; Fivet, V.; Palmeri, P.

    2017-12-01

    The lifetimes of 17 even-parity levels (3d5s, 3d4d, 3d6s and 4p2) in the region 57 743-77 837 cm-1 of singly ionized scandium (Sc II) were measured by two-step time-resolved laser induced fluorescence spectroscopy. Oscillator strengths of 57 lines from these highly excited upper levels were derived using a hollow cathode discharge lamp and a Fourier transform spectrometer. In addition, Hartree-Fock calculations where both the main relativistic and core-polarization effects were taken into account were carried out for both low- and high-excitation levels. There is good agreement for most of the lines between our calculated branching fractions and the measurements of Lawler & Dakin in the region 9000-45 000 cm-1 for low excitation levels and with our measurements for high excitation levels in the region 23 500-63 100 cm-1. This, in turn, allowed us to combine the calculated branching fractions with the available experimental lifetimes to determine semi-empirical oscillator strengths for a set of 380 E1 transitions in Sc II. These oscillator strengths include the weak lines that were used previously to derive the solar abundance of scandium. The solar abundance of scandium is now estimated to be logε⊙ = 3.04 ± 0.13 using these semi-empirical oscillator strengths to shift the values determined by Scott et al. The new estimated abundance value is in agreement with the meteoritic value (logεmet = 3.05 ± 0.02) of Lodders, Palme & Gail.

  11. A comparison of entropy balance and probability weighting methods to generalize observational cohorts to a population: a simulation and empirical example.

    PubMed

    Harvey, Raymond A; Hayden, Jennifer D; Kamble, Pravin S; Bouchard, Jonathan R; Huang, Joanna C

    2017-04-01

    We compared methods to control bias and confounding in observational studies including inverse probability weighting (IPW) and stabilized IPW (sIPW). These methods often require iteration and post-calibration to achieve covariate balance. In comparison, entropy balance (EB) optimizes covariate balance a priori by calibrating weights using the target's moments as constraints. We measured covariate balance empirically and by simulation by using absolute standardized mean difference (ASMD), absolute bias (AB), and root mean square error (RMSE), investigating two scenarios: the size of the observed (exposed) cohort exceeds the target (unexposed) cohort and vice versa. The empirical application weighted a commercial health plan cohort to a nationally representative National Health and Nutrition Examination Survey target on the same covariates and compared average total health care cost estimates across methods. Entropy balance alone achieved balance (ASMD ≤ 0.10) on all covariates in simulation and empirically. In simulation scenario I, EB achieved the lowest AB and RMSE (13.64, 31.19) compared with IPW (263.05, 263.99) and sIPW (319.91, 320.71). In scenario II, EB outperformed IPW and sIPW with smaller AB and RMSE. In scenarios I and II, EB achieved the lowest mean estimate difference from the simulated population outcome ($490.05, $487.62) compared with IPW and sIPW, respectively. Empirically, only EB differed from the unweighted mean cost, indicating that IPW and sIPW weighting were ineffective. Entropy balance demonstrated the bias-variance tradeoff, achieving higher estimate accuracy, yet lower estimate precision, compared with IPW methods. EB weighting required no post-processing and effectively mitigated observed bias and confounding. Copyright © 2016 John Wiley & Sons, Ltd.
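
    The IPW and stabilized IPW weights the abstract compares against entropy balance have a standard closed form. The sketch below is illustrative only (not the study's code); the propensity scores are assumed inputs, as if estimated upstream from covariates.

    ```python
    # Hypothetical sketch of IPW vs. stabilized IPW (sIPW) weights.
    # z[i] is a 0/1 exposure indicator; p[i] is an assumed propensity
    # score P(exposed | covariates), estimated elsewhere.

    def ipw_weights(z, p):
        """Inverse probability weights: 1/p for exposed, 1/(1-p) for unexposed."""
        return [1.0 / pi if zi == 1 else 1.0 / (1.0 - pi)
                for zi, pi in zip(z, p)]

    def stabilized_ipw_weights(z, p):
        """Stabilized IPW: multiply each weight by the marginal exposure
        probability, which shrinks extreme weights and reduces variance."""
        p_exposed = sum(z) / len(z)
        return [w * (p_exposed if zi == 1 else 1.0 - p_exposed)
                for zi, w in zip(z, ipw_weights(z, p))]

    z = [1, 1, 0, 0, 0]
    p = [0.8, 0.6, 0.3, 0.2, 0.1]
    w = ipw_weights(z, p)
    sw = stabilized_ipw_weights(z, p)
    ```

    Entropy balance, by contrast, solves directly for weights that match the target's covariate moments, which is why no post-calibration step of this kind is needed.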

  12. [Imputation methods for missing data in educational diagnostic evaluation].

    PubMed

    Fernández-Alonso, Rubén; Suárez-Álvarez, Javier; Muñiz, José

    2012-02-01

    In the diagnostic evaluation of educational systems, self-reports are commonly used to collect data, both cognitive and orectic. For various reasons, in these self-reports, some of the students' data are frequently missing. The main goal of this research is to compare the performance of different imputation methods for missing data in the context of the evaluation of educational systems. On an empirical database of 5,000 subjects, 72 conditions were simulated: three levels of missing data, three types of loss mechanisms, and eight methods of imputation. The levels of missing data were 5%, 10%, and 20%. The loss mechanisms were set at: Missing completely at random, moderately conditioned, and strongly conditioned. The eight imputation methods used were: listwise deletion, replacement by the mean of the scale, by the item mean, the subject mean, the corrected subject mean, multiple regression, and Expectation-Maximization (EM) algorithm, with and without auxiliary variables. The results indicate that the recovery of the data is more accurate when using an appropriate combination of different methods of recovering lost data. When a case is incomplete, the mean of the subject works very well, whereas for completely lost data, multiple imputation with the EM algorithm is recommended. The use of this combination is especially recommended when data loss is greater and its loss mechanism is more conditioned. Lastly, the results are discussed, and some future lines of research are analyzed.
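
    Two of the simpler imputation rules mentioned above (subject mean and item mean) can be sketched in a few lines. The functions and data below are hypothetical illustrations, not the study's implementation.

    ```python
    # Illustrative sketch of two imputation rules; None marks a missing item.

    def impute_subject_mean(responses):
        """Replace each missing item with the mean of the subject's observed
        items; only applicable to partially complete cases."""
        observed = [r for r in responses if r is not None]
        if not observed:
            raise ValueError("no observed items for this subject")
        m = sum(observed) / len(observed)
        return [m if r is None else r for r in responses]

    def impute_item_mean(matrix):
        """Replace each missing entry with the mean of that item (column)
        across the subjects who answered it."""
        n_items = len(matrix[0])
        col_means = []
        for j in range(n_items):
            col = [row[j] for row in matrix if row[j] is not None]
            col_means.append(sum(col) / len(col))
        return [[col_means[j] if row[j] is None else row[j]
                 for j in range(n_items)] for row in matrix]
    ```

    The study's recommendation follows the same split: subject-level rules for incomplete cases, and model-based multiple imputation (EM) when a case is entirely missing.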

  13. A prediction method for broadband shock associated noise from supersonic rectangular jets

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Reddy, N. N.

    1993-01-01

    Broadband shock associated noise is an important aircraft noise component of the proposed high-speed civil transport (HSCT) at take-offs and landings. For noise certification purposes one would, therefore, like to be able to predict as accurately as possible the intensity, directivity and spectral content of this noise component. The purpose of this work is to develop a semi-empirical prediction method for the broadband shock associated noise from supersonic rectangular jets. The complexity and quality of the noise prediction method are to be similar to those for circular jets. In this paper only the broadband shock associated noise of jets issued from rectangular nozzles with straight side walls is considered. Since many current aircraft propulsion systems have nozzle aspect ratios (at nozzle exit) in the range of 1 to 4, the present study has been confined to nozzles with aspect ratio less than 6. In developing the prediction method the essential physics of the problem are taken into consideration. Since the broadband shock associated noise generation mechanism is the same whether the jet is rectangular or circular, the present prediction method in a number of ways is quite similar to that for axisymmetric jets. Comparisons between predictions and measurements for jets with aspect ratio up to 6 will be reported. Efforts will be concentrated on the fly-over plane. However, side line angles and other directions will also be included.

  14. Simulation to coating weight control for galvanizing

    NASA Astrophysics Data System (ADS)

    Wang, Junsheng; Yan, Zhang; Wu, Kunkui; Song, Lei

    2013-05-01

    Zinc coating weight control is one of the most critical issues for a continuous galvanizing line. The process is characterized by time-varying large time delays, nonlinearity, and multiple variables, which can result in serious coating weight errors and non-uniform coating. We developed a control system which automatically controls the air knives' pressure and position to give a constant and uniform zinc coating, in accordance with customer-order specifications, through an auto-adaptive empirical model-based feedforward controller and two model-free adaptive feedback controllers. The proposed models and controllers were applied to the continuous galvanizing line (CGL) at Angang Steel Works. Production results show that the precision and stability of the control model reduce over-coating weight and improve coating uniformity. The product of this hot dip galvanizing line not only satisfies the customers' quality requirements but also saves zinc consumption.

  15. The P K-near edge absorption spectra of phosphates

    NASA Astrophysics Data System (ADS)

    Franke, R.; Hormes, J.

    1995-12-01

    The X-ray absorption near edge structure (XANES) at the P K-edge in several orthophosphates with various cations, in condensed, and in substituted sodium phosphates has been measured using synchrotron radiation from the ELSA storage ring at the University of Bonn. The measured spectra demonstrate that chemical changes beyond the PO4 tetrahedra are reflected by energy shifts of the pre-edge and continuum resonances, by the presence of characteristic shoulders and new peaks, and by differences in the intensity of the white line. We discuss the energy differences between the white line positions and the corresponding P 1s binding energies as a measure of half of the energy gap. The corresponding values correlate with the valence of the cations and the intensity of the white lines. The energy positions of the continuum resonances are discussed on the basis of an empirical bond-length correlation supporting a 1/r² dependence.

  16. Solar Spectral Lines with Special Polarization Properties for the Calibration of Instrument Polarization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, W.; Casini, R.; Alemán, T. del Pino

    We investigate atomic transitions that have previously been identified as having zero polarization from the Zeeman effect. Our goal is to identify spectral lines that can be used for the calibration of instrumental polarization of large astronomical and solar telescopes, such as the Daniel K. Inouye Solar Telescope, which is currently under construction on Haleakala. We use a numerical model that takes into account the generation of scattering polarization and its modification by the presence of a magnetic field of arbitrary strength. We adopt values for the Landé factors from spectroscopic measurements or semi-empirical results, thus relaxing the common assumption of LS-coupling previously used in the literature. The mechanisms dominating the polarization of particular transitions are identified, and we summarize groups of various spectral lines useful for the calibration of spectropolarimetric instruments, classified according to their polarization properties.

  17. Variations of High-Latitude Geomagnetic Pulsation Frequencies: A Comparison of Time-of-Flight Estimates and IMAGE Magnetometer Observations

    NASA Astrophysics Data System (ADS)

    Sandhu, J. K.; Yeoman, T. K.; James, M. K.; Rae, I. J.; Fear, R. C.

    2018-01-01

    The fundamental eigenfrequencies of standing Alfvén waves on closed geomagnetic field lines are estimated for the region spanning 5.9≤L < 9.5 over all MLT (Magnetic Local Time). The T96 magnetic field model and a realistic empirical plasma mass density model are employed using the time-of-flight approximation, refining previous calculations that assumed a relatively simplistic mass density model. An assessment of the implications of using different mass density models in the time-of-flight calculations is presented. The calculated frequencies exhibit dependences on field line footprint magnetic latitude and MLT, which are attributed to both magnetic field configuration and spatial variations in mass density. In order to assess the validity of the time-of-flight calculated frequencies, the estimates are compared to observations of FLR (Field Line Resonance) frequencies. Using IMAGE (International Monitor for Auroral Geomagnetic Effects) ground magnetometer observations obtained between 2001 and 2012, an automated FLR identification method is developed, based on the cross-phase technique. The average FLR frequency is determined, including variations with footprint latitude and MLT, and compared to the time-of-flight analysis. The results show agreement in the latitudinal and local time dependences. Furthermore, with the use of the realistic mass density model in the time-of-flight calculations, closer agreement with the observed FLR frequencies is obtained. The study is limited by the latitudinal coverage of the IMAGE magnetometer array, and future work will aim to extend the ground magnetometer data used to include additional magnetometer arrays.
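
    The time-of-flight approximation referred to above estimates the fundamental standing-wave frequency as the reciprocal of twice the Alfvén travel time along the field line, f₁ = 1 / (2 ∫ ds / v_A). A minimal numeric sketch, with made-up field-line geometry and Alfvén speeds rather than the T96 model or the empirical mass density model of the study:

    ```python
    # Sketch of the time-of-flight estimate for standing Alfven waves.
    # Segment lengths and Alfven speeds are illustrative placeholders.

    def fundamental_frequency(ds, v_alfven):
        """ds[i]: length of field-line segment i (m);
        v_alfven[i]: Alfven speed on that segment (m/s).
        Returns the fundamental eigenfrequency f1 in Hz."""
        travel_time = sum(d / v for d, v in zip(ds, v_alfven))
        return 1.0 / (2.0 * travel_time)

    # e.g. a 10,000 km field line with a uniform 1000 km/s Alfven speed:
    segments = [1.0e6] * 10   # ten 1000-km segments
    speeds = [1.0e6] * 10     # 1000 km/s everywhere
    f1 = fundamental_frequency(segments, speeds)  # 0.05 Hz
    ```

    The dependence on the mass density model enters through v_A, which is why the choice of density model changes the calculated frequencies and their agreement with the observed FLRs.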

  18. 26 CFR 1.167(b)-1 - Straight line method.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 2 2014-04-01 2014-04-01 false Straight line method. 1.167(b)-1 Section 1.167(b... Straight line method. (a) In general. Under the straight line method the cost or other basis of the... may be reduced to a percentage or fraction. The straight line method may be used in determining a...
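
    The straight line method described in §1.167(b)-1 deducts the cost or other basis, less salvage value, in equal annual amounts over the asset's useful life. A minimal numeric sketch with illustrative figures (not taken from the regulation):

    ```python
    # Straight line depreciation: equal annual deductions over useful life.

    def straight_line_depreciation(cost, salvage, useful_life_years):
        """Annual deduction under the straight line method."""
        if useful_life_years <= 0:
            raise ValueError("useful life must be positive")
        return (cost - salvage) / useful_life_years

    annual = straight_line_depreciation(cost=10_000, salvage=1_000,
                                        useful_life_years=9)  # 1000.0 per year
    ```

    Expressed as the regulation suggests, a 9-year life corresponds to a rate of 1/9 (about 11.1%) of the depreciable basis per year.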

  19. Empirical Observations on the Sensitivity of Hot Cathode Ionization Type Vacuum Gages

    NASA Technical Reports Server (NTRS)

    Summers, R. L.

    1969-01-01

    A study of empirical methods of predicting the relative sensitivities of hot cathode ionization gages is presented. Using previously published gage sensitivities, several rules for predicting relative sensitivity are tested. The relative sensitivity to different gases is shown to be invariant with gage type, in the linear range of gage operation. The total ionization cross section, molecular and molar polarizability, and refractive index are demonstrated to be useful parameters for predicting relative gage sensitivity. Using data from the literature, the probable error of predictions of relative gage sensitivity based on these molecular properties is found to be about 10 percent. A comprehensive table of predicted relative sensitivities, based on empirical methods, is presented.

  20. Parameterization of aquatic ecosystem functioning and its natural variation: Hierarchical Bayesian modelling of plankton food web dynamics

    NASA Astrophysics Data System (ADS)

    Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede

    2017-10-01

    Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.

  1. A theoretical method for the analysis and design of axisymmetric bodies. [flow distribution and incompressible fluids

    NASA Technical Reports Server (NTRS)

    Beatty, T. D.

    1975-01-01

    A theoretical method is presented for the computation of the flow field about an axisymmetric body operating in a viscous, incompressible fluid. A potential flow method was used to determine the inviscid flow field and to yield the boundary conditions for the boundary layer solutions. Boundary layer effects, in the form of displacement thickness and empirically modeled separation streamlines, are accounted for in subsequent potential flow solutions. This procedure is repeated until the solutions converge. An empirical method was used to determine base drag, allowing configuration drag to be computed.

  2. THE FORMATION OF IRIS DIAGNOSTICS. VII. THE FORMATION OF THE O i 135.56 NM LINE IN THE SOLAR ATMOSPHERE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Hsiao-Hsuan; Carlsson, Mats, E-mail: h.h.lin@astro.uio.no, E-mail: mats.carlsson@astro.uio.no

    The O i 135.56 nm line is covered by NASA's Interface Region Imaging Spectrograph (IRIS) small explorer mission which studies how the solar atmosphere is energized. We study here the formation and diagnostic potential of this line by means of non-local thermodynamic equilibrium modeling employing both 1D semi-empirical and 3D radiation magnetohydrodynamic models. We study the basic formation mechanisms and derive a quintessential model atom that incorporates essential atomic physics for the formation of the O i 135.56 nm line. This atomic model has 16 levels and describes recombination cascades through highly excited levels by effective recombination rates. The ionization balance O i/O ii is set by the hydrogen ionization balance through charge exchange reactions. The emission in the O i 135.56 nm line is dominated by a recombination cascade and the line is optically thin. The Doppler shift of the maximum emission correlates strongly with the vertical velocity in its line forming region, which is typically located at 1.0–1.5 Mm height. The total intensity of the line emission is correlated with the square of the electron density. Since the O i 135.56 nm line is optically thin, the width of the emission line is a very good diagnostic of non-thermal velocities. We conclude that the O i 135.56 nm line is an excellent probe of the middle chromosphere, and complements other powerful chromospheric diagnostics of IRIS such as the Mg ii h and k lines and the C ii lines around 133.5 nm.

  3. An empirical model of H2O, CO2 and CO coma distributions and production rates for comet 67P/Churyumov-Gerasimenko based on ROSINA/DFMS measurements and AMPS-DSMC simulations

    NASA Astrophysics Data System (ADS)

    Hansen, Kenneth C.; Altwegg, Kathrin; Bieler, Andre; Berthelier, Jean-Jacques; Calmonte, Ursina; Combi, Michael R.; De Keyser, Johan; Fiethe, Björn; Fougere, Nicolas; Fuselier, Stephen; Gombosi, T. I.; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Léna; Rubin, Martin; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu; ROSINA Team

    2016-10-01

    We have previously used results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model to create an empirical model of the near comet water (H2O) coma of comet 67P/Churyumov-Gerasimenko. In this work we create additional empirical models for the coma distributions of CO2 and CO. The AMPS simulations are based on ROSINA DFMS (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis, Double Focusing Mass Spectrometer) data taken over the entire timespan of the Rosetta mission. The empirical model is created using AMPS DSMC results which are extracted from simulations at a range of radial distances, rotation phases and heliocentric distances. The simulation results are then averaged over a comet rotation and fitted to an empirical model distribution. Model coefficients are then fitted to piecewise-linear functions of heliocentric distance. The final product is an empirical model of the coma distribution which is a function of heliocentric distance, radial distance, and sun-fixed longitude and latitude angles. The model clearly mimics the behavior of water shifting production from North to South across the inbound equinox while the CO2 production is always in the South. The empirical model can be used to de-trend the spacecraft motion from the ROSINA COPS and DFMS data. The ROSINA instrument measures the neutral coma density at a single point and the measured value is influenced by the location of the spacecraft relative to the comet and the comet-sun line. Using the empirical coma model we can correct for the position of the spacecraft and compute a total production rate based on single point measurements. In this presentation we will present the coma production rates as a function of heliocentric distance for the entire Rosetta mission. This work was supported by contracts JPL#1266313 and JPL#1266314 from the US Rosetta Project and NASA grant NNX14AG84G from the Planetary Atmospheres Program.

  4. Machine learning strategies for systems with invariance properties

    NASA Astrophysics Data System (ADS)

    Ling, Julia; Jones, Reese; Templeton, Jeremy

    2016-08-01

    In many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally-efficient Reynolds Averaged Navier Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high performance computing has led to a growing availability of high fidelity simulation data. These data open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these empirical models is how domain knowledge should be incorporated into the machine learning process. This paper will specifically address physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first method, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance at significantly reduced computational training costs.
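
    The two strategies compared in the paper can be illustrated with a toy rotation-invariance example (illustrative only; the paper's actual cases are turbulence modeling and crystal elasticity): either hand the model inputs that are already invariant, or augment the training data with transformed copies.

    ```python
    # Toy sketch of the two invariance strategies for a pair of 2D points.
    import math

    def invariant_features(p, q):
        """Method 1: build a basis of rotation-invariant inputs -- the two
        norms and the dot product are unchanged by any rotation."""
        return (math.hypot(*p), math.hypot(*q), p[0] * q[0] + p[1] * q[1])

    def rotate(p, theta):
        c, s = math.cos(theta), math.sin(theta)
        return (c * p[0] - s * p[1], s * p[0] + c * p[1])

    def augment(p, q, n=8):
        """Method 2: train on rotated copies of the raw inputs so the model
        must learn the invariance from data."""
        return [(rotate(p, 2 * math.pi * k / n), rotate(q, 2 * math.pi * k / n))
                for k in range(n)]
    ```

    The paper's finding corresponds to Method 1: embedding the invariance in the input features gives higher accuracy at lower training cost than making the model learn it from augmented data.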

  5. CLARIFY (Trademark): An On-Line Guide for Revising Technical Prose,

    DTIC Science & Technology

    1983-11-01

    it appears that writers nominalize in an unconscious attempt to make their prose sound significant. Sociolinguistic studies consistently show that...technical information more effectively than a varied mix of long and short sentences. And there are no empirical studies to suggest that varied sentence...earlier studies, especially those that contrast memory and comprehension...topic" and "agent." New instances--that is, elements of new sentences

  6. Estimating the quadratic mean diameters of fine woody debris in forests of the United States

    Treesearch

    Christopher W. Woodall; Vicente J. Monleon

    2010-01-01

    Most fine woody debris (FWD) line-intersect sampling protocols and associated estimators require an approximation of the quadratic mean diameter (QMD) of each individual FWD size class. There is a lack of empirically derived QMDs by FWD size class and species/forest type across the U.S. The objective of this study is to evaluate a technique known as the graphical...

  7. Rogue America: Benevolent Hegemon or Occupying Tyrant?

    DTIC Science & Technology

    2008-05-01

    Johnson, The Sorrows of Empire (New York: Metropolitan Books), 3. 5 Noam Chomsky , Rogue States (Cambridge: South End Press, 2000), 4. 6 For more on...convenience in making their argument. Focusing his attention on the United States, linguistics professor Noam Chomsky limits his rogue state definition to...14. 39 Noam Chomsky , “Rogue States Draw the Usual Line,” The Noam Chomsky Website, May 2001, http://www.chomsky.info/interviews/200105--.htm

  8. Russo-Japanese Territorial Dispute

    DTIC Science & Technology

    2010-04-08

    militarized and used as the means of projecting influence in Asia-Pacific region. Secondly, the Sea of Okhotsk along with the Barents Sea, served as two...and to the middle of Asia. Unidentified national borders between Russia and Japan in Sakhalin Island caused new official talks between countries...in Manchuria, which the Russian Government used as a line of communication between eastern provinces of Russian Empire and Middle Asia. Moreover

  9. Structure and Dynamics of the Thermohaline Staircases in the Beaufort Gyre

    DTIC Science & Technology

    2007-09-01

    diffusive layering created by heating a salt gradient from below, after Figure 6 (Kelley 2003) A is the first quasi-stationary interface. B is the...sources Crapper (1975), Turner (1965), and Newell (1984) from Kelley (1990). The solid line is the empirical fit....Figure 11. Schematic of Ice...Salinity, Potential Temperature and Density plots show thermohaline step characteristics. b) Sound velocity profiles showing the step data

  10. Empirical data and moral theory. A plea for integrated empirical ethics.

    PubMed

    Molewijk, Bert; Stiggelbout, Anne M; Otten, Wilma; Dupuis, Heleen M; Kievit, Job

    2004-01-01

    Ethicists differ considerably in their reasons for using empirical data. This paper presents a brief overview of four traditional approaches to the use of empirical data: "the prescriptive applied ethicists," "the theorists," "the critical applied ethicists," and "the particularists." The main aim of this paper is to introduce a fifth approach of more recent date (i.e. "integrated empirical ethics") and to offer some methodological directives for research in integrated empirical ethics. All five approaches are presented in a table for heuristic purposes. The table consists of eight columns: "view on distinction descriptive-prescriptive sciences," "location of moral authority," "central goal(s)," "types of normativity," "use of empirical data," "method," "interaction empirical data and moral theory," and "cooperation with descriptive sciences." Ethicists can use the table in order to identify their own approach. Reflection on these issues prior to starting research in empirical ethics should lead to harmonization of the different scientific disciplines and effective planning of the final research design. Integrated empirical ethics (IEE) refers to studies in which ethicists and descriptive scientists cooperate continuously and intensively. Both disciplines try to integrate moral theory and empirical data in order to reach a normative conclusion with respect to a specific social practice. IEE is not wholly prescriptive or wholly descriptive since IEE assumes an interdependence between facts and values and between the empirical and the normative. The paper ends with three suggestions for consideration on some of the future challenges of integrated empirical ethics.

  11. Alternative Approaches to Evaluation in Empirical Microeconomics

    ERIC Educational Resources Information Center

    Blundell, Richard; Dias, Monica Costa

    2009-01-01

    This paper reviews some of the most popular policy evaluation methods in empirical microeconomics: social experiments, natural experiments, matching, instrumental variables, discontinuity design, and control functions. It discusses identification of traditionally used average parameters and more complex distributional parameters. The adequacy,…

  12. Topographic correction realization based on the CBERS-02B image

    NASA Astrophysics Data System (ADS)

    Qin, Hui-ping; Yi, Wei-ning; Fang, Yong-hua

    2011-08-01

The special topography of mountain terrain distorts retrievals of surface spectral signatures, even within a single land-cover species. To improve the accuracy of research on topographic surface characteristics, many researchers have studied topographic correction. Topographic correction methods fall into statistical-empirical and physical models, among which methods based on digital elevation model (DEM) data are the most popular. Restricted by spatial resolution, previous models mostly corrected the topographic effect in Landsat TM imagery, whose 30-meter resolution is matched by elevation data that can easily be obtained from the internet or calculated from digital maps. Some researchers have also performed topographic correction on high-spatial-resolution images such as QuickBird and IKONOS, but there is little comparable research on the topographic correction of CBERS-02B imagery. In this study, mountain terrain in Liaoning was taken as the study area. The DEM was interpolated step by step from the original 15-meter grid to 2.36 meters. The C correction, SCS+C correction, Minnaert correction, and Ekstrand-r correction were applied to remove the topographic effect, and the corrected results were compared. For each method, scatter diagrams between the image digital number and the cosine of the solar incidence angle with respect to the surface normal were produced, and the mean value, standard deviation, slope of the scatter diagram, and separation factor were calculated. The analysis shows that shadows are weaker in the corrected images than in the originals and the three-dimensional effect is removed; the absolute slope of the fitted lines in the scatter diagrams is reduced. The Minnaert correction gives the most effective result. These findings demonstrate that the established correction methods can be successfully adapted to CBERS-02B images.
Where high-spatial-resolution elevation data are hard to obtain, the DEM can be interpolated step by step to approximate the required spatial resolution.
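The C correction used above can be sketched in a few lines. Assuming a linear regression of digital number against the illumination cosine, the empirical parameter c = b/m rescales each pixel toward its flat-terrain value (a generic Python illustration; function and variable names are not taken from the paper):

```python
def c_correction(dn, cos_i, cos_sz):
    """C topographic correction for one image band.

    dn     : observed digital numbers, one per pixel
    cos_i  : cosine of the solar incidence angle with respect to the
             surface normal for each pixel (from the DEM and sun geometry)
    cos_sz : cosine of the solar zenith angle (flat-terrain reference)
    """
    n = len(dn)
    # Ordinary least-squares fit dn = m * cos_i + b over all pixels.
    mean_x = sum(cos_i) / n
    mean_y = sum(dn) / n
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(cos_i, dn)) \
        / sum((x - mean_x) ** 2 for x in cos_i)
    b = mean_y - m * mean_x
    c = b / m  # the empirical "C" parameter
    # Shaded slopes (small cos_i) are brightened toward the flat value.
    return [y * (cos_sz + c) / (x + c) for x, y in zip(cos_i, dn)]
```

The SCS+C variant replaces the cos_sz factor with cos(slope) * cos_sz while keeping the same empirical c.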

  13. An Empirically-derived non-LTE XUV-Visible Spectral Synthesis Model of the M1 V Exoplanet Host Star GJ832

    NASA Astrophysics Data System (ADS)

    Linsky, Jeffrey; Fontenla, Juan; Witbrod, Jesse; France, Kevin

    2016-01-01

GJ832 (HD 204961) is a nearby M1 V host star with two exoplanets: a Jovian-mass planet and a super-Earth. We have obtained near-UV and far-UV spectra of GJ832 with the STIS and COS instruments on HST as part of the Cycle 19 MUSCLES pilot program (France et al. 2013). Our objective is to obtain the first accurate physical model for a representative M-dwarf host star in order to understand the stellar radiative emission at all wavelengths and to infer the radiation environment of its exoplanets, which drives their atmospheric photochemistry. We have calculated a full non-LTE model for GJ832 including the photosphere, chromosphere, transition region, and corona to fit the observed emission lines formed over a wide range of temperatures and the X-ray flux. Our one-dimensional semi-empirical model uses the Solar-Stellar Physical Modelling tools, an offspring of the tools used by Fontenla and collaborators for computing solar models. For this model of GJ832, we calculate the populations of 52 atoms and ions and 20 molecules with 2 million spectral lines. We find excellent agreement with the observed H-alpha, Ca II, Mg II, C II, Si IV, C IV, and N V lines. Our model for GJ832 has a temperature minimum in the lower chromosphere much cooler than the Sun's, followed by a steep temperature rise that differs from the solar case. The different thermal structure of GJ832 compared to the Sun means that the formation regions of the emission lines differ between the two stars. We also compute the radiative cooling rates as a function of height and temperature in the atmosphere of GJ832. This work is supported by grants from STScI to the University of Colorado.

  14. The effect of symmetry on the U L3 NEXAFS of octahedral coordinated uranium(vi)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bagus, Paul S.; Nelin, Connie J.; Ilton, Eugene S.

    2017-03-21

We describe a detailed theoretical analysis of how distortions from ideal cubic or Oh symmetry affect the shape, in particular the width, of the U L3-edge NEXAFS for U(VI) in octahedral coordination. The full-width-half-maximum (FWHM) of the L3-edge white line decreases with increasing distortion from Oh symmetry due to the mixing of symmetry-broken t2g and eg components of the excited-state U(6d) orbitals. The mixing is allowed because of spin-orbit splitting of the ligand-field-split 6d orbitals. Especially for higher distortions, it is possible to identify a mixing between one of the t2g and one of the eg components, allowed in the double-group representation when the spin-orbit interaction is taken into account. This mixing strongly reduces the ligand field splitting, which, in turn, leads to a narrowing of the U L3 white line. However, the effect of this mixing is partially offset by an increase in the covalent anti-bonding character of the highest-energy spin-orbit-split eg orbital. At higher distortions, mixing overwhelms the increasing anti-bonding character of this orbital, which leads to an accelerated decrease in the FWHM with increasing distortion. Additional evidence for the effect of mixing of t2g and eg components is that the FWHM of the white line narrows whether the two axial U-O bond distances shorten or lengthen. Our ab initio theory uses relativistic wavefunctions for cluster models of the structures; empirical or semi-empirical parameters were not used to adjust prediction to experiment. A major advantage is that it provides a transparent approach for determining how the character and extent of the covalent mixing of the relevant U and O orbitals affect the U L3-edge white line.

  15. Propensity-score matching in economic analyses: comparison with regression models, instrumental variables, residual inclusion, differences-in-differences, and decomposition methods.

    PubMed

    Crown, William H

    2014-02-01

    This paper examines the use of propensity score matching in economic analyses of observational data. Several excellent papers have previously reviewed practical aspects of propensity score estimation and other aspects of the propensity score literature. The purpose of this paper is to compare the conceptual foundation of propensity score models with alternative estimators of treatment effects. References are provided to empirical comparisons among methods that have appeared in the literature. These comparisons are available for a subset of the methods considered in this paper. However, in some cases, no pairwise comparisons of particular methods are yet available, and there are no examples of comparisons across all of the methods surveyed here. Irrespective of the availability of empirical comparisons, the goal of this paper is to provide some intuition about the relative merits of alternative estimators in health economic evaluations where nonlinearity, sample size, availability of pre/post data, heterogeneity, and missing variables can have important implications for choice of methodology. Also considered is the potential combination of propensity score matching with alternative methods such as differences-in-differences and decomposition methods that have not yet appeared in the empirical literature.
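As a concrete illustration of the matching step discussed conceptually above, here is a minimal Python sketch of greedy 1:1 nearest-neighbor matching on an already-estimated propensity score. The greedy strategy and the 0.05 caliper are illustrative choices, not recommendations from the paper; in practice the scores would come from, e.g., a logistic regression of treatment on covariates:

```python
def match_nearest(ps, treated, caliper=0.05):
    """Greedy 1:1 nearest-neighbor matching on the propensity score.

    ps      : estimated propensity scores, one per unit
    treated : 1 for treated units, 0 for controls
    Returns a list of (treated_index, control_index) pairs.
    """
    t_idx = [i for i, t in enumerate(treated) if t == 1]
    c_idx = [i for i, t in enumerate(treated) if t == 0]
    pairs = []
    for i in t_idx:
        if not c_idx:
            break
        # Closest remaining control on the propensity-score scale.
        j = min(c_idx, key=lambda k: abs(ps[k] - ps[i]))
        if abs(ps[j] - ps[i]) <= caliper:
            pairs.append((i, j))
            c_idx.remove(j)  # match without replacement
    return pairs
```

Treatment effects would then be estimated by comparing outcomes within the matched pairs.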

  16. Strong Ground Motion Simulation and Source Modeling of the December 16, 1993 Tapu Earthquake, Taiwan, Using Empirical Green's Function Method

    NASA Astrophysics Data System (ADS)

    Huang, H.-C.; Lin, C.-Y.

    2012-04-01

The Tapu earthquake (ML 5.7) occurred in southwestern Taiwan on December 16, 1993. We examine the source model of this event using seismograms observed by the CWBSN at eight stations surrounding the source area. An objective estimation method is used to obtain the parameters N and C, which are needed for the empirical Green's function method of Irikura (1986). This "source spectral ratio fitting method" gives an estimate of the seismic moment ratio between a large and a small event, and of their corner frequencies, by fitting the observed source spectral ratio with the ratio of source spectra that obeys the assumed source model (Miyake et al., 1999). The method has the advantage of removing site effects when evaluating the parameters. The best source model of the 1993 Tapu mainshock is estimated by comparing the observed waveforms with synthetic ones computed using the empirical Green's function method. The asperity is about 2.1 km long in the strike direction and 1.5 km wide in the dip direction. The rupture started at the bottom-right of the asperity and extended radially toward the upper left.
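The source spectral ratio fitting method can be illustrated with a small sketch. If both events follow omega-square source spectra, the large-to-small spectral ratio flattens to the moment ratio at low and high frequencies and rolls off between the two corner frequencies; a brute-force grid search over the three parameters is a minimal (and certainly not the authors') fitting strategy:

```python
import math

def omega2_ratio(f, moment_ratio, fc_large, fc_small):
    """Ratio of two omega-square source spectra (large event / small event)."""
    return moment_ratio * (1 + (f / fc_small) ** 2) / (1 + (f / fc_large) ** 2)

def fit_ratio(freqs, observed, grids):
    """Grid-search fit of (moment_ratio, fc_large, fc_small) that minimizes
    the sum of squared log-residuals against an observed spectral ratio."""
    best, best_err = None, float("inf")
    for mr in grids["moment_ratio"]:
        for fcl in grids["fc_large"]:
            for fcs in grids["fc_small"]:
                err = sum((math.log(omega2_ratio(f, mr, fcl, fcs)) - math.log(o)) ** 2
                          for f, o in zip(freqs, observed))
                if err < best_err:
                    best, best_err = (mr, fcl, fcs), err
    return best
```

In Irikura's formulation, N and C then follow from the fitted values, roughly N ≈ fc_small / fc_large with C·N³ equal to the moment ratio.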

  17. Strong Ground Motion Simulation and Source Modeling of the December 16, 1993 Tapu Earthquake, Taiwan, Using Empirical Green's Function Method

    NASA Astrophysics Data System (ADS)

    Huang, H.; Lin, C.

    2012-12-01

The Tapu earthquake (ML 5.7) occurred in southwestern Taiwan on December 16, 1993. We examine the source model of this event using seismograms observed by the CWBSN at eight stations surrounding the source area. An objective estimation method is used to obtain the parameters N and C, which are needed for the empirical Green's function method of Irikura (1986). This "source spectral ratio fitting method" gives an estimate of the seismic moment ratio between a large and a small event, and of their corner frequencies, by fitting the observed source spectral ratio with the ratio of source spectra that obeys the assumed source model (Miyake et al., 1999). The method has the advantage of removing site effects when evaluating the parameters. The best source model of the 1993 Tapu mainshock is estimated by comparing the observed waveforms with synthetic ones computed using the empirical Green's function method. The asperity is about 2.1 km long in the strike direction and 1.5 km wide in the dip direction. The rupture started at the bottom-right of the asperity and extended radially toward the upper left.

  18. The "Horns" of FK Comae and the Complex Structure of its Outer Atmosphere

    NASA Astrophysics Data System (ADS)

    Saar, Steven H.; Ayres, T. R.; Kashyap, V.

    2014-01-01

As part of a large multiwavelength campaign (COCOA-PUFS*) to explore magnetic activity in the unusual, single, rapidly rotating giant FK Comae, we have taken a time series of moderate-resolution FUV spectra of the star with the COS spectrograph on HST. We find that the star has unusual, time-variable emission profiles in the chromosphere and transition region which show horn-like features. We use simple spatially inhomogeneous models to explain the variable line shapes. Modeling the lower-chromospheric Cl I 1351 Å line, we find evidence for a very extended, spatially inhomogeneous outer atmosphere, likely composed of many huge "sling-shot" prominences of cooler material embedded in a rotationally distended corona. We compare these results with hotter transition region lines (Si IV) and optical spectra of the chromospheric He I D3 line. We also employ the model Cl I profiles, and data-derived empirical models, to fit the complex spectral region around the coronal Fe XXI 1354.1 Å line. We place limits on the flux of this line, and show these limits are consistent with expectations from the observed X-ray spectrum. *Campaign for Observation of the Corona and Outer Atmosphere of the Fast-rotating Star, FK Comae. This work was supported by HST grant GO-12376.01-A.

  19. New Fe i Level Energies and Line Identifications from Stellar Spectra. II. Initial Results from New Ultraviolet Spectra of Metal-poor Stars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, Ruth C.; Kurucz, Robert L.; Ayres, Thomas R., E-mail: peterson@ucolick.org

    2017-04-01

The Fe I spectrum is critical to many areas of astrophysics, yet many of the high-lying levels remain uncharacterized. To remedy this deficiency, Peterson and Kurucz identified Fe I lines in archival ultraviolet and optical spectra of metal-poor stars, whose warm temperatures favor moderate Fe I excitation. Sixty-five new levels were recovered, with 1500 detectable lines, including several bound levels in the ionization continuum of Fe I. Here, we extend the previous work by identifying 59 additional levels, with 1400 detectable lines, by incorporating new high-resolution UV spectra of warm metal-poor stars recently obtained by the Hubble Space Telescope Imaging Spectrograph. We provide gf values for these transitions, both computed as well as adjusted to fit the stellar spectra. We also expand our spectral calculations to the infrared, confirming three levels by matching high-quality spectra of the Sun and two cool stars in the H-band. The predicted gf values suggest that an additional 3700 Fe I lines should be detectable in existing solar infrared spectra. Extending the empirical line identification work to the infrared would help confirm additional Fe I levels, as would new high-resolution UV spectra of metal-poor turnoff stars below 1900 Å.

  20. New Fe I Level Energies and Line Identifications from Stellar Spectra. II. Initial Results from New Ultraviolet Spectra of Metal-poor Stars

    NASA Astrophysics Data System (ADS)

    Peterson, Ruth C.; Kurucz, Robert L.; Ayres, Thomas R.

    2017-04-01

    The Fe I spectrum is critical to many areas of astrophysics, yet many of the high-lying levels remain uncharacterized. To remedy this deficiency, Peterson & Kurucz identified Fe I lines in archival ultraviolet and optical spectra of metal-poor stars, whose warm temperatures favor moderate Fe I excitation. Sixty-five new levels were recovered, with 1500 detectable lines, including several bound levels in the ionization continuum of Fe I. Here, we extend the previous work by identifying 59 additional levels, with 1400 detectable lines, by incorporating new high-resolution UV spectra of warm metal-poor stars recently obtained by the Hubble Space Telescope Imaging Spectrograph. We provide gf values for these transitions, both computed as well as adjusted to fit the stellar spectra. We also expand our spectral calculations to the infrared, confirming three levels by matching high-quality spectra of the Sun and two cool stars in the H-band. The predicted gf values suggest that an additional 3700 Fe I lines should be detectable in existing solar infrared spectra. Extending the empirical line identification work to the infrared would help confirm additional Fe I levels, as would new high-resolution UV spectra of metal-poor turnoff stars below 1900 Å.

  1. Asymptotic Properties of the Sequential Empirical ROC, PPV and NPV Curves Under Case-Control Sampling.

    PubMed

    Koopmeiners, Joseph S; Feng, Ziding

    2011-01-01

    The receiver operating characteristic (ROC) curve, the positive predictive value (PPV) curve and the negative predictive value (NPV) curve are three measures of performance for a continuous diagnostic biomarker. The ROC, PPV and NPV curves are often estimated empirically to avoid assumptions about the distributional form of the biomarkers. Recently, there has been a push to incorporate group sequential methods into the design of diagnostic biomarker studies. A thorough understanding of the asymptotic properties of the sequential empirical ROC, PPV and NPV curves will provide more flexibility when designing group sequential diagnostic biomarker studies. In this paper we derive asymptotic theory for the sequential empirical ROC, PPV and NPV curves under case-control sampling using sequential empirical process theory. We show that the sequential empirical ROC, PPV and NPV curves converge to the sum of independent Kiefer processes and show how these results can be used to derive asymptotic results for summaries of the sequential empirical ROC, PPV and NPV curves.
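The empirical ROC curve at the heart of this work is simple to state: sweep a threshold over the observed marker values and compute, separately from the case and control samples, the true- and false-positive rates. A minimal Python sketch (assuming larger marker values indicate disease; names are illustrative):

```python
def empirical_roc(cases, controls):
    """Empirical ROC points (FPR, TPR) for a continuous biomarker,
    estimated separately from case and control samples, as under
    case-control sampling."""
    thresholds = sorted(set(cases) | set(controls), reverse=True)
    points = [(0.0, 0.0)]
    for c in thresholds:
        tpr = sum(x >= c for x in cases) / len(cases)      # sensitivity
        fpr = sum(x >= c for x in controls) / len(controls)  # 1 - specificity
        points.append((fpr, tpr))
    return points
```

The empirical PPV and NPV curves additionally require the disease prevalence, which is not identifiable from case-control data alone and must be supplied externally.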

  2. Asymptotic Properties of the Sequential Empirical ROC, PPV and NPV Curves Under Case-Control Sampling

    PubMed Central

    Koopmeiners, Joseph S.; Feng, Ziding

    2013-01-01

    The receiver operating characteristic (ROC) curve, the positive predictive value (PPV) curve and the negative predictive value (NPV) curve are three measures of performance for a continuous diagnostic biomarker. The ROC, PPV and NPV curves are often estimated empirically to avoid assumptions about the distributional form of the biomarkers. Recently, there has been a push to incorporate group sequential methods into the design of diagnostic biomarker studies. A thorough understanding of the asymptotic properties of the sequential empirical ROC, PPV and NPV curves will provide more flexibility when designing group sequential diagnostic biomarker studies. In this paper we derive asymptotic theory for the sequential empirical ROC, PPV and NPV curves under case-control sampling using sequential empirical process theory. We show that the sequential empirical ROC, PPV and NPV curves converge to the sum of independent Kiefer processes and show how these results can be used to derive asymptotic results for summaries of the sequential empirical ROC, PPV and NPV curves. PMID:24039313

  3. Stark broadening of Ca IV spectral lines of astrophysical interest

    NASA Astrophysics Data System (ADS)

    Alonso-Medina, A.; Colón, C.

    2014-12-01

Ca IV emission lines are under the purview of the Solar Ultraviolet Measurements of Emitted Radiation instrument aboard the Solar and Heliospheric Observatory. In addition, Ca IV lines were detected in the planetary nebula NGC 7027 with the Short Wavelength Spectrometer on board the Infrared Space Observatory. These facts justify an attempt to provide new spectroscopic parameters for Ca IV, for which there are no theoretical or experimental Stark broadening data. Using the Griem semi-empirical approach and the COWAN code, we report in this paper calculated values of the Stark broadening parameters for 467 lines of Ca IV. They were calculated using a set of wavefunctions obtained from relativistic Hartree-Fock calculations. These lines arise from the 3s23p4ns (n = 4, 5), 3s23p44p, and 3s23p4nd (n = 3, 4) configurations. Stark widths and shifts are presented for an electron density of 1017 cm-3 and temperatures T = 10 000, 20 000 and 50 200 K. As these data cannot be compared to others in the literature, we present an analysis of the different regularities of the values reported in this work.

  4. Hospital Board Oversight of Quality and Patient Safety: A Narrative Review and Synthesis of Recent Empirical Research

    PubMed Central

    Millar, Ross; Mannion, Russell; Freeman, Tim; Davies, Huw TO

    2013-01-01

    Context Recurring problems with patient safety have led to a growing interest in helping hospitals’ governing bodies provide more effective oversight of the quality and safety of their services. National directives and initiatives emphasize the importance of action by boards, but the empirical basis for informing effective hospital board oversight has yet to receive full and careful review. Methods This article presents a narrative review of empirical research to inform the debate about hospital boards’ oversight of quality and patient safety. A systematic and comprehensive search identified 122 papers for detailed review. Much of the empirical work appeared in the last ten years, is from the United States, and employs cross-sectional survey methods. Findings Recent empirical studies linking board composition and processes with patient outcomes have found clear differences between high- and low-performing hospitals, highlighting the importance of strong and committed leadership that prioritizes quality and safety and sets clear and measurable goals for improvement. Effective oversight is also associated with well-informed and skilled board members. External factors (such as regulatory regimes and the publication of performance data) might also have a role in influencing boards, but detailed empirical work on these is scant. Conclusions Health policy debates recognize the important role of hospital boards in overseeing patient quality and safety, and a growing body of empirical research has sought to elucidate that role. This review finds a number of areas of guidance that have some empirical support, but it also exposes the relatively inchoate nature of the field. Greater theoretical and methodological development is required if we are to secure more evidence-informed governance systems and practices that can contribute to safer care. PMID:24320168

  5. Ground Motion Simulation for a Large Active Fault System using Empirical Green's Function Method and the Strong Motion Prediction Recipe - a Case Study of the Noubi Fault Zone -

    NASA Astrophysics Data System (ADS)

    Kuriyama, M.; Kumamoto, T.; Fujita, M.

    2005-12-01

The 1995 Hyogo-ken Nambu Earthquake near Kobe, Japan, spurred research on strong motion prediction. To mitigate damage caused by large earthquakes, a highly precise method of predicting future strong motion waveforms is required. In this study, we applied the empirical Green's function method to forward modeling in order to simulate strong ground motion in the Noubi Fault zone and examine issues related to strong motion prediction for large faults. Source models for the scenario earthquakes were constructed using the strong motion prediction recipe (Irikura and Miyake, 2001; Irikura et al., 2003). To calculate the asperity area ratio of a large fault zone, the results of a scaling model, a scaling model with 22% asperity by area, and a cascade model were compared, and several rupture points and segmentation parameters were examined for certain cases. A small earthquake (Mw 4.6) that occurred in northern Fukui Prefecture in 2004 was used as the empirical Green's function, and the source spectrum of this small event was found to agree with the omega-square scaling law. The Nukumi, Neodani, and Umehara segments of the 1891 Noubi Earthquake were targeted in the present study. The positions of the asperity areas and rupture starting points were based on the horizontal displacement distributions reported by Matsuda (1974) and the fault branching pattern and rupture direction model proposed by Nakata and Goto (1998). Asymmetry in the damage maps for the Noubi Earthquake was then examined. We compared the maximum horizontal velocities for cases with different rupture starting points. In one case, rupture started at the center of the Nukumi Fault, while in another, rupture started at the southeastern edge of the Umehara Fault; the scaling model showed an approximately 2.1-fold difference between these cases at observation point FKI005 of K-NET.
This difference is considered to reflect the directivity effect associated with the direction of rupture propagation. Moreover, the horizontal velocities obtained by assuming the cascade model were underestimated by more than one standard deviation relative to the empirical relation of Si and Midorikawa (1999). The scaling and cascade models showed an approximately 6.4-fold difference for the case in which the rupture started at the southeastern edge of the Umehara Fault, at observation point GIF020. This difference is significantly larger than the effect of different rupture starting points, and shows that it is important to base scenario earthquake assumptions on active fault datasets before establishing the source characterization model. The distribution map of seismic intensity for the 1891 Noubi Earthquake also suggests that the synthetic waveforms in the southeastern Noubi Fault zone may be underestimated. Our results indicate that outer fault parameters (e.g., seismic moment) related to the construction of scenario earthquakes influence strong motion prediction more than inner fault parameters such as the rupture starting point. Based on these methods, we will predict strong motion for the approximately 140- to 150-km-long Itoigawa-Shizuoka Tectonic Line.

  6. Disease Risk Score (DRS) as a Confounder Summary Method: Systematic Review and Recommendations

    PubMed Central

    Tadrous, Mina; Gagne, Joshua J.; Stürmer, Til; Cadarette, Suzanne M.

    2013-01-01

Purpose To systematically examine trends and applications of the disease risk score (DRS) as a confounder summary method. Methods We completed a systematic search of MEDLINE and Web of Science® to identify all English-language articles that applied DRS methods. We tabulated the number of publications by year and type (empirical application, methodological contribution, or review paper) and summarized the methods used in empirical applications overall and by publication year (<2000, ≥2000). Results Of 714 unique articles identified, 97 examined DRS methods and 86 were empirical applications. We observed a bimodal distribution in the number of publications over time, with a peak in 1979-1980 and a resurgence since 2000. The majority of applications with methodological detail derived the DRS using logistic regression (47%), used the DRS as a categorical variable in regression (93%), and applied the DRS in a non-experimental cohort (47%) or case-control (42%) study. Few studies examined effect modification by outcome risk (23%). Conclusion Use of DRS methods has increased yet remains low. Comparative effectiveness research may benefit from more DRS applications, particularly to examine effect modification by outcome risk. Standardized terminology may facilitate identification, application, and comprehension of DRS methods. More research is needed to support the application of DRS methods, particularly in case-control studies. PMID:23172692

  7. Vibrational Dependence of Line Coupling and Line Mixing in Self-Broadened Parallel Bands of NH3

    NASA Technical Reports Server (NTRS)

    Ma, Q.; Boulet, C.; Tipping, R. H.

    2017-01-01

Line coupling and line mixing effects have been calculated for several self-broadened NH3 lines in parallel bands involving an excited v2 mode. It is well known that once the v2 mode is excited, the inversion splitting increases quickly with this quantum number. In the present study, we have shown that the v2 dependence of the inversion splitting plays a dominant role in the calculated line-shape parameters. For the v2 band, with a 36 cm-1 splitting, the intra-doublet couplings practically disappear, and for the 2v2 and 2v2 - v2 bands, with much larger splitting values, they are completely absent. The inter-doublet coupling becomes the most efficient coupling mechanism for the v2 band, but it too is completely absent for bands with higher v2 quantum numbers. Because line mixing is caused by line coupling, these conclusions on line coupling apply to line mixing as well. As a check of the calculated line mixing effects, the present formalism explains well the line mixing signatures observed in the v1 band, but there are large discrepancies between the measured Rosenkranz mixing parameters and our calculated results for the v2 and 2v2 bands. To clarify these discrepancies, we propose that new measurements be made. In addition, we have calculated self-broadened half-widths in the v2 and 2v2 bands and made comparisons with several measurements and with the values listed in HITRAN 2012. In general, the agreement with measurements is very good. In contrast, the agreement with HITRAN 2012 is poor, indicating that the empirical formula used to predict the HITRAN 2012 data needs to be updated.
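For reference, the first-order Rosenkranz profile that defines the mixing parameters mentioned above adds a dispersive term, scaled by the mixing parameter y, to the usual Lorentzian. This is the generic textbook form, not the authors' full line-shape formalism:

```python
def rosenkranz_profile(nu, nu0, gamma, y, strength=1.0):
    """First-order Rosenkranz line shape: a Lorentzian of half-width gamma
    centered at nu0, plus a dispersive term scaled by the line-mixing
    parameter y.  Setting y = 0 recovers the pure Lorentzian."""
    d = nu - nu0
    return strength * (gamma + y * d) / (d * d + gamma * gamma)
```

A nonzero y transfers intensity from one wing of the line to the other; this asymmetry is what line-mixing measurements detect.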

  8. Recent solar extreme ultraviolet irradiance observations and modeling: A review

    NASA Technical Reports Server (NTRS)

    Tobiska, W. Kent

    1993-01-01

For more than 90 years, solar extreme ultraviolet (EUV) irradiance modeling has progressed from empirical blackbody radiation formulations, through fudge factors, to typically measured irradiances and reference spectra as well as time-dependent empirical models representing continua and line emissions. A summary of recent EUV measurements by five rockets and three satellites during the 1980s is presented along with the major modeling efforts. The most significant reference spectra are reviewed and three independently derived empirical models are described. These include Hinteregger's 1981 SERF1, Nusinov's 1984 two-component, and Tobiska's 1990/1991/SERF2/EUV91 flux models. They each provide daily full-disk broad-spectrum flux values from 2 to 105 nm at 1 AU. All the models depend to one degree or another on the long time series of the Atmosphere Explorer E (AE-E) EUV database. Each model uses ground- and/or space-based proxies to create emissions from solar atmospheric regions. Future challenges in EUV modeling are summarized, including the basic requirements of models, the task of incorporating new observations and theory into the models, the task of comparing models with solar-terrestrial data sets, and long-term goals and modeling objectives. By the late 1990s, empirical models will potentially be improved through the use of proposed solar EUV irradiance measurements and images at selected wavelengths that will greatly enhance modeling and predictive capabilities.

  9. Water Planetary and Cometary Atmospheres: H2O/HDO Transmittance and Fluorescence Models

    NASA Technical Reports Server (NTRS)

    Villanueva, G. L.; Mumma, M. J.; Bonev, B. P.; Novak, R. E.; Barber, R. J.; DiSanti, M. A.

    2012-01-01

We developed a modern methodology to retrieve water (H2O) and deuterated water (HDO) in planetary and cometary atmospheres, and constructed an accurate spectral database that combines theoretical and empirical results. Based on a greatly expanded set of spectroscopic parameters, we built a full non-resonance cascade fluorescence model and computed fluorescence efficiencies for H2O (500 million lines) and HDO (700 million lines). The new line list was also integrated into an advanced terrestrial radiative transfer code (LBLRTM) and adapted to the CO2-rich atmosphere of Mars, for which we adopted the complex Robert-Bonamy formalism for line shapes. We then retrieved water and D/H in the atmospheres of Mars, comet C/2007 W1, and Earth by applying the new formalism to spectra obtained with the high-resolution spectrograph NIRSPEC/Keck II atop Mauna Kea (Hawaii). The new model accurately describes the complex morphology of the water bands and greatly increases the accuracy of the retrieved abundances (and the D/H ratio in water) with respect to previously available models. The new model provides improved agreement of predicted and measured intensities for many H2O lines already identified in comets, and it identifies several unassigned cometary emission lines as new emission lines of H2O. The improved spectral accuracy permits retrieval of more accurate rotational temperatures and production rates for cometary water.

  10. Retrieving hydrological connectivity from empirical causality in karst systems

    NASA Astrophysics Data System (ADS)

    Delforge, Damien; Vanclooster, Marnik; Van Camp, Michel; Poulain, Amaël; Watlet, Arnaud; Hallet, Vincent; Kaufmann, Olivier; Francis, Olivier

    2017-04-01

Because of their complexity, karst systems exhibit nonlinear dynamics. Moreover, their hidden behavior complicates the choice of the most suitable model. Therefore, both intensive investigation methods and nonlinear data analysis are needed to reveal the underlying hydrological connectivity as a prior for a consistent physically based modelling approach. Convergent Cross Mapping (CCM), a recent method, promises to identify causal relationships between time series belonging to the same dynamical system. The method is based on phase-space reconstruction and is suitable for nonlinear dynamics. As an empirical causation detection method, it could be used to highlight the hidden complexity of a karst system by revealing its inner hydrological and dynamical connectivity. Hence, if one can link causal relationships to physical processes, the method shows great potential to support physically based model structure selection. We present the results of numerical experiments using karst model blocks combined in different structures to generate time series from actual rainfall series. CCM is applied between the time series to investigate whether the empirical causation detection is consistent with the hydrological connectivity suggested by the karst model.
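A minimal sketch conveys the CCM idea: time-delay embed one series to reconstruct its shadow manifold, predict the other series from nearby manifold points, and score the prediction skill. The embedding dimension, the exponential weighting, and the logistic-map test below are illustrative defaults, not the configuration used in this study:

```python
import math

def embed(series, E, tau):
    """Time-delay embedding: vectors (s_t, s_{t-tau}, ..., s_{t-(E-1)tau})."""
    start = (E - 1) * tau
    vectors = [tuple(series[t - k * tau] for k in range(E))
               for t in range(start, len(series))]
    return vectors, start

def cross_map(target, source, E=2, tau=1):
    """Minimal cross-map skill: reconstruct the shadow manifold of `source`,
    predict `target` from each point's E+1 nearest neighbors (simplex-style
    exponential weights), and return the Pearson correlation between the
    predictions and the true target values."""
    M, start = embed(source, E, tau)
    preds, truth = [], []
    for i, v in enumerate(M):
        dists = sorted((math.dist(v, u), j) for j, u in enumerate(M) if j != i)
        nn = dists[:E + 1]
        d0 = nn[0][0] or 1e-12            # guard against a zero distance
        wts = [math.exp(-dj / d0) for dj, _ in nn]
        s = sum(wts)
        preds.append(sum(wt * target[start + j] for wt, (_, j) in zip(wts, nn)) / s)
        truth.append(target[start + i])
    mt = sum(truth) / len(truth)
    mp = sum(preds) / len(preds)
    cov = sum((a - mt) * (b - mp) for a, b in zip(truth, preds))
    st = math.sqrt(sum((a - mt) ** 2 for a in truth))
    sp = math.sqrt(sum((b - mp) ** 2 for b in preds))
    return cov / (st * sp)
```

In Sugihara-style CCM one also checks that the skill converges as the library of manifold points grows, which is what distinguishes causation from mere correlation.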

  11. Species delimitation using Bayes factors: simulations and application to the Sceloporus scalaris species group (Squamata: Phrynosomatidae).

    PubMed

    Grummer, Jared A; Bryson, Robert W; Reeder, Tod W

    2014-03-01

    Current molecular methods of species delimitation are limited by the types of species delimitation models and scenarios that can be tested. Bayes factors allow for more flexibility in testing non-nested species delimitation models and hypotheses of individual assignment to alternative lineages. Here, we examined the efficacy of Bayes factors in delimiting species through simulations and empirical data from the Sceloporus scalaris species group. Marginal-likelihood scores of competing species delimitation models, from which Bayes factor values were compared, were estimated with four different methods: harmonic mean estimation (HME), smoothed harmonic mean estimation (sHME), path-sampling/thermodynamic integration (PS), and stepping-stone (SS) analysis. We also performed model selection using a posterior simulation-based analog of the Akaike information criterion through Markov chain Monte Carlo analysis (AICM). Bayes factor species delimitation results from the empirical data were then compared with results from the reversible-jump MCMC (rjMCMC) coalescent-based species delimitation method Bayesian Phylogenetics and Phylogeography (BP&P). Simulation results show that HME and sHME perform poorly compared with PS and SS marginal-likelihood estimators when identifying the true species delimitation model. Furthermore, Bayes factor delimitation (BFD) of species showed improved performance when species limits are tested by reassigning individuals between species, as opposed to either lumping or splitting lineages. In the empirical data, BFD through PS and SS analyses, as well as the rjMCMC method, each provide support for the recognition of all scalaris group taxa as independent evolutionary lineages. Bayes factor species delimitation and BP&P also support the recognition of three previously undescribed lineages. 
In both simulated and empirical data sets, harmonic and smoothed harmonic mean marginal-likelihood estimators provided much higher marginal-likelihood estimates than PS and SS estimators. The AICM displayed poor repeatability in both simulated and empirical data sets, and produced inconsistent model rankings across replicate runs with the empirical data. Our results suggest that species delimitation through the use of Bayes factors with marginal-likelihood estimates via PS or SS analyses provide a useful and complementary alternative to existing species delimitation methods.
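    As a rough illustration of the comparison at the heart of this study: a Bayes factor is just the difference of two log marginal likelihoods, and the harmonic mean estimator the abstract criticizes can be written in a few lines. This is a generic sketch, not the authors' implementation; the function names are ours.

```python
import math

def log_sum_exp(xs):
    """Numerically stable log(sum(exp(x) for x in xs))."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def harmonic_mean_log_ml(log_likelihoods):
    """Harmonic mean estimate of the log marginal likelihood from
    per-sample log-likelihoods drawn from the posterior. This is the
    estimator known to be unstable and biased upward, as the study found.
    log(1 / mean(1/L_i)) = log(n) - logsumexp(-logL_i)."""
    n = len(log_likelihoods)
    return math.log(n) - log_sum_exp([-ll for ll in log_likelihoods])

def log_bayes_factor(log_ml_model_a, log_ml_model_b):
    """Log Bayes factor comparing model A to model B: positive values
    favor A. The same formula applies whichever marginal-likelihood
    estimator (HME, PS, SS) produced the inputs."""
    return log_ml_model_a - log_ml_model_b
```

    The study's conclusion is essentially that the quality of the Bayes factor is only as good as the marginal-likelihood estimates fed into the subtraction, which is why PS and SS outperformed HME.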

  12. An Empirical Method for Determining the Lunar Gravity Field. Ph.D. Thesis - George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Ferrari, A. J.

    1971-01-01

    A method has been devised to determine the spherical harmonic coefficients of the lunar gravity field. This method consists of a two-step data reduction and estimation process. In the first step, a weighted least-squares empirical orbit determination scheme is applied to Doppler tracking data from lunar orbits to estimate long-period Kepler elements and rates. Each of the Kepler elements is represented by an independent function of time. The long-period perturbing effects of the earth, sun, and solar radiation are explicitly modeled in this scheme. Kepler element variations estimated by this empirical processor are ascribed to the non-central lunar gravitation features. Doppler data are reduced in this manner for as many orbits as are available. In the second step, the Kepler element rates are used as input to a second least-squares processor that estimates lunar gravity coefficients using the long-period Lagrange perturbation equations.
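    The first step of the method is a weighted least-squares fit of slowly varying Kepler elements to tracking data. A minimal sketch of a weighted straight-line fit (a constant element value plus a secular rate), the simplest instance of that step, might look like the following; the function name and the single-element setup are illustrative assumptions, not Ferrari's processor:

```python
def weighted_line_fit(t, y, w):
    """Weighted least-squares fit of y ~ a + b*t via the closed-form
    normal equations. In the lunar-gravity context, a would be a
    long-period Kepler element and b its rate."""
    sw = sum(w)
    st = sum(wi * ti for wi, ti in zip(w, t))
    sy = sum(wi * yi for wi, yi in zip(w, y))
    stt = sum(wi * ti * ti for wi, ti in zip(w, t))
    sty = sum(wi * ti * yi for wi, ti, yi in zip(w, t, y))
    det = sw * stt - st * st
    a = (stt * sy - st * sty) / det
    b = (sw * sty - st * sy) / det
    return a, b
```

    The second step would then regress the fitted rates b, collected over many orbits, onto the Lagrange perturbation equations to recover gravity coefficients.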

  13. System and method for measuring residual stress

    DOEpatents

    Prime, Michael B.

    2002-01-01

    The present invention is a method and system for determining the residual stress within an elastic object. In the method, an elastic object is cut along a path having a known configuration. The cut creates a portion of the object having a new free surface. The free surface then deforms to a contour which is different from the path. Next, the contour is measured to determine how much deformation has occurred across the new free surface. Points defining the contour are collected in an empirical data set. The portion of the object is then modeled in a computer simulator. The points in the empirical data set are entered into the computer simulator. The computer simulator then calculates the residual stress along the path which caused the points within the object to move to the positions measured in the empirical data set. The calculated residual stress is then presented in a useful format to an analyst.
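    A toy one-dimensional analogue of the patented workflow (measure the deformed contour, then infer the stress that was released) can be sketched as follows. The real method inverts the measured contour with a 3-D finite element simulation; the finite-difference version below only shows the direction of the inference, under assumed names and a linear-elastic simplification:

```python
def residual_stress_1d(x, u, youngs_modulus):
    """Toy 1-D analogue of the contour method: convert measured surface
    displacements u(x) into the released residual stress.

    Strain is du/dx by central differences; the released stress is
    -E * strain, since the surface deformed because that stress was
    removed by the cut. Returns stresses at the interior points of x.
    """
    stresses = []
    for i in range(1, len(x) - 1):
        strain = (u[i + 1] - u[i - 1]) / (x[i + 1] - x[i - 1])
        stresses.append(-youngs_modulus * strain)
    return stresses
```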

  14. Tourism forecasting using modified empirical mode decomposition and group method of data handling

    NASA Astrophysics Data System (ADS)

    Yahya, N. A.; Samsudin, R.; Shabri, A.

    2017-09-01

    In this study, a hybrid model using modified Empirical Mode Decomposition (EMD) and the Group Method of Data Handling (GMDH) is proposed for tourism forecasting. This approach reconstructs intrinsic mode functions (IMFs) produced by EMD using a trial and error method. The new component and the remaining IMFs are then predicted separately using the GMDH model. Finally, the forecasted results for each component are aggregated to construct an ensemble forecast. The data used in this experiment are monthly time series data of tourist arrivals from China, Thailand and India to Malaysia from year 2000 to 2016. The performance of the model is evaluated using Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE), where the conventional GMDH model and the EMD-GMDH model are used as benchmark models. Empirical results showed that the proposed model produced better forecasts than the benchmark models.
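    The decompose-forecast-aggregate scheme described above can be sketched independently of the specific models. Here a naive persistence forecaster stands in for the per-component GMDH model; that substitution is an assumption for illustration only:

```python
def aggregate_forecast(components, forecast_one):
    """Forecast each decomposed component (e.g. each IMF) separately,
    then sum the component forecasts into the ensemble forecast."""
    return sum(forecast_one(c) for c in components)

def persistence(series):
    """Naive stand-in for the per-component model: predict the last
    observed value."""
    return series[-1]
```

    Because EMD components sum to the original series, aggregating the component forecasts recovers a forecast on the original scale.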

  15. Jet-induced ground effects on a parametric flat-plate model in hover

    NASA Technical Reports Server (NTRS)

    Wardwell, Douglas A.; Hange, Craig E.; Kuhn, Richard E.; Stewart, Vearl R.

    1993-01-01

    The jet-induced forces generated on short takeoff and vertical landing (STOVL) aircraft when in close proximity to the ground can have a significant effect on aircraft performance. Therefore, accurate predictions of these aerodynamic characteristics are highly desirable. Empirical procedures for estimating jet-induced forces during the vertical/short takeoff and landing (V/STOL) portions of the flight envelope are currently limited in accuracy. The jet-induced force data presented here significantly add to the current STOVL configuration database. Further development of empirical prediction methods for jet-induced forces, to provide more configuration diversity and improved overall accuracy, depends on the viability of this STOVL database. The database may also be used to validate computational fluid dynamics (CFD) analysis codes. The hover data obtained at the NASA Ames Jet Calibration and Hover Test (JCAHT) facility for a parametric flat-plate model are presented. The model tested was designed to allow variations in the planform aspect ratio, number of jets, nozzle shape, and jet location. There were 31 different planform/nozzle configurations tested. Each configuration had numerous pressure taps installed to measure the pressures on the undersurface of the model. All pressure data along with the balance jet-induced lift and pitching-moment increments are tabulated. For selected runs, pressure data are presented in the form of contour plots that show lines of constant pressure coefficient on the model undersurface. Nozzle-thrust calibrations and jet flow-pressure survey information are also provided.

  16. Measuring Work Environment and Performance in Nursing Homes

    PubMed Central

    Temkin-Greener, Helena; Zheng, Nan (Tracy); Katz, Paul; Zhao, Hongwei; Mukamel, Dana B.

    2008-01-01

    Background Qualitative studies of the nursing home work environment have long suggested that such attributes as leadership and communication may be related to nursing home performance, including residents' outcomes. However, empirical studies examining these relationships have been scant. Objectives This study is designed to: develop an instrument for measuring nursing home work environment and perceived work effectiveness; test the reliability and validity of the instrument; and identify individual and facility-level factors associated with better facility performance. Research Design and Methods The analysis was based on survey responses provided by managers (N=308) and direct care workers (N=7,418) employed in 162 facilities throughout New York State. Exploratory factor analysis, Cronbach's alphas, analysis of variance, and regression models were used to assess instrument reliability and validity. Multivariate regression models, with fixed facility effects, were used to examine factors associated with work effectiveness. Results The reliability and the validity of the survey instrument for measuring work environment and perceived work effectiveness have been demonstrated. Several individual (e.g. occupation, race) and facility characteristics (e.g. management style, workplace conditions, staffing) that are significant predictors of perceived work effectiveness were identified. Conclusions The organizational performance model used in this study recognizes the multidimensionality of the work environment in nursing homes. Our findings suggest that efforts at improving work effectiveness must also be multifaceted. Empirical findings from such a line of research may provide insights for improving the quality of the work environment and ultimately the quality of residents' care. PMID:19330892

  17. Do attitudes and behavior of health care professionals exacerbate health care disparities among immigrant and ethnic minority groups? An integrative literature review.

    PubMed

    Drewniak, Daniel; Krones, Tanja; Wild, Verina

    2017-05-01

    Recent investigations of ethnicity-related disparities in health care have focused on the contribution of providers' implicit biases. A significant effect on health care outcomes is suggested, but the results are mixed. The purpose of this integrative literature review is to provide an overview and synthesize the current empirical research on the potential influence of health care professionals' attitudes and behaviors towards ethnic minority patients on health care disparities. Integrative literature review. Four internet-based literature indexes (MEDLINE, PsycINFO, Sociological Abstracts and Web of Science) were searched for articles published between 1982 and 2012 discussing health care professionals' attitudes or behaviors towards ethnic minority patients. Thematic analysis was used to synthesize the relevant findings. We found 47 studies from 12 countries. Six potential barriers to health care for ethnic minorities were identified that may be related to health care professionals' attitudes or behaviors: biases, stereotypes and prejudices; language and communication barriers; cultural misunderstandings; gate-keeping; statistical discrimination; and specific challenges of delivering care to undocumented migrants. Data on health care professionals' attitudes or behaviors are both limited and inconsistent. We thus provide reflections on methods, conceptualization, interpretation and the importance of the geographical or socio-political settings of potential studies. More empirical data are needed, especially on health care professionals' attitudes or behaviors towards (irregular) migrant patients. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Improved inland water levels from SAR altimetry using novel empirical and physical retrackers

    NASA Astrophysics Data System (ADS)

    Villadsen, Heidi; Deng, Xiaoli; Andersen, Ole B.; Stenseng, Lars; Nielsen, Karina; Knudsen, Per

    2016-06-01

    Satellite altimetry has proven a valuable source of information on river and lake levels where in situ data are sparse or non-existent. In this study several new methods for obtaining stable inland water levels from CryoSat-2 Synthetic Aperture Radar (SAR) altimetry are presented and evaluated. In addition, the possible benefits from combining physical and empirical retrackers are investigated. The retracking methods evaluated in this paper include the physical SAR Altimetry MOde Studies and Applications (SAMOSA3) model, a traditional subwaveform threshold retracker, the proposed Multiple Waveform Persistent Peak (MWaPP) retracker, and a method combining the physical and empirical retrackers. Using a physical SAR waveform retracker over inland water has not been attempted before but shows great promise in this study. The evaluation is performed for two medium-sized lakes (Lake Vänern in Sweden and Lake Okeechobee in Florida), and in the Amazon River in Brazil. Comparing with in situ data shows that using the SAMOSA3 retracker generally provides the lowest root-mean-square errors (RMSE), closely followed by the MWaPP retracker. For the empirical retrackers, the RMSE values obtained when comparing with in situ data in Lake Vänern and Lake Okeechobee are on the order of 2-5 cm for well-behaved waveforms. Combining the physical and empirical retrackers did not offer significantly improved mean track standard deviations or RMSEs. Based on these studies, it is suggested that future SAR-derived water levels be obtained using the SAMOSA3 retracker whenever information about physical properties other than range is desired. Otherwise we suggest using the empirical MWaPP retracker described in this paper, which is easy to implement, computationally efficient, and gives a height estimate for even the most contaminated waveforms.
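    A traditional threshold retracker of the kind listed among the evaluated methods can be sketched in a few lines: the retracking gate is where the waveform's leading edge first crosses a fraction of the peak power, with linear sub-gate interpolation. The 50% threshold below is a common default, not necessarily the setting used in this study:

```python
def threshold_retrack(waveform, threshold=0.5):
    """Return the fractional gate index where the leading edge first
    crosses threshold * peak power, interpolating linearly between
    gates. Returns the last gate index if no crossing is found."""
    peak = max(waveform)
    level = threshold * peak
    for i in range(1, len(waveform)):
        if waveform[i] >= level > waveform[i - 1]:
            # Linear interpolation between gates i-1 and i.
            return (i - 1) + (level - waveform[i - 1]) / (waveform[i] - waveform[i - 1])
    return float(len(waveform) - 1)
```

    The fractional gate is then converted to a range correction using the tracker gate and the gate width, a step that depends on mission-specific constants and is omitted here.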

  19. Collaboration in Culturally Responsive Therapy: Establishing A Strong Therapeutic Alliance Across Cultural Lines

    PubMed Central

    Asnaani, Anu; Hofmann, Stefan G.

    2012-01-01

    Achieving effectiveness of therapeutic interventions across a diversity of patients continues to be a foremost concern of clinicians and clinical researchers alike. Further, across theoretical orientations and in all treatment modalities, therapy alliance remains a critical component to determine such favorable outcome from therapy. Yet, there remains a scarcity of empirical data testing specific features that most readily facilitate effective collaboration in a multi-cultural therapy relationship. This article reviews the literature on terminology, empirical findings, and features to enhance collaboration in multi-cultural therapy, suggesting guidelines for achieving this goal in therapy with patients (and therapists) of various cultural/racial backgrounds. This is followed by a multi-cultural case study presenting with several co-morbid Axis I disorders, to exemplify the application of these guidelines over the course of therapy. PMID:23616299

  20. What ethics for case managers? Literature review and discussion.

    PubMed

    Corvol, Aline; Moutel, Grégoire; Somme, Dominique

    2016-11-01

    Little is known about case managers' ethical issues and professional values. This article presents an overview of ethical issues in case managers' current practice. Findings are examined in the light of nursing ethics, social work ethics and principle-based biomedical ethics. A systematic literature review was performed to identify and analyse empirical studies concerning ethical issues in case management programmes. It was completed by systematic content analysis of case managers' national codes of ethics. Only nine empirical studies were identified, eight of them from North America. The main dilemmas were how to balance system goals against the client's interest and client protection against autonomy. Professional codes of ethics shared important similarities, but offered different responses to these two dilemmas. We discuss the respective roles of professional and organizational ethics. Further lines of research are suggested. © The Author(s) 2015.

  1. Spatial estimation from remotely sensed data via empirical Bayes models

    NASA Technical Reports Server (NTRS)

    Hill, J. R.; Hinkley, D. V.; Kostal, H.; Morris, C. N.

    1984-01-01

    Multichannel satellite image data, available as LANDSAT imagery, are recorded as a multivariate time series (four channels, multiple passovers) in two spatial dimensions. The application of parametric empirical Bayes theory to classifying, and estimating the probability of, each crop type at each of a large number of pixels is considered. This theory involves both the probability distribution of imagery data, conditional on crop types, and the prior spatial distribution of crop types. For the latter, Markov models indexed by estimable parameters are used. A broad outline of the general theory reveals several questions for further research. Some detailed results are given for the special case of two crop types when only a line transect is analyzed. Finally, the estimation of an underlying continuous process on the lattice is discussed, which would be applicable to such quantities as crop yield.
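    The shrinkage idea behind empirical Bayes estimation can be illustrated with per-pixel crop proportions pulled toward the pooled mean. The fixed pseudo-count prior below is a simplification of the paper's parametric approach, which would also estimate the prior's parameters from the data:

```python
def eb_shrink_proportions(successes, trials, prior_strength=10.0):
    """Empirical Bayes-style shrinkage of per-pixel class proportions
    toward the pooled mean via a beta prior. The prior mean is taken
    from the pooled data; prior_strength is an assumed pseudo-count,
    not a fitted hyperparameter."""
    pooled = sum(successes) / sum(trials)
    return [(x + prior_strength * pooled) / (n + prior_strength)
            for x, n in zip(successes, trials)]
```

    Pixels with little data are pulled strongly toward the pooled estimate, while well-observed pixels keep estimates close to their raw proportions.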

  2. Testing the Grossman model of medical spending determinants with macroeconomic panel data.

    PubMed

    Hartwig, Jochen; Sturm, Jan-Egbert

    2018-02-16

    Michael Grossman's human capital model of the demand for health has been argued to be one of the major achievements in theoretical health economics. Attempts to test this model empirically have been sparse, however, and with mixed results. These attempts have so far relied on (mostly cross-sectional) micro data from household surveys. For the first time in the literature, we bring in macroeconomic panel data for 29 OECD countries over the period 1970-2010 to test the model. To check the robustness of the results for the determinants of medical spending identified by the model, we include additional covariates in an extreme bounds analysis (EBA) framework. The preferred model specifications (including the robust covariates) do not lend much empirical support to the Grossman model. This is in line with the mixed results of earlier studies.

  3. Sub-pixel accuracy thickness calculation of poultry fillets from scattered laser profiles

    NASA Astrophysics Data System (ADS)

    Jing, Hansong; Chen, Xin; Tao, Yang; Zhu, Bin; Jin, Fenghua

    2005-11-01

    A laser range imaging system based on the triangulation method was designed and implemented for online high-resolution thickness calculation of poultry fillets. A laser pattern was projected onto the surface of the chicken fillet to calculate the thickness of the meat. Because chicken fillets are a relatively loosely structured material, laser light easily penetrates the meat, and scattering occurs both at and under the surface. When laser light is scattered under the surface, it is reflected back and further blurs the laser line sharpness. To accurately calculate the thickness of the object, the light transport has to be considered. In the system, the Bidirectional Scattering-Surface Reflectance Distribution Function (BSSRDF) was used to model the light transport and the light pattern reflected into the cameras. The BSSRDF gives the reflectance of a target as a function of illumination geometry and viewing geometry. Based on this function, an empirical method has been developed, and it has been shown that this method can accurately calculate the thickness of the object from a scattered laser profile. The laser range system is designed as a sub-system that complements the X-ray bone inspection system for non-invasive detection of hazardous materials in boneless poultry meat with irregular thickness.
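    The underlying triangulation geometry is simple: a surface raised by thickness t shifts the observed laser line laterally by t * tan(view angle) in the camera image. A minimal sketch under assumed calibration parameters (the names are ours, not the paper's):

```python
import math

def thickness_from_shift(pixel_shift, mm_per_pixel, view_angle_deg):
    """Laser triangulation: convert the observed lateral shift of the
    laser line (in pixels) into object thickness (in mm), given the
    image scale and the camera's viewing angle from the laser plane."""
    shift_mm = pixel_shift * mm_per_pixel
    return shift_mm / math.tan(math.radians(view_angle_deg))
```

    The paper's contribution is what happens before this step: correcting the apparent line position for sub-surface scattering so the shift fed into the geometry is accurate to sub-pixel level.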

  4. Decoding of the light changes in eclipsing Wolf-Rayet binaries. I. A non-classical approach to the solution of light curves

    NASA Astrophysics Data System (ADS)

    Perrier, C.; Breysacher, J.; Rauw, G.

    2009-09-01

    Aims: We present a technique to determine the orbital and physical parameters of eclipsing eccentric Wolf-Rayet + O-star binaries, where one eclipse is produced by the absorption of the O-star light by the stellar wind of the W-R star. Methods: Our method is based on the use of the empirical moments of the light curve that are integral transforms evaluated from the observed light curves. The optical depth along the line of sight and the limb darkening of the W-R star are modelled by simple mathematical functions, and we derive analytical expressions for the moments of the light curve as a function of the orbital parameters and the key parameters of the transparency and limb-darkening functions. These analytical expressions are then inverted in order to derive the values of the orbital inclination, the stellar radii, the fractional luminosities, and the parameters of the wind transparency and limb-darkening laws. Results: The method is applied to the SMC W-R eclipsing binary HD 5980, a remarkable object that underwent an LBV-like event in August 1994. The analysis refers to the pre-outburst observational data. A synthetic light curve based on the elements derived for the system allows a quality assessment of the results obtained.

  5. On open and closed field line regions in Tsyganenko's field model and their possible associations with horse collar auroras

    NASA Technical Reports Server (NTRS)

    Birn, J.; Hones, E. W., Jr.; Craven, J. D.; Frank, L. A.; Elphinstone, R. D.; Stern, D. P.

    1991-01-01

    The boundary between open and closed field lines is investigated in the empirical Tsyganenko (1987) magnetic field model. All field lines extending to distances beyond -70 R(E), the tailward validity limit of the Tsyganenko model, are defined as open, while all other field lines, which cross the equatorial plane earthward of -70 R(E) and are connected with the earth at both ends, are assumed closed. It is found that this boundary at the surface of the earth, identified as the polar cap boundary, can exhibit the arrowhead shape, pointed toward the sun, which is found in horse collar auroras. For increasing activity levels, the polar cap increases in area and becomes rounder, so that the arrowhead shape is less pronounced. The presence of a net B(y) component can also lead to considerable rounding of the open flux region. The arrowhead shape is found to be closely associated with the increase of B(z) from the midnight region to the flanks of the tail, consistent with a similar increase of the plasma sheet thickness.

  6. Experimental research and numerical simulation on cryogenic line chill-down process

    NASA Astrophysics Data System (ADS)

    Jin, Lingxue; Cho, Hyokjin; Lee, Cheonkyu; Jeong, Sangkwon

    2018-01-01

    Empirical heat transfer correlations are suggested for the fast cool-down of a cryogenic transfer line from room temperature to cryogenic temperature. The correlations include heat transfer coefficient (HTC) correlations for the single-phase gas convection and film boiling regimes, the minimum heat flux (MHF) temperature, the critical heat flux (CHF) temperature, and the CHF. The correlations are obtained from experimental measurements conducted on a 12.7 mm outer diameter (OD), 1.25 mm wall thickness, 7 m long stainless steel horizontal pipe with liquid nitrogen (LN2). The effect of lengthwise position is verified by measuring the temperature profiles near the inlet and the outlet of the transfer line. The newly suggested heat transfer correlations are applied to a one-dimensional homogeneous transient model to simulate the cryogenic line chill-down process; with the correlations, the chill-down time and the cryogen consumption are well predicted over the mass flux range from 26.0 kg/m² s to 73.6 kg/m² s.

  7. Osteosarcoma tissues and cell lines from patients with differing serum alkaline phosphatase concentrations display minimal differences in gene expression patterns

    PubMed Central

    de Sá Rodrigues, L. C.; Holmes, K. E.; Thompson, V.; Piskun, C. M.; Lana, S. E.; Newton, M. A.; Stein, T. J.

    2016-01-01

    Serum alkaline phosphatase (ALP) concentration is a prognostic factor for osteosarcoma in multiple studies, although its biological significance remains incompletely understood. To determine whether gene expression patterns differed in osteosarcoma from patients with differing serum ALP concentrations, microarray analysis was performed on 18 primary osteosarcoma samples and six osteosarcoma cell lines from dogs with normal and increased serum ALP concentration. No differences in gene expression patterns were noted between tumours or cell lines with differing serum ALP concentration using a gene-specific two-sample t-test. Using a more sensitive empirical Bayes procedure, defective in cullin neddylation 1 domain containing 1 (DCUN1D1) was increased in both the tissue and cell lines of the normal ALP group. Using quantitative PCR (qPCR), differences in DCUN1D1 expression between the two groups failed to reach significance. The homogeneity of gene expression patterns of osteosarcomas associated with differing serum ALP concentrations is consistent with previous studies suggesting serum ALP concentration is not associated with intrinsic differences of osteosarcoma cells. PMID:25643733

  8. On the origin of the water vapor continuum absorption within rotational and fundamental vibrational bands

    NASA Astrophysics Data System (ADS)

    Serov, E. A.; Odintsova, T. A.; Tretyakov, M. Yu.; Semenov, V. E.

    2017-05-01

    Analysis of the continuum absorption in water vapor at room temperature within the purely rotational and fundamental ro-vibrational bands shows that a significant part (up to a half) of the observed absorption cannot be explained within the framework of the existing concepts of the continuum. Neither of the two most prominent mechanisms of continuum origin, namely the far wings of monomer lines and the dimers, can reproduce the currently available experimental data adequately. We propose a new approach to developing a physically based model of the continuum. It is demonstrated that water dimers and wings of monomer lines may contribute equally to the continuum within the bands, and their contribution should be taken into account in the continuum model. We propose a physical mechanism that provides the missing justification for the super-Lorentzian behavior of the intermediate line wing. A qualitative validation of the proposed approach is given on the basis of a simple empirical model. The obtained results are directly indicative of the necessity to reconsider the existing line wing theory and can guide this reconsideration.

  9. Theoretical hot methane line lists up to T = 2000 K for astrophysical applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rey, M.; Tyuterev, Vl. G.; Nikitin, A. V., E-mail: michael.rey@univ-reims.fr

    2014-07-01

    The paper describes the construction of complete sets of hot methane lines based on accurate ab initio potential and dipole moment surfaces and extensive first-principle calculations. Four line lists spanning the [0-5000] cm⁻¹ infrared region were built at T = 500, 1000, 1500, and 2000 K. For each of these four temperatures, we have constructed two versions of line lists: a version for high-resolution applications containing strong and medium lines and a full version appropriate for low-resolution opacity calculations. A comparison with available empirical databases is discussed in detail for both cold and hot bands, giving very good agreement for line positions, typically <0.1-0.5 cm⁻¹, and ∼5% for intensities of strong lines. Together with numerical tests using various basis sets, this confirms the computational convergence of our results for the most important lines, which is the major issue for theoretical spectra predictions. We showed that transitions with lower state energies up to 14,000 cm⁻¹ could give significant contributions to the methane opacity and have to be systematically taken into account. Our list at 2000 K calculated up to J = 50 contains 11.5 billion transitions for I > 10⁻²⁹ cm mol⁻¹. These new lists are expected to be quantitatively accurate with respect to the precision of available and currently planned observations of astrophysical objects with improved spectral resolution.
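    The claim that lower-state energies up to 14,000 cm⁻¹ matter at high temperature follows directly from the Boltzmann population factor exp(-c2 * E / T). A minimal sketch, where c2 is the second radiation constant (about 1.4388 cm K):

```python
import math

# Second radiation constant h*c/k in cm*K (CODATA value, rounded).
C2_CM_K = 1.4387769

def boltzmann_factor(lower_state_energy_cm1, temperature_k):
    """Relative population of a lower state with energy E (in cm^-1)
    at temperature T (in K), proportional to exp(-c2 * E / T)."""
    return math.exp(-C2_CM_K * lower_state_energy_cm1 / temperature_k)
```

    At E = 14,000 cm⁻¹ the factor is vanishingly small at room temperature but many orders of magnitude larger at 2000 K, which is why such "hot" transitions must be included in the high-temperature lists.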

  10. Lead Slowing-Down Spectrometry for Spent Fuel Assay: FY11 Status Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren, Glen A.; Casella, Andrew M.; Haight, R. C.

    2011-08-01

    Executive Summary: Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration to study the feasibility of Lead Slowing Down Spectroscopy (LSDS). This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today’s confirmatory assay methods. This document is a progress report for FY2011 collaboration activities. Progress made by the collaboration in FY2011 continues to indicate the promise of LSDS techniques applied to used fuel. PNNL developed an empirical model based on calibration of the LSDS to responses generated from well-characterized used fuel. The empirical model demonstrated the potential for the direct and independent assay of the sum of the masses of 239Pu and 241Pu to within approximately 3% over a wide used fuel parameter space. Similar results were obtained using a perturbation approach developed by LANL. Benchmark measurements have been successfully conducted at LANL and at RPI using their respective LSDS instruments. The ISU and UNLV collaborative effort is focused on the fabrication and testing of prototype fission chambers lined with ultra-depleted 238U and 232Th; uranium deposition on a stainless steel disc using spiked U3O8 from a room-temperature ionic liquid was successful, with improving thickness obtained. In FY2012, the collaboration plans a broad array of activities. PNNL will focus on optimizing its empirical model and minimizing its reliance on calibration data, as well as continuing efforts to develop an analytical model. Additional measurements are planned at LANL and RPI. LANL measurements will include a Pu sample, which is expected to provide more counts at longer slowing-down times to help identify discrepancies between experimental data and MCNPX simulations. RPI measurements will include the assay of an entire fresh fuel assembly to study self-shielding effects as well as the ability to detect diversion by detecting a missing fuel pin in the fuel assembly. The development of threshold neutron sensors will continue, and UNLV will calibrate existing ultra-depleted uranium deposits at ISU.

  11. Empirical intrinsic geometry for nonlinear modeling and time series filtering.

    PubMed

    Talmon, Ronen; Coifman, Ronald R

    2013-07-30

    In this paper, we present a method for time series analysis based on empirical intrinsic geometry (EIG). EIG enables one to reveal the low-dimensional parametric manifold as well as to infer the underlying dynamics of high-dimensional time series. By incorporating concepts of information geometry, this method extends existing geometric analysis tools to support stochastic settings and parametrizes the geometry of empirical distributions. However, the statistical models are not required as priors; hence, EIG may be applied to a wide range of real signals without existing definitive models. We show that the inferred model is noise-resilient and invariant under different observation and instrumental modalities. In addition, we show that it can be extended efficiently to newly acquired measurements in a sequential manner. These two advantages enable us to revisit the Bayesian approach and incorporate empirical dynamics and intrinsic geometry into a nonlinear filtering framework. We show applications to nonlinear and non-Gaussian tracking problems as well as to acoustic signal localization.

  12. Development of an Empirical Method for Predicting Jet Mixing Noise of Cold Flow Rectangular Jets

    NASA Technical Reports Server (NTRS)

    Russell, James W.

    1999-01-01

    This report presents an empirical method for predicting the jet mixing noise levels of cold flow rectangular jets. The report presents a detailed analysis of the methodology used in development of the prediction method. The empirical correlations used are based on narrow band acoustic data for cold flow rectangular model nozzle tests conducted in the NASA Langley Jet Noise Laboratory. There were 20 separate nozzle test operating conditions. For each operating condition 60 Hz bandwidth microphone measurements were made over a frequency range from 0 to 60,000 Hz. Measurements were performed at 16 polar directivity angles ranging from 45 degrees to 157.5 degrees. At each polar directivity angle, measurements were made at 9 azimuth directivity angles. The report shows the methods employed to remove screech tones and shock noise from the data in order to obtain the jet mixing noise component. The jet mixing noise was defined in terms of one third octave band spectral content, polar and azimuth directivity, and overall power level. Empirical correlations were performed over the range of test conditions to define each of these jet mixing noise parameters as a function of aspect ratio, jet velocity, and polar and azimuth directivity angles. The report presents the method for predicting the overall power level, the average polar directivity, the azimuth directivity and the location and shape of the spectra for jet mixing noise of cold flow rectangular jets.

  13. Disease risk score as a confounder summary method: systematic review and recommendations.

    PubMed

    Tadrous, Mina; Gagne, Joshua J; Stürmer, Til; Cadarette, Suzanne M

    2013-02-01

    To systematically examine trends and applications of the disease risk score (DRS) as a confounder summary method. We completed a systematic search of MEDLINE and Web of Science® to identify all English language articles that applied DRS methods. We tabulated the number of publications by year and type (empirical application, methodological contribution, or review paper) and summarized methods used in empirical applications overall and by publication year (<2000, ≥2000). Of 714 unique articles identified, 97 examined DRS methods and 86 were empirical applications. We observed a bimodal distribution in the number of publications over time, with a peak in 1979-1980 and a resurgence since 2000. The majority of applications with methodological detail derived DRS using logistic regression (47%), used DRS as a categorical variable in regression (93%), and applied DRS in a non-experimental cohort (47%) or case-control (42%) study. Few studies examined effect modification by outcome risk (23%). Use of DRS methods has increased yet remains low. Comparative effectiveness research may benefit from more DRS applications, particularly to examine effect modification by outcome risk. Standardized terminology may facilitate identification, application, and comprehension of DRS methods. More research is needed to support the application of DRS methods, particularly in case-control studies. Copyright © 2012 John Wiley & Sons, Ltd.
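
    The most common workflow identified by the review (DRS derived via logistic regression, then entered as a categorical covariate) can be sketched on synthetic data. The Newton-Raphson fit and the cohort below are illustrative stand-ins, not code from any reviewed study:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logistic(X, y, iters=25):
    """Fit a logistic regression by Newton-Raphson; returns [intercept, coefs...]."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))
        W = p * (1.0 - p)
        H = X1.T @ (X1 * W[:, None]) + 1e-8 * np.eye(X1.shape[1])
        beta += np.linalg.solve(H, X1.T @ (y - p))
    return beta

# synthetic cohort: two confounders, binary exposure, binary outcome
n = 5000
conf = rng.normal(size=(n, 2))
exposed = rng.random(n) < 0.3
logit = -1.0 + conf @ np.array([0.8, -0.5])
outcome = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# 1) model outcome risk among the UNEXPOSED as a function of confounders
beta = fit_logistic(conf[~exposed], outcome[~exposed])

# 2) assign every subject a disease risk score (predicted baseline risk)
X1 = np.column_stack([np.ones(n), conf])
drs = 1.0 / (1.0 + np.exp(-X1 @ beta))

# 3) categorize the DRS (here quintiles) for use as a summary covariate
quintile = np.searchsorted(np.quantile(drs, [0.2, 0.4, 0.6, 0.8]), drs)
```

    The categorical `quintile` variable would then replace the individual confounders in the exposure-outcome regression, which is the summarization step the review tabulates.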

  14. On the nitrogen-induced far-infrared absorption spectra

    NASA Technical Reports Server (NTRS)

    Dore, P.; Filabozzi, A.

    1987-01-01

    The rototranslational absorption spectrum of gaseous N2 is analyzed, considering quadrupolar and hexadecapolar induction mechanisms. The available experimental data are accounted for by using a line-shape analysis in which empirical profiles describe the single-line translational profiles. Thus, a simple procedure is derived that allows the prediction of the N2 spectrum at any temperature. On the basis of the results obtained for the pure gas, a procedure to compute the far-infrared spectrum of the N2-Ar gaseous mixture is also proposed. The good agreement between computed and experimental N2-Ar data indicates that it is possible to predict the far-infrared absorption induced by N2 on the isotropic polarizability of any interacting partner.

  15. Does the Sun Have a Full-Time Chromosphere?

    NASA Astrophysics Data System (ADS)

    Kalkofen, Wolfgang; Ulmschneider, Peter; Avrett, Eugene H.

    1999-08-01

    The successful modeling of the dynamics of H2v bright points in the nonmagnetic chromosphere by Carlsson & Stein gave as a by-product a part-time chromosphere lacking the persistent outward temperature increase of time-average empirical models, which is needed to explain observations of UV emission lines and continua. We discuss the failure of the dynamical model to account for most of the observed chromospheric emission, arguing that their model uses only about 1% of the acoustic energy supplied to the medium. Chromospheric heating requires an additional source of energy in the form of acoustic waves of short period (P<2 minutes), which form shocks and produce the persistent outward temperature increase that can account for the UV emission lines and continua.

  16. A Compound Fault Diagnosis for Rolling Bearings Method Based on Blind Source Separation and Ensemble Empirical Mode Decomposition

    PubMed Central

    Wang, Huaqing; Li, Ruitong; Tang, Gang; Yuan, Hongfang; Zhao, Qingliang; Cao, Xi

    2014-01-01

    A compound fault signal usually contains multiple characteristic signals and strong background noise, which makes it difficult to separate weak fault signals by conventional means such as FFT-based envelope detection, wavelet transform, or empirical mode decomposition used individually. In order to improve compound fault diagnosis of rolling bearings via signal separation, the present paper proposes a new method to identify compound faults from measured mixed signals, based on the ensemble empirical mode decomposition (EEMD) method and the independent component analysis (ICA) technique. With this approach, a vibration signal is first decomposed into intrinsic mode functions (IMFs) by EEMD to obtain multichannel signals. Then, according to a cross-correlation criterion, the corresponding IMFs are selected as the input matrix for ICA. Finally, the compound faults can be separated effectively by the ICA method, which makes the fault features easier to extract and more clearly identified. Experimental results validate the effectiveness of the proposed method in compound fault separation, which works not only for an outer race defect, but also for roller defects and an unbalance fault of the experimental system. PMID:25289644
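
    A minimal sketch of the selection-plus-separation pipeline follows. Since a full EEMD is beyond a short example, a band-pass filterbank stands in for the EEMD stage, and a bare-bones FastICA replaces a library ICA; the signal frequencies and band edges are arbitrary illustrative choices:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

rng = np.random.default_rng(1)

def fastica(X, iters=200):
    """Minimal symmetric FastICA (tanh nonlinearity); X holds one signal per row."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))          # whiten via eigendecomposition
    Z = np.diag(d ** -0.5) @ E.T @ X
    W = rng.normal(size=(X.shape[0], X.shape[0]))
    for _ in range(iters):
        G = np.tanh(W @ Z)
        W = G @ Z.T / Z.shape[1] - np.diag((1.0 - G ** 2).mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W)           # symmetric decorrelation
        W = U @ Vt
    return W @ Z

fs = 4096
t = np.arange(fs) / fs
s1 = np.sin(2 * np.pi * 37 * t)               # slow modulation component
s2 = np.sign(np.sin(2 * np.pi * 410 * t))     # impact-like square wave
sig = 0.7 * s1 + 0.3 * s2 + 0.02 * rng.normal(size=fs)

# stand-in for EEMD: a band-pass filterbank yielding IMF-like channels
bands = [(5, 100), (100, 900), (900, 1900)]
channels = [sosfiltfilt(butter(4, b, "bandpass", fs=fs, output="sos"), sig)
            for b in bands]

# cross-correlation criterion: keep channels best correlated with the raw signal
corr = [abs(np.corrcoef(c, sig)[0, 1]) for c in channels]
keep = np.argsort(corr)[-2:]
recovered = fastica(np.vstack([channels[i] for i in keep]))
```

    The recovered rows match the two underlying components up to sign and permutation, which is the usual ICA ambiguity and does not affect envelope-based fault feature extraction afterwards.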

  17. Strong Ground Motion Simulation and Source Modeling of the April 1, 2006 Tai-Tung Earthquake Using Empirical Green's Function Method

    NASA Astrophysics Data System (ADS)

    Huang, H.; Lin, C.

    2010-12-01

    The Tai-Tung earthquake (ML=6.2) occurred in the southeastern part of Taiwan on April 1, 2006. We examine the source model of this event using seismograms recorded by the CWBSN at five stations surrounding the source area. An objective estimation method was used to obtain the parameters N and C, which are needed for the empirical Green’s function method of Irikura (1986). This method, called the “source spectral ratio fitting method,” estimates the seismic moment ratio between a large and a small event, together with their corner frequencies, by fitting the observed source spectral ratio with the ratio of source spectra that obeys the assumed source model (Miyake et al., 1999). The method has the advantage of removing site effects when evaluating the parameters. The best source model of the 2006 Tai-Tung mainshock was estimated by comparing observed waveforms with synthetics generated by the empirical Green’s function method. The asperity is about 3.5 km long in the strike direction and 7.0 km wide in the dip direction. The rupture started at the lower left of the asperity and propagated radially toward the upper right.
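
    The spectral-ratio fitting step can be illustrated with an omega-squared source model, in which the large-to-small moment ratio and the two corner frequencies are obtained by least squares. The numbers below are synthetic, and the mapping to N and C follows the standard Irikura-type scaling rather than values from this study:

```python
import numpy as np
from scipy.optimize import curve_fit

def spectral_ratio(f, moment_ratio, fc_large, fc_small):
    """Omega-squared source spectral ratio of a large event to a small one."""
    return moment_ratio * (1 + (f / fc_small) ** 2) / (1 + (f / fc_large) ** 2)

def log_ratio(f, moment_ratio, fc_large, fc_small):
    return np.log(spectral_ratio(f, moment_ratio, fc_large, fc_small))

# synthetic "observed" ratio: true M0L/M0S = 1000, fcL = 0.5 Hz, fcS = 5 Hz
rng = np.random.default_rng(2)
f = np.logspace(-1, 1.3, 200)
obs = spectral_ratio(f, 1000.0, 0.5, 5.0) * rng.lognormal(0.0, 0.05, f.size)

# fit in log amplitude so low- and high-frequency levels weigh equally
popt, _ = curve_fit(log_ratio, f, np.log(obs), p0=[100.0, 1.0, 2.0])
# the model is even in the corner frequencies, so take magnitudes
m_ratio, fc_l, fc_s = np.abs(popt)

# Irikura-type scaling: N = fcS/fcL; stress-drop ratio C = (M0L/M0S) / N**3
N = fc_s / fc_l
C = m_ratio / N ** 3
```

    The fitted low-frequency plateau gives the moment ratio, the two corner frequencies give N, and together they give C, which is how the subevent summation for the synthetic waveforms is parameterized.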

  18. Acquiring and refining CBT skills and competencies: which training methods are perceived to be most effective?

    PubMed

    Bennett-Levy, James; McManus, Freda; Westling, Bengt E; Fennell, Melanie

    2009-10-01

    A theoretical and empirical base for CBT training and supervision has started to emerge. Increasingly sophisticated maps of CBT therapist competencies have recently been developed, and there is evidence that CBT training and supervision can produce enhancement of CBT skills. However, the evidence base suggesting which specific training techniques are most effective for the development of CBT competencies is lacking. This paper addresses the question: What training or supervision methods are perceived by experienced therapists to be most effective for training CBT competencies? 120 experienced CBT therapists rated which training or supervision methods in their experience had been most effective in enhancing different types of therapy-relevant knowledge or skills. In line with the main prediction, it was found that different training methods were perceived to be differentially effective. For instance, reading, lectures/talks and modelling were perceived to be most useful for the acquisition of declarative knowledge, while enactive learning strategies (role-play, self-experiential work), together with modelling and reflective practice, were perceived to be most effective in enhancing procedural skills. Self-experiential work and reflective practice were seen as particularly helpful in improving reflective capability and interpersonal skills. The study provides a framework for thinking about the acquisition and refinement of therapist skills that may help trainers, supervisors and clinicians target their learning objectives with the most effective training strategies.

  19. THE COSMIC RAY EQUATOR FROM DATA OF THE SECOND SOVIET EARTH SATELLITE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savenko, I.A.; Shavrin, P.I.; Nesterov, V.Ye.

    1962-11-01

    Determination of the geographical position of the line of minimum intensity of primary cosmic radiation (the cosmic ray equator) makes it possible to study the structure of the geomagnetic field and to check theoretical and empirical approximations to this field. The minima of cosmic radiation intensity were determined from the second Soviet spaceship for 22 latitude curves obtained from various crossings in the region of the geographical equator. (W.D.M.)
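
    Locating an intensity minimum along a latitude curve can be sketched as a parabolic refinement around the lowest sample. The curve below is synthetic, and this three-point fit is only one plausible reading of the procedure, not the original data reduction:

```python
import numpy as np

def minimum_latitude(lat, intensity):
    """Refine the latitude of minimum intensity with a parabola
    fitted through the lowest sample and its two neighbours."""
    i = int(np.argmin(intensity))
    i = min(max(i, 1), len(lat) - 2)   # keep a full 3-point stencil
    a, b, _ = np.polyfit(lat[i - 1:i + 2], intensity[i - 1:i + 2], 2)
    return -b / (2.0 * a)              # vertex of the fitted parabola

# synthetic latitude curve with its minimum near 7.5 deg N
lat = np.linspace(-30.0, 30.0, 61)
counts = 100.0 + 0.05 * (lat - 7.5) ** 2
eq_lat = minimum_latitude(lat, counts)
```

    Repeating this for each equator crossing yields one (longitude, latitude) point per curve; the set of such points traces the cosmic ray equator.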

  20. An Empirically Derived Taxonomy of Organizational Systems

    DTIC Science & Technology

    1985-09-01

    (No abstract is indexed for this record; the excerpt consists of fragments from the report's variable tables, covering items such as regulation, supply of potential members, share of potential customer market, geographic factors as a handicap, primary sources of income, financial condition, marketing-management items, stability of employee job assignments, and quality demands of the market.)

Top